2310.00426
PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering fundamental innovation in the AIGC community while increasing CO2 emissions. This paper introduces PIXART-$\alpha$, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figures 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into the Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-$\alpha$'s training speed markedly surpasses that of existing large-scale T2I models, e.g., PIXART-$\alpha$ takes only 10.8% of Stable Diffusion v1.5's training time (675 vs. 6,250 A100 GPU days), saving nearly \$300,000 (\$26,000 vs. \$320,000) and reducing CO2 emissions by 90%. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-$\alpha$ excels in image quality, artistry, and semantic control. We hope PIXART-$\alpha$ will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch.
Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, Zhenguo Li
2023-09-30T16:18:00Z
http://arxiv.org/abs/2310.00426v3
# PixArt-\(\alpha\): Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis

###### Abstract

The most advanced text-to-image (T2I) models require significant training costs (_e.g._, millions of GPU hours), seriously hindering fundamental innovation in the AIGC community while increasing CO\({}_{2}\) emissions. This paper introduces PixArt-\(\alpha\), a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (_e.g._, Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024 \(\times\) 1024 resolution with low training cost, as shown in Figures 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that respectively optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into the Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PixArt-\(\alpha\)'s training speed markedly surpasses that of existing large-scale T2I models, _e.g._, PixArt-\(\alpha\) takes only 10.8% of Stable Diffusion v1.5's training time (\(\sim\)675 _vs._ \(\sim\)6,250 A100 GPU days), saving nearly $300,000 ($26,000 _vs._ $320,000) and reducing CO\({}_{2}\) emissions by 90%. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PixArt-\(\alpha\) excels in image quality, artistry, and semantic control. We hope PixArt-\(\alpha\) will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch.

## 1 Introduction

Recently, the advancement of text-to-image (T2I) generative models, such as DALL-E 2 (OpenAI, 2023), Imagen (Saharia et al., 2022), and Stable Diffusion (Rombach et al., 2022), has started a new era of photorealistic image synthesis, profoundly impacting numerous downstream applications, such as image editing (Kim et al., 2022), video generation (Wu et al., 2022), 3D assets creation (Poole et al., 2022), _etc_. However, the training of these advanced models demands immense computational resources. For instance, training SDv1.5 (Podell et al., 2023) necessitates 6K A100 GPU days, approximately costing $320,000, and the recent larger model, RAPHAEL (Xue et al., 2023b), even costs 60K A100 GPU days, requiring around $3,080,000, as detailed in Table 2. Additionally, the training contributes substantial CO\({}_{2}\) emissions, posing environmental stress; _e.g._, RAPHAEL's (Xue et al., 2023b) training results in 35 tons of CO\({}_{2}\) emissions, equivalent to the amount one person emits over 7 years, as shown in Figure 2. Such a huge cost imposes significant barriers for both the research community and entrepreneurs in accessing those models, causing a significant hindrance to the crucial advancement of the AIGC community.

Figure 1: Samples produced by PixArt-\(\alpha\) exhibit exceptional quality, characterized by a remarkable level of fidelity and precision in adhering to the provided textual descriptions.
Given these challenges, a pivotal question arises: _Can we develop a high-quality image generator with affordable resource consumption?_ In this paper, we introduce PixArt-\(\alpha\), which significantly reduces the computational demands of training while maintaining image generation quality competitive with the current state-of-the-art image generators, as illustrated in Figure 1. To achieve this, we propose three core designs:

**Training strategy decomposition.** We decompose the intricate text-to-image generation task into three streamlined subtasks: (1) learning the pixel distribution of natural images, (2) learning text-image alignment, and (3) enhancing the aesthetic quality of images. For the first subtask, we propose initializing the T2I model with a low-cost class-condition model, significantly reducing the learning cost. For the second and third subtasks, we formulate a training paradigm consisting of pretraining and fine-tuning: pretraining on text-image pair data rich in information density, followed by fine-tuning on data with superior aesthetic quality, boosting the training efficiency.

**Efficient T2I Transformer.** Based on the Diffusion Transformer (DiT) (Peebles and Xie, 2023), we incorporate cross-attention modules to inject text conditions and streamline the computation-intensive class-condition branch to improve efficiency. Furthermore, we introduce a re-parameterization technique that allows the adjusted text-to-image model to load the original class-condition model's parameters directly. Consequently, we can leverage prior knowledge learned from ImageNet (Deng et al., 2009) about natural image distribution to give a reasonable initialization for the T2I Transformer and accelerate its training.

**High-informative data.** Our investigation reveals notable shortcomings in existing text-image pair datasets, exemplified by LAION (Schuhmann et al., 2021), where textual captions often suffer from a lack of informative content (_i.e._, typically describing only some of the objects in the images) and a severe long-tail effect (_i.e._, with a large number of nouns appearing with extremely low frequencies). These deficiencies significantly hamper the training efficiency of T2I models and lead to millions of iterations to learn stable text-image alignments. To address them, we propose an auto-labeling pipeline utilizing the state-of-the-art vision-language model (LLaVA (Liu et al., 2023)) to generate captions for the SAM dataset (Kirillov et al., 2023). As discussed in Section 2.4, the SAM dataset is advantageous due to its rich and diverse collection of objects, making it an ideal resource for creating high-information-density text-image pairs, more suitable for text-image alignment learning.

Figure 2: Comparisons of CO\({}_{2}\) emissions and training cost among T2I generators. PixArt-\(\alpha\) achieves an exceptionally low training cost of $26,000. Compared to RAPHAEL, our CO\({}_{2}\) emissions and training costs are merely 1.1% and 0.85%, respectively.

Our effective designs result in remarkable training efficiency for our model, costing only 675 A100 GPU days and $26,000. As demonstrated in Figure 2, our method consumes less than 0.2% of the training data volume of Imagen and less than 2% of the training time of RAPHAEL. Compared to RAPHAEL, our training costs are only 1%, saving approximately $3,000,000 (PixArt-\(\alpha\)'s $26,000 _vs._ RAPHAEL's $3,080,000).
Regarding generation quality, our user study experiments indicate that PixArt-\(\alpha\) offers superior image quality and semantic alignment compared to existing SOTA T2I models (_e.g._, DALL-E 2 (OpenAI, 2023), Stable Diffusion (Rombach et al., 2022), _etc._), and its performance on T2I-CompBench (Huang et al., 2023) also evidences our advantage in semantic control. We hope our attempts to train T2I models efficiently can offer valuable insights for the AIGC community and help more individual researchers or startups create their own high-quality T2I models at lower costs.

## 2 Method

### Motivation

The reasons for slow T2I training lie in two aspects: the training pipeline and the data. The T2I generation task can be decomposed into three aspects: **Capturing Pixel Dependency:** Generating realistic images involves understanding intricate pixel-level dependencies within images and capturing their distribution; **Alignment between Text and Image:** Precise alignment learning is required for understanding how to generate images that accurately match the text description; **High Aesthetic Quality:** Besides faithful textual descriptions, being aesthetically pleasing is another vital attribute of generated images. Current methods entangle these three problems and train directly from scratch using vast amounts of data, resulting in inefficient training. To solve this issue, we disentangle these aspects into three stages, as will be described in Section 2.2.

Another problem, depicted in Figure 3, concerns the quality of captions in current datasets. Current text-image pairs often suffer from text-image misalignment, deficient descriptions, infrequent diverse vocabulary usage, and inclusion of low-quality data. These problems introduce difficulties in training, requiring millions of unnecessary iterations to achieve stable alignment between text and images. To address this challenge, we introduce an innovative auto-labeling pipeline to generate precise image captions, as will be described in Section 2.4.

Figure 3: LAION raw captions _vs._ LLaVA refined captions. LLaVA provides high-information-density captions that aid the model in grasping more concepts per iteration and boost text-image alignment efficiency.

### Training Strategy Decomposition

The model's generative capabilities can be gradually optimized by partitioning the training into three stages with different data types.

**Stage 1: Pixel dependency learning.** The current class-guided approach (Peebles and Xie, 2023) has shown exemplary performance in generating semantically coherent and reasonable pixels in individual images. Training a class-conditional image generation model (Peebles and Xie, 2023) for natural images is relatively easy and inexpensive, as explained in Appendix A.5. Additionally, we find that a suitable initialization can significantly boost training efficiency. Therefore, we initialize our model from an ImageNet-pretrained model, and the architecture of our model is designed to be compatible with the pretrained weights.

**Stage 2: Text-image alignment learning.** The primary challenge in transitioning from pretrained class-guided image generation to text-to-image generation is how to achieve accurate alignment between significantly increased text concepts and images. This alignment process is not only time-consuming but also inherently challenging. To efficiently facilitate it, we construct a dataset consisting of precise text-image pairs with high concept density.
The data creation pipeline will be described in Section 2.4. By employing accurate and information-rich data, our training process can efficiently handle a larger number of nouns in each iteration while encountering considerably less ambiguity compared to previous datasets. This strategic approach empowers our network to align textual descriptions with images effectively.

**Stage 3: High-resolution and aesthetic image generation.** In the third stage, we fine-tune our model using high-quality aesthetic data for high-resolution image generation. Remarkably, we observe that the adaptation process in this stage converges significantly faster, primarily owing to the strong prior knowledge established in the preceding stages. Decoupling the training process into different stages significantly alleviates the training difficulties and achieves highly efficient training.

### Efficient T2I Transformer

PixArt-\(\alpha\) adopts the Diffusion Transformer (DiT) (Peebles and Xie, 2023) as the base architecture and innovatively tailors the Transformer blocks to handle the unique challenges of T2I tasks, as depicted in Figure 4. Several dedicated designs are proposed as follows:

* _Cross-Attention layer._ We incorporate a multi-head cross-attention layer into the DiT block. It is positioned between the self-attention layer and the feed-forward layer so that the model can flexibly interact with the text embedding extracted from the language model. To make use of the pretrained weights, we initialize the output projection layer in the cross-attention layer to zero, effectively acting as an identity mapping and preserving the input for the subsequent layers.

* _AdaLN-single._ We find that the linear projections in the adaptive normalization layers (Perez et al., 2018) (_adaLN_) module of the DiT account for a substantial proportion (27%) of the parameters. Such a large number of parameters is not useful since the class condition is not employed in our T2I model. Thus, we propose adaLN-single, which uses only the time embedding as input in the first block for independent control (shown on the right side of Figure 4). Specifically, in the \(i\)th block, let \(S^{(i)}=[\beta_{1}^{(i)},\beta_{2}^{(i)},\gamma_{1}^{(i)},\gamma_{2}^{(i)},\alpha_{1}^{(i)},\alpha_{2}^{(i)}]\) be a tuple of all the scale and shift parameters in _adaLN_. In DiT, \(S^{(i)}\) is obtained through a block-specific MLP \(S^{(i)}=f^{(i)}(c+t)\), where \(c\) and \(t\) denote the class condition and time embedding, respectively. However, in adaLN-single, one global set of shifts and scales is computed as \(\overline{S}=f(t)\) only at the first block and shared across all the blocks. Then, \(S^{(i)}\) is obtained as \(S^{(i)}=g(\overline{S},E^{(i)})\), where \(g\) is a summation function, and \(E^{(i)}\) is a layer-specific trainable embedding with the same shape as \(\overline{S}\), which adaptively adjusts the scale and shift parameters in different blocks.

* _Re-parameterization._ To utilize the aforementioned pretrained weights, all \(E^{(i)}\)'s are initialized to values that yield the same \(S^{(i)}\) as the DiT without \(c\) for a selected \(t\) (empirically, we use \(t=500\)). This design effectively replaces the layer-specific MLPs with a global MLP and layer-specific trainable embeddings while preserving compatibility with the pretrained weights.

Figure 4: Model architecture of PixArt-\(\alpha\). A cross-attention module is integrated into each block to inject textual conditions. To optimize efficiency, all blocks share the same adaLN-single parameters for time conditions.

Experiments demonstrate that incorporating a global MLP and layer-wise embeddings for time-step information, as well as cross-attention layers for handling textual information, preserves the model's generative abilities while effectively reducing its size.
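To make the block design concrete, the following PyTorch sketch combines the zero-initialized cross-attention layer with adaLN-single modulation. It is a minimal illustration under our own module and variable names, not the authors' released implementation; the attention and MLP internals are simplified stand-ins, while the summation \(g\) and the zero-initialized output projection follow the paper's description.

```python
import torch
import torch.nn as nn

class PixArtBlockSketch(nn.Module):
    """Minimal sketch of a PixArt-style DiT block with adaLN-single.

    One global MLP (owned by the full model, not by the block) maps the
    time embedding to S_bar = [beta1, beta2, gamma1, gamma2, alpha1, alpha2];
    each block then adds its own trainable embedding E to obtain S_i.
    """

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Layer-specific embedding E^(i), same shape as the global S_bar.
        # (The paper re-parameterizes E to match the pretrained DiT at
        # t = 500; zeros are used here purely for brevity.)
        self.E = nn.Parameter(torch.zeros(6, dim))
        # Zero-init the cross-attention output projection so the block
        # initially ignores text and behaves like the class-cond model.
        nn.init.zeros_(self.cross_attn.out_proj.weight)
        nn.init.zeros_(self.cross_attn.out_proj.bias)

    def forward(self, x, text_emb, S_bar):
        # S_i = g(S_bar, E^(i)); g is a summation, as in the paper.
        b1, b2, g1, g2, a1, a2 = (S_bar + self.E).chunk(6, dim=0)
        h = self.norm1(x) * (1 + g1) + b1
        x = x + a1 * self.attn(h, h, h, need_weights=False)[0]
        x = x + self.cross_attn(x, text_emb, text_emb, need_weights=False)[0]
        h = self.norm2(x) * (1 + g2) + b2
        return x + a2 * self.mlp(h)
```

A single shared module such as `nn.Sequential(nn.SiLU(), nn.Linear(t_dim, 6 * dim))`, applied once to the time embedding and reshaped to `(6, dim)`, would supply `S_bar` to every block. Because each block then carries only the small embedding `E` instead of its own modulation MLP, the per-block projections disappear, which is where the parameter reduction reported in the ablation (833M to 611M) comes from.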
### Dataset construction

**Image-text pair auto-labeling.** The captions of the LAION dataset exhibit various issues, such as text-image misalignment, deficient descriptions, and infrequent vocabulary, as shown in Figure 3. To generate captions with high information density, we leverage the state-of-the-art vision-language model LLaVA (Liu et al., 2023). Employing the prompt "_Describe this image and its style in a very detailed manner_", we have significantly improved the quality of captions, as shown in Figure 3. However, it is worth noting that the LAION dataset predominantly comprises simplistic product previews from shopping websites, which are not ideal for training text-to-image generation that seeks diversity in object combinations. Consequently, we have opted to utilize the SAM dataset (Kirillov et al., 2023), which was originally used for segmentation tasks but features imagery rich in diverse objects. By applying LLaVA to SAM, we have successfully acquired high-quality text-image pairs characterized by a high concept density, as shown in Figure 10 and Figure 11 in the Appendix. In the third stage, we construct our training dataset by incorporating JourneyDB (Pan et al., 2023) and a 10M internal dataset to enhance the aesthetic quality of generated images beyond realistic photographs. Refer to Appendix A.5 for details.

As a result, we show the vocabulary analysis (NLTK, 2023) in Table 1, where we define valid distinct nouns as those appearing more than 10 times in the dataset. We apply LLaVA to LAION to generate LAION-LLaVA. The LAION dataset has 2.46M distinct nouns, but only 8.5% are valid. This valid noun proportion significantly increases from 8.5% to 13.3% with LLaVA-labeled captions. Despite LAION's original captions containing a staggering 210K valid distinct nouns, their total noun count is a mere 72M. However, LAION-LLaVA contains 234M nouns with 85K valid distinct nouns, and the average number of nouns per image increases from 6.4 to 21, indicating the incompleteness of the original LAION captions. Additionally, SAM-LLaVA outperforms LAION-LLaVA with a total noun count of 328M and 30 nouns per image, demonstrating that SAM contains richer objects and superior informative density per image. Lastly, the internal data also ensures sufficient valid nouns and average information density for fine-tuning. LLaVA-labeled captions significantly increase the valid ratio and average noun count per image, improving concept density.

| **Dataset** | **VN/DN** | **Total Nouns** | **Average** |
| --- | --- | --- | --- |
| LAION | 210K/2461K = 8.5% | 72.0M | 6.4/Img |
| LAION-LLaVA | 85K/646K = 13.3% | 233.9M | 20.9/Img |
| SAM-LLaVA | 23K/124K = 18.6% | 327.9M | 29.3/Img |
| Internal | 152K/582K = 26.1% | 136.6M | 12.2/Img |

Table 1: Statistics of noun concepts for different datasets. **VN**: valid distinct nouns (appearing more than 10 times); **DN**: total distinct nouns; **Average**: average noun count per image.
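The noun statistics in Table 1 can be reproduced in outline with NLTK's part-of-speech tagger. The sketch below is a hedged reconstruction of the counting procedure (the paper cites NLTK but does not publish its exact script); only the valid-noun threshold of more than 10 occurrences is taken directly from the text.

```python
from collections import Counter

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def noun_statistics(captions):
    """Table-1 style statistics for a list of caption strings.

    Returns (valid_distinct, total_distinct, total_nouns, avg_per_image);
    'valid' distinct nouns are those appearing more than 10 times.
    """
    counts = Counter()
    num_images = 0
    for caption in captions:
        tokens = nltk.word_tokenize(caption.lower())
        # NN, NNS, NNP, NNPS are the Penn Treebank noun tags.
        nouns = [w for w, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]
        counts.update(nouns)
        num_images += 1
    valid = sum(1 for c in counts.values() if c > 10)
    total = sum(counts.values())
    return valid, len(counts), total, total / max(num_images, 1)
```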
## 3 Experiment

This section begins by outlining the detailed training and evaluation protocols. Subsequently, we provide comprehensive comparisons across three main metrics. We then delve into the critical designs implemented in PixArt-\(\alpha\) to achieve superior efficiency and effectiveness through ablation studies. Finally, we demonstrate the versatility of PixArt-\(\alpha\) through application extensions.

### Implementation details

**Training Details.** We follow Imagen (Saharia et al., 2022) and DeepFloyd (DeepFloyd, 2023) in employing the T5 large language model (_i.e._, the 4.3B Flan-T5-XXL) as the text encoder for conditional feature extraction, and use DiT-XL/2 (Peebles and Xie, 2023) as our base network architecture. Unlike previous works that extract a standard, fixed 77 text tokens, we increase the length of extracted text tokens to 120, as the captions curated for PixArt-\(\alpha\) are much denser and provide more fine-grained details. To capture the latent features of input images, we employ a pre-trained and frozen VAE from LDM (Rombach et al., 2022). Before feeding the images into the VAE, we resize and center-crop them to the same size. We also employ the multi-aspect augmentation introduced in SDXL (Podell et al., 2023) to enable arbitrary-aspect image generation. The AdamW optimizer (Loshchilov and Hutter, 2017) is utilized with a weight decay of 0.03 and a constant 2e-5 learning rate. Our final model is trained on 64 V100 GPUs for approximately 22 days. See more details in Appendix A.5.

**Evaluation Metrics.** We comprehensively evaluate PixArt-\(\alpha\) via three primary metrics: Fréchet Inception Distance (FID) (Heusel et al., 2017) on the MSCOCO dataset (Lin et al., 2014), compositionality on T2I-CompBench (Huang et al., 2023), and human-preference rate in a user study.
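As a concrete illustration of the conditioning setup, the sketch below extracts 120-token text features with a frozen Flan-T5 encoder via Hugging Face `transformers`. It reflects one plausible realization of what the paper describes; the checkpoint name and the use of the raw `last_hidden_state` as cross-attention context are our assumptions.

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Assumed checkpoint; the paper specifies a 4.3B Flan-T5-XXL text encoder.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
text_encoder = T5EncoderModel.from_pretrained("google/flan-t5-xxl").eval()

@torch.no_grad()
def encode_prompts(prompts):
    # 120 tokens instead of the conventional 77, to fit denser captions.
    batch = tokenizer(
        prompts,
        max_length=120,
        padding="max_length",
        truncation=True,
        return_tensors="pt",
    )
    out = text_encoder(
        input_ids=batch.input_ids, attention_mask=batch.attention_mask
    )
    # Shape (batch, 120, hidden); consumed by the cross-attention layers
    # as keys/values, with the mask excluding padding positions.
    return out.last_hidden_state, batch.attention_mask
```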
### Performance Comparisons and Analysis

**Fidelity Assessment.** FID is a metric for evaluating the quality of generated images. The comparison between our method and other methods in terms of FID and training time is summarized in Table 2. When tested for zero-shot performance on the COCO dataset, PixArt-\(\alpha\) achieves an FID score of 10.65. This is particularly notable as it is accomplished in merely 10% of the training time (675 _vs._ 6,250 A100 GPU days) and with merely 10% of the training samples (25M _vs._ 270M images) relative to the second most efficient method. Compared to state-of-the-art methods typically trained using substantial resources, PixArt-\(\alpha\) remarkably consumes approximately 2% of the training resources while achieving a comparable FID performance. Although the best-performing model (RAPHAEL) exhibits a lower FID, it relies on unaffordable resources (_i.e._, \(200\times\) more training samples, \(88\times\) longer training time, and \(5\times\) more network parameters than PixArt-\(\alpha\)). We argue that FID may not be an appropriate metric for image quality evaluation and that it is more appropriate to use the evaluation of human users, as stated in Appendix A.8. We leave scaling of PixArt-\(\alpha\) to future exploration for performance enhancement.

**Alignment Assessment.** Beyond the above evaluation, we also assess the alignment between the generated images and the text condition using T2I-CompBench (Huang et al., 2023), a comprehensive benchmark for evaluating compositional text-to-image generation capability. As depicted in Table 3, we evaluate several crucial aspects, including attribute binding, object relationships, and complex compositions. PixArt-\(\alpha\) exhibits outstanding performance across nearly all (5/6) evaluation metrics. This remarkable performance is primarily attributed to the text-image alignment learning in Stage 2 of training described in Section 2.2, where high-quality text-image pairs were leveraged to achieve superior alignment capabilities.

**User Study.** While quantitative evaluation metrics measure the overall distribution of two image sets, they may not comprehensively evaluate the visual quality of the images. Consequently, we conducted a user study to supplement our evaluation and provide a more intuitive assessment of PixArt-\(\alpha\)'s performance. Since a user study involves human evaluators and can be time-consuming, we selected the top-performing models, namely DALL-E 2, SDv2, SDXL, and DeepFloyd, which are accessible through APIs and capable of generating images. For each model, we employ a consistent set of 300 prompts from Feng et al. (2023) to generate images. These images are then distributed among 50 individuals for evaluation. Participants are asked to rank each model based on the perceptual quality of the generated images and the precision of alignment between the text prompts and the corresponding images. The results presented in Figure 5 clearly indicate that PixArt-\(\alpha\) excels in both fidelity and alignment. For example, compared to SDv2, a current top-tier T2I model, PixArt-\(\alpha\) exhibits a 7.2% improvement in image quality and a substantial 42.4% enhancement in alignment.

### Ablation Study

We then conduct ablation studies on the crucial modifications discussed in Section 2.3, including structure modifications and the re-parameterization design. In Figure 6, we provide visual results and perform an FID analysis. We randomly choose 8 prompts from the SAM test set for visualization and compute the zero-shot FID-5K score on the SAM dataset. Details are described below.

"_w/o re-param_" results are generated from the model trained from scratch without the re-parameterization design. We supplemented it with an additional 200K iterations to compensate for the missing iterations from the pretraining stage for a fair comparison. "_adaLN_" results are from the model following the DiT structure, using the sum of the time and text features as input to the MLP layer for the scale and shift parameters within each block. "_adaLN-single_" results are obtained from the model using Transformer blocks with the adaLN-single module described in Section 2.3. In both "_adaLN_" and "_adaLN-single_", we employ the re-parameterization design and train for 200K iterations.

As depicted in Figure 6, despite "_adaLN_" yielding a lower FID, its visual results are on par with our "_adaLN-single_" design. The GPU memory consumption of "_adaLN_" is 29GB, whereas "_adaLN-single_" achieves a reduction to 23GB, saving 21% in GPU memory consumption. Furthermore, considering the model parameters, the "_adaLN_" method consumes 833M, whereas our approach reduces this to a mere 611M, an impressive 26% reduction. "_adaLN-single-L (Ours)_" results are generated from the model with the same setting as "_adaLN-single_" but trained for a longer period of 1500K iterations. Considering memory and parameter efficiency, we incorporate "_adaLN-single-L_" into our final design.
| **Method** | **Type** | **#Params** | **#Images** | **FID-30K**\(\downarrow\) | **GPU days** |
| --- | --- | --- | --- | --- | --- |
| DALL-E | Diff | 12.0B | 1.54B | 27.50 | - |
| GLIDE | Diff | 5.0B | 5.94B | 12.24 | - |
| LDM | Diff | 1.4B | 0.27B | 12.64 | - |
| DALL-E 2 | Diff | 6.5B | 5.63B | 10.39 | 41,667 A100 |
| SDv1.5 | Diff | 0.9B | 3.16B | 9.62 | 6,250 A100 |
| GigaGAN | GAN | 0.9B | 0.98B | 9.09 | 4,783 A100 |
| Imagen | Diff | 3.0B | 15.36B | 7.27 | 7,132 A100 |
| RAPHAEL | Diff | 3.0B | 5.0B | 6.61 | 60,000 A100 |
| PixArt-\(\alpha\) | Diff | 0.6B | 0.025B | 10.65 | 675 A100 |

Table 2: We thoroughly compare PixArt-\(\alpha\) with recent T2I models, considering several essential factors: model size, the total volume of training images, COCO FID-30K scores (zero-shot), and the computational cost (GPU days). Our highly effective approach significantly reduces resource consumption, including training data usage and training time. The baseline data is sourced from GigaGAN (Kang et al., 2023).

| **Model** | **Color**\(\uparrow\) | **Shape**\(\uparrow\) | **Texture**\(\uparrow\) | **Spatial**\(\uparrow\) | **Non-Spatial**\(\uparrow\) | **Complex**\(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| Stable v1.4 | 0.3765 | 0.3576 | 0.4156 | 0.1246 | 0.3079 | 0.3080 |
| Stable v2 | 0.5065 | 0.4221 | 0.4922 | 0.1342 | 0.3096 | 0.3386 |
| Composable v2 | 0.4063 | 0.3299 | 0.3645 | 0.0800 | 0.2980 | 0.2898 |
| Structured v2 | 0.4990 | 0.4218 | 0.4900 | 0.1386 | 0.3111 | 0.3355 |
| Attn-Exct v2 | 0.6400 | 0.4517 | 0.5963 | 0.1455 | 0.3109 | 0.3401 |
| GORS | 0.6603 | 0.4785 | 0.6287 | 0.1815 | 0.3193 | 0.3328 |
| DALL-E 2 | 0.5750 | 0.5464 | 0.6374 | 0.1283 | 0.3043 | 0.3696 |
| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 |
| PixArt-\(\alpha\) | 0.6886 | 0.5582 | 0.7044 | 0.2082 | 0.3179 | 0.4117 |

Table 3: Alignment evaluation on T2I-CompBench. Color, Shape, and Texture measure attribute binding; Spatial and Non-Spatial measure object relationships. PixArt-\(\alpha\) demonstrates exceptional performance in attribute binding, object relationships, and complex compositions, indicating that our method achieves superior compositional generation ability. In the original table, the best value is highlighted in blue and the second-best in green. The baseline data are sourced from Huang et al. (2023).

The visual results clearly indicate that, although the differences in FID scores between the "_adaLN_" and "_adaLN-single_" models are relatively small, a significant discrepancy exists in their visual outcomes. The "_w/o re-param_" model consistently displays distorted target images and lacks crucial details across the entire test set.

## 4 Related work

We review related works in three aspects: denoising diffusion probabilistic models (DDPM), the Latent Diffusion Model, and the Diffusion Transformer. More related works can be found in Appendix A.1. DDPMs (Ho et al., 2020; Sohl-Dickstein et al., 2015) have emerged as highly successful approaches for image generation, employing an iterative denoising process to transform Gaussian noise into an image. The Latent Diffusion Model (Rombach et al., 2022) enhances traditional DDPMs by employing score-matching on the image latent space and introducing cross-attention-based controlling.
Having witnessed the success of the Transformer architecture on many computer vision tasks, the Diffusion Transformer (DiT) (Peebles and Xie, 2023) and its variants (Bao et al., 2023; Zheng et al., 2023) further replace the convolutional U-Net (Ronneberger et al., 2015) backbone with Transformers for increased scalability.

Figure 5: User study on 300 fixed prompts from Feng et al. (2023). The ratio values indicate the percentages of participants preferring the corresponding model. PixArt-\(\alpha\) achieves superior performance in both quality and alignment.

Figure 6: **Left**: Visual comparison of ablation studies. **Right**: Zero-shot FID-2K on SAM and GPU memory usage. Our method is on par with "_adaLN_" and saves 21% in GPU memory. Better zoom in 200%.

## 5 Conclusion

In this paper, we introduced PixArt-\(\alpha\), a Transformer-based text-to-image (T2I) diffusion model, which achieves superior image generation quality while significantly reducing training costs and CO\({}_{2}\) emissions. Our three core designs, including the training strategy decomposition, efficient T2I Transformer, and high-informative data, contribute to the success of PixArt-\(\alpha\). Through extensive experiments, we have demonstrated that PixArt-\(\alpha\) achieves near-commercial application standards in image generation quality. With the above designs, PixArt-\(\alpha\) provides new insights to the AIGC community and startups, enabling them to build their own high-quality yet low-cost T2I models. We hope that our work inspires further innovation and advancements in this field.

## Appendix

### A.1 Related work

#### A.1.1 Denoising diffusion probabilistic models

Diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015) and score-based generative models (Song and Ermon, 2019; Song et al., 2021) have emerged as highly successful approaches for image generation, surpassing previous generative models such as GANs (Goodfellow et al., 2014), VAEs (Kingma and Welling, 2013), and Flows (Rezende and Mohamed, 2015). Unlike traditional models that directly map from a Gaussian distribution to the data distribution, diffusion models employ an iterative denoising process to transform Gaussian noise into an image that follows the data distribution. This process can be learned as the reverse of an untrainable forward process, in which a small amount of Gaussian noise is iteratively added to the original image.

#### A.1.2 Latent Diffusion Model

The Latent Diffusion Model (_a.k.a._ Stable Diffusion) (Rombach et al., 2022) is a recent advancement in diffusion models. This approach enhances the traditional diffusion model by employing score-matching on the image latent space and introducing cross-attention-based controlling. The results obtained with this approach have been impressive, particularly in tasks involving high-density image generation, such as text-to-image synthesis. This has served as a source of inspiration for numerous subsequent works aimed at improving text-to-image synthesis, including those by Saharia et al. (2022); Balaji et al. (2022); Feng et al. (2023); Xue et al. (2023); Podell et al. (2023), and others. Additionally, Stable Diffusion and its variants have been effectively combined with various low-cost fine-tuning (Hu et al., 2021; Xie et al., 2023) and customization (Zhang et al., 2023; Mou et al., 2023) technologies.
#### A.1.3 Diffusion Transformer

The Transformer architecture (Vaswani et al., 2017) has achieved great success in language models (Radford et al., 2018, 2019), and many recent works (Dosovitskiy et al., 2020; He et al., 2022) show it is also a promising architecture for many computer vision tasks, such as image classification (Touvron et al., 2021; Zhou et al., 2021; Yuan et al., 2021; Han et al., 2021), object detection (Liu et al., 2021; Wang et al., 2021, 2022; Ge et al., 2023; Carion et al., 2020), semantic segmentation (Zheng et al., 2021; Xie et al., 2021; Strudel et al., 2021), and so on (Sun et al., 2020; Li et al., 2022; Zhao et al., 2021; Liu et al., 2022; He et al., 2022; Li et al., 2022). The Diffusion Transformer (DiT) (Peebles and Xie, 2023) and its variants (Bao et al., 2023; Zheng et al., 2023) go a step further and replace the convolutional U-Net (Ronneberger et al., 2015) backbone with Transformers. This architectural choice brings increased scalability compared to U-Net-based diffusion models, allowing straightforward expansion of their parameters. In our paper, we leverage DiT as a scalable foundational model and adapt it for text-to-image generation tasks.

### A.2 PixArt-\(\alpha\) _vs._ Midjourney

In Figure 7, we present images generated by PixArt-\(\alpha\) and the current SOTA product-level method Midjourney (Midjourney, 2023), using prompts randomly sampled online. Here, we conceal which images belong to which method. Readers are encouraged to make assessments based on the prompts provided. The answers are disclosed at the end of the appendix.

### A.3 PixArt-\(\alpha\) _vs._ Prestigious Diffusion Models

In Figures 8 and 9, we present comparison results using test prompts selected by RAPHAEL. The instances depicted here exhibit performance that is on par with, or even surpasses, that of existing powerful generative models.

### A.4 Auto-labeling Techniques

To generate captions with high information density, we leverage the state-of-the-art vision-language model LLaVA (Liu et al., 2023). Employing the prompt "_Describe this image and its style in a very detailed manner_", we have significantly improved the quality of captions. We show the prompt design and the auto-labeling process in Figure 10. More image-text pair samples from the SAM dataset are shown in Figure 11.

### A.5 Additional Implementation Details

We include detailed information about all of our PixArt-\(\alpha\) models in this section. As shown in Table 4, among the 256\(\times\)256 phases, our model primarily focuses on the text-to-image alignment stage, with less time spent on fine-tuning and only 1/8 of that time on ImageNet pixel dependency.

**PixArt-\(\alpha\) model details.** For the embedding of input timesteps, we employ a 256-dimensional frequency embedding (Dhariwal and Nichol, 2021). This is followed by a two-layer MLP whose dimensionality matches the transformer's hidden size, coupled with SiLU activations. We adopt the DiT-XL model, which has 28 Transformer blocks in total, for better performance; the patch size of the PatchEmbed layer in ViT (Dosovitskiy et al., 2020) is 2.

**Multi-scale training.** Inspired by Podell et al. (2023), we incorporate a multi-scale training strategy into our pipeline. Specifically, we divide the image sizes into 40 buckets with aspect ratios ranging from 0.25 to 4, mirroring the method used in SDXL. During optimization, a training batch is composed of images from a single bucket, and we alternate bucket sizes for each training step. In practice, we apply multi-scale training only in the high-aesthetics stage, after pretraining the model at a fixed aspect ratio and resolution (_i.e._, 256px). We adopt the positional encoding trick from DiffFit (Xie et al., 2023), since the image resolution and aspect ratio change across the different training stages. A sketch of the bucketing logic is given below.
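In this sketch, a fixed set of aspect-ratio buckets at roughly constant pixel area is pre-computed, and each image is routed to the nearest one. The 40-bucket count and the 0.25-4 ratio range come from the paper; the geometric spacing, the constant-area constraint, and the rounding step are our assumptions.

```python
import numpy as np

def make_buckets(base=256, num_buckets=40, min_ratio=0.25, max_ratio=4.0, step=8):
    """(height, width) buckets spanning aspect ratios min_ratio..max_ratio
    at roughly constant area base**2, rounded to multiples of `step`."""
    ratios = np.geomspace(min_ratio, max_ratio, num_buckets)  # ratio = h / w
    buckets = []
    for r in ratios:
        h, w = base * np.sqrt(r), base / np.sqrt(r)
        buckets.append((int(round(h / step)) * step, int(round(w / step)) * step))
    return buckets

def assign_bucket(height, width, buckets):
    """Pick the bucket whose aspect ratio is closest in log space."""
    target = np.log(height / width)
    return min(buckets, key=lambda hw: abs(np.log(hw[0] / hw[1]) - target))
```

During training, each batch would then be drawn from a single bucket, with bucket sizes alternating between steps, as described above.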
**Sampling algorithm.** In this study, we incorporated three sampling algorithms: iDDPM (Nichol and Dhariwal, 2021), DPM-Solver (Lu et al., 2022), and SA-Solver (Xue et al., 2023). We observe that the three algorithms perform similarly in terms of semantic control, albeit with minor differences in sampling frequency and color representation. To optimize computational efficiency, we ultimately chose to employ the DPM-Solver with 20 inference steps.

### A.6 Hyper-parameters analysis

In Figure 20, we illustrate the variations in the model's metrics under different configurations across various datasets. We first investigate FID for the model and plot FID-_vs._-CLIP curves in Figure 20(a) for 10K text-image pairs from MSCOCO. The results show a marginal enhancement over SDv1.5. In Figures 20(b) and 20(c), we demonstrate the corresponding T2I-CompBench scores across a range of classifier-free guidance (cfg) (Ho and Salimans, 2022) scales. The outcomes reveal consistent and commendable model performance under these varying scales.

| Method | Stage | Image Resolution | #Images | Training Steps (K) | Batch Size | Learning Rate | GPU days (V100) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PixArt-\(\alpha\) | Pixel dependency | 256\(\times\)256 | 1M ImageNet | 300 | 178\(\times\)64 | 2\(\times\)10\({}^{-5}\) | 88 |
| PixArt-\(\alpha\) | Text-image align | 256\(\times\)256 | 10M SAM | 1500 | 178\(\times\)64 | 2\(\times\)10\({}^{-5}\) | 672 |
| PixArt-\(\alpha\) | High aesthetics | 256\(\times\)256 | 14M HQ | 900 | 178\(\times\)64 | 2\(\times\)10\({}^{-5}\) | 416 |
| PixArt-\(\alpha\) | High aesthetics | 512\(\times\)512 | 14M HQ | 100 | 40\(\times\)64 | 2\(\times\)10\({}^{-5}\) | 320 |
| PixArt-\(\alpha\) | High aesthetics | 1024\(\times\)1024 | 14M HQ | 16 | 12\(\times\)32 | 2\(\times\)10\({}^{-5}\) | 160 |

Table 4: Detailed information about every PixArt-\(\alpha\) training stage. Note that the HQ (High Quality) dataset here comprises 4M JourneyDB (Pan et al., 2023) images and 10M internal data. The count of GPU days excludes the time for VAE feature extraction and T5 text feature extraction, as we prepare both features offline in advance, so they are not part of the training process and contribute no extra time to it.

### A.7 More Images generated by PixArt-\(\alpha\)

More visual results generated by PixArt-\(\alpha\) are shown in Figures 12, 13, and 14. The samples generated by PixArt-\(\alpha\) demonstrate outstanding quality, marked by exceptional fidelity and precision in faithfully adhering to the given textual descriptions. As depicted in Figure 15, PixArt-\(\alpha\) can synthesize high-resolution images up to \(1024\times 1024\) pixels containing rich details, and it is capable of generating images with arbitrary aspect ratios, enhancing its versatility for real-world applications. Figure 16 illustrates PixArt-\(\alpha\)'s remarkable capacity to manipulate image styles directly through text prompts, demonstrating its versatility and creativity.
### A.8 Discussion of the FID metric for evaluating image quality

During our experiments, we observed that the FID (Fréchet Inception Distance) score may not accurately reflect the visual quality of generated images. Recent studies such as SDXL (Podell et al., 2023) and Pick-a-Pic (Kirstain et al., 2023) have presented evidence suggesting that the COCO zero-shot FID is negatively correlated with visual aesthetics. Furthermore, Betzalel et al. (2022) note that the feature extraction network used in FID is pretrained on the ImageNet dataset, which exhibits limited overlap with current text-to-image generation data; consequently, FID may not be an appropriate metric for evaluating the generative performance of such models, and they recommend employing human evaluators for more suitable assessments. Thus, we conducted a user study to validate the effectiveness of our method.

### A.9 Customized Extension

In text-to-image generation, the ability to customize generated outputs to a specific style or condition is a crucial application. We extend the capabilities of PixArt-\(\alpha\) by incorporating two commonly used customization methods: DreamBooth (Ruiz et al., 2022) and ControlNet (Zhang et al., 2023).

**DreamBooth.** DreamBooth can be seamlessly applied to PixArt-\(\alpha\) without further modification. The process entails fine-tuning PixArt-\(\alpha\) using a learning rate of 5e-6 for 300 steps, without the incorporation of a class-preservation loss. As depicted in Figure 17a, given a few images and text prompts, PixArt-\(\alpha\) demonstrates the capacity to generate high-fidelity images. These images present natural interactions with the environment under various lighting conditions. Additionally, PixArt-\(\alpha\) is also capable of precisely modifying the attributes of a specific object, such as color, as shown in Figure 17b. Our appealing visual results demonstrate that PixArt-\(\alpha\) can generate images of exceptional quality and has a strong capability for customized extension.

**ControlNet.** Following the general design of ControlNet (Zhang et al., 2023), we freeze each DiT block and create a trainable copy, augmented with two zero-initialized linear layers before and after it. The control signal \(c\) is obtained by applying the same VAE to the control image and is shared among all blocks. For each block, we process the control signal \(c\) by first passing it through the first zero linear layer, adding it to the layer input \(x\), and then feeding the sum into the trainable copy and the second zero linear layer. The processed control signal is then added to the output \(y\) of the frozen block, which is obtained from the input \(x\). We trained the ControlNet on HED (Xie and Tu, 2015) signals using a learning rate of 5e-6 for 20,000 steps. As depicted in Figure 18, when provided with a reference image and control signals such as edge maps, we leverage various text prompts to generate a wide range of high-fidelity and diverse images. Our results demonstrate the capacity of PixArt-\(\alpha\) to yield personalized extensions of exceptional quality.
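The per-block wiring just described translates almost directly into code. The sketch below is our own rendering of that description (module names and the generic `cond` keyword arguments are ours), not the released implementation:

```python
import copy
import torch.nn as nn

def zero_linear(dim: int) -> nn.Linear:
    """Linear layer initialized to zero, so it is a no-op at step 0."""
    layer = nn.Linear(dim, dim)
    nn.init.zeros_(layer.weight)
    nn.init.zeros_(layer.bias)
    return layer

class ControlledBlock(nn.Module):
    """A frozen DiT block plus its trainable ControlNet-style copy."""

    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.frozen = block
        for p in self.frozen.parameters():
            p.requires_grad_(False)
        self.copy = copy.deepcopy(block)  # trainable copy of the block
        self.zero_in = zero_linear(dim)   # first zero linear layer
        self.zero_out = zero_linear(dim)  # second zero linear layer

    def forward(self, x, c, **cond):
        # y: output of the frozen block for the ordinary input x.
        y = self.frozen(x, **cond)
        # The copy sees x + zero1(c); its output passes through zero2
        # and is added to y, so control starts as an exact no-op.
        h = self.copy(x + self.zero_in(c), **cond)
        return y + self.zero_out(h)
```

Because both zero layers start at zero, the controlled model is initially identical to the frozen base model, and the control branch is learned gradually during fine-tuning.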
### A.10 Discussion on Transformer _vs._ U-Net

The Transformer-based network's superiority over convolutional networks has been widely established in various studies, showcasing attributes such as robustness (Zhou et al., 2022; Xie et al., 2021), effective modality fusion (Girdhar et al., 2023), and scalability (Peebles and Xie, 2023). The findings on multi-modality fusion are consistent with our observations in this study when comparing against the CNN-based generator (U-Net). For instance, Table 3 illustrates that our model, PixArt-\(\alpha\), significantly outperforms prevalent U-Net generators in terms of compositionality. This advantage is not solely due to the high-quality alignment achieved in the second training stage but also to the multi-head attention-based fusion mechanism, which excels at modeling long dependencies. This mechanism effectively integrates compositional semantic information, guiding the generation of visual latent vectors more efficiently and producing images that closely align with the input texts. These findings underscore the unique advantages of Transformer architectures in effectively fusing multi-modal information.

### A.11 Limitations & Failure cases

In Figure 19, we highlight the model's failure cases in red text and yellow circles. Our analysis reveals the model's weaknesses in accurately controlling the number of targets and handling specific details, such as features of human hands. Additionally, the model's text generation capability is somewhat weak due to the limited number of font- and letter-related images in our data. We aim to explore these unresolved issues in the generation field, enhancing the model's abilities in text generation, detail control, and quantity control in the future.

### A.12 Unveiling the answer

In Figure 7, we present a comparison between PixArt-\(\alpha\) and Midjourney and conceal the correspondence between images and their respective methods, inviting readers to guess. Finally, in Figure 21, we unveil the answer. It is difficult to distinguish between PixArt-\(\alpha\) and Midjourney, which demonstrates PixArt-\(\alpha\)'s exceptional performance.

Figure 7: Comparisons with Midjourney. The prompts used here are randomly sampled online. To ensure a fair comparison, we select the first result generated by both models. _We encourage readers to guess which image corresponds to Midjourney and which corresponds to PixArt-\(\alpha\). The answer is revealed at the end of the paper._

Figure 8: Comparisons of PixArt-\(\alpha\) with recent representative generators: Stable Diffusion XL, DeepFloyd, DALL-E 2, ERNIE-ViLG 2.0, and RAPHAEL. They are given the same prompts as in RAPHAEL (Xue et al., 2023b), where the words that the human artists yearn to preserve within the generated images are highlighted in red. The specific prompts for each row are provided at the bottom of the figure. Better zoom in 200%.

Figure 9: The prompts (Xue et al., 2023b) for each column are given in the figure. We give comparisons between DALL-E 2, Midjourney v5.1, Stable Diffusion XL, ERNIE-ViLG 2.0, DeepFloyd, and RAPHAEL. They are given the same prompts, where the words that the human artists yearn to preserve within the generated images are highlighted in red. Better zoom in 200%.

Figure 10: We present auto-labeling with custom prompts for LAION (left) and SAM (right). The words highlighted in green represent the original caption in LAION, while those marked in red indicate the detailed captions labeled by LLaVA.

Figure 11: Examples from the SAM dataset with LLaVA-produced labels. The detailed image descriptions in LLaVA captions can aid the model in grasping more concepts per iteration and boost text-image alignment efficiency.
Figure 12: The samples generated by PixArt-\(\alpha\) demonstrate outstanding quality, marked by an exceptional level of fidelity and precision in aligning with the given textual descriptions. Better zoom in 200%.

Figure 13: The samples generated by PixArt-\(\alpha\) demonstrate outstanding quality, marked by an exceptional level of fidelity and precision in aligning with the given textual descriptions. Better zoom in 200%.

Figure 14: The samples generated by PixArt-\(\alpha\) demonstrate outstanding quality, marked by an exceptional level of fidelity and precision in aligning with the given textual descriptions. Better zoom in 200%.

Figure 15: PixArt-\(\alpha\) is capable of generating images with resolutions of up to \(1024\times 1024\) while preserving rich, complex details. Additionally, it can generate images with arbitrary aspect ratios, providing flexibility in image generation.

Figure 16: **Prompt mixing:** PixArt-\(\alpha\) can directly manipulate the image style with text prompts. In this figure, we generate five outputs using the styles to control the objects. For instance, the second picture of the first sample, located at the left corner of the figure, uses the prompt "pixelArt of the black hole in the space". Better zoom in 200%.

Figure 17: PixArt-\(\alpha\) can be combined with DreamBooth. Given a few images and text prompts, PixArt-\(\alpha\) can generate high-fidelity images that exhibit natural interactions with the environment (17a) and precise modification of object colors (17b), demonstrating that PixArt-\(\alpha\) can generate images of exceptional quality and has a strong capability for customized extension.

Figure 18: ControlNet customization samples from PixArt-\(\alpha\). We use the reference images to generate the corresponding HED edge images and use them as the control signal for the PixArt-\(\alpha\) ControlNet. Better zoom in 200%.

Figure 19: Instances where PixArt-\(\alpha\) encounters challenges include situations that necessitate precise counting or accurate representation of human limbs. In these cases, the model may face difficulties in providing accurate results.

Figure 20: (a) FID _vs._ CLIP score for different cfg scales sampled from [1.5, 2.0, 3.0, 4.0, 5.0, 6.0]. PixArt-\(\alpha\) shows slightly better performance than SDv1.5 on MSCOCO. (b) and (c) demonstrate the ability of PixArt-\(\alpha\) to maintain robustness across various cfg scales on T2I-CompBench.

Figure 21: This figure presents the **answers** to the image generation quality assessment depicted in Appendix A.2. The method used for each pair of images is annotated at the top-left corner.
2309.12141
Guided rewiring of social networks reduces polarization and accelerates collective action
Global challenges like climate change may be considered as collective action problems that require sufficient cooperation with pro-mitigation norms, soon enough to be effective. Socio-political polarization is a barrier to collective action. Prior agent-based models of behavioural change on structured networks in a shared socio-political environment have shown that polarization emerges naturally in such systems and that the speed of consensus formation is limited by the rate at which polarized clusters can be dissolved. Here we study how guided social link rewiring affects the speed of network depolarization. We investigate rewiring algorithms representing random meetings, introduction by mutual acquaintances, and bridging between socially distant communities. We find that building lasting links between polarized individuals and communities can accelerate consensus formation when the sociopolitical environment is favourable. This strengthens the evidence that promoting connection between polarized communities could accelerate collective action on urgent global challenges.
Lilli Frei, Jordan Everall, Andrew K. Ringsmuth
2023-09-21T15:02:24Z
http://arxiv.org/abs/2309.12141v1
# Guided rewiring of social networks reduces polarization and accelerates collective action

###### Abstract

Global challenges like climate change may be considered as collective action problems that require sufficient cooperation with pro-mitigation norms, soon enough to be effective. Socio-political polarization is a barrier to collective action. Prior agent-based models of behavioural change on structured networks in a shared socio-political environment have shown that polarization emerges naturally in such systems and that the speed of consensus formation is limited by the rate at which polarized clusters can be dissolved. Here we study how guided social link rewiring affects the speed of network depolarization. We investigate rewiring algorithms representing random meetings, introduction by mutual acquaintances, and bridging between socially distant communities. We find that building lasting links between polarized individuals and communities can accelerate consensus formation when the socio-political environment is favourable. This strengthens the evidence that promoting connection between polarized communities could accelerate collective action on urgent global challenges.

**Keywords:** social network, common-pool resource, collective action, polarization, link rewiring

## 1 Introduction

Many societal challenges are social dilemmas in which individual rationality leads to irrational large-scale behaviour [1]. Examples include common-pool resource (CPR) management challenges such as deforestation, the depletion of carbon budgets and loss of biodiversity. Evidence of successful bottom-up CPR management shows that individuals can overcome myopic strategies that lead to resource depletion and limit their resource usage to a sustainable level [2, 3]. Such bottom-up management is more likely to succeed for small-scale CPRs that have clear boundaries and enforceable access rules but becomes more difficult for resources that are large relative to the scale of resource extraction by individuals [4]. Especially for global challenges like climate change, it is difficult to define and monitor clear boundaries, such as national or per-capita emissions permits [5]. The mismatch of scales between the large CPR and comparatively small extractions by individuals may make it difficult for individuals to perceive the resource's response to their actions. Without obvious feedback, individuals might not realize the need for restraint until critical thresholds have been crossed [6].

While formal institutions such as state governments or supranational institutions have the power to alter structures such that conservation strategies become individually rational, they depend on social support to ensure successful implementation and avoid backlash [7]. Without individual behaviour obviously threatening the resource, however, finding social support for conservation policies is challenging [8]. Although the urgency of global socio-ecological challenges such as climate action and pandemic containment has become clearer [9], governments struggle to enforce policies aimed at tackling them through collective action [10, 11, 12]. Rather, the opposite is the case; research shows that political and social polarization around such global challenges is high [13, 14, 15].
Empirical studies have found that high levels of political and social polarization can significantly inhibit collective action [11, 13, 16] and suggest that dissolving polarization could promote efficient policy-making and restore a functioning democracy [11, 12, 16]. In general, a polity is polarized when dominated by two opposite or contradictory tendencies that lack significant common ground [17]. This usually also involves increased support for extreme political views relative to more moderate ones [15]. Since individuals' opinions are highly impacted by their social context [3, 18, 19], polarization is catalyzed by social structures that reinforce ideological separation by excluding external information [14, 20, 21].

It may be that information does not enter polarized clusters because of filters that impair its flow. Such filtering may be individuals' natural reaction to exposure to more information than they can process [21]. It may also be externally imposed through media control, systematic censorship or algorithmic filtering [20, 21]. Even if information does reach polarized clusters, it may be discredited through targeted manipulation of trust [21]. In both cases, social restructuring can help to provide members of polarized clusters with relevant information or to restore chains of trust to external information sources [21, 22]. In the latter case, building trust in social, political, and scientific institutions is crucial, as simply confronting members with outside information will not help to dissolve the polarized cluster and may even reinforce the value system under attack [16, 21]. These empirical results underscore the importance of careful engagement with different views. Consistent with this, previous modelling work analysing opinion dynamics has shown that the rate of depolarization is essentially determined by the frequency of interactions between different opinion clusters and the response assumed when opinions are exchanged [16, 23, 24]. These findings suggest that increasing opportunities for interaction between agents with dissimilar views through social rewiring might speed up depolarization and, accordingly, collective action.

In this paper, we test this hypothesis by extending a pre-existing agent-based model (ABM) on a social network. The ABM was introduced by Andersson et al. [24] and simulates the spread of a pro-conservation norm on a clustered scale-free social network embedded in a shared socio-political environment. Andersson et al. [24] used this framework to explore factors affecting the speed of collective action to conserve a large CPR. We extend this by adding link rewiring, which allows agents to alter their social environment by breaking links to neighbours and forming new links to non-neighbours. This allows an agent inside an opinion cluster to form links to agents outside the cluster and become exposed to dissimilar views. We aim to identify any dependence of the network depolarization rate on properties of link rewiring algorithms, and how this might be used to accelerate depolarization relative to static networks.

We test three rewiring algorithms (a sketch of the simulated network substrate follows this paragraph). First, purely random rewiring serves as a baseline scenario to analyse the effect of a variable network structure without further assumptions. Second, a 'local rewiring' algorithm is based on the current understanding of the processes by which social networks usually change: individuals are most likely to form new relationships with people who are similar and topologically close to them [20, 25, 26]. Finally, a 'bridge rewiring' algorithm incorporates recommendations for dissolving polarization from the literature and establishes links between differently thinking agents to increase their exposure to differing views [14, 20, 21].
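As a rough sketch of that substrate, a clustered scale-free network can be generated with networkx's Holme-Kim model and seeded with a slight defector majority. The generator choice, network size, and the 55/45 opinion split are illustrative assumptions, not the paper's exact Methods parameters:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Clustered scale-free network (Holme-Kim preferential attachment with
# triad formation); n, m, p are illustrative values.
G = nx.powerlaw_cluster_graph(n=1000, m=3, p=0.5, seed=0)

# Slight defector majority at t = 0: opinion -1 (defect) vs. +1 (cooperate).
for node in G.nodes:
    G.nodes[node]["opinion"] = rng.choice([-1, 1], p=[0.55, 0.45])
```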
Finally, 'bridge rewiring' that incorporates recommendations for dissolving polarization from the literature and establishes links between differently thinking agents to increase their exposure to differing views [14, 20, 21]. ## 2 Results To learn how rewiring algorithms affect depolarization we study the time evolution of the average agent cooperativity (hereafter simply 'cooperativity') and polarization. Parameters are defined in the Methods section. We quantify polarization by calculating the standard deviation of the cooperativity in each network realization and averaging these to obtain the polarization shown in the plots. ### Representative dynamics In Figure 1A-D we see that the qualitative stages of the dynamics originally observed by Andersson et al. for a static network [24] are maintained under dynamic network structures. In the first stage, the slight defector majority that is assumed to exist at \(t=0\) dominates and cooperativity declines. Agents with strong opinions spread them to their neighbours, resulting in growing opinion clusters that rapidly polarize the network. This is quantified by the increasing standard deviation of the cooperativity. Any interactions within the opinion clusters serve to reinforce opinions and, thus, polarization. Even without assuming bounded confidence, this cluster formation stage ends in a fully polarized state in which only extreme opinions are present. This corresponds to the maximum standard deviation. The next stage of the dynamics is a slower process of cluster dissolution (depolarization), where meaningful interactions take place only at the cluster perimeters, between agents holding opposing opinions. Here the assumption of a slightly positive external field, representing a socio-political environment favorable to resource conservation, is pivotal, as it slightly biases the interactions such that defector-to-cooperator conversion is more likely than cooperator-to-defector conversion. We refer to the time derivative of the cooperativity during the depolarization stage as the convergence rate. The depolarization stage can itself be divided into two substages. In the first, cooperativity grows approximately exponentially. In the second, the convergence rate slows, likely due to the shrinking clusters' finite sizes in combination with fluctuations [24].

Figure 1: **Time evolution of cooperativity and polarization for static and dynamic networks.** The standard deviation of cooperativity represents polarization. **A** The static network scenario serves as a benchmark. Cooperativity initially declines rapidly due to the predominance of defectors at \(t=0\). The slightly positive (pro-cooperation) external field drives the system towards full cooperation through a slower process of polarized cluster dissolution. **B** Representative dynamics under random rewiring. The legend refers to different probabilities for establishing and breaking links. Convergence is faster compared to the static network; this is matched by a faster decline in polarization. Different rewiring probabilities have minimal impact on convergence. **C** Representative dynamics under local rewiring. Convergence is slower relative to the static network. **D** Representative dynamics under bridge rewiring. Convergence is faster relative to the static network, which is matched by an earlier decline in polarization.
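The cooperativity and polarization statistics used throughout the plots are straightforward to compute from simulation output. A minimal sketch, assuming opinions are stored as an array of per-agent cooperativities for each realization (names are illustrative, not the authors' code):

```python
import numpy as np

def cooperativity_and_polarization(opinions):
    """opinions: array of shape (n_realizations, n_agents), each entry an
    agent's cooperativity. Returns (mean cooperativity, polarization), where
    polarization is the standard deviation of cooperativity within each
    realization, averaged over realizations, as described above."""
    cooperativity = opinions.mean(axis=1).mean()
    polarization = opinions.std(axis=1).mean()
    return cooperativity, polarization
```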
As social change depends more on achieving a cooperative majority than winning over the last few holdouts [24, 27, 28], we focus on the convergence rate within the exponential stage of cluster dissolution. ### Random rewiring We first study the effects of random (unguided) rewiring as a baseline for introducing dynamic network structure. Figure 1B compares the cooperativity and polarization dynamics under random rewiring with the dynamics in a static network. During the initial cluster formation stage, the two systems behave almost identically until the standard deviation reaches roughly half of its eventual maximum value. This is unsurprising since, in an unpolarized network, random rewiring does not on average introduce agents to opinions that are different from those in their local environment. However, once polarized clusters have formed, random rewiring allows agents inside opinion clusters to link with agents holding opposite opinions, effectively lengthening the cluster perimeters along which opinion change is possible. Since we assume a slightly positive external field, more frequent interactions between differently thinking agents cause a slightly earlier peak in polarization and then accelerate convergence towards a fully cooperative network. Concurrently, the rate of depolarization is also accelerated relative to the static network case. Random rewiring, however, cannot counteract the finite size effects that slow down convergence as full cooperation approaches. The similar effect of different rewiring probabilities suggests that convergence rates are insensitive to the exact rewiring probability chosen. ### Local rewiring We analyse how social rewiring dynamics commonly seen in real social networks affect the convergence rate through modified, local rewiring that limits potential new neighbours to those within two network steps. We further enforce a high level of homophily (see Methods). Figure 1C shows that this slows the convergence rate relative to random rewiring. Under local rewiring, links are established only with agents holding the same opinion, while links are broken with uniform probability across all neighbours. This increases the link density within opinion clusters and reduces inter-cluster links. The resulting decrease in interactions between agents holding different opinions lowers the chance of opinion change, which slows convergence. ### Bridge rewiring As recommended by [14, 20, 23], we implement an algorithm that guides the establishment of links between differently thinking agents. We limit the potential new neighbours to those outside agent \(i\)'s own cluster and enforce a high level of heterophily (see Methods). Figure 1D shows that this reduces the initial dip in cooperation and then accelerates convergence relative to random rewiring. This is likely due to a reduction in link density within clusters of similar opinion and a concurrent increase in link density between polarized clusters, resulting from agent \(i\) rewiring only to agents holding the opposite opinion while links are broken with uniform probability across all neighbours. Together with the assumed slightly positive external field, the resulting increased likelihood of cooperator-defector interaction reduces opinion reinforcement and extremization during the polarization stage and subsequently accelerates depolarization and convergence.
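The three algorithms differ only in how the candidate set for the new link is chosen. A minimal networkx sketch of one rewiring sweep, assuming binary opinions and full homophily/heterophily for the local and bridge variants (the uniform link-breaking rule follows the descriptions above; the rewiring probability and other details are illustrative, not the paper's exact implementation):

```python
import random
import networkx as nx

def rewire_step(G, opinion, mode, p_rewire=0.1):
    """One rewiring sweep. opinion maps node -> {-1, +1} (defector/cooperator);
    mode is 'random', 'local' or 'bridge'."""
    for i in list(G.nodes):
        if G.degree(i) == 0 or random.random() > p_rewire:
            continue
        neighbours = set(G.neighbors(i))
        if mode == "random":                 # any non-neighbour
            candidates = set(G.nodes) - neighbours - {i}
        elif mode == "local":                # homophilous friends-of-friends
            two_steps = {k for j in neighbours for k in G.neighbors(j)}
            candidates = {k for k in two_steps - neighbours - {i}
                          if opinion[k] == opinion[i]}
        else:                                # 'bridge': heterophilous non-neighbours
            candidates = {k for k in set(G.nodes) - neighbours - {i}
                          if opinion[k] != opinion[i]}
        if candidates:
            G.remove_edge(i, random.choice(sorted(neighbours)))  # break uniformly
            G.add_edge(i, random.choice(sorted(candidates)))     # form new link

# Example: bridge rewiring on a small scale-free network with random opinions.
G = nx.barabasi_albert_graph(50, 2, seed=1)
op = {n: random.choice([-1, 1]) for n in G.nodes}
rewire_step(G, op, mode="bridge")
```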
2309.14530
Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework
This study conducts a thorough examination of the research stream focusing on AI risks in healthcare, aiming to explore the distinct genres within this domain. A selection criterion was employed to carefully analyze 39 articles to identify three primary genres of AI risks prevalent in healthcare: clinical data risks, technical risks, and socio-ethical risks. The selection criterion was based on journal ranking and impact factor. The research seeks to provide a valuable resource for future healthcare researchers, furnishing them with a comprehensive understanding of the complex challenges posed by AI implementation in healthcare settings. By categorizing and elucidating these genres, the study aims to facilitate the development of empirical qualitative and quantitative research, fostering evidence-based approaches to address AI-related risks in healthcare effectively. This endeavor contributes to building a robust knowledge base that can inform the formulation of risk mitigation strategies, ensuring safe and efficient integration of AI technologies in healthcare practices. Thus, it is important to study AI risks in healthcare to build better, more efficient AI systems and mitigate risks.
Apoorva Muley, Prathamesh Muzumdar, George Kurian, Ganga Prasad Basyal
2023-09-25T21:09:21Z
http://arxiv.org/abs/2309.14530v1
# Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework ###### Abstract This study conducts a thorough examination of the research stream focusing on AI risks in healthcare, aiming to explore the distinct genres within this domain. A selection criterion was employed to carefully analyze 39 articles to identify three primary genres of AI risks prevalent in healthcare: clinical data risks, technical risks, and socio-ethical risks. The selection criterion was based on journal ranking and impact factor. The research seeks to provide a valuable resource for future healthcare researchers, furnishing them with a comprehensive understanding of the complex challenges posed by AI implementation in healthcare settings. By categorizing and elucidating these genres, the study aims to facilitate the development of empirical qualitative and quantitative research, fostering evidence-based approaches to address AI-related risks in healthcare effectively. This endeavor contributes to building a robust knowledge base that can inform the formulation of risk mitigation strategies, ensuring safe and efficient integration of AI technologies in healthcare practices. Thus, it is important to study AI risks in healthcare to build better, more efficient AI systems and mitigate risks. _Keywords: AI; healthcare; public health; medical science._ ## 1 Introduction Some individuals and organizations argue that the overall potential of AI in medicine has been largely overestimated, as there is a lack of concrete data demonstrating significant improvements in patient outcomes. This viewpoint raises skepticism about the widespread adoption and transformative power of medical AI. Additionally, experts have voiced concerns in recent years regarding potential adverse consequences of medical AI [1]. These concerns encompass various aspects, including clinical, technical, and socio-ethical risks. Such issues highlight the need for careful evaluation and regulation of AI technologies in the healthcare domain to ensure patient safety and to address any unintended negative impacts. While AI in healthcare holds promise and continues to advance, it is essential to critically assess its performance, potential benefits, and risks to make informed decisions about its integration into medical practice. As with any emerging technology, cautious and responsible implementation is key to unlocking its true potential while minimizing potential downsides [2]. In the literature, several main risks and challenges have been identified as likely to arise from the introduction of AI in future healthcare. These risks and challenges can be categorized into seven major categories:

1. Patient harm due to AI errors
2. Misuse of medical AI tools
3. Risk of bias in medical AI and perpetuation of inequities
4. Lack of transparency
5. Privacy and security issues
6. Gaps in AI accountability
7. Obstacles to implementation in real-world healthcare

Addressing these risks requires close collaboration between various stakeholders, including healthcare professionals, AI developers, policymakers, and ethicists. Ensuring robust evaluation, regulation, and continuous monitoring of AI systems is crucial to maximize the benefits of AI while minimizing potential negative impacts on patient care and healthcare delivery [3]. The study reviewed 39 articles from the last five years (2018 to 2023) and developed a study framework for the risk of AI in healthcare incorporating the seven major categories of AI risk. 
This study dives deep into each category to help readers understand the findings from the reviewed literature [4]. Overall, the study helps to summarize the recent findings and develops a review study framework. The aim of this study is to provide a clear and coherent framework for understanding and mitigating the risks of AI in healthcare. ## 2 Methods for Comprehensive Review This study represents a pioneering effort, employing a 5-year retrospective literature analysis of research papers addressing the topic of AI risks in healthcare. To select the relevant articles, the criteria focused on the presence of the term "Risk of AI in healthcare/medical science/medicine/biomedical science" in the title, abstract, or keywords. Through this meticulous search process, a total of 39 journal articles were identified, subsequently read, coded, and categorized according to specific themes and classifications detailed in Table 1 from the Appendix. While a majority of the research on AI risks in healthcare explicitly adopts a theoretical framework, these frameworks primarily rely on real-world medical cases derived from medical data. To better comprehend the diverse range of AI risks, the study classified them into three main genres: clinical data risks, technical risks, and socio-ethical risks, as illustrated in Fig. 1. Moreover, each genre was further sub-categorized to facilitate a comprehensive understanding and examination of AI risks, given their variability across different scenarios. The study delves deeply into each genre of AI risks, providing a detailed and insightful explanation of their implications and significance in the healthcare context. ## 3 Clinical Risks ### 3.1 Patient Harm Due to AI Errors In spite of continuous advancements in data availability and machine learning, AI-driven clinical solutions in healthcare may lead to failures, posing potential safety concerns for healthcare service users. Even with AI developers having access to extensive, high-quality datasets for training their AI technologies, there remain at least three major sources of error in AI implementation within clinical practice [22]. Firstly, the accuracy of AI predictions can be significantly affected by noise in the input data at the time the AI tool is used [6]. Secondly, AI misclassifications may occur due to dataset shift, a common machine learning issue where the statistical distribution of data used in clinical practice deviates, even slightly, from the original dataset used for AI training [31][1]. This shift could arise from differences in population groups, acquisition protocols among hospitals, or the use of machines from various manufacturers [39]. For instance, a multi-center study in the United States developed a highly accurate AI system for diagnosing pneumonia based on data from two hospitals [3]. However, when tested with data from a third hospital, a considerable decrease in accuracy was observed, indicating potential hospital-specific biases. Lastly, predictions can be prone to errors because AI algorithms struggle to adapt to unexpected changes in the environment and context of their application. For example, in medical imaging, the AI model might mistake regular artifacts for observational errors, leading to false positives. ### 3.2 Remedies to Prevent Patient Harm due to AI Errors To ensure the safe and effective use of AI solutions in healthcare, several key steps must be taken. 
First and foremost, standardized methods and procedures should be established for extensive evaluation and regulatory approval of AI technologies [40]. This evaluation process should specifically assess the ability of AI solutions to generalize to new populations and their sensitivity to noise. Secondly, AI algorithms should be designed and implemented as assistive tools rather than fully autonomous systems [40]. This means that clinicians should remain an integral part of the data processing workflow, enabling them to detect and report potential errors and contextual changes. By doing so, the aim is to minimize any potential harm to patients. In addition to being designed as assistive tools, future AI solutions in healthcare must be dynamic. They should be equipped with mechanisms that enable them to continuously learn from new scenarios and mistakes detected in real-world practice [40]. However, it's important to strike a balance and maintain a certain degree of human control and vigilance to promptly identify and address any emerging problems. This may lead to increased costs and initially reduce some of the immediate benefits of AI implementation. To support these dynamic AI systems, infrastructural and technical developments will be necessary to facilitate regular AI updates based on past and new training data [40]. Moreover, policies should be implemented to ensure seamless integration of these mechanisms into healthcare settings, thereby fostering a safer and more efficient use of AI in the medical field.

Figure 1: Risk of AI in healthcare literature review study framework

Figure 2: Patient harm due to AI errors

## 4 Technical Risks ### 4.1 Misuse of Medical AI Tools Like any health technology, medical AI carries the risk of human error and misuse. Even when AI algorithms are accurate and robust, their proper use in practice by end-users, including clinicians, healthcare professionals, and patients, is crucial [28]. Incorrect usage can lead to erroneous medical assessments and decisions, potentially harming the patient. Therefore, it is not sufficient for clinicians and the public to have access to medical AI tools; they must also understand how and when to use these technologies. Several factors make existing medical AI technologies susceptible to human error or incorrect use [32]. Often, these technologies have been designed by computer/data scientists with limited involvement from end-users and clinical experts. Consequently, users, such as clinicians, nurses, data managers, or patients, are required to learn and adapt to the new AI technology, leading to complex interactions and experiences. This complexity can hinder the effective application of AI algorithms in day-to-day clinical practice, reducing the potential for informed decision-making and increasing the likelihood of human error. These concerns about AI education and literacy also extend to citizens and patients who will use future medical AI solutions. Another potential cause for misuse of medical AI, resulting in harm to citizens and patients, is the proliferation of easily accessible AI applications [24]. While such tools offer convenient options for remote diagnosis and disease monitoring, there is often limited information about how the AI algorithms have been developed and validated, and their reliability and clinical efficacy may not be adequately demonstrated. This situation is reminiscent of easily accessible online pharmacies contributing to medication abuse by citizens, raising public health concerns. 
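The generalization checks called for in Section 3.2 can be made concrete with a simple distributional screen for the dataset shift described in Section 3.1. A minimal sketch (the feature name and the choice of a Kolmogorov-Smirnov test are illustrative assumptions, not a clinically validated procedure):

```python
import numpy as np
from scipy.stats import ks_2samp

def screen_for_dataset_shift(train_site, deploy_site, alpha=0.01):
    """Flag features whose distribution at a deployment site differs
    significantly from the training site, one simple screen for the
    dataset-shift error source discussed in Section 3.1."""
    flagged = []
    for name, train_values in train_site.items():
        stat, p_value = ks_2samp(train_values, deploy_site[name])
        if p_value < alpha:
            flagged.append((name, round(stat, 3)))
    return flagged  # any hits warrant re-validation before clinical use

# Hypothetical example: an imaging intensity feature whose acquisition
# protocol differs between hospitals.
rng = np.random.default_rng(0)
print(screen_for_dataset_shift(
    {"mean_intensity": rng.normal(100, 10, 500)},
    {"mean_intensity": rng.normal(110, 10, 500)},
))
```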
Since the development and commercialization of AI-powered web/mobile health applications offer significant financial opportunities, this sector attracts many players and companies with varying standards of ethics, excellence, and quality. While companies offering these AI medical tools may include disclaimers on their websites, users may not always come across, read, or fully understand these disclaimers. As a result, they might rely on potentially incorrect information and diagnoses provided by the AI tools, adversely impacting their health-related decision-making process. #### 4.1.1 Remedies to prevent misuse of AI medical tools To ensure the successful and responsible integration of AI in healthcare, it is essential to involve end-users such as healthcare professionals, specialists, technicians, and patients closely in the design and development of AI solutions. This collaborative approach ensures that their perspectives, preferences, and real-world contexts are integrated into the final tools that will be deployed and used. Moreover, it is crucial to develop and generalize education and literacy programs on AI and medical AI across education circles and society. These programs will enhance the knowledge and skills of future AI end-users, reducing the likelihood of human error and promoting the effective use of AI technologies in healthcare. Public agencies should also play a vital role in regulating the sector of web/mobile medical AI [40]. Through proper regulation and oversight, citizens can be well-informed and protected against the misuse and abuse of easily accessible AI technologies. Transparent guidelines and standards are necessary to maintain public trust and safeguard patient interests in the rapidly evolving field of AI in healthcare. ### 4.2 Risk of Bias in Medical AI and Perpetuation of Inequities Despite significant advancements in medical research and healthcare delivery, inequalities and inequities persist within most countries worldwide. These disparities are influenced by various factors, including sex/gender, age, ethnicity, income, education, and geography [9]. Some of these inequities stem from systemic issues, such as socioeconomic differences and discrimination, while human biases also contribute significantly [27]. An illustrative example of bias is evident in research conducted in the United States, which revealed that doctors may not take complaints of pain from Black patients as seriously or respond to them as quickly as they do for their White counterparts. This disparity persists in varying degrees in many countries worldwide. Another example of common bias embedded in healthcare systems is gender-based discrimination [20]. Studies in pain management have highlighted how female patients' reports of pain may be dismissed as psychological or disregarded, indicating a concerning gender bias.

Figure 3: Misuse of AI medical tools

These biases and inequalities in medical care underscore the importance of addressing systemic issues and human biases in healthcare systems globally [29]. By recognizing and actively working to eliminate these disparities, we can strive towards a more equitable and just healthcare environment for all individuals, regardless of their background or characteristics. There are growing concerns that if not implemented, evaluated, and regulated properly, future AI solutions could embed and possibly amplify the systemic disparities and human biases contributing to healthcare inequities. 
Several examples of algorithmic biases have already received significant attention in recent years, some of which are outlined below. One prominent form of bias is algorithmic bias based on demographic factors such as race, gender, or age [35]. For instance, certain AI algorithms might inadvertently discriminate against specific population groups due to biased data used for training. Geographic bias is another type of bias that appears in datasets [26]. It occurs when AI models are trained on data that primarily represent specific regions or healthcare environments, leading to disparities in care for individuals from other geographic areas. Moreover, bias in data labeling during clinical assessment can impact AI model training and predictions. If certain groups have historically been misdiagnosed or stigmatized, this bias can be reflected in the data used to train AI models, potentially perpetuating disparities in healthcare. Interestingly, some healthcare conditions, such as injuries, poisonings, congenital malformations, and infectious diseases, exhibit discrepancies that cannot be explained by anatomical or genetic differences. If health registries' data labels are affected by these disparities, AI models might inadvertently learn and perpetuate the biases present in the data [11]. To address these concerns, it is crucial to take steps towards ensuring fairness, transparency, and equity in AI development and deployment in healthcare. Robust evaluation of AI algorithms for bias and the establishment of ethical guidelines and regulations can help mitigate these potential biases and promote more equitable healthcare outcomes for all individuals.

Figure 4: Risk of Bias

#### 4.2.1 Remedies to prevent risk of bias in medical AI and perpetuation of inequities To ensure fairness and reduce biases in AI development for healthcare, collaboration between AI developers, clinical experts, healthcare professionals, and various stakeholders is essential. Data selection and labeling should be thoughtfully done, with a focus on representing diverse attributes like sex/gender, age, socioeconomics, ethnicity, and geographic location. Building interdisciplinary development teams that include data scientists, biomedical researchers, social scientists, biomedical ethicists, public health experts, patients, and citizens can result in more inclusive and equitable AI tools [40]. Adequate representation of diverse backgrounds and needs through community engagement fosters the creation of AI solutions that genuinely serve the population they are designed for. Transparency and explainability in AI models enable developers to understand the reasoning behind the model's decisions, making it easier to identify and address potential biases [40]. Continuous monitoring and evaluation of AI models in real-world healthcare settings are necessary to identify and rectify any biases that may arise over time, ensuring the AI tools remain fair and unbiased. By adhering to these principles, AI developers can work towards creating healthcare AI solutions that are more representative, fair, and equitable, contributing to the advancement of inclusive healthcare practices. ### 4.3 Privacy and Security Issues The increasing development of AI solutions and technology in healthcare, particularly evident during the COVID-19 pandemic, brings potential risks to data privacy, confidentiality, and patient and citizen protection [21]. 
These risks include the exposure and misuse of sensitive data, which may violate individual rights and lead to non-medical use of patient data. A critical aspect of these issues is informed consent, which involves providing patients with sufficient information to make informed decisions about sharing their personal health data. Informed consent is integral to the patient's healthcare experience and is rooted in ethical principles like protection from harm, respect for autonomy, and privacy rights [23]. However, the introduction of opaque AI algorithms and complex informed consent forms can limit patient autonomy and shared decision-making with physicians [34]. Patients may struggle to understand the data-sharing process and their options for opting out. Big data research and digital platform-based health data research pose additional challenges as patients may not fully comprehend the extent of data sharing and reuse [16][13]. Moreover, the use of AI in healthcare introduces data security risks, with potential breaches leading to privacy violations and identity theft. Cyberattacks on AI systems and personal medical devices controlled by AI also pose serious concerns, highlighting vulnerabilities in the technology [10][12]. Addressing these issues requires comprehensive efforts to enhance transparency, provide clear and accessible information to patients, and strengthen data security measures [19]. Ethical considerations and robust safeguards are essential to ensure the responsible and secure integration of AI in healthcare while safeguarding patient rights and privacy.

Figure 5: Privacy & security issues

#### 4.3.1 Remedies to privacy and security issues To tackle these critical issues, it is crucial to increase awareness and knowledge about privacy and security risks, informed consent, and cybersecurity. Additionally, regulatory frameworks should be expanded to address accountability and protect individuals from data breaches and data repurposing. Encouraging decentralized and federated approaches to AI can leverage clinical data without compromising its security [40]. Research should be prioritized to enhance security in cloud-based systems and protect AI algorithms from cyberattacks, ensuring the safe and responsible use of AI in healthcare. Collaboration among stakeholders, including researchers, policymakers, and healthcare professionals, is essential to address these challenges and create a trustworthy AI-driven healthcare environment. ### 4.4 Obstacles to Implementation in Real-World Healthcare Over the past five years, numerous medical AI algorithms have been developed and proposed, covering a wide range of medical applications. Despite being well-validated, clinically robust, safe, and ethically compliant, the implementation and integration of medical AI technologies in healthcare face various challenges in the real world. Healthcare professionals have traditionally been slower to adopt new technologies compared to other fields. The implementation phase is a crucial stage in the innovation process, and it requires more than just inventing and testing the AI technology. One significant obstacle to medical AI implementation is the quality of electronic health data in real-world practice. Medical data is often unstructured and noisy, and existing datasets may not be suitable for AI algorithms. 
Moreover, different clinical centers and EU member states may have varying formats and data quality, necessitating costly human revision, quality control, cleaning, and relabeling before AI tools can be effectively used on a large scale. To address this, efforts should be made to promote better re-use of diverse health data types, such as electronic health records, genomics data, and patient registries across EU countries, thereby benefiting emerging AI algorithms. AI technologies also have the potential to alter the dynamics of the clinician-patient relationship in ways that are not yet fully understood. Communicating information about AI-derived risks of developing illnesses, like cancer or dementia predisposition, raises personal and ethical implications that need careful consideration. Updating clinical guidelines and care models will be necessary to account for the AI-mediated interactions between healthcare workers and patients, as the introduction of AI technology into everyday practice will have practical, technical, and clinical implications for both clinicians and patients. ### 4.5 Lack of Transparency Despite significant advancements in medical AI, many individuals and experts still perceive existing algorithms as complex and difficult to comprehend, leading to challenges in fully trusting and adopting these technologies [37]. The lack of transparency is a prominent issue in the development and use of current AI tools in healthcare. This lack of transparency is particularly concerning in sensitive areas like medicine and healthcare, where the well-being and health of citizens are at stake [17]. As a consequence, there is a considerable lack of trustworthiness associated with AI, especially in the medical domain. The limited trustworthiness is expected to have a significant impact on the adoption of emerging AI algorithms by patients, clinicians, and healthcare systems [30]. AI transparency is closely associated with traceability and explainability. These concepts represent two distinct levels of transparency required in AI applications. First, there is the transparency of the AI development and usage processes, which refers to traceability [38]. Second, there is the transparency of AI decisions, known as explainability. Both traceability and explainability are crucial in instilling trust and confidence in AI systems used in healthcare. Traceability is a crucial aspect of ensuring trustworthy AI systems [14]. It involves transparently documenting the entire AI development process, from its creation to its real-world performance after deployment. This includes tracking various aspects:

1. Model Details: Documenting specific information about the AI model, such as its intended use, type of algorithm or neural network, hyper-parameters, and any pre- and post-processing steps applied [25].
2. Training and Validation Data: Maintaining a complete account of the data used to train and validate the AI model, including the data gathering process, data composition, acquisition protocols, and data labeling methods [33].
3. AI Tool Monitoring: Keeping track of AI tool performance metrics, instances of failures, and periodic evaluations to assess its effectiveness and potential limitations [36].

Unfortunately, in the practical implementation of existing AI tools in healthcare, full traceability is often lacking. 
Companies may choose not to disclose comprehensive information about their algorithms, leading to the delivery of opaque tools that are difficult for independent parties to understand and examine. This lack of transparency reduces the level of trust in these AI tools and hinders their adoption into real-world practice. Enhanced traceability is necessary to build confidence and foster broader acceptance of AI in healthcare by ensuring transparency, accountability, and reliability. AI explainability plays a critical role in providing transparency for each AI prediction and decision. However, some AI solutions, particularly deep neural networks, lack transparency and are often referred to as 'black box AI.' This term reflects the complexity of these models, which learn intricate functions that are challenging for humans to comprehend, making their decision-making processes invisible and difficult to understand [18][8]. The absence of transparency poses significant challenges for clinicians and other stakeholders when incorporating AI solutions into real-world practice. Clinicians need to understand the underlying principles behind each AI decision or prediction to confidently work with specific AI tools, even if these algorithms have the potential to enhance their productivity. Moreover, the lack of explainability hampers the ability to pinpoint the source of AI errors and assign responsibilities when issues arise. Identifying the root cause of errors becomes challenging due to the opacity of the AI models. To address these limitations, efforts are being made to develop AI explainability techniques that shed light on the decision-making processes of complex AI models. Explainable AI is critical to build trust, facilitate integration into real-world practice, and enhance accountability in the deployment of AI solutions in various domains, including healthcare.

Figure 6: Lack of transparency

#### 4.5.1 Remedies to prevent lack of transparency Several avenues are available to enhance the transparency of AI technologies in healthcare. Firstly, implementing an 'AI passport' for each AI algorithm can document essential information about the model, promoting understanding and transparency [40]. Secondly, developing traceability tools would enable monitoring the AI algorithms' usage after deployment, recording errors and performance degradation, and facilitating periodic audits. Thirdly, involving clinical end-users from the beginning of the development process would ensure the selection of appropriate explainability approaches and acceptance in clinical practice. Lastly, regulatory entities can encourage transparency by considering traceability and explainability as prerequisites for certification, fostering accountability and informed decision-making. These efforts aim to create more transparent and trustworthy AI solutions in healthcare. ### 4.6 Gaps in AI Accountability The term 'algorithmic accountability' has gained significance in addressing the legal implications of AI algorithms' introduction and use in various aspects of human life [25]. Contrary to what the term might imply, it emphasizes that algorithms are a combination of machine learning and human design, and any errors or wrongdoings originate from the humans involved in their development, introduction, or use. AI systems themselves cannot be held morally or legally responsible. In the context of medical AI, accountability is crucial for its acceptance, trustworthiness, and future adoption in healthcare. 
If clinicians are held responsible for AI-related medical errors despite not designing the algorithms, they may be hesitant to adopt these AI solutions [33]. Similarly, patients and citizens may lose trust if they believe no one can be held accountable for the potential harm caused by AI tools. The novel nature of medical AI and the lack of legal precedent lead to ambiguity regarding responsibilities for AI-related medical errors that could harm patients. The rapidly evolving field of medical AI poses challenges for regulators, policymakers, and legislators, requiring adaptations to address accountability and liability in AI-assisted healthcare. Challenges in applying current law and liability principles to emerging AI applications in medicine include the involvement of multiple actors, making it difficult to identify responsibilities among AI developers, data managers, clinicians, patients, healthcare organizers, etc. Additionally, determining the precise cause of AI-related medical errors is challenging, as it can stem from the algorithm, training data, or incorrect use in clinical practice [36]. Moreover, the existence of multiple governance frameworks and the absence of unified ethical and legal standards in AI industries add complexity to the situation. The introduction of AI tools in healthcare expands the patient-clinician dynamic, involving various actors like AI developers, researchers, and manufacturers in medical decision-making [18]. This complexity further contributes to the challenge of assigning accountability. While medical professionals are under regulatory responsibility and may lose their license for not being able to account for their actions, AI developers and technologists typically adhere to ethical codes [8]. The vagueness and limited enforceability of these codes have drawn criticism. In conclusion, addressing algorithmic accountability in medical AI requires clear regulations, unified standards, and transparent accountability frameworks involving all stakeholders to ensure responsible and trustworthy deployment of AI in healthcare.

Figure 7: Gaps in AI accountability

#### 4.6.1 Remedies to prevent gaps in AI accountability To address the current lack of accountability in medical AI, it is important to establish processes for identifying the roles of AI developers and clinical users when AI-assisted medical decisions result in harm to individuals. Additionally, creating dedicated regulatory agencies for medical AI can help develop and enforce frameworks that hold specific actors, including AI manufacturers, accountable for their actions. These measures will promote transparency, trust, and responsible practices in the use of AI in healthcare [40]. ## 5 Conclusion In conclusion, this comprehensive literature review has meticulously examined 39 articles focusing on the risks of AI in healthcare. Through a systematic analysis of the literature, a robust framework has been developed, shedding light on three primary genres of AI risks: clinical data risks, technical risks, and socio-ethical risks. Delving deeper into each genre, the study explores various sub-genres, offering a nuanced understanding of the multifaceted challenges surrounding the implementation of AI in healthcare. By providing this detailed reference base, the article equips researchers, policymakers, and healthcare practitioners with valuable insights to foster empirical qualitative and quantitative research in the domain of AI risks in healthcare. 
This framework not only enhances our comprehension of the potential pitfalls associated with AI adoption but also serves as a crucial guide in designing effective risk mitigation strategies. As AI continues to evolve and permeate healthcare settings, it is imperative to acknowledge and address the inherent risks involved. This literature review study significantly contributes to the ongoing discourse on AI's role in healthcare by facilitating evidence-based decision-making, ensuring the responsible and safe integration of AI technologies in the healthcare landscape. Moving forward, the findings of this review can serve as a stepping stone for future investigations, urging scholars to delve deeper into specific sub-genres and explore emerging risks that may arise as AI applications advance. As technology progresses, so too should our understanding of the potential hazards and opportunities that AI presents in healthcare. By building upon this comprehensive study, researchers in AI healthcare can foster a more sustainable and patient-centric AI-driven healthcare system. ## Consent It is not applicable. ## Ethical approval It is not applicable. ## Competing interests Authors have declared that no competing interests exist.
2309.17011
Feature Interaction Aware Automated Data Representation Transformation
Creating an effective representation space is crucial for mitigating the curse of dimensionality, enhancing model generalization, addressing data sparsity, and leveraging classical models more effectively. Recent advancements in automated feature engineering (AutoFE) have made significant progress in addressing various challenges associated with representation learning, issues such as heavy reliance on intensive labor and empirical experiences, lack of explainable explicitness, and inflexible feature space reconstruction embedded into downstream tasks. However, these approaches are constrained by: 1) generation of potentially unintelligible and illogical reconstructed feature spaces, stemming from the neglect of expert-level cognitive processes; 2) lack of systematic exploration, which subsequently results in slower model convergence for identification of optimal feature space. To address these, we introduce an interaction-aware reinforced generation perspective. We redefine feature space reconstruction as a nested process of creating meaningful features and controlling feature set size through selection. We develop a hierarchical reinforcement learning structure with cascading Markov Decision Processes to automate feature and operation selection, as well as feature crossing. By incorporating statistical measures, we reward agents based on the interaction strength between selected features, resulting in intelligent and efficient exploration of the feature space that emulates human decision-making. Extensive experiments are conducted to validate our proposed approach.
Ehtesamul Azim, Dongjie Wang, Kunpeng Liu, Wei Zhang, Yanjie Fu
2023-09-29T06:48:16Z
http://arxiv.org/abs/2309.17011v2
# Feature Cognition Enhancement via Interaction-Aware Automated Transformation ###### Abstract Creating an effective representation space is crucial for mitigating the curse of dimensionality, enhancing model generalization, addressing data sparsity, and leveraging classical models more effectively. Recent advancements in automated feature engineering (AutoFE) have made significant progress in addressing various challenges associated with representation learning, issues such as heavy reliance on intensive labor and empirical experiences, lack of explainable explicitness, and inflexible feature space reconstruction embedded into downstream tasks. However, these approaches are constrained by: 1) generation of potentially unintelligible and illogical reconstructed feature spaces, stemming from the neglect of expert-level cognitive processes; 2) lack of systematic exploration, which subsequently results in slower model convergence for identification of optimal feature space. To address these, we introduce an interaction-aware reinforced generation perspective. We redefine feature space reconstruction as a nested process of creating meaningful features and controlling feature set size through selection. We develop a hierarchical reinforcement learning structure with cascading Markov Decision Processes to automate feature and operation selection, as well as feature crossing. By incorporating statistical measures, we reward agents based on the interaction strength between selected features, resulting in intelligent and efficient exploration of the feature space that emulates human decision-making. Extensive experiments are conducted to validate our proposed approach. 1 Footnote 1: University of Central Florida, Emails: [email protected], [email protected], [email protected] 2 Footnote 2: Portland State University, Email: [email protected] 3 Footnote 3: Arizona State University, Email: [email protected] The release code can be found in [https://github.com/ehtesam3154/InfRecon](https://github.com/ehtesam3154/InfRecon) ## 1 Introduction With the advent of deep AI, data representation generation has become a key step to the application of machine learning (ML) models. In this work, we investigate the problem of learning to reconstruct an optimal and interpretable feature representation space that enhances performance of a subsequent ML task (e.g., classification, regression) (**Figure 1**). Formally, given a set of original features, a prediction target, and the specific downstream objective, the goal is to automatically construct an ideal and explainable feature set for said ML task. Prior literature partially addresses this, starting with feature engineering [13, 16] to extract a transformed representation of the data. These methods tend to be labor-intensive and have limited ability to automate the extraction. The next relevant work is representation learning. These include factorization [10], embedding [11] and deep representation learning [27] - all of which focus on learning effective latent features. But these often lack interpretability, which limits their deployment in many application scenarios where both high predictive accuracy and a trustworthy understanding of the underlying factors are required. The final relevant work is learning-based feature transformation, which involves principal component analysis [3], traversal transformation graph-based feature generation [16], and sparsity regularization-based feature selection [14]. 
These methods are either deeply integrated into a specific ML model or totally irrelevant to it. In recent years, significant advancements have been made in automated feature engineering (AutoFE) [4, 26, 15]. These approaches aim to tackle the challenges associated with reducing the dependence on manual feature engineering in ML. Researchers have also emphasized the importance of ensuring a traceable representation space [26], while simultaneously ensuring the flexibility of the reconstructed representation space for any given predictor. Nevertheless, the mentioned works face two common issues. Classical feature engineering involves expert-driven feature extraction, whereas AutoFE mainly focuses on model-driven feature optimization.

Figure 1: We aim to iteratively reconstruct the feature space to uncover an optimal and explainable representation space for improved performance in a downstream ML task.

The problem is, while the generated higher-order features might help achieve better performance, many of them are incomprehensible to human observers (**Figure 2**). **Issue 1 (expert-level cognition):**_How can we guarantee that the representation space reconstruction yields human-understandable and meaningful features?_ Another issue observed is the statistical insignificance of interactions between lower-order features, leading to inefficient exploration of the feature space. This implies that the interactions between some features may have limited impact on the overall predictive performance and the exploration process may focus on feature combinations with minimal contribution to model improvement. **Issue 2 (systematic exploration):**_How can we ensure a methodical exploration of the feature space during representation space reconstruction for faster convergence?_ Our objective is to develop a fresh perspective to address these two well-known yet under-explored challenges to reconstructing an optimal and interpretable representation space. **Our Contributions: An Interaction-aware Reinforced Generation Perspective.** We approach representation space reconstruction through the lens of reinforcement learning (RL). We show that learning to reconstruct is achievable by an interactive process of nested feature generation and selection, where the former focuses on generating new meaningful and comprehensible features while the latter controls the size of the feature space. We emphasize that the human intuition and domain expertise in feature generation and selection can be formulated as machine-learnable policies. We demonstrate that the iterative sequential feature generation and selection can be generalized as an RL task. We find that by expanding the operational capabilities of RL agents and ensuring their statistical awareness, we increase the likelihood of generating interpretable and meaningful variables in the new representation space. Additionally, we demonstrate that we can enhance learning efficiency by rewarding the agents based on their level of human-like or statistical awareness. 
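To make this nested generate-then-select view concrete, the following minimal sketch uses random choices as stand-ins for the learned agent policies (the toy dataset, the three operators, and the greedy acceptance rule are illustrative assumptions; in the actual framework the choices are made by reinforcement-learned policies driven by reward signals):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
OPS = {"add": np.add, "mul": np.multiply, "abs_diff": lambda a, b: np.abs(a - b)}

def evaluate(X, y):  # downstream-task utility V_A
    return cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=3).mean()

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
best_X, best_score = X, evaluate(X, y)
for t in range(10):                                   # iterative reconstruction
    op = rng.choice(list(OPS))                        # stand-in: operation agent
    i, j = rng.choice(best_X.shape[1], size=2, replace=False)  # feature agents
    new_col = OPS[op](best_X[:, i], best_X[:, j]).reshape(-1, 1)
    X_new = np.hstack([best_X, new_col])              # generation step
    if X_new.shape[1] > 12:                           # selection controls set size
        X_new = SelectKBest(mutual_info_classif, k=12).fit_transform(X_new, y)
    score = evaluate(X_new, y)                        # reward signal
    if score >= best_score:                           # greedy stand-in for policy update
        best_X, best_score = X_new, score
print(f"accuracy {best_score:.3f} with {best_X.shape[1]} features")
```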
**Summary of Proposed Approach.** Based on our findings, we develop a comprehensive and systematic framework to learn a representation space reconstruction policy that can 1) **Goal 1: explainable explicitness:** provide traceable generation of new features while ensuring their interpretability to a human observer, 2) **Goal 2: self optimization with human-like and statistical awareness:** automatically generate an optimal feature set for a downstream ML task without prior domain knowledge while being aware in both a human-like and a statistical manner, 3) **Goal 3: enhanced efficiency and reward augmentation:** enhance the generation and exploration speed in a large feature space and augment the reward incentive signal to learn a clear policy. To accomplish Goal 1, we introduce an iterative strategy involving feature generation and selection which enhances interpretability by allowing us to assign semantic labels to new features and track the generation process. To achieve Goals 2 and 3, we break down the feature generation process into three distinct Markov Decision Processes (MDPs): one to select an operation and two to select meta features. To enhance the coordination and learning capabilities of the agents involved, we employ a hierarchical agent architecture that enables state sharing between agents and facilitates the development of improved selection policies. To avoid the generation of uninterpretable features, we design a model structure that can handle both numerical and categorical features. Additionally, we formulate reward functions to incentivize agents based on their selection of operations or feature type, thereby promoting human-level awareness. To ensure statistical awareness, we incorporate H-statistics [9]. It measures feature interaction strength by quantifying the extent to which the variation in the prediction of the target label depends on the crossing of selected features. This approach enhances efficiency in large feature spaces, providing clearer guidance for selection policies. ## 2 Definitions and Problem Formulation **Operation Set.** We apply mathematical operations to existing features to generate new ones. Previous AutoFE studies typically treat all feature columns as numerical and restrict the operation set to numerical operators. To effectively handle both numerical and categorical features, we extend our operation set \(\mathcal{O}\) to include 26 operators, providing a wider range of functionalities, such as "Combine", "GroupByThenMin", "Freq", etc. **Hierarchical Agent.** To address automated feature generation, we introduce a hierarchical agent structure with three agents: one operation agent and two feature agents. These agents collaborate in dividing the feature generation problem into two sub-problems: operation selection and candidate feature selection, working together by sharing state information.

Figure 2: One major drawback of existing AutoFE methods: generation of irrational features.

**Problem Statement.** Our research aims to learn an optimal and meaningful representation space for improved performance in a downstream ML task. 
Formally, given a dataset \(D=\langle\mathcal{F},y\rangle\) with a feature set \(\mathcal{F}\) and a target label \(y\), an operator set \(\mathcal{O}\), and a downstream ML task \(A\) (e.g., classification, regression), our goal is to automatically reconstruct an optimal and interpretable feature set \(\mathcal{F}^{*}\) that maximizes: \[\mathcal{F}^{*}=\operatorname*{argmax}_{\hat{\mathcal{F}}}(V_{A}(\hat{\mathcal{F}},y)) \tag{1}\] Here, \(\hat{\mathcal{F}}\) denotes a subset comprising combinations of the original feature set \(\mathcal{F}\) and the generated new features \(\mathcal{F}_{g}\). The new features \(\mathcal{F}_{g}\) are obtained by applying operation set \(\mathcal{O}\) to original feature set \(\mathcal{F}\) using a specific algorithmic structure. ## 3 Methodology An overview of our proposed framework, Interaction-aware Hierarchical Reinforced Feature Space Reconstruction (**InHRecon**), is illustrated in **Figure 3**. We initiate the process by classifying the feature space into two categories: categorical and numerical. This classification is based on the number of unique elements in each feature column. Our approach employs an operation-feature-feature strategy to combine two existing features at each step. The technical details of each of the remaining components are discussed below. ### Hierarchical Reinforced Feature Selection and Generation We devise a hierarchical RL agent structure framework for automated feature generation based on two key findings. **Firstly**, we emphasize that programming the optimal selection criteria for features and operations can be treated as machine-learnable policies, addressed by three learning agents. **Secondly**, we observe that the three selection agents operate in a hierarchical manner, with an interconnected and sequential decision-making process. Within each iteration, the agents divide the feature generation problem into sub-problems of selecting an operation and selecting feature(s). They make decisions sequentially, where the choices made by an upstream agent have an impact on the state of the environment for downstream agents. **Three utility metrics for reward quantification.** We propose three metrics to quantify feature usefulness (reward), and form three MDPs to learn three selection policies. _Metric 1: Validity of operation._ We ensure operation validity by considering the compatibility between the selected feature types and operations. For instance, applying a mathematical operator like 'sqrt' to a categorical feature such as 'Sex' is not appropriate. To address such instances, we hold the responsible agent accountable through this metric and encourage improved future feature choices. We use the notation \(U(f_{t}|o_{t})\) to represent the metric associated with a selected operation-feature pair \((o_{t},f_{t})\) at the \(t\)-th iteration. _Metric 2: Interaction strength of selected features._ To enhance statistical awareness and efficient exploration by agents, we consider how feature interaction influences the outcome of the target label. This interaction strength is quantified using H-statistics, the details of which are given in Section 3.2. _Metric 3: Downstream Task Performance._ We evaluate the effectiveness of the feature set in the downstream task, such as regression or classification, using a utility metric such as 1-RAE, Precision, Recall, or F1 score. **Learning Selection Policy for Operation and Features.** Leveraging these metrics, we develop three MDPs to learn three agent policies to select the best operation and feature(s).

Figure 3: Overview of the proposed framework. The feature classification step categorizes features into continuous and categorical types, along with an enhanced operation set. Hierarchical agents select an operation and two features, followed by statistically aware feature interaction to generate new features. Responsible agents are penalized for invalid operation-feature pairs. The updated feature set is evaluated in a downstream task. Feature selection is applied to control the feature set size, with iterations continuing until the feature set is optimized or a set limit is reached.
To ease the description, we adopt the \(t\)-th iteration as an example to illustrate the calculation process. **Learning Selection Policy for Operation Agent.** The operation agent iteratively picks the best operation from an operation set as a feature crossing tool. Its learning system includes: **i) State:** its state is an embedding of the generated feature set of the previous iteration. Let \(Rep\) be a state representation method (discussed in the next section); the state can be denoted by \(s_{t}^{o}=Rep(\mathcal{F}_{t-1})\), where \(\mathcal{F}_{t-1}\) is the current feature space observed by the agent. **ii) Action:** its action is the selected operation, denoted by \(a_{t}^{o}=o_{t}\). **iii) Reward:** its reward is the performance improvement on the downstream task, denoted by \(\mathcal{R}(s_{t}^{o},a_{t}^{o})=V_{A_{t}}-V_{A_{opt}}\), where \(V_{A_{t}}\) is the model performance after the \(t\)-th iteration with \(V_{A_{opt}}\) being the best performance achieved so far. **Learning Selection Policy for Feature Agent 1.** This agent iteratively selects the first meta feature. Its learning system includes: **i) State:** its state is the combination of \(Rep(\mathcal{F}_{t-1})\) and the vectorized representation of the operation selected by the operation agent, denoted by \(s_{t}^{1}=Rep(\mathcal{F}_{t-1})\oplus Rep(o_{t})\), where \(\oplus\) indicates the concatenation operation. **ii) Action:** its action is the first meta feature selected from the observed feature space, denoted by \(a_{t}^{1}=f_{t}^{1}\). **iii) Reward:** its reward is determined by the utility score of the selected feature and the improvement in the downstream task performance. The reward can be formulated as \(\mathcal{R}(s_{t}^{1},a_{t}^{1})=U(f_{t}^{1}|o_{t})+(V_{A_{t}}-V_{A_{opt}})\). **Learning Selection Policy for Feature Agent 2.** This agent selects the best meta feature 2. Its learning system includes: **i) State:** its state is the combination of \(Rep(\mathcal{F}_{t-1})\), \(Rep(f_{t}^{1})\) and the vectorized representation of the operation, denoted by \(s_{t}^{2}=Rep(\mathcal{F}_{t-1})\oplus Rep(f_{t}^{1})\oplus Rep(o_{t})\). **ii) Action:** its action is the meta feature 2 selected from the observed feature space, denoted by \(a_{t}^{2}=f_{t}^{2}\). **iii) Reward:** its reward is determined by the utility score of the selected feature, the interaction strength between features \(f_{t}^{1}\) and \(f_{t}^{2}\), \(H_{f_{t}^{1},f_{t}^{2}}\), measured by H-statistics, and the improvement in downstream task performance. We formulate the reward as \(\mathcal{R}(s_{t}^{2},a_{t}^{2})=U(f_{t}^{2}|o_{t})+H_{f_{t}^{1},f_{t}^{2}}+(V_{A_{t}}-V_{A_{opt}})\). **State Representation of Feature Space and Operation.** We propose to map the observed feature space to a vector for characterizing its state. 
**Solving the Optimization Problem.** During the iterative feature generation process, we train our agents to maximize the discounted cumulative reward. To achieve this, we minimize the temporal difference error \(\mathcal{L}\) derived from the Bellman equation, given by: \[\mathcal{L}=Q(s_{t},a_{t})-(\mathcal{R}(s_{t},a_{t})+\gamma\max_{a_{t+1}}Q(s_{t+1},a_{t+1})) \tag{3.2}\] where \(Q\) is the estimated \(Q\)-function and \(\gamma\in[0,1]\) is the discount factor. After convergence, the agents discover the optimal policy \(\pi^{*}\) to choose the most appropriate action (_i.e._, feature or operation) based on the current state, according to the learned \(Q\)-function: \[\pi^{*}(a_{t}|s_{t})=\operatorname*{argmax}_{a}Q(s_{t},a) \tag{3.3}\]

### Feature Generation and Post-processing

We found that giving our agents human-like statistical awareness can generate more meaningful, interpretable features at an accelerated exploration speed. Based on the selection results, our RL system faces two generation scenarios: (1) an operation and two features are selected; (2) an operation and one feature are selected. In cases where the selected feature(s) are deemed invalid for the selected operation (e.g., the selected features are 'Weight' and 'Height' and the selected operation is '+' or 'GroupByThenMin'), the feature generation process for that iteration is bypassed, and the responsible agents are penalized based on the previously discussed utility metrics.

Figure 4: Proposed hierarchical agent structure.

Figure 5: Illustration of state representation extraction.

**Scenario 1: H-statistics Based Interaction-Aware Feature-Feature Crossing.** Existing AutoFE literature solely focuses on generating higher-order features to improve downstream task performance. However, this trial-and-error approach lacks efficiency and does not align with human expert intuition. To make our model efficient by giving it statistical awareness, we incorporate Friedman's H-statistics [9]. We here present the two-way interaction measure, which tells us whether and to what extent two features in the model interact with each other. When two features do not interact, we can decompose the partial dependence (PD) function as follows (assuming the PD functions are centered at zero): \[PD_{jk}(f_{j},f_{k})=PD_{j}(f_{j})+PD_{k}(f_{k}) \tag{3.4}\] where \(PD_{j}(f_{j})\) and \(PD_{k}(f_{k})\) are the PD functions of the respective features and \(PD_{jk}(f_{j},f_{k})\) is their two-way PD function. PD functions measure the marginal effect a feature has on the predicted outcome of an ML model. For instance, for regression, the PD function can be defined as \[PD_{s}(x_{s})=E[\hat{f}(x_{s},X_{C})]=\int\hat{f}(x_{s},X_{C})\,d\mathbb{P}(X_{C}) \tag{3.5}\] with \(x_{s}\) being the features for which the partial dependence is to be calculated and \(X_{C}\) the other features used in the ML model \(\hat{f}\). The expected value \(E\) is over the marginal distribution of all variables \(X_{C}\) not represented in \(x_{s}\). PD works by marginalizing the ML model output over the distribution of the features in set \(C\), so that the function shows the relationship between the features in set \(S\) and the predicted outcome. Equation 3.4 expresses the PD function without interactions between \(f_{j}\) and \(f_{k}\). The observed PD function is compared to the no-interaction decomposition, and the difference represents the interaction strength. The variance of the PD output quantifies the interaction between the two features. An interaction statistic of 0 indicates no interaction, while a statistic of 1 suggests that the prediction relies solely on the interaction. Mathematically, the H-statistic proposed by Friedman and Popescu for the interaction between \(f_{j}\) and \(f_{k}\) is: \[H_{j,k}=\frac{\sum_{i=1}^{n}[PD_{jk}(f_{j}^{(i)},f_{k}^{(i)})-PD_{j}(f_{j}^{(i)})-PD_{k}(f_{k}^{(i)})]^{2}}{\sum_{i=1}^{n}PD_{jk}^{2}(f_{j}^{(i)},f_{k}^{(i)})} \tag{3.6}\] where \(n\) is the number of instances in the dataset. For ease of understanding, we have refrained from delving into the detailed calculation methodology for interactions involving more than two features, which we implement in our work. In our implementation, rather than directly computing the interaction strength between two higher-order features, we instead calculate it for their respective 'parent' features. Our rationale behind this approach is that although the interaction strength between two features may be relatively low, exploring the interactions between their parent features can yield valuable insights and justify the agents' exploration efforts.
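The following is a minimal Monte-Carlo sketch of Eq. (3.6) for any scikit-learn-style model exposing `predict`; the direct clamping loop costs \(O(n^{2})\) model evaluations, so one would subsample rows in practice. It illustrates the statistic, not the paper's released code.

```python
import numpy as np

def _centered_pd(model, X, cols):
    """Partial dependence of `model` on feature columns `cols`, evaluated at
    each observed value of those columns and centered to zero mean."""
    n = X.shape[0]
    pd_vals = np.empty(n)
    for i in range(n):                    # subsample rows here for large n
        Xi = X.copy()
        Xi[:, cols] = X[i, cols]          # clamp `cols` to the i-th observation
        pd_vals[i] = model.predict(Xi).mean()
    return pd_vals - pd_vals.mean()

def h_statistic(model, X, j, k):
    """Two-way Friedman H^2 between features j and k, Eq. (3.6)."""
    pd_jk = _centered_pd(model, X, [j, k])
    pd_j = _centered_pd(model, X, [j])
    pd_k = _centered_pd(model, X, [k])
    denom = np.sum(pd_jk ** 2)
    return np.sum((pd_jk - pd_j - pd_k) ** 2) / denom if denom > 0 else 0.0
```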
**Scenario 2: Relevance Prioritized Unary Feature Generation.** Inspired by [26], when a unary operation and two features are selected, we directly apply the operation to the feature that is more relevant to the target label. We measure relevance using the mutual information (MI) between a feature \(f\in\mathcal{F}\) and the target label \(y\), quantified by \(rel=MI(f,y)\). The unary operation is then applied to the more relevant feature to generate higher-order features.
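A short sketch of this relevance test using scikit-learn's mutual-information estimator (for a regression target, `mutual_info_regression` would replace `mutual_info_classif`); `apply_unary` is an illustrative helper, not part of the paper's code.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def apply_unary(op, f1: np.ndarray, f2: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Scenario 2: apply the unary op to whichever selected feature has the
    higher relevance rel = MI(f, y) to the target label."""
    rel = mutual_info_classif(np.column_stack([f1, f2]), y)
    return op(f1) if rel[0] >= rel[1] else op(f2)

# e.g. new_feature = apply_unary(np.sqrt, weight, height, label)
```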
**Post-generation Processing.** After generating new features, we combine them with the original feature set to create an updated feature set and evaluate the predictive performance in a downstream task. The performance serves as feedback to update the hierarchical agents' policies for the next round of feature generation. To control the feature set size, we apply feature selection if the size surpasses a threshold, using K-best feature selection. The tailored feature set becomes the original feature set for the next iteration. Upon reaching the maximum number of iterations, the algorithm concludes by returning the optimal feature set \(\mathcal{F}^{*}\).

## 4 Experiments

### Experimental Setup

We utilize 24 publicly available datasets from LibSVM*, UCI†, Kaggle‡ and OpenML§ to evaluate our framework, including 14 classification and 10 regression tasks. **Table 1** shows the statistics of the data. To evaluate the recall and precision of classification tasks, we use the F1 score. For regression tasks, we use 1-relative absolute error (1-RAE) to evaluate the accuracy.

Footnote *: [https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/)

Footnote †: [https://archive.ics.uci.edu/](https://archive.ics.uci.edu/)

Footnote ‡: [https://www.kaggle.com/datasets](https://www.kaggle.com/datasets)

Footnote §: [https://www.openml.org/](https://www.openml.org/)

#### 4.1.1 Baseline Algorithms

We compare **InHRecon** with six state-of-the-art feature generation methods: (1) **ERG** expands the feature space by applying operations to each feature and then performs feature selection; (2) **LDA** [2] extracts latent features via matrix factorization; (3) **AFT** [15] is an enhanced ERG implementation that explores the feature space and adopts multi-step feature selection leveraging L1-regularization; (4) **NFS** [4] mimics the feature transformation trajectory for each feature and optimizes the feature generation process through RL; (5) **TTG** [16] records the feature generation process using a transformation graph and employs RL to explore the graph and determine the best feature set; (6) **GRFG** [26] adopts a group-wise feature generation approach, leveraging MI theory and RL techniques.

To validate the impact of each technical component, we developed four variants of InHRecon: (i) **InHRecon\({}^{-rnd}\)** randomly picks an operation and feature(s); (ii) **InHRecon\({}^{-h}\)** treats all feature columns as numerical; (iii) **InHRecon\({}^{-u}\)** randomly selects a feature from the selected features when the operator is unary; (iv) **InHRecon\({}^{-b}\)** does not utilize H-statistics for binary operations. We adopted random forest as the downstream ML model. In all experiments, we opted for a 5-fold stratified cross-validation approach.

### Overall Performance

This experiment aims to answer: _Can our proposed method construct an optimal feature space to improve a downstream task?_ **Table 1** shows that, compared to the six baselines, our model achieves state-of-the-art performance on 19 out of 24 datasets overall. The underlying driver is that our personalized feature crossing strategy incorporates the strength of feature-feature interactions to generate new features. The superior performance of InHRecon over the expansion-reduction-based methods (ERG, AFT) demonstrates the effectiveness of hierarchical sharing of states among agents, enabling optimal selection policies. Our self-learning end-to-end framework allows for easy application to diverse datasets, making it a practical and automated solution compared to the state-of-the-art baselines (NFS, TTG) in real-world scenarios. Our model demonstrates a notable trend in its performance, showcasing superior results on real-world datasets such as _PimaIndian_ or _German Credit_, compared to synthetic datasets like _OpenML 620_.
Synthetic datasets typically consist of numerical features with randomly generated values, which might be better suited for AutoFE frameworks that rely on simple mathematical operations.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Dataset & Source & C/R & Instances\(\backslash\)Features & ERG & LDA & AFT & NFS & TTG & GRFG & InHRecon \\ \hline Higgs Boson & UCIrvine & C & 50000\(\backslash\)28 & 0.674 & 0.509 & 0.711 & 0.715 & 0.705 & 0.716 & **0.718** \\ \hline Amazon Employee & Kaggle & C & 32769\(\backslash\)9 & 0.740 & 0.920 & 0.943 & 0.935 & 0.806 & 0.946 & **0.947** \\ \hline PimaIndian & UCIrvine & C & 768\(\backslash\)8 & 0.703 & 0.676 & 0.736 & 0.762 & 0.747 & 0.767 & **0.778** \\ \hline SpectF & UCIrvine & C & 267\(\backslash\)44 & 0.748 & 0.774 & 0.775 & 0.842 & 0.788 & 0.854 & **0.878** \\ \hline SVMGuide3 & LibSVM & C & 1243\(\backslash\)21 & 0.747 & 0.683 & 0.829 & 0.831 & 0.766 & 0.842 & **0.850** \\ \hline German Credit & UCIrvine & C & 1001\(\backslash\)24 & 0.661 & 0.627 & 0.751 & 0.765 & 0.731 & 0.769 & **0.773** \\ \hline Credit Default & UCIrvine & C & 30000\(\backslash\)25 & 0.752 & 0.744 & 0.799 & 0.799 & 0.809 & 0.800 & **0.812** \\ \hline Messidor Features & UCIrvine & C & 1150\(\backslash\)19 & 0.635 & 0.580 & 0.679 & 0.746 & 0.726 & **0.757** & 0.738 \\ \hline Wine Quality Red & UCIrvine & C & 999\(\backslash\)12 & 0.611 & 0.600 & 0.658 & 0.666 & 0.647 & 0.686 & **0.706** \\ \hline Wine Quality White & UCIrvine & C & 4900\(\backslash\)12 & 0.587 & 0.571 & 0.673 & 0.679 & 0.638 & 0.685 & **0.696** \\ \hline SpamBase & UCIrvine & C & 4601\(\backslash\)57 & 0.931 & 0.908 & 0.951 & 0.955 & 0.959 & 0.958 & **0.971** \\ \hline AP-omentum-ovary & OpenML & C & 275\(\backslash\)10936 & 0.705 & 0.117 & 0.783 & 0.804 & 0.795 & 0.808 & **0.811** \\ \hline Lymphography & UCIrvine & C & 148\(\backslash\)18 & 0.638 & 0.737 & 0.833 & 0.859 & 0.846 & 0.866 & **0.875** \\ \hline Ionosphere & UCIrvine & C & 351\(\backslash\)34 & 0.926 & 0.730 & 0.827 & 0.942 & 0.938 & 0.946 & **0.954** \\ \hline Bikeshare DC & Kaggle & R & 10886\(\backslash\)11 & 0.980 & 0.794 & 0.992 & 0.991 & 0.991 & 0.992 & **0.994** \\ \hline Housing Boston & UCIrvine & R & 506\(\backslash\)13 & 0.617 & 0.174 & 0.641 & 0.654 & 0.658 & 0.658 & **0.660** \\ \hline Airfoil & UCIrvine & R & 1503\(\backslash\)5 & 0.732 & 0.463 & 0.774 & 0.771 & 0.783 & 0.787 & **0.793** \\ \hline Openml\_618 & OpenML & R & 1000\(\backslash\)50 & 0.427 & 0.372 & 0.665 & 0.640 & 0.587 & 0.668 & **0.673** \\ \hline Openml\_589 & OpenML & R & 1000\(\backslash\)25 & 0.560 & 0.331 & 0.672 & 0.711 & 0.682 & **0.739** & 0.723 \\ \hline Openml\_616 & OpenML & R & 500\(\backslash\)50 & 0.372 & 0.385 & 0.585 & 0.593 & 0.559 & 0.603 & **0.605** \\ \hline Openml\_607 & OpenML & R & 1000\(\backslash\)50 & 0.406 & 0.376 & 0.658 & 0.675 & 0.639 & **0.680** & 0.671 \\ \hline Openml\_620 & OpenML & R & 1000\(\backslash\)25 & 0.584 & 0.425 & 0.663 & 0.698 & 0.656 & **0.714** & 0.694 \\ \hline Openml\_637 & OpenML & R & 500\(\backslash\)50 & 0.497 & 0.494 & 0.564 & 0.581 & 0.575 & 0.589 & **0.625** \\ \hline Openml\_586 & OpenML & R & 1000\(\backslash\)25 & 0.546 & 0.472 & 0.687 & 0.748 & 0.704 & **0.763** & 0.756 \\ \hline \end{tabular} \end{table} Table 1: Overall performance comparison. ‘C’ for classification and ‘R’ for regression.

Figure 6: Comparison of convergence of different variants of InHRecon.
### Ablation Study

This experiment aims to answer: _How does each component in our model impact its performance?_ We developed four variants of InHRecon (Section 4.1.1). **Figure 6** shows the comparison results on two classification datasets (_PimaIndian_ and _Credit Default_) and two regression datasets (_Housing Boston_ and _Openml\_637_). Unsurprisingly, InHRecon\({}^{-rnd}\) is consistently outperformed across all experiments. InHRecon surpasses InHRecon\({}^{-h}\), indicating that replicating the expert cognition process facilitates the generation of meaningful and optimal features. The performances of InHRecon\({}^{-u}\) and InHRecon\({}^{-b}\) demonstrate that our personalized feature crossing strategy leads to an improved feature space.

### Study of Impact of H-statistics

This experiment aims to answer: _Is H-statistics more effective than classical approaches in introducing statistical awareness in feature generation?_ We replaced H-statistics with cosine similarity, Pearson's correlation, and mutual information measurements, aiming for the selected features to have lower redundancy and greater relevance to the prediction target. For instance, in the experimental setup with MI, the utility metric is designed as follows: \[U(\mathcal{F}|y)=-\frac{1}{|\mathcal{F}|^{2}}\sum_{f_{i},f_{j}\in\mathcal{F}}MI(f_{i},f_{j})+\frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}}MI(f,y), \tag{4.7}\] where \(\mathcal{F}\) represents the set of selected features \(f_{i}\) and \(f_{j}\), \(|\mathcal{F}|\) the size of \(\mathcal{F}\), and \(y\) the target label. We report the comparison results on four different datasets. As seen in **Figure 7**, H-statistics shows superiority across all datasets. The underlying driver is that, instead of merely calculating the degree to which two feature columns are related, or how similar or dissimilar they are, H-statistics directly measures the share of variance that is explained by the interaction between two (or more) features. This allows us to prioritize the crossing of features that have a more significant impact on prediction variation, resulting in faster convergence.

Figure 7: Comparison of different interaction strategies in terms of F1 or 1-RAE.

### Robustness check of InHRecon under different ML models

This experiment aims to answer: _Is InHRecon robust when different ML models are used in downstream task performance evaluation?_ We examined the robustness of InHRecon by changing the ML model of the downstream task to Random Forest (RF), XGBoost (XGB), SVM, KNN, and Ridge Regression, respectively. The comparison results, depicted in **Figure 8**, demonstrate that InHRecon consistently enhances model performance across the tested datasets. This indicates that InHRecon exhibits strong generalization capabilities across various benchmark applications, irrespective of the ML model employed.

Figure 8: Comparison of different ML models in terms of F1 or 1-RAE.

### Parameter Sensitivity Analysis of InHRecon

This experiment aims to answer: _How does InHRecon behave under different parameter settings?_ Specifically, we study the order of generated features and the enlargement factor of the feature space. While the baseline methods simply overlook this, we argue that higher-order features are less interpretable to human observers. Hence, we report our performance results with the highest feature order set at 4. The results, as depicted in **Figure 9**, indicate that InHRecon stabilizes around the 3rd to 5th order of features.
We demonstrate that InHRecon achieves fast convergence while enlarging the feature space typically only by a factor of 2x to 3x compared to the original feature space. These findings highlight the efficacy of our statistically aware feature crossing strategy, which intelligently explores the feature space and efficiently generates informative features, while ensuring that these features are meaningful to a human observer.

### Case Study: Rationality and Interpretability Analysis

This study aims to answer: _Can InHRecon generate a rational and interpretable feature space?_ In our _German Credit_ dataset case study, we use a random forest classifier to identify the top 10 essential features for predicting credit risk. This dataset poses challenges in understanding due to its vague categorization, where feature columns are labeled with numbers 1 through 24. **Figure 10** presents the model performances in the central parts of each sub-figure, with the corresponding feature names in the associated pie charts. The pie chart size corresponds to feature importance. Our findings show that the **InHRecon**-reconstructed feature space enhances model performance by 5.75%, with 40% of the top 10 features being generated. This suggests that InHRecon generates informative features, leading to the refinement of the feature space. Furthermore, our analysis highlights the impact of categorizing the features and expanding the operation set to handle both numerical and categorical features.

## 5 Related Work

**Hierarchical Reinforcement Learning (HRL).** HRL has been widely employed to address various sequential decision-making problems [1]. It enables simultaneous learning at multiple resolutions to accelerate the learning process [5]. Hierarchical structures incorporate sub-goal information [21], enhancing existing options [23]. Value functions can be decomposed into individual sub-goals, facilitating the sharing and reuse of sub-goal information in HRL [6]. These hierarchical approaches offer numerous benefits, including improved exploration and learning efficiency [7] and decision-making abstractions [18]. In [24], a deep HRL framework for lifelong learning is proposed, while [20] introduces HRL with learned goals for conveying instructions between policy levels. However, none of these methods is directly applicable to learning strategies for AutoFE.

**Automated Feature Engineering (AutoFE).** AutoFE aims to improve ML model performance by enhancing the feature space through feature generation and selection techniques. _Feature selection_ eliminates redundancy and preserves important features, utilizing filter (e.g., univariate selection [8], correlation-based selection [28]), wrapper (e.g., RL [19], Branch and Bound [17]), and embedded methods (e.g., LASSO [25], decision tree [22]). _Feature generation_, on the other hand, includes latent representation learning [12, 27] and feature transformation approaches [4, 16]. These methods lack human-like and statistical awareness, leading to mindless exploration in larger feature spaces and causing inefficiency. Our personalized feature crossing captures highly relevant and strongly interacting features, and our hierarchical agents learn effective interaction policies, all of which accelerate feature generation.

## 6 Concluding Remarks

We introduce _InHRecon_, an interaction-aware hierarchical reinforced feature generation framework for optimal, interpretable and meaningful representation space reconstruction.
To this end, we extend the operation space for RL agents, enabling them to emulate human expertise across various feature types. We decompose the feature generation process into the sub-problems of operation selection and feature selection, addressed by hierarchical RL agents. We incorporate H-statistics as a measurement of the interaction strength of selected features to promote systematic exploration of the feature space and faster convergence. InHRecon also offers traceable feature generation, improving explainability. Our framework achieves state-of-the-art (surpassing or on-par) performance on the standardized benchmarks adopted by prior work. Future work aims to further enhance InHRecon by pre-training extremely large-scale models on diverse datasets.

Figure 9: Parameter sensitivity analysis of InHRecon.

Figure 10: Top-10 features for prediction in the original and InHRecon-reconstructed feature space.
2301.13455
ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search
In this paper, we propose a robust multilingual model to improve the quality of search results. Our model not only leverages the processed class-balanced dataset, but also benefits from multitask pre-training that leads to more general representations. In the pre-training stage, we adopt an MLM task, a classification task and a contrastive learning task to achieve considerable performance. In the fine-tuning stage, we use confident learning, the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) to improve the model's generalization and robustness. Moreover, we use a multi-granular semantic unit to discover the queries' and products' textual metadata for enhancing the representation of the model. Our approach obtained competitive results and ranked top-8 in three tasks. We release the source code and pre-trained models associated with this work.
Xuange Cui, Wei Xiong, Songlin Wang
2023-01-31T07:31:34Z
http://arxiv.org/abs/2301.13455v1
# ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search

###### Abstract.

In this paper, we propose a robust multilingual model to improve the quality of search results. Our model not only leverages the processed class-balanced dataset, but also benefits from multitask pre-training that leads to more general representations. In the pre-training stage, we adopt an MLM task, a classification task and a contrastive learning task to achieve considerable performance. In the fine-tuning stage, we use confident learning, the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) to improve the model's generalization and robustness. Moreover, we use a multi-granular semantic unit to discover the queries' and products' textual metadata for enhancing the representation of the model. Our approach obtained competitive results and ranked top-8 in three tasks. We release the source code and pre-trained models associated with this work\({}^{1}\).

search relevance, e-commerce, semantic matching, multilingual
The InfoXLM\({}_{large}\) model (Han et al., 2017) covers 94 languages, is pre-trained on the CCNet dataset, and has the same configuration as XLM-R (He et al., 2017) with a shared vocabulary size of 250002. Figure 1 shows a high-level overview of our proposed pretext tasks.

**MLM Task** is widely used for learning text representations (Han et al., 2017). MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary (Krizhevsky et al., 2015). We adopt the MLM task on the multilingual product-catalogue dataset.

**Classification Task** contains three classification subtasks. One of them is the Product2Query task, a binary classification task: based on a Poisson distribution, a piece of text is extracted from the commodity text information as a faked query. The parameters passed to the Poisson distribution and more details can be found in Appendix A.1. The Product2Brand and Product2Color tasks are multi-class classification tasks that use product text information to predict the brand and the color of the current item.

**Contrastive Learning Task** is mainly inspired by SimCSE (Srivastava et al., 2015) and ESimCSE (Krizhevsky et al., 2015). During training, each data point is trained to find its counterpart among the \((N-1)\) in-batch negative samples and the queue of data samples. The samples in the queue are progressively replaced. \[L_{i}=-\log\frac{e^{\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{i}^{*})/\tau}}{\sum_{j=1}^{N}e^{\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j}^{*})/\tau}+\sum_{q=1}^{Q}e^{\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{q}^{*})/\tau}} \tag{1}\] Here, \(h_{i}\) and \(h_{i}^{*}\) are semantically related sentence representations, \(h_{q}^{*}\) denotes a sentence embedding in the momentum-updated queue, \(Q\) is the size of the queue, \(\mathrm{sim}(h_{1},h_{2})\) is the cosine similarity of sentence representations, and \(\tau\) is a temperature hyperparameter. In the end, we average all \(N\) losses \(L_{i}\) to calculate the contrastive loss \(\mathcal{L}_{\mathrm{con}}\).

```
Input: DataSet D = {(x, y, z)_i}, i = 1..|D|
Initialize model parameters Θ randomly
Model trainer T takes batches of training data as input to optimize Θ
Set the max number of epochs: epoch_max
for epoch in 1, 2, ..., epoch_max do
    Shuffle D by mixing data from different tasks
    for B in D do                // B is a mini-batch of one pre-training task
        Compute loss L(Θ):
            L(Θ)  = Mask-LM loss
            L(Θ) += Classification loss
            L(Θ) += Contrastive-learning loss
        Optimize the model using L(Θ)
    end for
end for
Output: Pre-trained model Θ
```
**Algorithm 1** Training a MultiTask model.

### Fine-Tuning Methods

After pre-training, we remove the classifiers of the pre-training multi-tasks and concatenate several embeddings with an extra MLP classifier. The embeddings consist of three sets of representations. One of them is obtained by concatenating the queries' 3-gram mean-pooling, bullet points' 3-gram mean-pooling and descriptions' 3-gram mean-pooling embeddings. The others consist of the country embedding, brand embedding and color embedding, as shown in Figure 2.
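The concatenation just described could be sketched as follows; all dimensions, the 3-token-window pooling, and the four-way head (e.g., the four ESCI labels in task 2) are illustrative assumptions, since the paper's exact multi-granular semantic unit is not specified here.

```python
import torch
import torch.nn as nn

class FineTuneHead(nn.Module):
    """Hedged sketch of the classifier in Figure 2: the [CLS] vector is
    concatenated with 3-gram mean-pooled text embeddings and the
    country/brand/color ID embeddings, then fed to an MLP."""

    def __init__(self, hidden=1024, n_country=10, n_brand=50000, n_color=500,
                 id_dim=32, n_classes=4):
        super().__init__()
        self.country = nn.Embedding(n_country, id_dim)
        self.brand = nn.Embedding(n_brand, id_dim)
        self.color = nn.Embedding(n_color, id_dim)
        in_dim = hidden + 3 * hidden + 3 * id_dim  # [CLS] + 3 pooled spans + 3 IDs
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, n_classes))

    def ngram_mean_pool(self, token_emb, mask, n=3):
        # Mean over every 3-token window, then mean over the windows: a simple
        # stand-in for the multi-granular semantic unit (assumes seq len >= n).
        x = token_emb * mask.unsqueeze(-1)          # (B, T, H)
        windows = x.unfold(1, n, 1).mean(dim=-1)    # (B, T-n+1, H)
        return windows.mean(dim=1)                  # (B, H)

    def forward(self, cls_vec, q_tok, q_mask, b_tok, b_mask, d_tok, d_mask,
                country_id, brand_id, color_id):
        pooled = [self.ngram_mean_pool(t, m) for t, m in
                  [(q_tok, q_mask), (b_tok, b_mask), (d_tok, d_mask)]]
        ids = [self.country(country_id), self.brand(brand_id), self.color(color_id)]
        return self.mlp(torch.cat([cls_vec, *pooled, *ids], dim=-1))
```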
**Exponential Moving Average.** Our model uses EMA (Krizhevsky et al., 2015) to smooth the trained parameters. Evaluations that use averaged parameters sometimes produce significantly better results than the final trained values. Formally, we define the smoothed variables and trained variables as \(\theta_{s}\) and \(\theta_{t}\), and the EMA decay weight as \(\eta\). After each training step, we update \(\theta_{s}\) by: \[\theta_{s}\leftarrow\eta\theta_{s}+(1-\eta)\theta_{t} \tag{2}\]

**Adversarial Training.** Recently, adversarial attacks have been widely applied in computer vision and natural language processing (Han et al., 2017; He et al., 2017; He et al., 2017). Many works use them during fine-tuning; we explore the influence of adversarial training strategies and compare the FGSM, PGD, FreeLB and SMART methods. The adversarial attack works by augmenting the input with a small perturbation that maximizes the adversarial loss: \[\min_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\Delta x\in\Omega}L(x+\Delta x,y;\theta)\right] \tag{3}\] where \(\mathcal{D}\) is the dataset, \(x\) is the input, \(y\) is the gold label, \(\theta\) are the model parameters, \(L(x,y;\theta)\) is the loss function and \(\Delta x\) is the perturbation. In our experiments, we adopt the FGM method in all tasks, based on the actual performances.

**R-Drop** is proved to be an effective regularization method based on dropout, minimizing the KL-divergence of the output distributions of every two sub-models generated via dropout during training. \[\mathcal{L}_{KL}=\alpha\cdot[\mathcal{D}_{KL}\left(Logit_{1},Logit_{2}\right)+\mathcal{D}_{KL}\left(Logit_{2},Logit_{1}\right)] \tag{4}\] We use the original logits of the model's output as \(Logit_{1}\), and the logits after the adversarial attack as \(Logit_{2}\).

Figure 1. A schematic overview of our novel pre-training tasks. These tasks encourage the encoded representations to be more general.

**Embedding Mixup** is a widely used data augmentation method that linearly interpolates the inputs and modeling targets of random samples. We use the contextual embedding vector of [CLS] and the corresponding label to generate synthetic examples for training. Such training has been shown to act as an effective model regularization strategy for text classification tasks.

In conclusion, we present the self-supervised multitask pre-training tasks and several fine-tuning methods for improving the model's generalization and robustness.
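Before moving to the experiments, here is a compact sketch of how FGM (Eq. 3) and EMA (Eq. 2) are typically combined in one training step; it assumes a HuggingFace-style model whose output carries a `.loss` field, and the embedding-parameter name is an assumption, so this is a common implementation pattern rather than the authors' released code.

```python
import torch

class FGM:
    """One-step fast gradient perturbation on the word-embedding weights."""

    def __init__(self, model, emb_name="word_embeddings", eps=1.0):
        self.model, self.emb_name, self.eps = model, emb_name, eps
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.eps * param.grad / norm)  # x + eps*g/||g||

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

def train_step(model, batch, optimizer, fgm, ema_shadow, eta=0.999):
    """Clean loss + adversarial loss (FGM), then an EMA update of a shadow copy."""
    loss = model(**batch).loss
    loss.backward()                       # gradients give the attack direction
    fgm.attack()                          # perturb embeddings
    model(**batch).loss.backward()        # adversarial loss; gradients accumulate
    fgm.restore()
    optimizer.step()
    optimizer.zero_grad()
    with torch.no_grad():                 # theta_s <- eta*theta_s + (1-eta)*theta_t
        for name, param in model.named_parameters():
            ema_shadow[name].mul_(eta).add_(param.data, alpha=1 - eta)
```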
## 4. Experiments

### Settings

We use InfoXLM\({}_{large}\) as the text encoder, and the EMA decay weight is set to 0.999. Our learning rate is set to 1e-5 with a warm-up ratio of 10%, the batch size is 32, and the gradient clip norm threshold is set to 1. In the pre-training stage, the maximum number of epochs was set to 10; in the fine-tuning stage, it was set to 5. During adversarial training, we set \(\epsilon\) to 1.0 in FGM, which means only one step is calculated in the adversarial attack.

We find that the dataset has imbalanced labels and apply some data processing steps. By splitting the complement and irrelevant product text information, we obtain more pairs with the same label and a more balanced dataset. We use confident learning to find potential label errors and remove \(\sim\)4% of the data from the training dataset. As presented in Appendix A.1, the median length of Spanish and English queries is 14, which we model with a Poisson distribution with \(\mu\) set to 4, and the median of Japanese queries is 31, which we model with a Poisson distribution with \(\mu\) set to 8.

### Main Results

Our approach achieved a considerable performance gain over the baseline solution and ranked top-8 in three tasks. The main results are shown in Table 2. In task 1, we calculated the mean of all model outputs as the final ranking results. In tasks 2 and 3, we used almost the same network structure except for the number of neurons in the classifier. Finally, our approach ranked 5th, 7th and 8th, respectively.

\begin{table} \begin{tabular}{c c c c} \hline \hline **SubTask** & **Model** & **Metric** & **Ranking** \\ \hline task1 & 6 large models & ndcg=0.9025 & 5th \\ task2 & only 1 large model & micro f1=0.8194 & 7th \\ task3 & only 1 large model & micro f1=0.8686 & 8th \\ \hline \hline \end{tabular} \end{table} Table 2. Performance of our approach on the private leaderboard. In task 1, we used six InfoXLM\({}_{large}\) models that were fine-tuned with different datasets or methods. In tasks 2 and 3, we used only one InfoXLM\({}_{large}\) model with the same network structure, as shown in Figure 2.

### Ablation Studies

We investigate the impact of adopting different pre-training tasks in the task 2 setting. In Table 3, we show the Mask-LM losses after 5 epochs of pre-training and the Micro-F1 scores after 2 epochs of fine-tuning. We find that the Product2Query task achieves a 0.008 improvement compared to the baseline, while the contrastive learning task doesn't yield a significant gain.

\begin{table} \begin{tabular}{l c c} \hline \hline **Pre-Training Task** & **CV-MLM Loss** & **CV-Micro F1** \\ \hline Mask LM & 1.966 & 74.97 \\ +Product2Query & 1.969 & 75.05 \\ ++Product2Brand & 1.978 & 75.08 \\ ++Contrastive Learning & 2.047 & 75.08 \\ \hline \hline \end{tabular} \end{table} Table 3. The effect of different pre-training tasks, accumulated from top to bottom. We report the cross-validation MLM loss and Micro-F1 score \(\times\) 100 in the task 2 setting.

As shown in Table 4, we compare several loss functions, and we adopt the Poly1 loss function in tasks 2 and 3 based on the actual performances. We observe that the Focal loss function and the GHM loss function perform worse than the cross-entropy loss function in the task 2 setting.

In this subsection, we also explore several methods for further improving the model's performance in the fine-tuning stage. As presented in Table 5, we adopt all of these methods to improve the model's generalization and robustness. We observe that the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) improve the model's generalization and robustness, but the Embedding Mixup strategy doesn't yield a significant gain.

As shown in Table 7, we consider using smaller datasets by removing \(\sim\)4% noisy labels. The smaller dataset achieves a 0.005 improvement in task 1, but worse results in tasks 2 and 3. This could be explained by the fact that task 1 contains more difficult samples, so its manually annotated data contains more label errors.

Figure 2. In the fine-tuning stage, we concatenate the multi-granular semantic units, the [CLS] embedding from the XLM encoder and the ID embeddings.

## 5. Conclusion and Future Work

In this work, we provide an overview of our combined approach to improve the quality of search results. We use data augmentation, a multitask pre-training strategy and several fine-tuning methods to achieve considerable performance.
Specifically, we use an MLM task, a classification task and a contrastive learning task in the pre-training stage, and we use the exponential moving average method (EMA), adversarial training (FGM) and the regularized dropout strategy (R-Drop) to improve the model's generalization and robustness in the fine-tuning stage. Moreover, we use a multi-granular semantic unit to discover the queries' and products' textual metadata for enhancing the representation of the model. Future work on our system includes: 1) comparing with other pre-trained language models, such as DeBERTa\({}_{large}\); 2) using other training strategies, such as self-distillation.
2306.17694
Gap-Dependent Hydrodynamic Lubrication in Conformal Contacts
We show that the hydrodynamic lubrication of contacting conformal surfaces with a typical texture height gives rise to a universal behaviour in the Stribeck curve in which the friction coefficient shows an anomalous power-law dependence on the Sommerfeld number, $\mu \sim S^{2/3}$. When the gap height drops below the `texture length scale', deviations from $S^{2/3}$ occur, which may resemble the onset of elasto-hydrodynamic and mixed lubrication. Within this framework, we analyse literature data for oral processing and find $S^{2/3}$ scaling with deviations consistent with measured lengthscales.
James A. Richards, Patrick B. Warren, Wilson C. K. Poon
2023-06-30T14:19:45Z
http://arxiv.org/abs/2306.17694v1
# Gap-Dependent Hydrodynamic Lubrication in Conformal Contacts

###### Abstract

We show that the hydrodynamic lubrication of contacting conformal surfaces with a typical texture height gives rise to a universal behaviour in the Stribeck curve in which the friction coefficient shows an anomalous power-law dependence on the Sommerfeld number, \(\mu\sim S^{2/3}\). When the gap height drops below the 'texture length scale', deviations from \(S^{2/3}\) occur, which may resemble the onset of elasto-hydrodynamic and boundary lubrication. Within this framework, we analyse literature data for oral processing and find \(S^{2/3}\) scaling with deviations consistent with measured lengthscales.

## I Introduction

The importance of lubricated contacts between sliding surfaces cannot be overstated [1; 2]. Such contacts are often characterised by a 'pin-on-disc test', which is analysed in terms of a hemisphere of radius \(R\) trapping a lubricant of viscosity \(\eta\) sliding on a flat surface. The friction coefficient \(\mu\), the ratio of the tangential drag force \(F\) on the disc to the imposed normal force \(N\) on the pin, \(\mu=\frac{F}{N}\), is measured as \(N\) and the relative sliding speed \(U\) vary, giving \(\mu\) as a function of the Sommerfeld number \(S=\eta UR/N\) [3; 4; 5; 6], a Stribeck curve.

The typical Stribeck curve shows three regimes, Fig. 1. At high \(S\), \(N\) is purely hydrodynamic in origin [6]. In this hydrodynamic lubrication (HL) regime, \(\mu\) decreases as \(N\) increases, reaching a minimum before upturning due to the onset of elastohydrodynamic lubrication (EHL), where hydrodynamic stresses begin to deform surface asperities, which however are not yet touching. Touching signals the onset of boundary lubrication (BL), where \(\mu\) becomes nearly constant as \(S\to 0\).

Hydrodynamic lubrication originates from volume conservation in the convergent thin-film flow of an incompressible lubricant [7]. Consider a ball-on-flat geometry, Fig. 2a-b, with a plane Couette flux \(Uh/2\) (volume per unit width, here and throughout) in the convergent gap due to the ball sliding along \(x\) relative to the plane at velocity \(-U\). This varies with the gap, Fig. 2c (dashed red), and does not conserve volume. The imbalance is corrected by a gap-dependent pressure-driven in-plane Poiseuille flux, \(-\frac{h^{3}}{12\eta}\nabla p\), Fig. 2c (black). The associated pressure field \(p\) provides the normal force to keep the surfaces apart.

Mathematically, balancing the gap-dependent Couette (\(\mathbf{J}_{c}\)) and Poiseuille (\(\mathbf{J}_{p}\)) fluxes translates to \(\nabla\cdot(\mathbf{J}_{c}+\mathbf{J}_{p})=0\). Noting that \(\mathbf{J}_{c}\) is exclusively in the \(x\) direction while \(\mathbf{J}_{p}\) can in general lie in the \((x,y)\) plane, we therefore require \[\frac{\partial}{\partial x}\left(\frac{Uh}{2}\right)-\left[\frac{\partial}{\partial x}\left(\frac{h^{3}}{12\eta}\frac{\partial p}{\partial x}\right)+\frac{\partial}{\partial y}\left(\frac{h^{3}}{12\eta}\frac{\partial p}{\partial y}\right)\right]=0\,. \tag{1}\] In this Reynolds lubrication equation [6], the pressure \(p(x,y)\) is independent of the vertical coordinate because the gap \(h(x,y)\) is everywhere small compared to the flow's length and width.
Over the bearing area, this pressure supports \(N\), while \(F\) has Couette and Poiseuille components: \[N=\int\!\mathrm{d}x\;\mathrm{d}y\;p\;,\quad F=\int\!\mathrm{d}x\;\mathrm{d}y\left[\frac{\eta U}{h}+\frac{h}{2}\left(\frac{\partial p}{\partial x}+\frac{\partial p}{\partial y}\right)\right]\,. \tag{2}\]

Analytic solutions to Eq. (1) exist only for a few cases. For a ball (radius \(R\)) on a flat surface with minimum gap \(h_{0}\ll R\), Fig. 2a, Kapitza gave \(h_{0}=(72\pi^{2}/25)RS^{2}\approx 28.4RS^{2}\) [8] while Hamrock has \(h_{0}\approx 34.8RS^{2}\) [1], reflecting different approximations. They also calculated \(p(x,y)\). Similar results for other canonical geometries are presented in textbooks [1; 2; 9; 10]. Curiously, however, there are few calculations of, and experimental data for, \(\mu(S)\) in the HL regime. So, while HL is widely thought to be understood, much of the theory remains untested by direct experiments.

In an accompanying paper [11], we address this knowledge gap for conformal contacts, _i.e._ surfaces with commensurate curvature. We outline scaling arguments predicting an unexpected \(\mu\sim S^{2/3}\) behaviour at large \(S\) in the HL regime, and deviations below some critical \(S^{\star}\). Experiments confirm these predictions. Here, we justify our scaling arguments and present full calculations, paying particular attention to deviations from scaling around \(S^{\star}\), which may resemble the transition to EHL. Below, after recalling the scaling theory for the ball-on-flat geometry [12], we report detailed scaling analyses for conformal contacts to reveal the physics underlying the \(S^{2/3}\) scaling and deviations from it at low \(S\). We then solve Eq. (1) by quadrature to explore these deviations in depth. We end by applying our results to some recently-published data [13] to demonstrate the practical utility of our analysis.

Figure 1: Schematic Stribeck curve, friction coefficient (\(\mu=F/N\)) as a function of Sommerfeld number (\(S=\eta UR/N\), for sliding speed \(U\), lubricant viscosity \(\eta\), normal load \(N\) and radius of curvature of the lubrication geometry \(R\)). Regimes of \(\mu\) with decreasing \(S\): hydrodynamic lubrication (HL), elasto-hydrodynamic lubrication (EHL) at the minimum, and constant boundary lubrication (BL, shading).

## II Scaling theory of non-conformal contacts

The two surfaces in a non-conformal contact have different curvatures. Kapitza [8] solved the sphere on flat, Fig. 2a, analytically. The physics emerges from a scaling argument [12]. First, expand the convergent gap around its narrowest point, \(h(r)\approx h_{0}+r^{2}/2R\), Fig. 2b. In a region of radius \(r_{0}\sim\sqrt{h_{0}R}\), the gap remains narrow, and we expect from Eq. (1) that \[\frac{Uh_{0}}{r_{0}}\sim\frac{1}{r_{0}}\times\frac{h_{0}^{3}}{\eta}\frac{p}{r_{0}}\quad\Rightarrow\quad p\sim\frac{\eta Ur_{0}}{h_{0}^{2}}\,. \tag{3}\] This supports load \(N\sim pr_{0}^{2}\sim\eta UR^{3/2}/h_{0}^{1/2}\), or \(h_{0}/R\sim(\eta UR/N)^{2}\sim S^{2}\), confirming Kapitza [8]. Symmetry predicts equal but opposite pressure in the diverging gap. If this negative pressure is significantly below atmospheric and the lubricant cavitates, then the 'half-Sommerfeld boundary condition' of \(p=0\) applies, giving a net normal force. The narrow-gap shear rate \(\dot{\gamma}\sim U/h_{0}\) gives a shear stress \(\sigma\sim\eta U/h_{0}\) and lateral force \(F\sim\sigma r_{0}^{2}\sim\eta UR\), so that \[\mu=\frac{F}{N}\sim\frac{\eta UR}{N}\sim S\,. \tag{4}\]
Such linear scaling is often assumed in sketches of the Stribeck curve in the HL regime (_e.g._, Fig. 2 in Ref. [14]). For a long cylinder, \(x_{0}\sim\sqrt{h_{0}R}\) in the narrow-gap region. If its length \(L\) satisfies \(L\gg x_{0}\), then lubricant flow is exclusively along the direction of motion rather than through side leakage. The pressure has the same form as the ball-on-flat case, but now acts over an area \(\sim x_{0}L\), not \(r_{0}^{2}\), which supports a load \(N\sim x_{0}Lp\sim\eta ULx_{0}^{2}/h_{0}^{2}\sim\eta UL(R/h_{0})\), with associated drag \(F\sim\sigma x_{0}L\sim\eta UL\sqrt{R/h_{0}}\). Now we predict \[S\equiv\frac{\eta UL}{N}\sim\frac{h_{0}}{R}\,,\quad\mu\sim\sqrt{\frac{h_{0}}{R}}\quad\Rightarrow\quad\mu\sim S^{1/2}\,. \tag{5}\] Apparently, the HL of non-conformal contacts gives \(\mu\sim S^{\alpha}\) with \(\alpha\) set by the spatial dimension. However, few published Stribeck curves show either of these scalings.

## III Scaling theory of conformal contacts

The ball or cylinder on plane is characterized by a single length scale, _viz._, the sphere or cylinder radius, \(R\). A contact between two equal-curvature surfaces apparently has no intrinsic length scale. However, the surfaces in a real conformal contact will show a degree of mesoscale non-flatness with a characteristic length scale \(d\gg\) typical asperities. They are therefore 'textured surfaces' [15] in which the gap height varies from some minimum \(h_{0}\) to \(h_{0}+d\) over the area of contact, Fig. 3a. We restrict ourselves to surface undulations along the sliding direction of the conformal contact. Assuming the half-Sommerfeld boundary condition, we consider only the converging gap regions, each of which is then a slider bearing of length \(D\gg d\) and width \(L\gg d\). For the three slider bearings in Fig. 3, a wedge, a 'Rayleigh step', and a half-cylinder inlet, there is a characteristic 'step height', \(d\).

Figure 2: Non-conformal lubrication. (a) Lubrication geometry, sphere or cylinder with radius \(R\); loading conditions with normal (\(N\)) and tangential load (\(F\)) applied to the upper surface [dark (grey) shading], with opposing forces from the fluid [light (blue) shading] due to the relative sliding speed (\(U\)) between upper and lower surfaces. (b) Narrow-gap region with gap \(h(r)\) at a distance \(r\ll R\) from the point of closest approach with gap \(h_{0}\). The curved surface in the narrow-gap region is approximated by a parabola, \(h(r)=h_{0}+r^{2}/2R\). (c) Schematic of the resulting fluid flow. Couette flow [(orange) dashed lines] with changing gap results in a compensating Poiseuille flow to give a changing net fluid flow [(blue) solid lines and arrows]. (d) Resulting Reynolds lubrication pressure distribution \(p(r)\), schematic. Symmetry implies equal and opposite pressures for \(r\to-r\); negative pressure [(red) shading] is neglected due to cavitation, the half-Sommerfeld boundary condition.

Figure 3: (a) A textured surface of back-to-back wedge bearings. (b) Wedge profile specified by length \(D\) and ‘step height’ \(d\) as the decrease in gap (\(h_{0}+d\) to \(h_{0}\)) from \(x=0\) to \(x=D\). Dimensionless gap profile \(\xi=z+1-u\) for scaled co-ordinates \(z=h_{0}/d\) and \(u=x/D\), Eq. (1). (c) Inlet-half-cylinder profile. Cylinder of approximately parabolic form from \(u=\chi\) to 1, Eq. (10); inlet region of gap \(h_{0}+d\) from \(u=0\) to \(\chi\) (or \(x=\chi D\)). Increasing \(\chi\), and a larger inlet, shown from light to dark shading and with fine to bold lines. (d) ‘Rayleigh’ step gap profile with equivalent bearing dimensions, Eq. (12).
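As a concrete illustration of the quadrature route used for the full calculations, the sketch below integrates the long-bearing Reynolds equation for the wedge profile and recovers the running slope \(\mathrm{d}\ln\mu/\mathrm{d}\ln S\to 2/3\) at large \(h_{0}/d\). The per-unit-width normalisation \(S=\eta U/N\) used here is our own convention for the sketch, so prefactors differ from the definitions in the text and figures.

```python
import numpy as np

def wedge_stribeck(z_values, d_over_D=0.01, n=4000):
    """Quadrature solution of the long-bearing Reynolds equation for the
    wedge h = h0 + d(1 - x/D).  In scaled variables z = h0/d, u = x/D the
    gap is xi = z + 1 - u, and dp/du = 6(xi - xi*)/xi^3 (units eta*U*D/d^2),
    with xi* fixed by p(0) = p(1) = 0.  Per unit width,
    N = (eta U D^2/d^2) Ntil and F = (eta U D/d) Ftil, so
    mu = (d/D) Ftil/Ntil and S = eta U/N = (d/D)^2 / Ntil."""
    u = np.linspace(0.0, 1.0, n)
    S, mu = [], []
    for z in z_values:
        xi = z + 1.0 - u
        xi_star = np.trapz(xi**-2, u) / np.trapz(xi**-3, u)   # gap where dp/du = 0
        dp = 6.0 * (xi - xi_star) / xi**3
        # pressure by cumulative trapezoidal quadrature; p(1) = 0 by xi_star
        p = np.concatenate(([0.0], np.cumsum(0.5 * (dp[1:] + dp[:-1]) * np.diff(u))))
        Ntil = np.trapz(p, u)                                 # load integral
        Ftil = np.trapz(1.0 / xi + 0.5 * xi * dp, u)          # Couette + Poiseuille drag
        S.append(d_over_D**2 / Ntil)
        mu.append(d_over_D * Ftil / Ntil)
    return np.array(S), np.array(mu)

# Large-gap limit: the running power law d(ln mu)/d(ln S) approaches 2/3.
z = np.logspace(0.5, 2.0, 20)                                 # h0/d from ~3 to 100
S, mu = wedge_stribeck(z)
print(np.gradient(np.log(mu), np.log(S))[-5:])                # ~0.67 at large h0/d
```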
3, a wedge, a 'Rayleigh step', and a half-cylinder inlet, there is a characteristic 'step height', \(d\). Two limits exist, Fig. 4: the 'long bearing', \(L\gg D\) (Fig. 4a), or the 'short bearing', \(L\ll D\) (Fig. 4b), where volume conservation is ensured by side-leakage [16]. In each case, there is a large-gap regime (\(h_{0}\gg d\)) and a small-gap regime (\(h_{0}\ll d\)), with cross-over at \(h_{0}/d\approx 1\). Full calculations, Sections IV and V, predict the Stribeck curves shown in Fig. 5. To highlight the physics, we first give scaling analyses for the universal power-law behaviour at high \(S\) and for the details-dependent limit \(S\to 0\).

### Large-gap limit, \(h_{0}\gg d\)

For a long slider bearing in this limit, \(\partial/\partial y=0\) and the gap narrows over \(D\) from \(h_{0}+d\) to \(h_{0}\), so Eq. (1) leads us to expect \[\frac{Ud}{D}\;\sim\;\Big(\,\frac{1}{D}\times\frac{h_{0}^{3}}{\eta}\,\frac{p}{D}\,\Big)\;+\;\Big(\,\frac{1}{L}\times\frac{h_{0}^{3}}{\eta}\,\frac{p}{L}\,\Big)\,. \tag{6}\] With \(D\ll L\), we drop the second term on the RHS: the Poiseuille flow is predominantly along \(x\), Fig. 4a. So, \[p\sim\frac{\eta UDd}{h_{0}^{3}}\,. \tag{7}\] This and shear stress \(\sigma\sim\eta U/h_{0}\) [17] act on area \(LD\) to give \[N\sim\frac{\eta UdLD^{2}}{h_{0}^{3}}\,,\quad F\sim\frac{\eta UDL}{h_{0}}\] \[\Rightarrow\quad S\sim\frac{d^{2}}{D^{2}}\Big(\frac{h_{0}}{d}\Big)^{3}\,,\quad\mu\sim\frac{d}{D}\Big(\frac{h_{0}}{d}\Big)^{2} \tag{8}\] \[\Rightarrow\quad\mu\sim\Big(\frac{D}{d}\Big)^{1/3}S^{2/3}\,.\] The 'anomalous exponent' of \(2/3\) arises from \(r_{0}=\sqrt{h_{0}R}\) in Eq. (3), because a sphere is characterised by a single length scale \(R\), while the \(d\) and \(D\) in Eq. (7) are constants unrelated to \(h_{0}\), _i.e._ a slider bearing is specified by _two_ length scales. In the short-bearing limit (\(L\ll D\)), we drop the first term on the RHS of Eq. (6): volume conservation is now entirely managed by side-leakage [16], Fig. 4b, and we have \[p\sim\frac{\eta UdL^{2}}{Dh_{0}^{3}}\,. \tag{9}\] This and shear stress \(\sigma\sim\eta U/h_{0}\) act on area \(LD\) to give \[N\sim\eta UdL^{3}/h_{0}^{3}\,,\quad F\sim\frac{\eta UDL}{h_{0}}\] \[\Rightarrow\quad S\sim\frac{d^{2}}{L^{2}}\Big(\frac{h_{0}}{d}\Big)^{3}\,,\quad\mu\sim\frac{Dd}{L^{2}}\Big(\frac{h_{0}}{d}\Big)^{2} \tag{10}\] \[\Rightarrow\quad\mu\sim\Big(\frac{D^{3}}{L^{2}d}\Big)^{1/3}S^{2/3}\,.\] The anomalous scaling therefore remains unchanged in the short-bearing limit, which only introduces new pre-factors.

### Small-gap limit, \(h_{0}\lesssim d\)

Now, the gap profile matters. Figure 3(b)–(d) show increasing abruptness of the transition from \(h(x=0)=h_{0}+d\) to \(h(D)=h_{0}\), with \(\chi\) controlling the inlet length in (c) and (d).

#### iii.2.1 Wedge

As \(h_{0}\to 0\) in a long wedge with profile \[h=h_{0}+d\,\Big(1-\frac{x}{D}\Big)\,, \tag{11}\] the contact becomes effectively non-conformal with a small-gap region of extent \(x_{0}\sim h_{0}D/d\). From Eq. (1) we expect \[\frac{1}{x_{0}}\times Uh_{0}\sim\frac{1}{x_{0}}\frac{h_{0}^{3}}{\eta}\,\frac{p}{x_{0}}\quad\Rightarrow\quad p\sim\frac{\eta Ux_{0}}{h_{0}^{2}}\,. \tag{12}\] This pressure and shear stress \(\eta U/h_{0}\) acting over area \(x_{0}L\) give \[N\sim\frac{\eta ULD^{2}}{d^{2}}\,,\quad F\sim\frac{\eta UDL}{d}\quad\Rightarrow\quad\mu\sim\frac{d}{D}\,, \tag{13}\] _i.e._ a constant, Fig.
5a (black lines), with the running power law of the Stribeck curve (_i.e._ the slope in a double-log plot), \(\alpha=\mathrm{d}\ln\mu/\mathrm{d}\ln S\to 0\), Fig. 5b. As \(h_{0}\to 0\), our problem becomes equivalent to 'Taylor scraping flow' [18] with vanishing wedge angle, where there are logarithmic divergences in the forces; these are missed in our scaling analysis, but will emerge in the full calculations in § IV.2. For a short bearing, we again consider an effective non-conformal contact with \(x_{0}\sim h_{0}D/d\). Equation (1) now gives \[\frac{Uh_{0}}{x_{0}}\;\sim\;\Big(\,\frac{1}{x_{0}}\times\frac{h_{0}^{3}}{\eta}\,\frac{p}{x_{0}}\,\Big)\;+\;\Big(\,\frac{1}{L}\times\frac{h_{0}^{3}}{\eta}\,\frac{p}{L}\,\Big)\,. \tag{14}\]

Figure 4: Fluid flow in slider bearings. (a) Long-bearing limit with length much less than width (\(D\ll L\)) resulting in compensating Poiseuille flow [dark (blue) arrows] in same \(x\) direction as Couette shear flow [light (orange) arrows]. (b) Corresponding short-bearing limit with length much greater than width (\(D\gg L\)) leading to "side-leakage" in \(y\) direction perpendicular to Couette flow.

If we are only just into the small-gap limit (\(h_{0}/d\lesssim 1\)), then for a short bearing (\(L\ll D\)), \(x_{0}/L=(h_{0}/d)\,(D/L)\gg 1\), and we can neglect the first term on the RHS of Eq. (14) and obtain \[p\sim\frac{\eta UL^{2}}{x_{0}h_{0}^{2}}\,. \tag{15}\] This pressure and shear stress \(\eta U/h_{0}\) act on area \(x_{0}L\) to give \[\begin{split} N&\sim p\times x_{0}L\sim\frac{\eta UL^{3}}{h_{0}^{2}}\,,\quad F\sim x_{0}L\times\frac{\eta U}{h_{0}}\sim\frac{\eta ULD}{d}\\ &\quad\Rightarrow\quad S=\frac{\eta UL}{N}\sim\left(\frac{h_{0}}{L}\right)^{2}\,,\quad\mu\sim\frac{Dh_{0}^{2}}{dL^{2}}\sim\left(\frac{D}{d}\right)S\,,\end{split} \tag{16}\] which is a linear scaling, Fig. 5c and d (black dashed line). Deep into the small-gap regime, \(h_{0}/d\ll 1\), and \(x_{0}/L=(h_{0}/d)\,(D/L)\ll 1\) even for a short bearing (\(D/L\gg 1\)). Discarding the second term on the RHS of Eq. (14), we find \[p\sim\frac{\eta Ux_{0}}{h_{0}^{2}}\quad\Rightarrow\quad N\sim p\times x_{0}L\sim\eta UL\left(\frac{D}{d}\right)^{2}\,. \tag{17}\] With \(F\) the same as before, we find \[\mu\sim\frac{d}{D}\,, \tag{18}\] as in the case of the long bearing in the same small-gap limit.

#### iii.2.2 Inlet–half-cylinder

Consistent with the assumptions underpinning Eq. (1), we describe the gap in the inlet–half-cylinder by a parabola, Fig. 3c, \[h=\left\{\begin{array}{ll}h_{0}+d&0\leq x/D<\chi\,,\\ h_{0}+d(1-x/D)^{2}/(1-\chi)^{2}&\chi\leq x/D\leq 1\,,\end{array}\right. \tag{19}\] with \(0\leq\chi\leq 1\). The low-\(S\) limit for the long bearing is simple. For a small gap, the flat inlet region becomes inconsequential and we have a cylinder-on-flat geometry with \(\mu\sim S^{1/2}\), Fig. 5a and b [dark (orange)]. In the small-gap limit for a short bearing, leakage occurs sideways, so that we need only consider a small region near the bottom of the half-cylinder (\(x=D\) in our coordinates) where the gap remains of order \(h_{0}\). Equation (19) shows that this region has dimension \(x_{0}=(1-\chi)D\sqrt{h_{0}/d}\). The rest of the scaling analysis proceeds along the same lines as that for the wedge in the thin-film limit, only with a different expression for \(x_{0}\). Just into the thin-film regime, \(x_{0}/L\gg 1\), we now find, Fig.
5c and d [dashed dark (orange)], \[\mu\sim\frac{h_{0}}{L^{2}}\,(1-\chi)D\sqrt{\frac{h_{0}}{d}}\sim\left[\frac{(1-\chi)D}{(Ld)^{1/2}}\right]S^{3/4}. \tag{20}\] Deep into the thin-film regime, \(x_{0}/L\ll 1\), and again following the analysis for the wedge with a new expression for \(x_{0}\) in Eq. (17), we find \(\mu\sim S^{1/2}\), with geometric pre-factors cancelling and the expected recovery of a long-cylinder expression.

Figure 5: Stribeck curves, \(\mu(S)\), for various gap profiles. (a) Long-bearing limit, friction coefficient normalised by geometric pre-factors [\(\mu/(d/D)\)] as a function of Sommerfeld number scaled by geometric factor [\(S/(d^{2}/D^{2})\)] from Eqs. (27) and (28). High-\(S\) scaling of \(\mu\sim S^{2/3}\) shown by dotted line. Solid lines: black, wedge-profile slider bearing, Eq. (11); dark (orange), inlet–half-cylinder, Eq. (19); and, light (blue), Rayleigh step profile, Eq. (21). For step and inlet–half-cylinder, inlet length \(\chi=0.5\). (b) Running exponent \(\alpha\) as a function of reduced gap, \(h_{0}/d\). Lines as in (a). (c) Short-bearing limit for same profiles [dashed lines, see legend in (a)] from Eqs. (32) and (33). Modified geometric pre-factors, \(\mu/(dD/L^{2})\) and \(S/(d^{2}/L^{2})\); \(\chi=0.4\). (d) Respective running exponent.

#### iii.2.3 Rayleigh step

The gap in the Rayleigh step is piecewise constant, \[h=\left\{\begin{array}{ll}h_{0}+d&0\leq x/D<\chi\,,\\ h_{0}&\chi\leq x/D\leq 1\,.\end{array}\right. \tag{21}\] For a long bearing with this profile in the small-gap limit (\(h_{0}\ll d\)), the Couette flux in the thin-gap region is negligible, and the compensating Poiseuille flux exists almost entirely to balance the Couette flux in the inlet of length \(\chi D\) and height \(d+h_{0}\simeq d\). From Eq. (1), we have \[\frac{Ud}{\chi D}\sim\frac{1}{\chi D}\times\frac{d^{3}}{12\eta}\times\frac{p}{\chi D}\quad\Rightarrow\quad p\sim\frac{\eta U\chi D}{d^{2}}\,. \tag{22}\] This acts over the whole length of the bearing (even though the flow in the narrow gap is negligible) to give the normal force, while the tangential load is dominated by the narrow gap: \[N\sim\frac{\eta U\chi D^{2}L}{d^{2}}\,,\quad F\sim\frac{\eta U}{h_{0}}(1-\chi)DL\,, \tag{23}\] _i.e._, \(N\) is a constant, while \(F\) diverges as the gap narrows. So, we expect that as \(S\) decreases, \(\mu\sim F/N\) must reach a minimum and then upturn and diverge as \(S=\eta UL/N\) approaches some constant value \(\sim\chi^{-1}(d/D)^{2}\), Fig. 5a and b [light (blue)]. In the short Rayleigh step, sideways Poiseuille leakage occurs over the infinitesimal width of the step itself, so that the conditions necessary for the validity of the Reynolds lubrication formalism in Eq. (1) no longer apply.

### Summary of scaling analysis

Our scaling analysis gives a universal 'anomalous' regime of \(\mu\sim S^{2/3}\) at large \(S\) for all three geometries in Fig. 3. When \(h_{0}\to d\), deviations from this scaling occur. The details are geometry dependent, with the \(S\to 0\) limit derivable by scaling analysis except in the pathological case of the short Rayleigh step. Solving Eq. (1) by quadrature in Sections IV and V will verify these scaling conclusions and provide closed forms for plotting the behaviours shown in Fig. 5 and Table 1 (and Fig. 3 of Ref. [11]).
Further, we will show that for a broad class of profiles with small inlets, the deviation from \(S^{2/3}\) scaling is upwards as \(S\) decreases, which mimics entry into the BL regime even though lubrication remains hydrodynamic in origin. Understanding such deviation then allows, in § VI, a general way of analysing experimental data to extract a surface-texturing length scale from the Stribeck curve.

## IV Exact lubrication calculations and limits

### General lubrication calculations

We start by deriving a general expression for the Reynolds lubrication pressure, \(p(x,y)\), from Eq. (1) in the long- and short-bearing limits, Fig. 4a and b respectively. For completeness, we include some material that is available in textbooks [1].

#### iv.1.1 Long slider bearings (\(D\ll L\))

With negligible edge effects and \(p=p(x)\), \(\partial_{y}\to 0\), and we can integrate Eq. (1) with respect to \(x\), \[\frac{\partial p}{\partial x}=6\eta U\left(\frac{1}{h^{2}}-\frac{h_{m}}{h^{3}}\right), \tag{24}\] where \(h_{m}\) is a constant. Define a dimensionless gap \(\xi=h/d\), where \(d\) is some fiducial length such as a 'step' height, or the difference in height across the length of the bearing \([d=h(0)-h(D)]\). In terms of a dimensionless minimum gap \(z=h_{0}/d\), the different bearing profiles are distinguished by \(\delta(u)=\xi(u)-z\), where we have used the dimensionless length \(u=x/D\) so that \(0\leq u\leq 1\) spans the length of the bearing. Hereafter, the length-wise average \(\langle\cdots\rangle\) is given by \[\langle\cdots\rangle=\frac{1}{D}\int_{0}^{D}\!\!\mathrm{d}x\,(\cdots)=\int_{0}^{1}\!\!\mathrm{d}u\,(\cdots)\,. \tag{25}\] Integrating Eq. (24) gives the pressure drop across the bearing, \[\Delta p=6\eta UD(\langle\xi^{-2}\rangle-\xi_{m}\langle\xi^{-3}\rangle)/d^{2}\,, \tag{26}\] with \(\xi_{m}=h_{m}/d\). Assuming equal inlet and outlet pressures, \(\Delta p=0\) and \(\xi_{m}=\langle\xi^{-2}\rangle/\langle\xi^{-3}\rangle\). Integrating \(p\) to obtain the normal and friction forces, Eq. (2), the transverse (\(y\)) integral trivially gives a factor of the width \(L\). We deal with the \(x\) integral in the normal force by integrating Eq. (24) by parts, \(\int\mathrm{d}x\,p=-\int\mathrm{d}x\,x\,(\partial p/\partial x)\), again assuming \(\Delta p=0\). After some algebra, we find \[N=\frac{6\eta ULD^{2}}{d^{2}}\,\frac{\langle u\xi^{-3}\rangle\langle\xi^{-2}\rangle-\langle u\xi^{-2}\rangle\langle\xi^{-3}\rangle}{\langle\xi^{-3}\rangle}=\frac{\eta ULD^{2}}{d^{2}}\,g_{\delta}(z)\,, \tag{27}\] \[F=\frac{\eta ULD}{d}\,\frac{4\langle\xi^{-1}\rangle\langle\xi^{-3}\rangle-3\langle\xi^{-2}\rangle^{2}}{\langle\xi^{-3}\rangle}=\frac{\eta ULD}{d}f_{\delta}(z)\,, \tag{28}\] where the subscript on the dimensionless normal load, \(g(z)\), and drag, \(f(z)\), indicates the dependence on the gap profile \(\delta(u)=\xi(u)-z\). These expressions imply \(\mu=F/N\sim d/D\) and \(S=\eta UL/N\sim d^{2}/D^{2}\), reproducing the geometric factors in Eq. (8) established by scaling analysis. The Stribeck curve, \(\mu(S)\), then follows parametrically as the gap \(z\) (and so \(\langle\xi^{-n}\rangle\)) varies as a control parameter. Note that the higher-order averages required in Eqs.
(27) and (28) can be computed from \(\langle\xi^{-1}\rangle\) and \(\langle u\,\xi^{-1}\rangle\) by recurrence: \[\langle\xi^{-n-1}\rangle=-\frac{1}{n}\frac{\mathrm{d}\langle\xi^{-n}\rangle}{\mathrm{d}z}\,,\quad\langle u\,\xi^{-n-1}\rangle=-\frac{1}{n}\frac{\mathrm{d}\langle u\,\xi^{-n}\rangle}{\mathrm{d}z}\,, \tag{29}\] which follow by differentiating Eq. (25) and using \(\xi=z+\delta\).

#### iv.1.2 Short slider bearings (\(D\gg L\))

In this limit, we neglect \(\partial p/\partial x\) in Eq. (1) and, recalling that \(h\) is \(y\)-independent, the equation simplifies to \[h^{3}\frac{\partial^{2}p}{\partial y^{2}}=6\eta U\frac{\mathrm{d}h}{\mathrm{d}x}\,. \tag{30}\] Fixing \(p(y=0)=p(y=L)=0\), _i.e._ taking \(p\) as the excess pressure, this integrates to \[p=-\frac{3\eta U}{h^{3}}\frac{\mathrm{d}h}{\mathrm{d}x}\times y(L-y)\,, \tag{31}\] giving the normal load \[N=-\frac{\eta UL^{3}}{2d^{2}}\int_{0}^{1}\!\frac{\mathrm{d}u}{\xi^{3}}\,\frac{\mathrm{d}\xi}{\mathrm{d}u}=\frac{\eta UL^{3}}{d^{2}}\,\frac{(2z+1)}{4z^{2}(z+1)^{2}}=\frac{\eta UL^{3}}{d^{2}}g(z)\,. \tag{32}\] Now, \(g(z)\) does not depend on the bearing profile, \(\delta\), but only on the dimensionless minimum gap, \(z\). The drag also simplifies, due to the absence of Poiseuille flow along the sliding direction, to \[F=L\int_{0}^{D}\!\mathrm{d}x\,\frac{\eta U}{h}=\frac{\eta ULD}{d}\,f_{\delta}(z)\,, \tag{33}\] with \(f_{\delta}(z)=\langle\xi^{-1}\rangle\). From Eqs. (32) and (33), \(S\) and \(\mu\) can again be derived with the geometric pre-factors previously established in the scaling argument, Eq. (10).

#### iv.1.3 Power-law expansion

In the large-gap limit, \(z\gg 1\). For the short bearing, expanding Eq. (32) gives \(g(z)\sim z^{-3}\). For the long bearing, we need to expand the \(\xi\)-dependent numerators in Eq. (27) in the small quantity \(\delta(u)/z\sim\mathcal{O}(1/z)\), whereupon we find \[\langle\xi^{-n}\rangle=\int_{0}^{1}\mathrm{d}u\,z^{-n}\left[1+\delta(u)/z\right]^{-n}\approx z^{-n}(1-n\langle\delta\rangle/z)\,, \tag{34}\] \[\langle u\xi^{-n}\rangle=\int_{0}^{1}\mathrm{d}u\,z^{-n}\,u\left[1+\delta(u)/z\right]^{-n}\approx z^{-n}(1/2-n\langle u\delta\rangle/z)\,. \tag{35}\] The leading-order terms cancel, leaving \[g(z)\approx\frac{6z^{-5}\left(\langle\delta\rangle/2z-\langle u\delta\rangle/z\right)}{z^{-3}}=\frac{3(\langle\delta\rangle-2\langle u\delta\rangle)}{z^{3}}\sim\mathcal{O}\Big(\frac{1}{z^{3}}\Big)\,, \tag{36}\] as in the short-bearing limit. In the final step we have taken the gap-profile averages to be of order the 'step' height and not to cancel, which is correct for all profiles apart from the case \(\chi=1\), which violates the lubrication approximation. With the tangential load \(f(z)\sim z^{-1}\) in all cases (as \(\langle\xi^{-n}\rangle\to z^{-n}\)), we recover \(S\propto z^{3}\) with \(\mu\propto z^{2}\propto S^{2/3}\), as previously found. In the small-gap limit, \(z+1\to 1\) and \(g(z)=1/(4z^{2})\propto z^{-2}\) from Eqs. (27) and (32). This gives \(S\propto z^{2}\) and \(\mu\propto z\), _i.e._ \(\mu\propto S^{1/2}\) for all bearing profiles and types, contradicting our scaling predictions. A more rigorous analysis is needed.

### Specific gap profiles

For each gap profile, full expressions for the load and drag in the long-, Eqs. (27)-(28), and short-, Eqs. (32)-(33), bearing limits can be calculated, see Appendix, giving the load and drag curves in Figs. 6 and 7 respectively, and the small-gap scaling limits in Table 1. For all step types and most inlet lengths, we indeed find (see Appendix) \(\mathcal{O}(1)\) pre-factors.
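The parametric construction above is straightforward to check numerically. The following sketch (our illustration, not part of the original calculation) evaluates the averages of Eq. (25) for the wedge profile \(\delta(u)=1-u\) by quadrature, assembles the long-bearing load \(g_{\delta}(z)\) from Eq. (27), and compares it with the large-gap expansion of Eq. (36), which for the wedge (\(\langle\delta\rangle=1/2\), \(\langle u\delta\rangle=1/6\)) gives \(g\to 1/(2z^{3})\):

```python
import numpy as np
from scipy.integrate import quad

delta = lambda u: 1.0 - u                       # wedge profile, delta(u) = xi(u) - z

def avg(weight, z):
    """Length-wise average <...> of Eq. (25), by quadrature."""
    return quad(lambda u: weight(u, z + delta(u)), 0.0, 1.0)[0]

def g_long(z):
    """Dimensionless normal load g_delta(z) of Eq. (27)."""
    m2, m3 = avg(lambda u, x: x**-2, z), avg(lambda u, x: x**-3, z)
    um2, um3 = avg(lambda u, x: u * x**-2, z), avg(lambda u, x: u * x**-3, z)
    return 6.0 * (um3 * m2 - um2 * m3) / m3

for z in (10.0, 100.0, 1000.0):
    print(f"z = {z:6.0f}:  g = {g_long(z):.3e},  Eq. (36) asymptote 1/(2 z^3) = {0.5 / z**3:.3e}")
```

The two agree increasingly well as \(z\) grows, confirming the \(z^{-3}\) tail of the load that underlies the \(\mu\sim S^{2/3}\) regime.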
The physical picture underlying our scaling analysis therefore holds. Such analysis can go further: it can treat a changing Poiseuille flow direction with reducing gap, and it predicts the recovery of the long-bearing limit for short bearings when \(h_{0}\ll d\). However, one discrepancy highlights an important limitation of our narrow-gap scaling approach, which gives a constant \(\mu\) as \(h_{0}\to 0\) because both \(N\) and \(F\) individually approach constant values. From quadrature, we see that \(N\) and \(F\) both diverge logarithmically, Table 1, but these divergences cancel to give a constant \(\mu\). In considering only a narrow-gap region with \(h\sim h_{0}\), the cumulative effects of the region with a larger gap are neglected, as the Poiseuille pressure drop and Couette drag fall off with increasing gap. Where the gap increases quadratically this assumption is correct [12]. However, it is not strictly correct for a linear profile.

## V Transition from large-gap scaling

To probe how deviations from \(\mu\sim S^{2/3}\) scaling first emerge as \(h_{0}\to d\), which will be important for analysing experiments, consider the 'running exponent' of the Stribeck curve, \[\alpha = \frac{\mathrm{d}\ln\mu}{\mathrm{d}\ln S}=1-\frac{\mathrm{d}\ln F}{\mathrm{d}\ln N}=1-\frac{\mathrm{d}\ln f}{\mathrm{d}\ln g} \tag{37}\] \[= 1-\frac{\mathrm{d}\ln f}{\mathrm{d}z}\left(\frac{\mathrm{d}\ln g}{\mathrm{d}z}\right)^{-1}=1-\frac{1}{f}\frac{\mathrm{d}f}{\mathrm{d}z}\left(\frac{1}{g}\frac{\mathrm{d}g}{\mathrm{d}z}\right)^{-1}.\] \begin{table} \begin{tabular}{c c c c c c} \hline \hline Profile & \(g(z)\propto N\) & \(f(z)\propto F\) & \(S(z)\) & \(\mu(z)\) & \(\mu(S)\) \\ \hline Wedge: Long & \(6\ln z^{-1}\) & \(4\ln z^{-1}\) & \(\frac{1}{6\ln z^{-1}}\) & \(\frac{2}{3}\) & \(\sim S^{0}\) \\ Short & \(\frac{1}{4z^{2}}\) & \(\ln z^{-1}\) & \(4z^{2}\) & \(4z^{2}\ln z^{-1}\) & \(\sim S^{1}\) \\ IHC: Long & \(\frac{3(1-\chi)^{2}}{z}\) & \(\frac{\pi(1-\chi)}{z^{1/2}}\) & \(\frac{z}{3(1-\chi)^{2}}\) & \(\frac{\pi z^{1/2}}{3(1-\chi)}\) & \(\sim S^{1/2}\) \\ Short & \(\frac{1}{4z^{2}}\) & \(\frac{\pi(1-\chi)}{2\sqrt{z}}\) & \(4z^{2}\) & \(2\pi(1-\chi)z^{3/2}\) & \(\sim S^{3/4}\) \\ Step: Long & \(3\chi\) & \(\frac{1-\chi}{z}\) & \(\frac{1}{3\chi}\) & \(\frac{1-\chi}{3\chi z}\) & \(\rightarrow\infty\) \\ \hline \hline \end{tabular} \end{table} Table 1: Small-gap scaling behaviour, \(z\ll 1\), for varying gap profiles, \(\delta(u)\), in both the long- and short-bearing limits (first column) for wedge, inlet–half-cylinder (IHC) and step. Scaling is given for the normalised normal load (\(g\)), tangential load (\(f\)), Sommerfeld number (\(S\)) and friction coefficient (\(\mu\)) in terms of the minimum gap, respectively. The final column is the resultant low-\(z\) scaling of the Stribeck curve, \(\mu(S)\).

### Short-bearing limit

Given \(g(z)\) for all profiles, Eq. (32), and the recurrence relation for the derivative of \(f(z)\), Eq. (29), we find \[\alpha=1-\frac{z(z+1)(2z+1)}{2(3z^{2}+3z+1)}\times\frac{\langle\xi^{-2}\rangle}{\langle\xi^{-1}\rangle}\,. \tag{38}\] The moments of \(\xi\) can be recast explicitly in terms of the gap profile, \(\xi(u)=z+\delta(u)\). In the asymptotic limit of large \(z\), \[\Big\langle\frac{1}{\xi^{n}}\Big\rangle=\frac{1}{z^{n}}\Big[1-\frac{n\langle\delta\rangle}{z}+\mathcal{O}\Big(\frac{1}{z^{2}}\Big)\Big]\,. \tag{39}\] Then, within the short-bearing approximation, \[\alpha=\frac{2}{3}+\frac{\langle\delta\rangle-1/2}{3z}+\mathcal{O}\Big(\frac{1}{z^{2}}\Big)\,.
\tag{40}\] The sign of the initial deviation from \(\alpha=2/3\) depends on the average profile height \(\langle\delta\rangle\). The short inlet–half-cylinder is 'blunt', \(\langle\delta\rangle<1/2\), only when \(\chi\) is small enough, Fig. 8a [light (gold) to dark (blue) for decreasing \(\chi\)], in which case \(\alpha\) initially drops below 2/3 and the Stribeck curve bends upwards from the region of universal \(S^{2/3}\) scaling, mimicking the onset of EHL. The onset of deviations from \(\alpha=2/3\) for the short wedge, Fig. 8a (black dashed), is at significantly smaller \(z\) than for the inlet–half-cylinder because \(\langle\delta\rangle=1/2\), so that deviations, which are always positive, scale as \(z^{-2}\).

Figure 7: Short-bearing load and drag gap dependence with changing step profile. (a) Dimensionless load, \(g(z)=N/(\eta UL^{3}/d^{2})\), as a function of reduced gap, \(z=h_{0}/d\), for all short-bearing profiles. Scaling in large-gap limit \(z\gg 1\), \(g\propto z^{-3}\) (dashed line), and scaling in limit of \(z\ll 1\), \(g\propto z^{-2}\) (dotted line). (b) Dimensionless drag, \(f(z)=F/(\eta ULD/d)\), for wedge gap profile, Eq. (A.1), bold (black) line. Scaling for \(z\gg 1\), \(f\propto z^{-1}\), applies to all profiles (dashed lines); scaling for \(z\ll 1\), \(f\propto\ln z^{-1}\) (dotted line). Inlet–half-cylinder profile, Eq. (A.5), with increasing \(\chi\), dark (blue) to light (gold) fine lines. Scaling for \(z\ll 1\), \(f\propto z^{-1/2}\). See Table 1 for resulting scaling of Stribeck curve \(\mu(S)\) for \(z\to 0\).

Figure 6: Load and drag dependence on gap height for long-bearing conformal 'steps'. Dimensionless load, \(g=N/(\eta ULD^{2}/d^{2})\), and drag, \(f=F/(\eta ULD/d)\), as a function of reduced gap, \(z=h_{0}/d\). (a) Wedge-profile load (solid line) with large-gap limit applying to all profiles, \(\propto z^{-3}\) (dashed line), and profile-specific low-\(z\) limit, \(N\propto\ln(z^{-1})\) (dotted line). (b) Corresponding drag with high-\(z\) limit, \(F\propto z^{-1}\) (dashed line), and low-\(z\) limit, \(F\propto\ln(z^{-1})\) (dotted line). (c) Load for inlet–half-cylinder with increasing inlet length, \(\chi\) (dark to light), see legend in (d) for values. (d) Corresponding drag. (e) Load for Rayleigh step profile with increasing \(\chi\), see (f) for values, and low-\(z\) limit \(\propto z^{-1}\). (f) Corresponding drag.

### Long-bearing limit

In the long-bearing limit, we find a more complex expression, \[\alpha=\frac{2}{3}+\frac{2(\langle u\rangle\langle\delta\rangle^{2}-\langle u\delta\rangle\langle\delta\rangle)+\langle u\delta^{2}\rangle-\langle u\rangle\langle\delta^{2}\rangle}{3(\langle u\rangle\langle\delta\rangle-\langle u\delta\rangle)z}+\mathcal{O}\Big(\frac{1}{z^{2}}\Big)\,. \tag{41}\] To highlight the structure of the answer we have retained \(\langle u\rangle=1/2\), rather than substituting its value. The sign of the initial deviation from \(\alpha=2/3\) is therefore controlled by the numerator of the second term in this expansion [19]. For the wedge, this reduces to \[\alpha=\frac{2}{3}-\frac{17}{90z^{2}}+\mathcal{O}\Big(\frac{1}{z^{3}}\Big)\,. \tag{42}\] So, \(\alpha\) drops below 2/3, Fig. 8b, leading to an upwards deviation of the Stribeck curve from \(S^{2/3}\) scaling. Again, this effect is second order in \(1/z\), as in the short-bearing case. For the inlet–half-cylinder, \[\alpha=\frac{2}{3}-\frac{2(1+5\chi-30\chi^{2})}{45(1+3\chi)}\frac{1}{z}+\mathcal{O}\Big(\frac{1}{z^{2}}\Big)\,. \tag{43}\]
For sufficiently short inlet lengths, _i.e._ \(\chi<(5+\sqrt{145})/60\approx 0.284\), \(\alpha\) is lowered on leaving this large-gap regime, Fig. 8c (dark lines), and the Stribeck curve deviates upwards from the large-gap scaling with decreasing \(S\). Finally, for the Rayleigh step, Fig. 8d, we find \[\alpha=\frac{2}{3}-\frac{1-2\chi}{3z}+\mathcal{O}\left(\frac{1}{z^{2}}\right), \tag{44}\] so that in a similar manner, a suitably 'blunt' step, _i.e._ \(\langle\delta\rangle=\chi<1/2\), will give \(\alpha<2/3\). We therefore arrive at a conclusion for all slider-bearing types and profiles: large-gap scaling is \(\mu\sim S^{2/3}\), and upwards deviation with decreasing \(S\) as the gap approaches the step height requires a small enough \(\langle\delta\rangle\), _i.e._ a small enough inlet.

## VI Interpretation of experimental data

We have shown that for a broad class of surface profiles and bearing dimensions, upwards deviation from \(\mu\sim S^{2/3}\) scaling in the HL regime starts when the gap approaches the step height, _i.e._ \(h_{0}/d\sim 1\). This also applies to various textured surfaces, Fig. 3(a), so that \(d\) can be estimated from measured Stribeck curves. We end by giving a worked example of this procedure. Soltanahmadi _et al._ [13] obtained the Stribeck curves for a range of geometries lubricated by molten chocolate. We show their data for a bio-mimetic tongue against a steel surface, Fig. 9. The \(2\,\mathrm{cm}\times 2\,\mathrm{cm}\) 'tongue' surface consists of 80 hemispherical 'fungiform' papillae (\(500\,\mathrm{\SIUnitSymbolMicro m}\) high) and 800 cylindrical 'filiform' papillae (\(250\,\mathrm{\SIUnitSymbolMicro m}\) high), each of which is textured on the \(10\,\mathrm{\SIUnitSymbolMicro m}\) to \(100\,\mathrm{\SIUnitSymbolMicro m}\) scale (see Fig. 3a in Ref. [20]). Their material parameters and applied normal load of \(1\,\mathrm{N}\) imply that the fungiform papillae will compress \(\approx 200\,\mathrm{\SIUnitSymbolMicro m}\) and bring the more numerous filiform papillae into proximity with the lower surface, at a gap height comparable to the papillae texturing. In the large-gap HL regime, where all papillae have comparable gap heights and the more numerous papilla form dominates, the data presented in Fig. 5B(2) of Ref. [13] can be replotted as the Stribeck curve for a single filiform papilla at a normal load of \(1/800\,\mathrm{N}\), Fig. 9. In the high-\(S\) regime, the scaling-law slope is less than unity (fine dashed line), but is consistent with \(\mu\sim S^{2/3}\) (dashed line); fitting the data at \(S>10^{-3}\) gives \(\alpha=0.6(1)\). This regime can therefore be interpreted in terms of our analysis of the HL of a step slider bearing. The properties of the effective step bearing can be estimated from the point \((\mu^{\star}\simeq 0.17,\ S^{\star}\simeq 6\times 10^{-3})\) at which the Stribeck curve deviates from \(S^{2/3}\) scaling, Fig. 9 (black star). For a long bearing, Eq. (8) shows that when \(h_{0}=d\), we have \[S^{\star}\sim\frac{d^{2}}{D^{2}},\quad\mu^{\star}\sim\frac{d}{D}\quad\Rightarrow\quad\sqrt{S^{\star}}\sim\mu^{\star}\,.
\tag{45}\] With an observed \(S^{\star}\simeq 0.006\), our scaling analysis of the HL regime of a long slider bearing therefore predicts \(\mu^{\star}\sim\sqrt{S^{\star}}=0.08\), which is order-of-magnitude consistent with the observed \(\mu^{\star}\simeq 0.17\) [21]. Taking \(D=350\,\mathrm{\SIUnitSymbolMicro m}\) (the diameter of the cylindrical papillae) then gives an estimate for \(d\), either from \(D\sqrt{S^{\star}}\simeq 30\,\mathrm{\SIUnitSymbolMicro m}\) or from \(D\mu^{\star}\simeq 60\,\mathrm{\SIUnitSymbolMicro m}\), consistent with the actual papillae roughness on the \(30\,\mathrm{\SIUnitSymbolMicro m}\) to \(60\,\mathrm{\SIUnitSymbolMicro m}\) scale [20]. Interestingly, Soltanahmadi _et al._ [13] estimated the gap height in the vicinity of \((S^{\star},\mu^{\star})\) to be \(h_{0}\gtrsim 1\,\mathrm{\SIUnitSymbolMicro m}\) [see their Fig. 5B(1)], using a phenomenological expression [22] modified from an analytical form derived from an analysis of EHL [23], which is significantly smaller than the gap size we have estimated based on our HL analysis. However, the actual measured gap size is \(\simeq 50\,\mathrm{\SIUnitSymbolMicro m}\) (see Ref. [13], Supporting Information, Fig. S8B), which is well within range of our estimate based on the point of deviation from \(S^{2/3}\) scaling being where \(h_{0}\simeq d\), where \(d\simeq 50\,\mathrm{\SIUnitSymbolMicro m}\) to \(100\,\mathrm{\SIUnitSymbolMicro m}\) is the scale of surface texturing.

Figure 8: Running exponent \(\alpha=\mathrm{d}\ln\mu/\mathrm{d}\ln S\) vs gap around \(z\approx 1\); \(\alpha<2/3\) corresponds to upwards deviations in the Stribeck curve. (a) Deviation from large-gap scaling (dotted line) in the short-bearing limit. Dashed lines: bold (black), wedge; fine, inlet–half-cylinder with increasing inlet, \(\chi\), dark (purple) to light (yellow), see legend for values. (b) Long-bearing-limit wedge. (c) Long inlet–half-cylinder with increasing inlet length [dark (purple) to light (yellow)], see legend for values. (d) Long Rayleigh step with increasing inlet length, dark (purple) to light (gold), see legend for values. Calculated from Eq. (37) with \(f\) and \(g\) given in the Appendix.

Figure 9: Stribeck curve for molten chocolate lubricating a bio-mimetic tongue sliding on steel. Re-plotted from Ref. [13] as \(\mu\) against \(S=\eta UD/N\) using \(N=1.25\,\mathrm{mN}\) (see text). Lines: bold dotted, \(S^{2/3}\) scaling; fine dashed, \(\mu\sim S\). Inset schematic: relevant length scales, \(D\sim 350\,\mathrm{\SIUnitSymbolMicro m}\) from papillae diameter, and \(d\sim 30\,\mathrm{\SIUnitSymbolMicro m}\) to \(60\,\mathrm{\SIUnitSymbolMicro m}\) from papillae roughness.

## VII Conclusions

We have shown that the HL of non-conformal and conformal contacts differ fundamentally. For the former, the problem has a single length scale, the minimum gap, giving 'trivial' scaling in the Stribeck curve, \(\mu\sim S\) for ball-on-flat, independent of the absolute gap. In contrast, the HL of conformal contacts is non-trivial due to the presence of two independent length scales, the minimum gap and a surface-texturing or 'step' height, giving anomalous \(\mu\sim S^{2/3}\) large-gap scaling. As the gap decreases below the step height, deviation from \(S^{2/3}\) occurs. 'Blunt' surface profiles, where the gap remains small over more of the bearing area, show an 'upwards' deviation as \(S\) drops, reminiscent of entry into the EHL regime.
We used our approach to deduce the length scale of surface texturing from the data in a recent tribo-rheological study. Large-gap HL is of limited relevance to engineering applications, which are typically designed to function near the minimum of the Stribeck curve. However, this regime becomes important whenever soft matter is involved in lubrication, _e.g._, in the oral processing of food [13], the application of skin cream [24], and the extrusion of ceramic 'green bodies' (the formed objects before firing) [25]. Future work may therefore fruitfully extend our analysis to non-Newtonian fluids with, _e.g._, rate-dependent rheology and normal stress differences.

## Appendix: Gap profile calculations

### Wedge

In dimensionless form the gap, Eq. (11), of the wedge is \[\xi=z+1-u\;. \tag{A.1}\] The averages required for Eqs. (27) and (28) then evaluate to \[\begin{split}&\langle\xi^{-1}\rangle=\ln\frac{z+1}{z}\;,\quad\langle\xi^{-2}\rangle=\frac{1}{z\,(1+z)}\;,\quad\langle\xi^{-3}\rangle=\frac{1+2z}{2z^{2}\,(1+z)^{2}}\;,\\ &\langle u\,\xi^{-2}\rangle=\frac{1}{z}-\ln\frac{z+1}{z}\;,\quad\langle u\,\xi^{-3}\rangle=\frac{1}{2z^{2}\,(1+z)}\;.\end{split} \tag{A.2}\] In the long-bearing limit, the dimensionless functions are \[f=4\ln\frac{1+z}{z}-\frac{6}{1+2z}\;,\quad g=6\ln\frac{1+z}{z}-\frac{12}{1+2z}\;. \tag{A.3}\] In the limit of \(z\to 0\), _i.e._ with \(z+1\to 1\), \[\lim_{z\to 0}\{f,g\}=\{4\ln z^{-1},6\ln z^{-1}\}\;\Rightarrow\;\lim_{z\to 0}\mu=\frac{2}{3}\;\frac{d}{D}\;. \tag{A.4}\] In the short-bearing limit, \(f=\langle\xi^{-1}\rangle\to\ln z^{-1}\) follows trivially, alongside \(g(z)\to 1/(4z^{2})\), as derived in § IV.1.3.

### Inlet–half-cylinder

Recasting the profile, Eq. (19), in dimensionless terms, \[\xi=\left\{\begin{array}{ll}z+1&0\leq u\leq\chi\;,\\ z+(1-u)^{2}/(1-\chi)^{2}&\chi\leq u\leq 1\;.\end{array}\right. \tag{A.5}\] The averages then evaluate to \[\begin{split}&\langle\xi^{-1}\rangle=\frac{\chi}{1+z}+\frac{1-\chi}{\sqrt{z}}\tan^{-1}\frac{1}{\sqrt{z}}\;,\\ &\langle\xi^{-2}\rangle=\frac{\chi}{(1+z)^{2}}+\frac{1-\chi}{2z}\left(\frac{1}{1+z}+\frac{1}{\sqrt{z}}\tan^{-1}\frac{1}{\sqrt{z}}\right)\;,\\ &\langle\xi^{-3}\rangle=\frac{\chi}{(1+z)^{3}}+\frac{1-\chi}{8z^{2}}\left(\frac{3+5z}{(1+z)^{2}}+\frac{3}{\sqrt{z}}\tan^{-1}\frac{1}{\sqrt{z}}\right),\\ &\langle u\,\xi^{-2}\rangle=\frac{\chi^{2}}{2(1+z)^{2}}+\frac{1-\chi}{2z}\left(\frac{\chi}{1+z}+\frac{1}{\sqrt{z}}\tan^{-1}\frac{1}{\sqrt{z}}\right)\;,\\ &\langle u\,\xi^{-3}\rangle=\frac{\chi^{2}}{2(1+z)^{3}}+\frac{1-\chi}{8z^{2}}\left(\frac{1+z+2\chi(1+2z)}{(1+z)^{2}}+\frac{3}{\sqrt{z}}\tan^{-1}\frac{1}{\sqrt{z}}\right)\;.\end{split} \tag{A.6}\] These expressions, used in Eqs. (27) and (28) (long bearing) or Eqs. (32) and (33) (short bearing), parametrically give Fig. 5 [dark (orange) lines]. We do not show the lengthy resulting expressions for \(f(z)\) and \(g(z)\) in this case. For long bearings, a more rigorous analysis of the limits of Eq. (A.6), using \(\tan^{-1}(z^{-1/2})/\sqrt{z}\to\pi/(2\sqrt{z})-1\) as \(z\to 0\), gives \(g(z)\to 3(1-\chi)^{2}/z\), Fig. 6c, and \(f\to\pi(1-\chi)/\sqrt{z}\), Fig. 6d. The factor \(1-\chi\) originates from a shorter inlet making a larger cylinder, \(R=D^{2}(1-\chi)^{2}/2d\). The resultant \(S\) and \(\mu\) are given in Table 1. For short bearings, taking the limits of Eq. (A.6) gives \(f(z)\to\pi(1-\chi)/(2\sqrt{z})\), Fig. 7b.
With the \(g(z)\) for all short bearings, we recover \(\mu\propto S^{3/4}\) scaling with an \(\mathcal{O}(1)\) numerical prefactor for \(\chi\lesssim 0.9\), see Table 1.

### Step

In non-dimensional form, Eq. (21) becomes \[\xi=\left\{\begin{array}{ll}z+1&0\leq u\leq\chi\;,\\ z&\chi\leq u\leq 1\;.\end{array}\right. \tag{A.7}\] The gap averages needed to calculate \(N\) and \(F\), and hence \(\mu(S)\), for this profile are then, rather trivially, \[\langle\xi^{-n}\rangle=\frac{\chi}{(1+z)^{n}}+\frac{1-\chi}{z^{n}}\;,\quad\langle u\,\xi^{-n}\rangle=\frac{\chi^{2}}{2(1+z)^{n}}+\frac{1-\chi^{2}}{2z^{n}}\;. \tag{A.8}\] For the Rayleigh step in the long-bearing limit the dimensionless functions, Fig. 6e and f (dark to light with increasing \(\chi\)), are \[f=\frac{3\chi(1-\chi)}{\chi z^{3}+(1-\chi)(1+z)^{3}}+\frac{\chi}{1+z}+\frac{1-\chi}{z}\;,\quad g=\frac{3\chi(1-\chi)}{\chi z^{3}+(1-\chi)(1+z)^{3}}\;. \tag{A.9}\] In the large-gap regime, while \(S^{2/3}\) scaling is recovered for all \(\chi\), the inlet length does control the magnitude of the high-\(S\) limit. Using the profile, Eq. (A.7), to evaluate \(g(z)\) via Eq. (36), the thick-film scaling limit is \(g\to 3\chi(1-\chi)/z^{3}\), with \(f\to 1/z\) trivially. The resultant Stribeck curve is then \[\mu(S)=\frac{d}{D}\times\frac{S^{2/3}}{[3\chi(1-\chi)]^{1/3}}\,. \tag{A.10}\] Friction is minimised in the large-gap limit when the denominator is maximised, _i.e._ \(\chi=0.5\), and \(\mu\) diverges as \(\chi\to 0\) or \(1\) [26]. Pertinent to our scaling analysis, the \(\chi\)-dependent factor is again of \(\mathcal{O}(1)\) for \(\chi\approx 0.01\) to \(0.99\). As \(z\to 0\), \(g\to 3\chi\) for long bearings, Fig. 6e, as the leading powers cancel, removing the apparent \(z^{-2}\) divergence of § IV.1.3. Meanwhile, the drag force [Eq. (28)] \(f\to(1-\chi)/z\), as only the second terms \(\propto z^{-n}\) in Eq. (A.8) diverge. The small-\(z\) limits are summarised in Table 1. As detailed in § III, while the equations remain analytic for the short bearing, the lubrication approximation breaks down.
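To make the parametric construction of the Stribeck curves concrete, the following sketch (our illustration, not code from the paper) traces the scaled long-bearing curves of Fig. 5a from the closed forms in Eqs. (A.3) and (A.9), and evaluates the running exponent, Eq. (37), for the wedge by numerical differentiation for comparison with the expansion of Eq. (42):

```python
import numpy as np

z = np.logspace(-3, 3, 601)                     # reduced gap z = h0/d

# Long wedge, Eq. (A.3)
f_w = 4 * np.log((1 + z) / z) - 6 / (1 + 2 * z)
g_w = 6 * np.log((1 + z) / z) - 12 / (1 + 2 * z)

# Long Rayleigh step, Eq. (A.9), with inlet length chi = 0.5
chi = 0.5
core = 3 * chi * (1 - chi) / (chi * z**3 + (1 - chi) * (1 + z) ** 3)
g_s, f_s = core, core + chi / (1 + z) + (1 - chi) / z

# Scaled Stribeck curves: S/(d^2/D^2) = 1/g and mu/(d/D) = f/g
S_w, mu_w = 1 / g_w, f_w / g_w
S_s, mu_s = 1 / g_s, f_s / g_s                  # mu_s diverges as scaled S -> 1/(3 chi)

# Running exponent for the wedge, Eq. (37), by numerical differentiation
alpha = np.gradient(np.log(mu_w), np.log(S_w))
for zi in (5.0, 10.0, 50.0):
    i = np.argmin(abs(z - zi))
    print(f"z = {zi:4.0f}: alpha = {alpha[i]:.4f}  vs  Eq. (42): {2/3 - 17/(90*zi**2):.4f}")
```

Plotting \(\mu\) against \(S\) on double-logarithmic axes reproduces the \(S^{2/3}\) regime at large \(S\), the flattening of the wedge curve towards \(\mu=(2/3)(d/D)\), Eq. (A.4), and the low-\(S\) divergence of the step curve as the scaled \(S\to 1/(3\chi)\).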
2309.05415
On the Schur multipliers of Lie superalgebras of maximal class
Let $L$ be a non-abelian nilpotent Lie superalgebra of dimension $(m|n)$. Nayak shows there is a non-negative integer $s(L)$ such that $s(L)=\frac{1}{2}(m+n-2)(m+n-1)+n+1-\dim{\mathcal{M}(L)}$. Here we intend to classify all non-abelian nilpotent Lie superalgebras with $1\leq s(L)\leq 10$. Moreover, we classify the structure of all Lie superalgebras of dimension at most $5$ such that $\dim {L^2}=\dim {\mathcal{M}(L)}$.
Z. Araghi Rostami, P. Niroomand
2023-09-11T12:30:00Z
http://arxiv.org/abs/2309.05415v1
# On the Schur multipliers of Lie superalgebras of maximal class ###### Abstract. Let \(L\) be a non-abelian nilpotent Lie superalgebra of dimension \((m|n)\). Nayak shows there is a non-negative integer \(s(L)\) such that \(s(L)=\frac{1}{2}(m+n-2)(m+n-1)+n+1-\dim\mathcal{M}(L)\). Here we intend to classify all non-abelian nilpotent Lie superalgebras with \(1\leq s(L)\leq 10\). Moreover, we classify the structure of all Lie superalgebras of dimension at most \(5\) such that \(\dim L^{2}=\dim\mathcal{M}(L)\). Key words and phrases: Schur multiplier, Nilpotent Lie superalgebra, Maximal class. _Mathematics Subject Classification 2010._ 17B01, 17B05, 17B30, 19C09. Throughout, all Lie superalgebras and algebras are defined over a unital commutative ring \(\mathbb{K}\). Here, we give some notation on Lie superalgebras following [5]. Let \(\mathbb{Z}_{2}=\{\bar{0},\bar{1}\}\) be a field, and put \((-1)^{\bar{0}}=1\) and \((-1)^{\bar{1}}=-1\). A \(\mathbb{Z}_{2}\)-graded algebra (or superalgebra) \(M\) is a direct sum of algebras \(M_{\bar{0}}\) and \(M_{\bar{1}}\) (\(M=M_{\bar{0}}\oplus M_{\bar{1}}\)), whose elements are called even and odd, respectively. Non-zero elements of \(M_{\bar{0}}\cup M_{\bar{1}}\) are said to be homogeneous. For a homogeneous element \(m\in M_{\bar{\alpha}}\) with \(\bar{\alpha}\in\mathbb{Z}_{2}\), \(|m|=\bar{\alpha}\) is the degree of \(m\). So whenever we use the notation \(|m|\), \(m\) will be a homogeneous element. A subalgebra \(N\) of \(M\) is called a \(\mathbb{Z}_{2}\)-graded subalgebra (or sub-superalgebra) if \(N=N_{\bar{0}}\oplus N_{\bar{1}}\), where \(N_{\bar{0}}=N\cap M_{\bar{0}}\) and \(N_{\bar{1}}=N\cap M_{\bar{1}}\). **Definition 1.1**.: _[_5_]_ _A Lie superalgebra is a superalgebra \(M=M_{\bar{0}}\oplus M_{\bar{1}}\) with a multiplication denoted by \([.,.]\), called the super bracket operation, satisfying the following identities_ 1. \([x,y]=-(-1)^{|x||y|}[y,x],\) 2. \([x,[y,z]]=[[x,y],z]+(-1)^{|x||y|}[y,[x,z]],\) 3. \([m_{\bar{0}},m_{\bar{0}}]=0\) _for all homogeneous elements \(x,y,z\in M\) and \(m_{\bar{0}}\in M_{\bar{0}}\)._ Note that the last identity is easily derived from the first, provided \(2\) is invertible in \(\mathbb{K}\). The second identity is equivalent to the following graded Jacobi identity: \((-1)^{|x||z|}[x,[y,z]]+(-1)^{|y||x|}[y,[z,x]]+(-1)^{|z||y|}[z,[x,y]]=0\). Using the above identities, it can be seen that for a Lie superalgebra \(M=M_{\bar{0}}\oplus M_{\bar{1}}\), the even part \(M_{\bar{0}}\) is a Lie algebra and the odd part \(M_{\bar{1}}\) is an \(M_{\bar{0}}\)-module. Hence if \(M_{\bar{1}}=0\), then \(M\) is a Lie algebra, and if \(M_{\bar{0}}=0\), then \(M\) is an abelian Lie superalgebra (i.e. \([x,y]=0\) for all \(x,y\in M\)). But in general a Lie superalgebra is not a Lie algebra. A sub-superalgebra of \(L\) is a \(\mathbb{Z}_{2}\)-graded vector subspace which is closed under the bracket operation. In particular, \([L,L]\) is a graded subalgebra of \(L\) and is denoted by \(L^{2}\). A \(\mathbb{Z}_{2}\)-graded subspace \(I\) is a graded ideal of \(L\) if \([I,L]\subseteq I\), and the center \(Z(L)=\{z\in L:[z,x]=0\ \text{for all}\ x\in L\}\) is a graded ideal of \(L\). If \(I\) is an ideal of \(L\), the quotient Lie superalgebra \(L/I\) inherits a canonical Lie superalgebra structure such that the natural projection map becomes a homomorphism. The notions of epimorphisms, isomorphisms and automorphisms have the obvious meaning.
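To make Definition 1.1 concrete, both identities can be verified mechanically from a table of structure constants. The following sketch (our illustration; the representation and names are ours) checks graded antisymmetry and the graded Jacobi identity for the \((1|2)\)-dimensional Lie superalgebra with \([\alpha,\alpha]=[\beta,\beta]=a\), which appears in the tables below as \(L^{(1)}_{1,2}\):

```python
import itertools

# Basis of L^(1)_{1,2}: a (even), alpha and beta (odd), with Z_2-degrees:
deg = {"a": 0, "alpha": 1, "beta": 1}
TABLE = {("alpha", "alpha"): {"a": 1}, ("beta", "beta"): {"a": 1}}  # all other brackets vanish

def bracket(v, w):
    """Bilinear super bracket on coefficient dicts {basis element: coefficient}."""
    out = {}
    for (x, cx), (y, cy) in itertools.product(v.items(), w.items()):
        for b, c in TABLE.get((x, y), {}).items():
            out[b] = out.get(b, 0) + cx * cy * c
    return {b: c for b, c in out.items() if c}

def plus(*vs):
    out = {}
    for v in vs:
        for b, c in v.items():
            out[b] = out.get(b, 0) + c
    return {b: c for b, c in out.items() if c}

times = lambda c, v: {b: c * x for b, x in v.items()}
E = {x: {x: 1} for x in deg}                    # basis vectors as coefficient dicts
s = lambda x, y: (-1) ** (deg[x] * deg[y])      # the sign (-1)^{|x||y|}

# (1) graded antisymmetry: [x,y] + (-1)^{|x||y|}[y,x] = 0
for x, y in itertools.product(deg, repeat=2):
    assert plus(bracket(E[x], E[y]), times(s(x, y), bracket(E[y], E[x]))) == {}

# (2) graded Jacobi: [x,[y,z]] = [[x,y],z] + (-1)^{|x||y|}[y,[x,z]]
for x, y, z in itertools.product(deg, repeat=3):
    lhs = bracket(E[x], bracket(E[y], E[z]))
    rhs = plus(bracket(bracket(E[x], E[y]), E[z]),
               times(s(x, y), bracket(E[y], bracket(E[x], E[z]))))
    assert plus(lhs, times(-1, rhs)) == {}

print("Definition 1.1 holds for L^(1)_{1,2}")
```

Here every double bracket lands on the central element \(a\) and then vanishes, so the graded Jacobi identity holds trivially; note also that for odd \(\alpha\) the sign \((-1)^{|\alpha||\alpha|}=-1\) makes identity (1) vacuous for \([\alpha,\alpha]\), which is why a non-zero \([\alpha,\alpha]\) is possible, in contrast to identity (3) for even elements.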
According to the super-dimension structure of Lie superalgebras over a field, we say that \(L=L_{\bar{0}}\oplus L_{\bar{1}}\) is an \((m|n)\) Lie superalgebra if \(\dim L_{\bar{0}}=m\) and \(\dim L_{\bar{1}}=n\). Also, throughout, \(A(m|n)\) denotes an abelian Lie superalgebra with \(\dim A=(m|n)\). The descending central sequence of a Lie superalgebra \(L\) is defined by \(L^{1}=L\) and \(L^{c+1}=[L^{c},L]\) for all \(c\geq 1\). If for some positive integer \(c\), \(L^{c+1}=0\) and \(L^{c}\neq 0\), then \(L\) is called nilpotent with nilpotency class \(c\). Also we have \(|[m,n]|=|m|+|n|\). By a homomorphism between superspaces \(f:V\to W\) of degree \(|f|\in\mathbb{Z}_{2}\), we mean a linear map satisfying \(f(V_{\bar{\alpha}})\subseteq W_{\bar{\alpha}+|f|}\) for \(\bar{\alpha}\in\mathbb{Z}_{2}\). In particular, if \(|f|=\bar{0}\), then the homomorphism \(f\) is called a homogeneous linear map of even degree. A Lie superalgebra homomorphism \(f:M\to N\) is a homogeneous linear map of even degree such that \(f([x,y])=[f(x),f(y)]\) holds for all \(x,y\in M\). Now we present some results related to the Schur multiplier of a Lie superalgebra \(L\). **Theorem 1.2** ([11, Theorem 3.3]).: _Let \(L\) be a Lie superalgebra with \(\dim L=(m|n)\). Then \(\dim\mathcal{M}(L)\leq\frac{1}{2}[(n+m)^{2}+(n-m)]\)._ As a consequence, there is a non-negative integer \(t(L)\) such that \(\dim\mathcal{M}(L)=\frac{1}{2}[(n+m)^{2}+(n-m)]-t(L)\). **Theorem 1.3** ([11, Theorem 3.4]).: _Let \(L\) be a Lie superalgebra with \(\dim L=(m|n)\). Then \(\dim\mathcal{M}(L)=\frac{1}{2}[(n+m)^{2}+(n-m)]\) if and only if \(L\) is abelian._ As a result, we can say that \(t(L)=0\) if and only if \(L\) is abelian. **Theorem 1.4** ([11, Theorem 5.1]).: _Let \(L=L_{\bar{0}}\oplus L_{\bar{1}}\) be a nilpotent Lie superalgebra with \(\dim L=(m|n)\) and \(\dim L^{2}=r+s\), where \(r+s\geq 1\). Then_ \[\dim\mathcal{M}(L)\leq\frac{1}{2}[(m+n+r+s-2)(m+n-r-s-1)]+n+1.\] _Moreover, if \(r+s=1\), then equality holds if and only if \(L\cong H(1,0)\oplus A(m-3|n)\), where \(A(m-3|n)\) is an abelian Lie superalgebra of dimension \((m-3|n)\) and \(H(1,0)\) is the special Heisenberg Lie superalgebra of dimension \((3|0)\)._ Define the function \(s(L)\) by \(s(L)=\frac{1}{2}(m+n-2)(m+n-1)+n+1-\dim\mathcal{M}(L)\), so that \(\dim\mathcal{M}(L)=\frac{1}{2}(m+n-2)(m+n-1)+n+1-s(L)\), where \(s(L)\geq 0\). Moreover, we have \(t(L)=m+n-2+s(L)\). It seems that the classification of nilpotent Lie superalgebras \(L\) by \(s(L)\) leads to the classification of \(L\) in terms of \(t(L)\). **Theorem 1.5**.: _If \(L\) is a finite-dimensional nilpotent Lie superalgebra of dimension greater than \(1\) and class \((p,q)\), then \(\mathcal{M}(L)\neq 0\)._ Proof.: The result follows from [3, Theorem 3.2]. **Definition 1.6** ([15, Definition 4.1]).: _A Lie superalgebra \(L\) is called capable if \(L\cong H/Z(H)\) for a Lie superalgebra \(H\)._ We define \(Z^{*}(L)\) to be the smallest graded ideal in \(L\) such that \(L/Z^{*}(L)\) is capable (see [15]). **Lemma 1.7** ([15, Lemma 4.3]).: _A Lie superalgebra \(L\) is capable if and only if \(Z^{*}(L)=\{0\}\)._ **Theorem 1.8** ([15, Theorem 4.9]).: _Let \(N\) be a central ideal in a Lie superalgebra \(L\). Then the following conditions are equivalent:_ 1. \(\frac{\mathcal{M}(L/N)}{\mathcal{M}(L)}\cong N\cap L^{2}\) 2. \(N\subseteq Z^{*}(L)\) 3. \(\mathcal{M}(L)\to\mathcal{M}(L/N)\) _is a monomorphism._ ## 2. Main Results This section is devoted to obtaining the main results.
**Theorem 2.1**.: _Let \(L\) be a nilpotent Lie superalgebra of dimension greater than \(2\) with \(n\geq 1\) such that \(\dim L^{2}=m+n-2\). Then_ \[(n+m)(n+m-3)+4\leq 2t(L)<(n+m)^{2}+n-m.\] Proof.: Using Theorem 5.4 in [11], we have \(\dim\mathcal{M}(L)\leq m+2n-2\). Since \(\dim\mathcal{M}(L)=\frac{1}{2}[(n+m)^{2}+n-m]-t(L)\), we can see that \[\frac{1}{2}[(n+m)^{2}+n-m]-t(L)\leq m+2n-2.\] Thus, \((n+m)(n+m-3)+4\leq 2t(L)\). Now, Theorem 1.5 implies that \(\dim\mathcal{M}(L)>0\), so \(\frac{1}{2}[(n+m)^{2}+n-m]-t(L)>0\). Hence \(2t(L)<(n+m)^{2}+n-m\), and the result follows. **Theorem 2.2**.: _Let \(L\) be a nilpotent Lie superalgebra of dimension greater than \(2\) with \(n\geq 1\) such that \(\dim L^{2}=m+n-2\). Then_ \[(n+m)(n+m-5)+8\leq 2s(L)<(n+m)(n+m-1)-2m+4.\] Proof.: Since \(t(L)=(n+m-2)+s(L)\), by using Theorem 2.1 we have \[(n+m)(n+m-3)+4\leq 2(n+m-2)+2s(L)<(n+m)^{2}+(n-m).\] Thus, \[(n+m)(n+m-3)+4-2(n+m-2) \leq 2s(L)\] \[<(n+m)^{2}+(n-m)-2(n+m-2).\] So we have \((n+m)(n+m-5)+8\leq 2s(L)\) and \(2s(L)<(n+m)(n+m-1)-2m+4\), and the result is obtained. Now we need the classification of nilpotent Lie superalgebras of dimension at most \(5\) which are not Lie algebras. N. Backhouse and N. L. Matiadou, in [1] and [9] respectively, classified these Lie superalgebras into two types: trivial and non-trivial. Note that a Lie superalgebra \(L\) is trivial if \([L_{\bar{1}},L_{\bar{1}}]=0\), and non-trivial otherwise. Following the notation of [1], we denote the elements of \(L_{\bar{0}}\) (resp. \(L_{\bar{1}}\)) by Latin letters (resp. Greek letters) taken from the beginning of the alphabet. Using this classification, we have the following Lie superalgebras of maximal class (\(\dim L^{2}=m+n-2\)). We compute their Schur multipliers in Theorem 2.3.
Table 1, \((1,2)\)-Lie superalgebras \begin{tabular}{l l l} \hline Name & Relations & \(\dim\mathcal{M}(L)\) \\ \hline Trivial LS & & \\ \(L_{1,2}^{(3)}\) & \([a,\beta]=\alpha\) & 2 \\ Non-Trivial LS & & \\ \(L_{1,2}^{(1)}\) & \([\alpha,\alpha]=a,[\beta,\beta]=a\) & 2 \\ \(L_{1,2}^{(2)}\) & \([\alpha,\alpha]=a,[\beta,\beta]=-a\) & 2 \\ \hline \end{tabular} Table 2, \((1,3)\)-Lie superalgebras \begin{tabular}{l l l} \hline Name & Relations & \(\dim\mathcal{M}(L)\) \\ \hline Trivial LS & & \\ \(L_{1,3}^{(5)}\) & \([a,\beta]=\alpha,[a,\gamma]=\beta\) & 3 \\ \hline \end{tabular} Table 3, \((2,2)\)-Lie superalgebras \begin{tabular}{l l l} \hline Name & Relations & \(\dim\mathcal{M}(L)\) \\ \hline Non-Trivial LS & & \\ \(L_{2,2}^{(9)}\) & \([\alpha,\alpha]=a,[\beta,\beta]=b\) & 2 \\ \(L_{2,2}^{(10)}\) & \([\alpha,\alpha]=a,[\beta,\beta]=b,[\alpha,\beta]=a\) & 1 \\ \(L_{2,2}^{(11)}\) & \([\alpha,\alpha]=a,[\beta,\beta]=b,\) & 1 \\ & \([\alpha,\beta]=p(a+b)\ ;\ p>0\) & \\ \(L_{2,2}^{(12)}\) & \([\alpha,\alpha]=a,[\beta,\beta]=b,\) & 1 \\ & \([\alpha,\beta]=p(a-b)\ ;\ p>0\) & \\ \hline \end{tabular} Table 4, \((1,4)\)-Lie superalgebra \begin{tabular}{l l l} \hline Name & Relations & \(\dim\mathcal{M}(L)\) \\ \hline Trivial LS & & \\ \(E^{22}\) & \([a,\alpha]=\beta,[a,\beta]=\gamma,[a,\gamma]=\delta\) & 6 \\ \hline \end{tabular} Table 5, \((3,2)\)-Lie superalgebra \begin{tabular}{l l l} \hline Name & Relations & \(\dim\mathcal{M}(L)\) \\ \hline Non-Trivial LS & & \\ \(3A_{1,1}+2A\) & \([\alpha,\alpha]=a,[\beta,\beta]=b,[\alpha,\beta]=c\) & \(3\) \\ \hline \end{tabular} Table 6, \((2,3)\)-Lie superalgebras \begin{tabular}{l l l} \hline Name & Relations & \(\dim\mathcal{M}(L)\) \\ \hline Non-Trivial LS & & \\ \((D^{15}+A_{1,1})^{1}\) & \([a,\beta]=\alpha,[a,\gamma]=\beta,[\gamma,\gamma]=b\) & \(1\) \\ \((D^{15}+A_{1,1})^{2}\) & \([a,\beta]=\alpha,[a,\gamma]=\beta,[\beta,\beta]=b,\) & \(2\) \\ & \([\alpha,\gamma]=-b\) & \\ \((D^{15}+A_{1,1})^{3}\) & \([a,\beta]=\alpha,[a,\gamma]=\beta,[\beta,\beta]=b,\) & \(3\) \\ & \([\gamma,\gamma]=b,[\alpha,\gamma]=-b\) & \\ \((D^{15}+A_{1,1})^{4}\) & \([a,\beta]=\alpha,[a,\gamma]=\beta,[\beta,\beta]=b,\) & \(2\) \\ & \([\gamma,\gamma]=-b,[\alpha,\gamma]=-b\) & \\ \hline \end{tabular} According to the presentations in the previous tables, we now compute the Schur multipliers of these Lie superalgebras. **Theorem 2.3**.: _The Schur multipliers of the above Lie superalgebras are given in Table 7._ \begin{tabular}{l l l l} \hline \(LS\) & \(\mathcal{M}(L)\) & \(LS\) & \(\mathcal{M}(L)\) \\ \hline \(L^{(1)}_{1,2}\) & \(A(2|0)\) & \((D^{15}+A_{1,1})^{3}\) & \(A(2|1)\) \\ \hline \(L^{(2)}_{1,2}\) & \(A(2|0)\) & \(L^{(10)}_{2,2}\) & \(A(1|0)\) \\ \hline \(L^{(3)}_{1,2}\) & \(A(1|1)\) & \(L^{(11)}_{2,2}\) & \(A(1|0)\) \\ \hline \(L^{(9)}_{2,2}\) & \(A(1|1)\) & \(L^{(12)}_{2,2}\) & \(A(1|0)\) \\ \hline \((D^{15}+A_{1,1})^{2}\) & \(A(1|1)\) & \(E^{22}\) & \(A(5|1)\) \\ \hline \((D^{15}+A_{1,1})^{4}\) & \(A(1|1)\) & \(3A_{1,1}+2A\) & \(A(1|2)\) \\ \hline \(L^{(5)}_{1,3}\) & \(A(2|1)\) & \((D^{15}+A_{1,1})^{1}\) & \(A(0|1)\) \\ \hline \end{tabular} Proof.: We give the proof in detail for \(L^{(3)}_{1,2}\) and \((D^{15}+A_{1,1})^{4}\); the remaining cases are proved in a similar way. Let \(L\cong L^{(3)}_{1,2}\). According to Definition 1.3 in [17], \(\alpha\wedge\alpha=\alpha\wedge\beta=0\). Thus \(L\wedge L=\langle a\wedge\alpha,a\wedge\beta,\beta\wedge\beta\rangle\).
Hence for all \(w\in\mathcal{M}(L)\), there exist \(\alpha_{1},\alpha_{2},\alpha_{3}\in\mathbb{R}\) such that \(w=\alpha_{1}(a\wedge\alpha)+\alpha_{2}(a\wedge\beta)+\alpha_{3}(\beta\wedge\beta)\). Now let \(\tilde{\kappa}:L\wedge L\rightarrow[L,L]\) be given by \(x\wedge y\mapsto[x,y]\). Since \(\tilde{\kappa}(w)=0\), we have \(\alpha_{1}[a,\alpha]+\alpha_{2}[a,\beta]+\alpha_{3}[\beta,\beta]=0\), so \(\alpha_{2}\alpha=0\) and \(\alpha_{2}=0\). Thus \(w=\alpha_{1}(a\wedge\alpha)+\alpha_{3}(\beta\wedge\beta)\). Therefore \(\mathcal{M}(L)=\langle a\wedge\alpha,\beta\wedge\beta\rangle\) and \(\dim\mathcal{M}(L)=2\). On the other hand, by Definition 1.3 in [17], \(\tilde{\kappa}\) is a homogeneous linear map of even degree. Also, we know \(|[a,\alpha]|=|a|+|\alpha|=0+1=1\) and \(|[\beta,\beta]|=|\beta|+|\beta|=1+1=0\). Hence \(a\wedge\alpha\) and \(\beta\wedge\beta\) are odd and even elements of \(L\wedge L\), respectively. Finally, since the Schur multiplier of a Lie superalgebra is abelian, we have \(\mathcal{M}(L)\cong A(1|1)\). Let \(L\cong(D^{15}+A_{1,1})^{4}\). By Definition 1.3 in [17], we have \(a\wedge b=b\wedge\alpha=b\wedge\beta=b\wedge\gamma=\alpha\wedge\alpha=\alpha\wedge\beta=\beta\wedge\gamma=0\) and \(\beta\wedge\beta=-\gamma\wedge\alpha\). Thus \(L\wedge L=\langle a\wedge\alpha,a\wedge\beta,a\wedge\gamma,\alpha\wedge\alpha,\gamma\wedge\gamma\rangle\). Hence for all \(w\in\mathcal{M}(L)\), there exist \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}\in\mathbb{R}\) such that \(w=\alpha_{1}(a\wedge\alpha)+\alpha_{2}(a\wedge\beta)+\alpha_{3}(a\wedge\gamma)+\alpha_{4}(\alpha\wedge\alpha)+\alpha_{5}(\gamma\wedge\gamma)\). Now let \(\tilde{\kappa}:L\wedge L\rightarrow[L,L]\) be given by \(x\wedge y\mapsto[x,y]\). Since \(\tilde{\kappa}(w)=0\), we have \(\alpha_{1}[a,\alpha]+\alpha_{2}[a,\beta]+\alpha_{3}[a,\gamma]+\alpha_{4}[\alpha,\alpha]+\alpha_{5}[\gamma,\gamma]=0\), so \(-\alpha_{1}\beta+\alpha_{2}\alpha+(\alpha_{4}+\alpha_{5})b=0\), giving \(\alpha_{1}=\alpha_{2}=0\) and \(\alpha_{5}=-\alpha_{4}\). Thus \(w=\alpha_{3}(a\wedge\gamma)+\alpha_{4}(\alpha\wedge\alpha-\gamma\wedge\gamma)\). Therefore \(\mathcal{M}(L)=\langle a\wedge\gamma,\alpha\wedge\alpha-\gamma\wedge\gamma\rangle\) and \(\dim\mathcal{M}(L)=2\). On the other hand, by Definition 1.3 in [17], \(\tilde{\kappa}\) is a homogeneous linear map of even degree. Also, we know \(|[a,\gamma]|=1\), \(|[\alpha,\alpha]|=0\) and \(|[\gamma,\gamma]|=0\). Hence \(a\wedge\gamma\) and \(\alpha\wedge\alpha-\gamma\wedge\gamma\) are odd and even elements of \(L\wedge L\), respectively. Finally, since the Schur multiplier of a Lie superalgebra is abelian, we have \(\mathcal{M}(L)\cong A(1|1)\). Now we want to classify non-abelian nilpotent Lie superalgebras of maximal class for \(1\leq s(L)\leq 10\). **Theorem 2.4**.: _Let \(L\) be a non-abelian \((m|n)\)-dimensional nilpotent Lie superalgebra of maximal class with \(m+n\geq 3\).
Then \(1\leq s(L)\leq 10\) if and only if \(L\) is isomorphic to one of the Lie superalgebras listed in Table 8._ Table 8, Summary of Results \begin{tabular}{l l} \hline \(s(L)\) & Name \\ \hline 1 & There is no such Lie superalgebra \\ \hline 2 & \(L^{(1)}_{1,2},L^{(2)}_{1,2},L^{(3)}_{1,2}\) \\ \hline 3 & There is no such Lie superalgebra \\ \hline 4 & \(L^{(5)}_{1,3},L^{(9)}_{2,2}\) \\ \hline 5 & \(L^{(10)}_{2,2},L^{(11)}_{2,2},L^{(12)}_{2,2},E^{22}\) \\ \hline 6 & \(3A_{1,1}+2A\) \\ \hline 7 & \((D^{15}+A_{1,1})^{3}\) \\ \hline 8 & \((D^{15}+A_{1,1})^{2},(D^{15}+A_{1,1})^{4}\) \\ \hline 9 & \((D^{15}+A_{1,1})^{1}\) \\ \hline 10 & There is no such Lie superalgebra \\ \hline \end{tabular} Proof.: Using Theorem 2.2, if \(s(L)=1\) then \(m+n=3\) and \(\dim\mathcal{M}(L)=3\), so by looking at Table 1, there is no such Lie superalgebra. Let \(s(L)=2\). By looking at Tables 1, 2 and 3, if \(m+n=3\), then \(L\cong L^{(1)}_{1,2}\), \(L^{(2)}_{1,2}\) or \(L^{(3)}_{1,2}\). Also, when \(m+n=4\), there is no such Lie superalgebra satisfying our condition. Let \(s(L)=3\). By looking at Tables 1, 2 and 3, if \(m+n=3,4\), there is no such Lie superalgebra in these cases. Now, let \(s(L)=4\). Then \(m+n=4\) and, by using Tables 2 and 3, we have \(L\cong L^{(5)}_{1,3}\) or \(L^{(9)}_{2,2}\). Similarly, for \(5\leq s(L)\leq 10\), we obtain the classification of non-abelian nilpotent Lie superalgebras of maximal class and dimension at most \(5\). The summary of the results is given in Table 8. Now we want to determine the structure of all \((m|n)\)-dimensional nilpotent Lie superalgebras \(L\) with \(\dim L^{2}=\dim\mathcal{M}(L)=m+n-2\) for \(m+n\leq 5\). **Proposition 2.5**.: _Let \(L\) be an \((m|n)\)-dimensional nilpotent Lie superalgebra such that \(\dim L^{2}=\dim\mathcal{M}(L)=m+n-2\) and \(m+n\leq 5\). Then \(L\) is isomorphic to one of the nilpotent Lie superalgebras \(L^{(9)}_{2,2}\), \(3A_{1,1}+2A\) or \((D^{15}+A_{1,1})^{3}\)._ Proof.: By looking at the classification of all nilpotent Lie superalgebras of dimension at most \(5\) in [1, 9], we see that the only Lie superalgebras whose derived subalgebra has dimension \(m+n-2\) are those listed in Tables 1 to 6. By a similar argument to the proof of Theorem 2.3, we have \(\dim\mathcal{M}(L^{(9)}_{2,2})=2\), \(\dim\mathcal{M}(3A_{1,1}+2A)=3\) and \(\dim\mathcal{M}((D^{15}+A_{1,1})^{3})=3\). Hence \(L\) must be isomorphic to one of the nilpotent Lie superalgebras \(L^{(9)}_{2,2}\), \(3A_{1,1}+2A\) or \((D^{15}+A_{1,1})^{3}\). **Proposition 2.6**.: _Let \(L\) be an \((m|n)\)-dimensional nilpotent Lie superalgebra of maximal class. If \(\dim\mathcal{M}(L)=\dim L^{2}\), then \(L\) is capable._ Proof.: According to Proposition 2.5, \(L\) is isomorphic to one of the Lie superalgebras \(L^{(9)}_{2,2}\), \(3A_{1,1}+2A\) and \((D^{15}+A_{1,1})^{3}\). Now, by contradiction, let \(L\cong 3A_{1,1}+2A\) be non-capable. By Lemma 1.7, there is a non-zero element \(x\) in \(Z^{*}(L)\). Two cases can be considered. First, if \(x\) is an even element of \(Z^{*}(L)\), then, according to Table 3, \(L/\langle x\rangle\) is isomorphic to one of the \((2|2)\)-dimensional Lie superalgebras \(L^{(9)}_{2,2},L^{(10)}_{2,2},L^{(11)}_{2,2},L^{(12)}_{2,2}\), whose Schur multipliers have dimension \(1\) or \(2\). So in this case the map \(\mathcal{M}(L)\to\mathcal{M}(L/\langle x\rangle)\) is not injective, which contradicts part \((iii)\) of Theorem 1.8. Hence \(L\) is capable.
Second, let \(x\) be an odd element of \(Z^{*}(L)\). Then, by using Backhouse's classification in [1], \(L/\langle x\rangle\cong L^{(1)}_{3,1}=\langle a,b,c,\alpha\mid[b,c]=a,[b,\alpha]=\alpha\rangle\), and, similarly to Theorem 2.3, it can be shown that \(\dim\mathcal{M}(L^{(1)}_{3,1})=2\). Thus the map \(\mathcal{M}(L)\to\mathcal{M}(L/\langle x\rangle)\) is not injective and, similarly, we have a contradiction. Therefore, in this case too, \(L\) is capable. It can also be easily and similarly seen that \(L^{(9)}_{2,2}\) and \((D^{15}+A_{1,1})^{3}\) are capable.
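As a quick consistency check of Table 8, the defining relation \(s(L)=\frac{1}{2}(m+n-2)(m+n-1)+n+1-\dim\mathcal{M}(L)\) can be evaluated directly from the superdimensions and the multiplier dimensions recorded in Tables 1-7. The short script below is our illustration; it reproduces the values of \(s(L)\) stated in Table 8:

```python
def s(m, n, dim_M):
    # s(L) = (m+n-2)(m+n-1)/2 + n + 1 - dim M(L)
    return (m + n - 2) * (m + n - 1) // 2 + n + 1 - dim_M

# (m, n, dim M(L)) read off Tables 1-7
cases = {
    "L^(1)_{1,2}":       (1, 2, 2),   # Table 8: s = 2
    "L^(5)_{1,3}":       (1, 3, 3),   # s = 4
    "L^(10)_{2,2}":      (2, 2, 1),   # s = 5
    "E^22":              (1, 4, 6),   # s = 5
    "3A_{1,1}+2A":       (3, 2, 3),   # s = 6
    "(D^15+A_{1,1})^3":  (2, 3, 3),   # s = 7
    "(D^15+A_{1,1})^1":  (2, 3, 1),   # s = 9
}
for name, (m, n, dM) in cases.items():
    print(f"{name:18s} s(L) = {s(m, n, dM)}")
```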
2309.12169
Estimation of the angular position of a two-wheeled balancing robot using a real IMU with selected filters
A low-cost measurement system using filtering of measurements for two-wheeled balancing robot stabilisation purposes has been addressed in this paper. In particular, a measurement system based on gyroscope, accelerometer, and encoder has been considered. The measurements have been corrected for deterministic disturbances and then filtered with Kalman, $\alpha$-$\beta$ type, and complementary filters. A quantitative assessment of selected filters has been given. As a result, the complete structure of a measurement system has been obtained. The performance of the proposed measurement system has been validated experimentally by using a dedicated research rig.
Krzysztof Laddach, Rafał Łangowski, Tomasz Zubowicz
2023-09-21T15:26:06Z
http://arxiv.org/abs/2309.12169v1
###### Abstract

A low-cost measurement system using filtering of measurements for two-wheeled balancing robot stabilisation purposes has been addressed in this paper. In particular, a measurement system based on a gyroscope, an accelerometer, and an encoder has been considered. The measurements have been corrected for deterministic disturbances and then filtered with Kalman, \(\alpha\)-\(\beta\) type, and complementary filters. A quantitative assessment of the selected filters has been given. As a result, the complete structure of a measurement system has been obtained. The performance of the proposed measurement system has been validated experimentally by using a dedicated research rig.

Estimation of the angular position of a two-wheeled balancing robot using a real IMU with selected filters

Krzysztof Laddach\({}^{a}\), Rafał Łangowski\({}^{a,*}\) and Tomasz Zubowicz\({}^{a}\) \({}^{a}\)Department of Electrical Engineering, Control Systems and Informatics, Gdańsk University of Technology, ul. G. Narutowicza 11/12, 80-233 Gdańsk, Poland

## 1 Introduction

A two-wheeled balancing robot is a single-axle mobile vehicle with a centre of mass located above the wheel rotation axis, enabling tilt in only one axis [1]. This type of construction has been gaining popularity in recent years, especially in the field of commercial applications, which include, e.g., the Segway, hoverboards, etc. The basic functionality (the main control goal) of a two-wheeled balancing robot is to enable its movement (control of linear velocity) while stabilising it, i.e. keeping the robot in a vertical position (control of angular position) [1, 2, 3, 4]. Fundamentally, the goal of two-wheeled balancing robot control coincides with the inverted pendulum control problem. Thus, many of the approaches used in solving the inverted pendulum stabilisation problem are applicable to two-wheeled balancing robot control, e.g., [5, 6, 7, 8, 9, 10]. In order to enable the basic functionality of a two-wheeled balancing robot, an adequate control system is needed. This control system, apart from using a suitable control technique, depends on the information provided by the measurement system (measuring devices) and requires a properly selected actuator system, typically electric drives. Hence, one of the crucial factors which should be considered during control system design is the availability and quality of measurement information. Thus, this paper focuses on measurement issues in the control feedback loop for two-wheeled balancing robot stabilisation purposes. To accomplish the stabilisation control objective, at least the angular position of a two-wheeled balancing robot, i.e. the value of the angle of tilt of the robot from the vertical axis, must be known. This information is provided to the feedback loop by the measurement system. The measurement system can be interpreted as an assemblage of (hard) sensors and (optionally) algorithms used to enhance or augment the information provided, e.g., by applying filters or estimators (soft sensors) [11, 12, 13, 14]. Typically, a measurement system that provides high-quality measurements is an expensive investment. This issue is particularly important in the case of the mentioned commercial applications. A typical solution to this problem is based on using cheaper sensors or a smaller number of them.
Unfortunately, in both cases, the resulting quality of the provided measurements can be significantly decreased and consequently insufficient for control purposes. Clearly, the measurement information can be influenced by measurement noise or errors. The widespread approach to cope with this problem is based on the use of estimation (filtration). Hence, by exploiting the soft-sensor approach it is possible to find a trade-off between the cost of a measurement system and the quality of measurement information. Thus, in this paper, a low-cost measurement system using filtering of measurements for two-wheeled balancing robot stabilisation purposes is further considered. In the considered application domain, an Inertial Measurement Unit (IMU) is typically the core of the measurement system. An IMU is usually composed of various configurations of gyroscopes, accelerometers, and magnetometers. In this paper, the IMU consists of gyroscopes and accelerometers. Thus, the required information regarding the angular position of the two-wheeled balancing robot is determined based on measurements from the gyroscopes and accelerometers. The gyroscope delivers measurements of the angular velocity of the two-wheeled balancing robot. Moreover, these measurements can be integrated in time, assuming knowledge of the initial conditions, which allows computing the gyroscope's orientation [15, 16, 17, 18]. Precision gyroscopes, e.g., those based on optical phenomena such as the ring laser [19], are too expensive and bulky for the aforementioned applications, including two-wheeled balancing robots. Therefore, cheaper gyroscopes of the Micro-Electro-Mechanical Systems (MEMS) type are commonly used [20, 19]. Besides low purchase and operating costs, MEMS sensors are characterised by solid construction with small size and weight. In addition, they feature low power consumption, short start-up time, and high reliability. Moreover, MEMS gyroscopes and accelerometers are capable of providing inertial-grade measurements of angular velocity and acceleration even for long-range navigation systems [19]. However, they are also characterised by a lower quality of the measurements provided. Also, the integration of measurement errors leads to an accumulating error in the calculated orientation, which prevents the proper measurement of the absolute orientation of the gyroscope [21]. Hence, an additional accelerometer (or magnetometer) is required to measure a gravitational (respectively, magnetic) field vector with known orientation in space. However, these measurements are subject to interference from the accelerometer's non-ideal characteristics (or imprecision) and from the motion of the two-wheeled balancing robot. Therefore, besides the appropriate collection of measurements from the considered sensors, filtration and correction of the measurements are necessary [22]. The aim of filtration is to remove stochastic measurement interference, whereas the purpose of correction is to improve the measurements by removing deterministic interference. Hence, the correction of measurements consists of taking into account the accelerations resulting from sensor movement, as well as identifiable measurement errors such as non-linearity or bias [22]. In the literature, various models of stochastic interference can be found; nevertheless, the most widespread seems to be the Gauss-Markov model [23, 24, 25, 26]. Naturally, this also involves an extensive volume of literature on filtration [26, 27, 28, 22, 29, 30, 21, 31, 32].
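To make the stochastic-interference model tangible, the following minimal sketch simulates a first-order Gauss-Markov process of the kind commonly used to model slowly drifting sensor bias. The time constant, standard deviation, and function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gauss_markov(n, dt, tau=300.0, sigma=0.01, seed=0):
    """Simulate a first-order Gauss-Markov process b(k), i.e.
    db/dt = -b/tau + w, often used to model slowly drifting
    sensor bias on top of white measurement noise."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)            # discrete-time correlation factor
    q = sigma * np.sqrt(1.0 - phi**2)  # driving-noise std giving stationary std = sigma
    b = np.zeros(n)
    for k in range(1, n):
        b[k] = phi * b[k - 1] + q * rng.standard_normal()
    return b
```

A trace generated this way can be added to a clean angular-velocity signal to emulate the slowly varying gyroscope bias discussed above.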
The most common types of filters used in this task are: the Kalman filter [27, 33], the \(\alpha\)-\(\beta\) filter and its extensions [30], and the complementary filter [29, 33]. The popularity of the complementary and \(\alpha\)-\(\beta\) filters is due to their simplicity and computational efficiency, which translates into their performance and a reduced need for microprocessor power. The Kalman filter, in turn, provides the optimal estimates of the angle of tilt of a two-wheeled balancing robot from the vertical axis, but only if certain assumptions are met [27, 19]. Works on the correction of interference in measurements made by the accelerometer and gyroscope can be found in, e.g., [23, 25, 24, 34, 35, 36, 37, 38, 39]. In this paper, the correction of measurements uses an additional measurement of the two-wheeled balancing robot's (progressive) linear position provided by an encoder. It should be noticed that, in the case of balancing robots, this does not introduce an additional cost to the measurement system, because most control systems use this information anyway. For the filtering of the measurements, the following are used: the Kalman filter, a family of \(\alpha\)-\(\beta\) filters, and the complementary filter. Thus, a novelty presented in the paper is the comparison of correction combined with distinct measurement-filtration mechanisms in a single measurement system. Hence, the main contribution of this paper is the investigation of a low-cost measurement system with correction and filtration of the provided measurements for the stabilisation of a two-wheeled balancing robot. Moreover, a performance comparison of the selected filters, including attention to the 'misuse' of the 'classical' Kalman filter, is presented. Each filter has been designed and then implemented in the constructed two-wheeled balancing robot. To that end, optimised (minimum-covariance) infinite impulse response filters have been put up against minimum-covariance, linear, and unbiased filtering implemented using the recursive Kalman filter. In the case of the former solutions, this involves the \(\alpha\)-\(\beta\) type filters and the complementary filter. The obtained results have been quantitatively assessed using a typical measure, i.e. the mean square error (MSE). The paper is organised as follows. The problem statement is presented in section 2. Section 3 includes the description of the measurement system. Next, the experimental framework and the results obtained are discussed in section 4. The paper is concluded in section 5.

## 2 Problem Statement

Consider \(\mathbf{y}\), \(\tilde{\mathbf{y}}\), \(\overline{\mathbf{y}}\), \(\mathbf{y}_{\mathrm{m}}\), \(\mathbf{u}^{*}\), and \(\mathbf{u}^{*\mathrm{a}}\) to denote the vectors of: real measurements, sensor outputs, corrected measurements, measurements provided by the measurement system (after correction and filtration), control signals generated by the stabilisation control system, and control signals applied to the two-wheeled balancing robot, respectively. The general structure of the two-wheeled balancing robot stabilisation control system is shown in Fig. 1. As can be noticed in Fig. 1, the measurement system consists of two main parts. The first (the physical/hardware layer) includes the sensors, most notably gyroscopes and accelerometers; as mentioned in section 1, these are made in MEMS technology. The second (the software layer) primarily includes the correction and filtration mechanisms.
The physical layer of the measurement system is described by the so-called measurement equation, which, in general form, yields:

\[\tilde{\mathbf{y}}(k)=\mathbf{y}(k)+\mathbf{f}(k)+\mathbf{g}(k), \tag{1}\]

where: \(k\) is the discrete time instant; \(\mathbf{f}(k)\) represents the deterministic measurement interference; \(\mathbf{g}(k)\) represents the stochastic measurement interference.

Figure 1: General structure of the two-wheeled balancing robot stabilisation control system

Thus, the measurements \(\tilde{\mathbf{y}}\) are assumed to be disturbed by deterministic and stochastic interference. This necessitates their correction and implies the existence of the second layer of the measurement system. The software layer of the measurement system comprises correction and filtration. A detailed description of the measurement correction, which is based on the adopted model of deterministic interference, is presented in sections 3.1-3.3. Signal filtering, on the other hand, is performed using a selected filter, the designs of which are presented in section 3.4. As a result of the above operations, the measurement system provides the measurements \(\mathbf{y}_{\mathrm{m}}\). It is easy to notice that in this case the vector \(\mathbf{y}_{\mathrm{m}}\) is an estimate of the two-wheeled balancing robot's angular position, which is the necessary measurement information for the stabilisation control system. The deliberations and conclusions provided in the following sections of the manuscript have been obtained under the following conditions. For the purpose of estimation (filtration), the following model of two-wheeled balancing robot kinematics has been adopted [40]:

\[\begin{split}\phi(t)&=\phi_{0}+\dot{\phi}_{0}t+\frac{\ddot{\phi}_{0}t^{2}}{2},\\ \dot{\phi}(t)&=\dot{\phi}_{0}+\ddot{\phi}_{0}t,\\ \ddot{\phi}(t)&=\ddot{\phi}_{0},\end{split} \tag{2}\]

where: \(t\) is the time instant; \(\phi(t)\) denotes the angle of tilt of the two-wheeled balancing robot from the vertical axis (angular position) at time instant \(t\); \(\dot{\phi}(t)\) signifies the angular velocity at time instant \(t\); \(\ddot{\phi}(t)\) stands for the angular acceleration at time instant \(t\); \(\phi_{0}\), \(\dot{\phi}_{0}\), and \(\ddot{\phi}_{0}\) are the corresponding initial values. In this research work it is assumed that the angular position \(\phi(\cdot)\) represents the real value of the angle of tilt of the two-wheeled balancing robot from the vertical axis. Given (1), and taking into account that the filtration system operates in discrete time, (2) is discretised by the Euler method as follows:

\[\begin{split}\phi(k)&=\phi(k-1)+\dot{\phi}(k-1)\Delta t+\frac{\ddot{\phi}(k-1)\Delta t^{2}}{2},\\ \dot{\phi}(k)&=\dot{\phi}(k-1)+\ddot{\phi}(k-1)\Delta t,\\ \ddot{\phi}(k)&=\ddot{\phi}(k-1),\end{split} \tag{3}\]

where \(\Delta t\) is the discretisation time-step. The equations (3) can be re-written in the general vector-matrix form:

\[\mathbf{x}(k)=\mathbf{A}\mathbf{x}(k-1)+\mathbf{B}\mathbf{u}(k-1), \tag{4}\]
\[\mathbf{y}(k)=\mathbf{C}\mathbf{x}(k). \tag{5}\]

Equation (4) stands for the equation of the system dynamics, whereas (5) is the measurement (observation) equation. Hence, \(\mathbf{x}(k)\), \(\mathbf{u}(k-1)\), and \(\mathbf{y}(k)\) represent the state vector, the input vector, and the output vector of the system, respectively. Their contents, as well as the sizes and elements of the matrices \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{C}\), change with the considered filter.
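For illustration, a minimal sketch of the discretised kinematics (3) in the state-space form (4)-(5). The time-step value is an assumption, and the matrices shown correspond to the full three-state model before any filter-specific simplification.

```python
import numpy as np

dt = 0.01  # discretisation time-step, illustrative value [s]

# Constant-angular-acceleration kinematics (3) in the form (4)-(5),
# with state x = [phi, dphi, ddphi]^T and no external input (B = 0).
A = np.array([[1.0, dt, dt**2 / 2.0],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])  # only the tilt angle is observed here

def predict(x):
    """One prediction step x(k) = A x(k-1) of the kinematics model."""
    return A @ x
```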
As aforementioned, the performance of the selected filters, which consequently translates into the quality of the information provided by the measurement system, is quantitatively assessed using the mean square error of estimation, expressed by:

\[\text{MSE}=\frac{1}{m}\sum_{k=1}^{m}\left(\phi(k)-\hat{\phi}(k)\right)^{2}, \tag{6}\]

where \(\hat{(\cdot)}\) denotes the estimate of a given variable and \(m\) signifies the number of discrete time instants. To summarise, a low-cost measurement system with correction and filtration of the provided measurements for the stabilisation of a two-wheeled balancing robot is obtained. The entire measurement system has been implemented in the constructed two-wheeled balancing robot.
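Equation (6) translates directly into code; a one-function NumPy sketch:

```python
import numpy as np

def mse(phi, phi_hat):
    """Mean square error (6) between the real tilt angle phi
    and its estimate phi_hat over m samples."""
    phi, phi_hat = np.asarray(phi), np.asarray(phi_hat)
    return float(np.mean((phi - phi_hat) ** 2))
```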
## 3 Measurement system

The detailed structure of the developed measurement system is shown in Fig. 2. As can be noticed, besides the aforementioned sensors, i.e. the gyroscopes and accelerometers (MEMS) and the encoder, and the filters, it also includes several other elements; their detailed description is provided later in this section. As mentioned, at the output of the measurement system there are estimates \(\hat{\phi}(\cdot)\) of the angular position of the two-wheeled balancing robot.

Figure 2: The detailed structure of the measurement system

### Gyroscope

The simplified structure of the MEMS gyroscope is shown in Fig. 3. The particular symbols used in it denote the following: \(C\) denotes the capacity, \(\overline{F_{\mathrm{e}}}\) is the Euler force, \(\overline{F_{\mathrm{c}}}\) denotes the centrifugal force, \(\overline{V}\) signifies the horizontal velocity of the vibrating mass, and \(\overline{\omega}\) stands for the angular velocity of the sensor. The measurements of the angular velocity of the two-wheeled balancing robot are obtained from this gyroscope by measuring the capacity \(C\), which changes due to the motion of the inner frame relative to the outer frame. In the inner frame, the Coriolis force acts on the moving (oscillating, vibrating) proof mass, which causes movement of the inner frame. The inner frame can move only orthogonally to the direction of the proof mass vibration. To eliminate the influence of the inertia forces acting on the oscillating mass, which result from the gyroscope's progressive motion in the vibration direction, there are two sets of frames in the sensor in which the masses vibrate in anti-phase to each other. The signals from both sets are then added, and the in-phase component generated by linear acceleration is subtracted [41, 19]. A more detailed description of the gyroscope's build and operation can be found in [19, 23].

Figure 3: The internal structure of the MEMS gyroscope

According to (1), the equation for measuring the angular velocity from a gyroscope in the selected axis can be written as follows [19]:

\[\begin{split}\tilde{\dot{\phi}}_{z}(k)&=\dot{\phi}_{z}(k)+S_{\dot{\phi}}\left(\dot{\phi}_{z}(k)\right)+M_{x}\dot{\phi}_{x}(k)+M_{y}\dot{\phi}_{y}(k)\\ &\quad+b_{\dot{\phi}}+B_{x}a_{x}(k)+B_{y}a_{y}(k)+B_{xy}a_{x}(k)a_{y}(k)\\ &\quad+\nu_{\dot{\phi}},\end{split} \tag{7}\]

where: \(\tilde{\dot{\phi}}_{z}(k)\) denotes the measured value of the angular velocity in the z axis at time instant \(k\); \(\dot{\phi}_{z}(k)\), \(\dot{\phi}_{x}(k)\), \(\dot{\phi}_{y}(k)\) are the real values of the angular velocities in the z, x, and y axes at time instant \(k\), respectively; \(a_{x}(k)\), \(a_{y}(k)\) stand for the accelerations in the x and y axes at time instant \(k\), respectively; \(b_{\dot{\phi}}\) is the acceleration-insensitive bias; \(B_{x}\), \(B_{y}\) are the acceleration-sensitive bias coefficients in the x and y axes, respectively; \(B_{xy}\) signifies the anisoelastic bias coefficient; \(M_{x}\), \(M_{y}\) denote the cross-coupling coefficients in the x and y axes, respectively; \(\nu_{\dot{\phi}}\) is the zero-mean random bias; \(S_{\dot{\phi}}\left(\dot{\phi}_{z}(k)\right)\) stands for the scale-factor error, which may be expressed as a polynomial in \(\dot{\phi}_{z}(k)\) to represent the scale-factor non-linearities. However, for MEMS gyroscopes of non-pendulous design, which are built from three single-axis gyroscopes, it is reasonable to expect that the cross-axis coupling factors and the vibro-pendulous errors are insignificant [23, 25, 24]. Moreover, the scale-factor error arises mainly from temperature changes and the resulting changes in the characteristics of the magnetic materials in the sensor [19]. Assuming that the constructed two-wheeled balancing robot will move at a relatively constant temperature, it is possible to neglect the scale-factor error. Thus, (7) can be re-written as follows:

\[\tilde{\dot{\phi}}_{z}(k)=\dot{\phi}_{z}(k)+b_{\dot{\phi}}+\nu_{\dot{\phi}}. \tag{8}\]

Because (8) contains no elements corresponding to phenomena in axes other than the z axis, the z index is omitted in the remainder of this paper. The bias \(b_{\dot{\phi}}\) is a non-zero value of the gyroscope's output despite the lack of an applied input rotation. It may be caused by various effects, including residual torques from flexible leads within the gyroscope, spurious magnetic fields, and temperature gradients. Moreover, this bias is independent of both the rotational and the progressive sensor movement [19]. Thus, the dynamics of \(b_{\dot{\phi}}\) is commonly assumed to be zero, i.e. the value of the bias is assumed to be constant [19, 23, 25, 24]. Hence, the corrected measurements \(\overline{\dot{\phi}}(k)\) of the angular velocity are calculated from:

\[\overline{\dot{\phi}}(k)=\tilde{\dot{\phi}}(k)-b_{\dot{\phi}}. \tag{9}\]
### Accelerometer

The MEMS accelerometer belongs to the group of capacitive accelerometers. The principle of operation of capacitive accelerometers is analogous to that of the MEMS gyroscopes (see section 3.1), except that the proof mass motion is caused directly by the inertia force, and the proof mass is not vibrated by electrical forces. A more detailed description can be found in [19, 23, 38]. In the task of determining the value of the angle of tilt of the two-wheeled balancing robot from the vertical axis on a two-dimensional plane, the accelerometer measurements of the accelerations associated with the two axes, i.e. \(\mathrm{x}^{\prime}\) and \(\mathrm{y}^{\prime}\), are sufficient. Clearly, to calculate the angle of tilt, it is necessary to know the lengths of the component vectors of the gravitational acceleration (associated with the force of gravity \(\overline{Q}\)), which are projected on the axes of the two-wheeled balancing robot's reference system (\(\mathrm{x}^{\prime}\) and \(\mathrm{y}^{\prime}\)). However, in addition to the components of the gravitational acceleration, the accelerometer also measures the accelerations resulting from the Coriolis, inertial \(\overline{F_{\mathrm{i}}}\), Euler \(\overline{F_{\mathrm{e}}}\), and centrifugal \(\overline{F_{\mathrm{c}}}\) forces. It is assumed that the velocities of the proof mass occurring in the accelerometer are so small and their duration so short (the proof mass stabilises in the position where the applied forces cancel each other) that the Coriolis force acting on the proof mass is omitted from consideration. The remaining forces are shown in Fig. 4, where the other symbols, i.e. \(\overline{R}\), \(\overline{V_{\mathrm{t}}}\), and \(\overline{a_{\mathrm{t}}}\), stand for the position vector and the translational (in the x axis) velocity and acceleration of the two-wheeled balancing robot, respectively.

Figure 4: Distribution of forces on a moving sensor

According to (1), the equation for measuring the acceleration from an accelerometer in the selected axis can be written as follows [19]:

\[\tilde{a}_{x^{\prime}}(k)=a_{x^{\prime}}(k)+S_{a_{x^{\prime}}}\left(a_{x^{\prime}}(k)\right)+M_{y}a_{y^{\prime}}(k)+M_{z}a_{z^{\prime}}(k)+b_{a_{x^{\prime}}}+b_{\mathrm{v}}a_{x^{\prime}}(k)a_{y^{\prime}}(k)+\nu_{a_{x^{\prime}}}, \tag{10}\]

where: \(\tilde{a}_{x^{\prime}}(k)\) is the measured value of the acceleration in the \(\mathrm{x}^{\prime}\) axis at time instant \(k\); \(a_{x^{\prime}}(k)\), \(a_{y^{\prime}}(k)\), \(a_{z^{\prime}}(k)\) denote the real values of the accelerations in the \(\mathrm{x}^{\prime}\), \(\mathrm{y}^{\prime}\), and \(\mathrm{z}^{\prime}\) axes at time instant \(k\), respectively; \(S_{a_{x^{\prime}}}\left(a_{x^{\prime}}(k)\right)\) signifies the scale-factor error, usually expressed in polynomial form to include non-linear effects; \(M_{y}\), \(M_{z}\) are the cross-coupling coefficients in the y and z axes, respectively; \(b_{a_{x^{\prime}}}\) denotes the constant bias; \(b_{\mathrm{v}}\) stands for the vibro-pendulous error coefficient; \(\nu_{a_{x^{\prime}}}\) is the random noise, whose expected value is assumed to be zero (see [19]). However, as for the gyroscope, for an accelerometer based on MEMS technology, of non-pendulous design, and consisting of three single-axis accelerometers (one in each axis), the cross-axis coupling factors and the vibro-pendulous errors are insignificant [23, 25, 24, 39]. Thus, (10) can be re-written as follows:

\[\tilde{a}_{x^{\prime}}(k)=a_{x^{\prime}}(k)+S_{a_{x^{\prime}}}\left(a_{x^{\prime}}(k)\right)+b_{a_{x^{\prime}}}+\nu_{a_{x^{\prime}}}. \tag{11}\]

Next, to compensate for these errors, a calibration must be made to provide the values of \(b_{a_{x^{\prime}}}\) and \(S_{a_{x^{\prime}}}\left(a_{x^{\prime}}(k)\right)\). A detailed description of this operation is contained in section 4.
Hence, the corrected measurements \(\overline{a}_{x^{\prime}}(k)\) of the acceleration in the \(\mathrm{x}^{\prime}\) axis are calculated from:

\[\overline{a}_{x^{\prime}}(k)=\tilde{a}_{x^{\prime}}(k)-b_{a_{x^{\prime}}}-S_{a_{x^{\prime}}}\left(\tilde{a}_{x^{\prime}}(k)-b_{a_{x^{\prime}}}\right), \tag{12}\]

and similarly in the \(\mathrm{y}^{\prime}\) axis from:

\[\overline{a}_{y^{\prime}}(k)=\tilde{a}_{y^{\prime}}(k)-b_{a_{y^{\prime}}}-S_{a_{y^{\prime}}}\left(\tilde{a}_{y^{\prime}}(k)-b_{a_{y^{\prime}}}\right). \tag{13}\]

### Other components

According to Fig. 2, the corrected value of the angle of tilt of the two-wheeled balancing robot from the vertical axis (the corrected angular position) can be determined as follows:

\[\overline{\phi}(k)=\operatorname{arctg}_{\mathrm{d}}\left(\frac{\overline{a}_{x^{\prime}}(k)+a_{\mathrm{e}}(k)+a_{\mathrm{t},x^{\prime}}(k)}{\overline{a}_{y^{\prime}}(k)+a_{\mathrm{c}}(k)-a_{\mathrm{t},y^{\prime}}(k)}\right), \tag{14}\]

where: \(\overline{\phi}(k)\) is the corrected angular position at time instant \(k\) in \([^{\circ}]\); \(\operatorname{arctg}_{\mathrm{d}}(\cdot)\) denotes the arc tangent function returning degrees; \(a_{\mathrm{e}}(k)\) stands for the Euler acceleration associated with the Euler force at time instant \(k\) [42]; \(a_{\mathrm{c}}(k)\) signifies the centrifugal acceleration associated with the centrifugal force at time instant \(k\) [42]; \(a_{\mathrm{t},x^{\prime}}(k)\), \(a_{\mathrm{t},y^{\prime}}(k)\) are the accelerations resulting from the translational acceleration \(\overline{a_{\mathrm{t}}}\) of the two-wheeled balancing robot in the \(\mathrm{x}^{\prime}\) and \(\mathrm{y}^{\prime}\) axes at time instant \(k\), respectively. The value of the centrifugal acceleration can be calculated as follows:

\[a_{\mathrm{c}}(k)=\left(\overline{\dot{\phi}}_{\mathrm{r}}(k)\right)^{2}R, \tag{15}\]

where: \(\overline{\dot{\phi}}_{\mathrm{r}}(k)=\frac{\pi}{180}\overline{\dot{\phi}}(k)\) is the corrected measurement \(\overline{\dot{\phi}}(k)\) of the angular velocity at time instant \(k\) expressed in \([\mathrm{rad/s}]\); \(R=0.135\) m denotes the distance of the sensor from the axis of rotation. In turn, the value of the Euler acceleration can be determined as:

\[a_{\mathrm{e}}(k)=\overline{\ddot{\phi}}_{\mathrm{r}}(k)R, \tag{16}\]

where \(\overline{\ddot{\phi}}_{\mathrm{r}}(k)\) is the angular acceleration in \([\mathrm{rad/s^{2}}]\). The value of \(\overline{\ddot{\phi}}_{\mathrm{r}}(k)\) is not measured; therefore, it is calculated as a discrete derivative of the corrected angular velocity \(\overline{\dot{\phi}}_{\mathrm{r}}(k)\) (see Fig. 2). However, the measurements of the angular velocity are burdened with measurement noise, which precludes direct differentiation. Thus, to determine \(\overline{\ddot{\phi}}_{\mathrm{r}}(k)\), the value \(\overline{\dot{\phi}}_{\mathrm{r}}(k)\) is first passed through the following discrete first-order low-pass filter:

\[\overline{\dot{\phi}}_{\mathrm{r,f}}(k)=\frac{\overline{\dot{\phi}}_{\mathrm{r}}(k)\Delta t+\overline{\dot{\phi}}_{\mathrm{r,f}}(k-1)T_{\dot{\phi}}}{\Delta t+T_{\dot{\phi}}}, \tag{17}\]

where \(T_{\dot{\phi}}\) is the time constant of the low-pass filter (see section 4). Hence, the value of \(\overline{\ddot{\phi}}_{\mathrm{r}}(k)\) yields:

\[\overline{\ddot{\phi}}_{\mathrm{r}}(k)=\frac{\overline{\dot{\phi}}_{\mathrm{r,f}}(k)-\overline{\dot{\phi}}_{\mathrm{r,f}}(k-1)}{\Delta t}. \tag{18}\]

The values of the accelerations \(a_{\mathrm{t},x^{\prime}}(k)\) and \(a_{\mathrm{t},y^{\prime}}(k)\) are calculated from the translational acceleration of the two-wheeled balancing robot \(a_{\mathrm{t}}(k)\). This acceleration is calculated discretely as a derivative of the translational velocity \(V_{\mathrm{t}}(k)\), which is obtained from the measurements of the additional encoder mounted on the axis of the two-wheeled balancing robot's wheels as follows [43]:

\[V_{\mathrm{t}}(k)=\frac{2\pi R_{\mathrm{w}}}{N\Delta t}n, \tag{19}\]

where: \(R_{\mathrm{w}}=0.0375\) m is the radius of the two-wheeled balancing robot's wheel; \(n\) denotes the number of counted pulses from the encoder; \(N\) stands for the number of signal changes per shaft rotation. It should be added that the errors resulting from the encoder measurements are not considered in this paper, because their values are much smaller than the error values of the other considered sensors. Moreover, it is assumed that the considered two-wheeled balancing robot control system is tuned well enough to avoid wheel slippage. However, the measurements of the progressive displacement have a step characteristic; indeed, \(n\) is a number of counted pulses, which is always an integer. Calculating the derivative of such a signal would not bring useful information. Therefore, the velocity is first filtered through the following low-pass filter:

\[V_{\mathrm{t,f}}(k)=\frac{V_{\mathrm{t}}(k)\Delta t+V_{\mathrm{t,f}}(k-1)T_{\mathrm{V}}}{\Delta t+T_{\mathrm{V}}}, \tag{20}\]

where \(T_{\mathrm{V}}\) is the time constant of the low-pass filter (see section 4). Thus, the value of the translational acceleration is then calculated using a discrete Euler derivative:

\[a_{\mathrm{t}}(k)=\frac{V_{\mathrm{t,f}}(k)-V_{\mathrm{t,f}}(k-1)}{\Delta t}. \tag{21}\]

Knowing the value of the translational acceleration, its projection on the axes of the reference system associated with the sensor must be performed. To do so, the corrected angular position must be used; since its current value is only just being calculated, the previous value is used as follows:

\[\begin{split}a_{\mathrm{t},x^{\prime}}(k)&=a_{\mathrm{t}}(k)\cos\left(\overline{\phi}(k-1)\right),\\ a_{\mathrm{t},y^{\prime}}(k)&=a_{\mathrm{t}}(k)\sin\left(\overline{\phi}(k-1)\right).\end{split} \tag{22}\]
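Putting (14)-(16) and (22) together, the correction layer can be sketched as follows. This is an illustrative transcription, with `np.arctan2` standing in for \(\operatorname{arctg}_{\mathrm{d}}\) and all argument names assumed:

```python
import numpy as np

R = 0.135  # distance of the IMU from the rotation axis [m], as in (15)

def corrected_tilt(ax_c, ay_c, omega_r, alpha_r, a_t, phi_prev_deg):
    """Corrected tilt angle (14) in degrees, from bias/scale-corrected
    accelerometer readings ax_c, ay_c [m/s^2], corrected angular velocity
    omega_r [rad/s], angular acceleration alpha_r [rad/s^2] per (18), and
    the encoder-based translational acceleration a_t [m/s^2] per (21)."""
    a_c = omega_r**2 * R                # centrifugal acceleration (15)
    a_e = alpha_r * R                   # Euler acceleration (16)
    phi_prev = np.deg2rad(phi_prev_deg)
    a_tx = a_t * np.cos(phi_prev)       # projections (22), previous angle used
    a_ty = a_t * np.sin(phi_prev)
    return np.degrees(np.arctan2(ax_c + a_e + a_tx, ay_c + a_c - a_ty))
```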
### Selected filters

As aforementioned, the Kalman filter, the family of \(\alpha\)-\(\beta\) filters, and the complementary filter have been used for the filtering (estimation) of the measurements. The structure of the selected filters is shown in Fig. 5.

Figure 5: The structure of selected filters - block 'Selected filter' in Fig. 2

In turn, the dynamics of a selected filter, in general form, can be described by the equation:

\[\hat{\mathbf{x}}(k)=\left[\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A}\right]\hat{\mathbf{x}}(k-1)+\left[\mathbf{B}-\mathbf{K}\mathbf{C}\mathbf{B}\right]\mathbf{u}(k-1)+\mathbf{K}\overline{\mathbf{y}}(k), \tag{23}\]

where \(\mathbf{K}\) is the gains matrix of the selected filter. The estimation error \(\mathbf{e}(k)\) is defined as follows:

\[\mathbf{e}(k)=\mathbf{x}(k)-\hat{\mathbf{x}}(k). \tag{24}\]

It is easy to show that the dynamics of the estimation error can be written as follows:

\[\mathbf{e}(k)=\left[\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A}\right]\mathbf{e}(k-1)+\mathbf{K}\left[\mathbf{y}(k)-\overline{\mathbf{y}}(k)\right]. \tag{25}\]

To ensure the filter stability, the following must hold:

\[\forall_{\lambda_{i}\in\operatorname{eig}(\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A})}\quad|\lambda_{i}|<1, \tag{26}\]

where: \(\lambda_{i}\in\operatorname{eig}(\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A})\) is the \(i\)th eigenvalue of the matrix \(\left[\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A}\right]\); \(\operatorname{eig}(\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A})\) denotes the spectrum of the matrix \(\left[\mathbf{A}-\mathbf{K}\mathbf{C}\mathbf{A}\right]\).
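Condition (26) is straightforward to verify numerically; a minimal sketch:

```python
import numpy as np

def is_stable(A, K, C):
    """Check the stability condition (26): all eigenvalues of
    [A - K C A] must lie strictly inside the unit circle."""
    M = A - K @ C @ A
    return bool(np.all(np.abs(np.linalg.eigvals(M)) < 1.0))
```

For example, for the \(\alpha\)-\(\beta\)-WB filter of section 3.4.2, `A = [[1, -dt], [0, 1]]`, `C = [[1, 0]]`, and `K = [[alpha], [beta]]`.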
#### 3.4.1 Kalman filter

The filtration method which ensures the optimal MSE of estimation for linear plants was developed by Rudolf E. Kalman in 1960 [27]. In general, the Kalman filter is a discrete observer in which the gain values of the correction part are calculated in each iteration of the filter, as the solution of an optimisation task. Assuming that the measurements are burdened only with Gaussian noise, this solution provides a minimal MSE, in the considered case between the corrected angular position and its estimate. It should be noticed that, in fact, the Kalman filter has no degrees of freedom, i.e. there are no adjustable parameters in it. The covariance matrices given when initialising the filter are derived from the interference characteristics and should be calculated from an analysis of the Gaussian noise. However, this task poses many problems, especially when the actual measurements are also affected by other types of interference, e.g. bias, or when the noise does not have a clearly Gaussian character, which often results in empirical tuning of the Kalman filter. A detailed description of the Kalman filter can be found in [27, 19]. In view of the above, two approaches to Kalman filter design are taken under consideration in this paper: 'Kalman' and 'Kalman*'. The first assumes access to a priori knowledge of the interference; the second assumes limited knowledge of the interference characteristics. Hence, in the first, the covariance matrices of the estimated variables \(\mathbf{Q}\) and of the measurements \(\mathbf{R}\) are determined using the interference analysis. In turn, in the second ('Kalman*'), the values of the particular elements of the matrices \(\mathbf{Q}\) and \(\mathbf{R}\) have been selected by optimisation. It is decided that the Kalman filter will provide estimates of the angular position \(\hat{\phi}(k)\) and the gyroscope bias \(\hat{b}_{\dot{\phi}}\), whereas the corrected angular velocity measurement \(\overline{\dot{\phi}}(k)\) is treated as an input to the kinematics model. This approach has been taken because it is the most common one for the task of filtering the angle of tilt of a two-wheeled balancing robot from the vertical axis. Moreover, (3) is simplified by assuming that \(\ddot{\phi}(k)=0\), whereas the dynamics of the bias \(\hat{b}_{\dot{\phi}}\) is assumed to be zero. Thus, the prediction equations can be written as follows:

\[\begin{split}\hat{\mathbf{x}}(k|k-1)&=\mathbf{A}\hat{\mathbf{x}}(k-1)+\mathbf{B}\overline{\dot{\phi}}(k-1),\\ \hat{\phi}(k|k-1)&=\mathbf{C}\hat{\mathbf{x}}(k|k-1),\end{split} \tag{27}\]

where:

\[\hat{\mathbf{x}}(\cdot)=\begin{bmatrix}\hat{\phi}(\cdot)\\ \hat{b}_{\dot{\phi}}(\cdot)\end{bmatrix},\ \mathbf{A}=\begin{bmatrix}1&-\Delta t\\ 0&1\end{bmatrix},\ \mathbf{B}=\begin{bmatrix}\Delta t\\ 0\end{bmatrix},\ \mathbf{C}=\begin{bmatrix}1\\ 0\end{bmatrix}^{\mathrm{T}}. \tag{28}\]

The gains matrix (Kalman matrix) is of the form \(\mathbf{K}=\begin{bmatrix}k_{1}&k_{2}\end{bmatrix}^{\mathrm{T}}\), and it is calculated in each cycle of the filter. For this purpose, in the first step the covariance matrix \(\mathbf{P}\) is calculated as follows:

\[\mathbf{P}(k|k-1)=\mathbf{A}\mathbf{P}(k-1)\mathbf{A}^{\mathrm{T}}+\mathbf{Q}, \tag{29}\]

where:

\[\mathbf{Q}=\begin{bmatrix}q_{1}\Delta t&0\\ 0&q_{2}\end{bmatrix}, \tag{30}\]

and \(q_{1}\), \(q_{2}\) are the variances of \(\hat{\phi}(\cdot)\) and \(\hat{b}_{\dot{\phi}}(\cdot)\), respectively. Next:

\[\mathbf{K}(k)=\begin{bmatrix}k_{1}(k)\\ k_{2}(k)\end{bmatrix}=\mathbf{P}(k|k-1)\mathbf{C}^{\mathrm{T}}\left(\mathbf{C}\mathbf{P}(k|k-1)\mathbf{C}^{\mathrm{T}}+\mathbf{R}\right)^{-1}, \tag{31}\]

where \(\mathbf{R}=[r]\), and \(r\) is the variance of the measurements. Then the vector of estimates and the covariance matrix are subject to the correction:

\[\begin{split}\hat{\mathbf{x}}(k|k)&=\hat{\mathbf{x}}(k|k-1)+\mathbf{K}(k)\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right),\\ \mathbf{P}(k|k)&=\left(\mathbf{I}-\mathbf{K}(k)\mathbf{C}\right)\mathbf{P}(k|k-1),\end{split} \tag{32}\]

where \(\mathbf{I}\) is an identity matrix of the proper size. The initial values of the elements of the matrix \(\mathbf{P}\) have been calculated in such a way that in the first iteration of the filter the correction gains take the values of the optimal \(\alpha\)-\(\beta\)-WB filter parameters (see section 3.4.2). This has been done because of the similarity of the kinematic models assumed in these two filters.
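A minimal sketch of one cycle of the filter described by (27)-(32); the covariance matrices `Q` (built as in (30)) and `R` are passed in, and all names are illustrative:

```python
import numpy as np

def kalman_step(x, P, gyro_rate, tilt_meas, dt, Q, R):
    """One cycle of the tilt Kalman filter (27)-(32): state x = [phi, b]
    (tilt angle and gyroscope bias), input gyro_rate = corrected angular
    velocity, measurement tilt_meas = corrected accelerometer tilt angle.
    Q is 2x2 as in (30), R is the 1x1 measurement-variance matrix."""
    A = np.array([[1.0, -dt], [0.0, 1.0]])
    B = np.array([dt, 0.0])
    C = np.array([[1.0, 0.0]])
    # prediction (27) and covariance propagation (29)
    x = A @ x + B * gyro_rate
    P = A @ P @ A.T + Q
    # gain (31)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    # correction (32)
    x = x + K @ (np.array([tilt_meas]) - C @ x)
    P = (np.eye(2) - K @ C) @ P
    return x, P
```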
#### 3.4.2 The family of \(\alpha\)-\(\beta\) filters

The \(\alpha\)-\(\beta\) filter is a particular example of the Kalman filter in which the (correction) gains matrix is fixed and calculated outside the filter algorithm. There are many types of \(\alpha\)-\(\beta\) filters; however, they are based on the same principle: the prediction is based on the dynamic model equations, and next, the predicted state is updated by a correction based on the measurements. Among the various types of \(\alpha\)-\(\beta\) filters used in orientation-estimation tasks, the difference lies in the adopted kinematics model. One of the simplest is the \(\alpha\)-\(\beta\) filter providing estimates of the angular position and velocity; this filter is described first in this section.

**i) \(\alpha\)-\(\beta\) filter without velocity bias estimation (\(\alpha\)-\(\beta\)-WOB)**

In this filter, model (3) is simplified by assuming that \(\ddot{\phi}(k)=0\), and the estimated variables are the angular position \(\hat{\phi}(k)\) and the angular velocity \(\hat{\dot{\phi}}(k)\). Thus, the prediction equations take the form:

\[\begin{split}\hat{\phi}(k|k-1)&=\hat{\phi}(k-1)+\hat{\dot{\phi}}(k-1)\Delta t,\\ \hat{\dot{\phi}}(k|k-1)&=\hat{\dot{\phi}}(k-1).\end{split} \tag{33}\]

In turn, in the correction phase, the predictions are corrected by the equations:

\[\begin{split}\hat{\phi}(k|k)&=\hat{\phi}(k|k-1)+\alpha\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right),\\ \hat{\dot{\phi}}(k|k)&=\hat{\dot{\phi}}(k|k-1)+\beta\left(\overline{\dot{\phi}}(k)-\hat{\dot{\phi}}(k|k-1)\right).\end{split} \tag{34}\]

Once the above equations are combined, equation (23) is obtained, in which the vectors and matrices take the form:

\[\hat{\mathbf{x}}(k)=\begin{bmatrix}\hat{\phi}(k)\\ \hat{\dot{\phi}}(k)\end{bmatrix},\ \mathbf{A}=\begin{bmatrix}1&\Delta t\\ 0&1\end{bmatrix},\ \mathbf{B}=0,\ \mathbf{C}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\ \mathbf{K}=\begin{bmatrix}\alpha&0\\ 0&\beta\end{bmatrix}. \tag{35}\]

**ii) \(\alpha\)-\(\beta\) filter with velocity bias (\(\alpha\)-\(\beta\)-WB)**

In this filter, model (3) is again simplified by assuming that \(\ddot{\phi}(k)=0\). However, due to the imperfect constancy of the \(b_{\dot{\phi}}\) value over time, an estimate of its value is also calculated in this filter. For this purpose, the bias dynamics is assumed as \(\dot{b}_{\dot{\phi}}(t)=0\), which in discrete time gives \(b_{\dot{\phi}}(k)=b_{\dot{\phi}}(k-1)\). The estimate of the bias is initialised with the value calculated in section 4. The direct measurement of the angular velocity is treated as an input to the model. Thus, the prediction equations take the form:

\[\begin{split}\hat{\phi}(k|k-1)&=\hat{\phi}(k-1)+\left(\overline{\dot{\phi}}(k-1)-\hat{b}_{\dot{\phi}}(k-1)\right)\Delta t,\\ \hat{b}_{\dot{\phi}}(k|k-1)&=\hat{b}_{\dot{\phi}}(k-1),\end{split} \tag{36}\]

whereas the correction phase equations take the form:

\[\begin{split}\hat{\phi}(k|k)&=\hat{\phi}(k|k-1)+\alpha\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right),\\ \hat{b}_{\dot{\phi}}(k|k)&=\hat{b}_{\dot{\phi}}(k|k-1)+\beta\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right).\end{split} \tag{37}\]

The vectors and matrices of equation (23) take the form:

\[\hat{\mathbf{x}}(k)=\begin{bmatrix}\hat{\phi}(k)\\ \hat{b}_{\dot{\phi}}(k)\end{bmatrix},\ \mathbf{A}=\begin{bmatrix}1&-\Delta t\\ 0&1\end{bmatrix},\ \mathbf{B}=\begin{bmatrix}\Delta t\\ 0\end{bmatrix},\ \mathbf{C}=\begin{bmatrix}1&0\end{bmatrix},\ \mathbf{K}=\begin{bmatrix}\alpha\\ \beta\end{bmatrix}. \tag{38}\]

One way of using the angular velocity measurement is to treat it as an input to the kinematics equations, as shown above.
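A minimal sketch of one \(\alpha\)-\(\beta\)-WB cycle (36)-(37); the default gains are the optimised values reported for \(\Delta t=2\) ms in Table 2, and the function name is assumed:

```python
def alpha_beta_wb_step(phi_hat, b_hat, gyro_rate, tilt_meas, dt,
                       alpha=0.00185, beta=-0.00018):
    """One cycle of the alpha-beta-WB filter: the gyroscope rate drives
    the prediction (36) and the accelerometer tilt angle drives the
    correction (37) of both the angle and the rate-bias estimates."""
    # prediction (36)
    phi_pred = phi_hat + (gyro_rate - b_hat) * dt
    # correction (37)
    innov = tilt_meas - phi_pred
    phi_hat = phi_pred + alpha * innov
    b_hat = b_hat + beta * innov
    return phi_hat, b_hat
```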
In another filter from the \(\alpha\)-\(\beta\) family, the angular velocity measurement is used differently, as described next.

**iii) \(\alpha\)-\(\beta\)-\(\theta\)-\(\gamma\) filter**

Similarly to the \(\alpha\)-\(\beta\) filters described previously, in this filter the kinematics is simplified by assuming \(\ddot{\phi}(k)=0\); the bias is not estimated either. In contrast, this time both measured values are used, and only in the correction phase. Thus, the prediction phase consists of the equations:

\[\begin{split}\hat{\phi}(k|k-1)&=\hat{\phi}(k-1)+\Delta t\hat{\dot{\phi}}(k-1),\\ \hat{\dot{\phi}}(k|k-1)&=\hat{\dot{\phi}}(k-1),\end{split} \tag{39}\]

whereas the correction phase is extended by using both measurements in both of its equations:

\[\begin{split}\hat{\phi}(k|k)&=\hat{\phi}(k|k-1)+\alpha\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right)\\ &\quad+\theta\Delta t\left(\overline{\dot{\phi}}(k)-\hat{\dot{\phi}}(k|k-1)\right),\\ \hat{\dot{\phi}}(k|k)&=\hat{\dot{\phi}}(k|k-1)+\frac{\beta}{\Delta t}\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right)\\ &\quad+\gamma\left(\overline{\dot{\phi}}(k)-\hat{\dot{\phi}}(k|k-1)\right).\end{split} \tag{40}\]

This filter fits equation (23) with the following vectors and matrices:

\[\hat{\mathbf{x}}(k)=\begin{bmatrix}\hat{\phi}(k)\\ \hat{\dot{\phi}}(k)\end{bmatrix},\ \mathbf{A}=\begin{bmatrix}1&\Delta t\\ 0&1\end{bmatrix},\ \mathbf{B}=0,\ \mathbf{C}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\ \mathbf{K}=\begin{bmatrix}\alpha&\theta\Delta t\\ \frac{\beta}{\Delta t}&\gamma\end{bmatrix}. \tag{41}\]

**iv) \(\alpha\)-\(\beta\)-\(\theta\) filter with acceleration (\(\alpha\)-\(\beta\)-\(\theta\)-WA-a or -b)**

As can be noticed, all the filters presented above assume that the angular acceleration is zero. In contrast, the last filter from the \(\alpha\)-\(\beta\) family described in this section does not make this assumption, and the value of the angular acceleration is also estimated. Thus, the prediction equations yield:

\[\begin{split}\hat{\phi}(k|k-1)&=\hat{\phi}(k-1)+\Delta t\hat{\dot{\phi}}(k-1)+\frac{\Delta t^{2}}{2}\hat{\ddot{\phi}}(k-1),\\ \hat{\dot{\phi}}(k|k-1)&=\hat{\dot{\phi}}(k-1)+\Delta t\hat{\ddot{\phi}}(k-1),\\ \hat{\ddot{\phi}}(k|k-1)&=\hat{\ddot{\phi}}(k-1).\end{split} \tag{42}\]

During the correction phase, the value of the angular acceleration can be updated based on the measurement of the angular position or of the angular velocity [30], which leads to the two following sets of equations, differing only in the last one:

\[\begin{split}\hat{\phi}(k|k)&=\hat{\phi}(k|k-1)+\alpha\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right),\\ \hat{\dot{\phi}}(k|k)&=\hat{\dot{\phi}}(k|k-1)+\beta\left(\overline{\dot{\phi}}(k)-\hat{\dot{\phi}}(k|k-1)\right),\\ iv.a)\ \hat{\ddot{\phi}}(k|k)&=\hat{\ddot{\phi}}(k|k-1)+\frac{\theta}{\Delta t^{2}}\left(\overline{\phi}(k)-\hat{\phi}(k|k-1)\right),\\ iv.b)\ \hat{\ddot{\phi}}(k|k)&=\hat{\ddot{\phi}}(k|k-1)+\frac{\theta}{\Delta t}\left(\overline{\dot{\phi}}(k)-\hat{\dot{\phi}}(k|k-1)\right).\end{split} \tag{43}\]

The above equations translate into the appropriate vectors and matrices in equation (23):

\[\hat{\mathbf{x}}(k)=\begin{bmatrix}\hat{\phi}(k)\\ \hat{\dot{\phi}}(k)\\ \hat{\ddot{\phi}}(k)\end{bmatrix},\ \mathbf{A}=\begin{bmatrix}1&\Delta t&\frac{\Delta t^{2}}{2}\\ 0&1&\Delta t\\ 0&0&1\end{bmatrix},\ \mathbf{B}=0,\ \mathbf{C}=\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix},\ iv.a)\ \mathbf{K}=\begin{bmatrix}\alpha&0\\ 0&\beta\\ \frac{\theta}{\Delta t^{2}}&0\end{bmatrix},\ iv.b)\ \mathbf{K}=\begin{bmatrix}\alpha&0\\ 0&\beta\\ 0&\frac{\theta}{\Delta t}\end{bmatrix}. \tag{44}\]

In the following part of the paper both possibilities are analysed, called \(\alpha\)-\(\beta\)-\(\theta\)-WA-a and \(\alpha\)-\(\beta\)-\(\theta\)-WA-b, respectively.

#### Complementary filter

Complementary filters integrate information of the same type coming from different sources, e.g., measurements of the same physical variable from two or more different sensors. Hence, the purpose of this filtration is to use the useful information while rejecting the interference from a given source.
This filtration brings the expected results when the sensors differ in the nature of their interference, particularly in its frequency [44]. This requirement is met in the analysed example, where the measurements from the accelerometer are burdened with high-frequency noise, whereas the measurements from the gyroscope are burdened mainly with a slowly changing (low-frequency) bias error [19]. To obtain the value of the angular position \(\overline{\phi}_{\dot{\phi}}\) from the measurements of the angular velocity, the latter have to be integrated, which in discrete time is performed as follows:

\[\overline{\phi}_{\dot{\phi}}(k)=\overline{\phi}_{\dot{\phi}}(k-1)+\Delta t\overline{\dot{\phi}}(k). \tag{45}\]

Then the measurements obtained from the gyroscope are subjected to high-pass filtration, whereas the measurements from the accelerometer are subjected to low-pass filtration. The complementary filter should meet the rule:

\[\sum_{j=1}^{l}G_{j}(s)=1, \tag{46}\]

where \(G_{j}(s)\) is the transfer function of the \(j\)th sub-filter. There is an infinite number of pairs consisting of a high-pass and a low-pass filter that meet (46). However, in order to ensure a low computational cost, among other reasons, a simple case has been selected, taking the low-pass filter as a first-order lag:

\[G_{\text{l-p}}(s)=\frac{1}{1+T_{\mathrm{c}}s}, \tag{47}\]

where \(T_{\mathrm{c}}\) is the filter parameter. Thus, according to (47), (46) requires the transfer function of the high-pass filter to be:

\[G_{\text{h-p}}(s)=\frac{T_{\mathrm{c}}s}{1+T_{\mathrm{c}}s}. \tag{48}\]

The output of the complementary filter \(\hat{\phi}(k)\) is the sum of the output signals of the low-pass and high-pass filters. By using the inverse Laplace transformation and discretising with the backward Euler method, this sum can be approximated as follows [44]:

\[\begin{split}\hat{\phi}(k)&=\frac{T_{\mathrm{c}}}{\Delta t+T_{\mathrm{c}}}\hat{\phi}(k-1)+\frac{\Delta t}{\Delta t+T_{\mathrm{c}}}\overline{\phi}(k)\\ &\quad+\frac{T_{\mathrm{c}}}{\Delta t+T_{\mathrm{c}}}\left(\overline{\phi}_{\dot{\phi}}(k)-\overline{\phi}_{\dot{\phi}}(k-1)\right).\end{split} \tag{49}\]

By inserting (45) into (49) one obtains:

\[\hat{\phi}(k)=\frac{T_{\mathrm{c}}}{\Delta t+T_{\mathrm{c}}}\hat{\phi}(k-1)+\frac{\Delta t}{\Delta t+T_{\mathrm{c}}}\overline{\phi}(k)+\frac{T_{\mathrm{c}}\Delta t}{\Delta t+T_{\mathrm{c}}}\overline{\dot{\phi}}(k). \tag{50}\]

Equation (50) can be written as (23) by using the following vectors and matrices:

\[\hat{\mathbf{x}}(k)=\left[\hat{\phi}(k)\right],\ \mathbf{A}=\left[\frac{T_{\mathrm{c}}}{\Delta t+T_{\mathrm{c}}}\right],\ \mathbf{B}=\left[\frac{\Delta t}{\Delta t+T_{\mathrm{c}}}\quad\frac{T_{\mathrm{c}}\Delta t}{\Delta t+T_{\mathrm{c}}}\right],\ \mathbf{C}=0,\ \mathbf{K}=0,\ \mathbf{u}(k)=\begin{bmatrix}\overline{\phi}(k)\\ \overline{\dot{\phi}}(k)\end{bmatrix}. \tag{51}\]

It should be noticed that the inputs \(\mathbf{u}(k)\) are values from the current sample \(k\) and not the previous one \(k-1\). The developed low-cost measurement system with correction and filtration of the provided measurements is characterised by computational efficiency, and the implementation procedure is easy to automate across numerous devices of one type, where different error values (e.g. gyroscope bias) can occur due to the non-identity of the applied sensors of a given type.
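A minimal sketch of one cycle of (50); the default \(T_{\mathrm{c}}\) is the optimised value reported for \(\Delta t=2\) ms in Table 2, and the function name is assumed:

```python
def complementary_step(phi_hat, tilt_meas, gyro_rate, dt, Tc=1.06895):
    """One cycle of the complementary filter (50): low-pass the
    accelerometer tilt angle and high-pass the integrated gyroscope
    rate, with complementary weights summing to one as required by (46)."""
    a = Tc / (dt + Tc)
    return a * phi_hat + (1.0 - a) * tilt_meas + a * dt * gyro_rate
```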
## 4 Experimental framework and results

The research rig with the constructed two-wheeled balancing robot is presented in Fig. 6. It should be noticed that besides the aforementioned additional encoder (marked with a red box in Fig. 6) there is a second encoder (marked with a blue box in Fig. 6). This encoder is used to acquire reference measurement values of the angular position of the constructed two-wheeled balancing robot. In practice, this requires information on the initial value of the angle of tilt of the two-wheeled balancing robot from the vertical axis and then the subsequent addition of the measured changes to the current position. In other words, it requires the changes in the position of the reference encoder shaft connected to the two-wheeled balancing robot in the axis of rotation, which is described by the formula:

\[\phi(k)=\phi(k-1)+\Delta\phi(k), \tag{52}\]

where \(\Delta\phi(k)\) is calculated from the encoder measurements as \(\Delta\phi(k)=360\,n/N\), and \(N=2000\). In turn, a zero initial value of \(\phi(k=0)\) is ensured by positioning the two-wheeled balancing robot vertically at the beginning of the data acquisition process.

Figure 6: Research rig with the two-wheeled balancing robot

As the IMU, the MPU6050 unit has been chosen [45]. This unit is equipped with three single-axis accelerometers and three single-axis gyroscopes in MEMS technology, set at right angles to each other. The configuration of the MPU6050 during the experiments has been as follows: the gyroscope and accelerometer scale ranges have been set to \(\pm 250\,^{\circ}/\mathrm{s}\) and \(\pm 2g\), respectively, and the measurements have been read directly from the measurement registers, without using the low-pass filter for the accelerometer and the high-pass filter for the gyroscope that are built into the unit. The bias \(b_{\dot{\phi}}\) has been calculated as the average value of data from a two-hour range in which the constructed two-wheeled balancing robot has been stationary. It has been checked whether this approach is correct for the chosen sensor. For this purpose, with a sampling period equal to \(\Delta t=1.508\) ms, \(5\cdot 10^{6}\) measurements of the angular velocity have been collected over about 2 hours - see Fig. 7. Then the minimum and maximum values have been read as \(-2.45802\,^{\circ}/\mathrm{s}\) and \(-1.41985\,^{\circ}/\mathrm{s}\), respectively. It should be noticed that some correlations can be found in the changes of the bias. More specifically, Fig. 8 presents the mean values of the trajectory from Fig. 7 in successive intervals of \(10^{5}\) samples each. Despite observable changes in the bias value over consecutive observation time windows (Fig. 8), its relative change over a long horizon remains small. Hence, although approaches that update the bias value, e.g., [23, 25, 24], are justified, an approach based on a single-value bias estimate remains competitive in the considered case. Hence, in this paper the value of the bias \(b_{\dot{\phi}}\) has been established as \(-1.91195\,^{\circ}/\mathrm{s}\).

Figure 7: The measurements of angular velocity from the gyroscope

Figure 8: Mean values of the trajectory shown in Fig. 7 in intervals of \(10^{5}\) samples

Similarly, the biases \(b_{a_{x^{\prime}}}\) and \(b_{a_{y^{\prime}}}\) have been calculated for the accelerometer. For this purpose, for about 2 hours, the constructed two-wheeled balancing robot has been placed vertically and stationary, and the accelerations in the two relevant axes have been recorded.
On the basis of the difference between the minimum values, equal to \(8.98132\) m/s\({}^{2}\) and \(-0.14610\) m/s\({}^{2}\) for the \(\mathrm{y}^{\prime}\) and \(\mathrm{x}^{\prime}\) axes, respectively, and the maximum values, equal to \(9.39087\) m/s\({}^{2}\) and \(0.14370\) m/s\({}^{2}\) for the \(\mathrm{y}^{\prime}\) and \(\mathrm{x}^{\prime}\) axes, respectively, the assumption of constant bias values has been accepted (it should be mentioned that the measurements are also burdened with stochastic interference). Finally, the biases \(b_{a_{x^{\prime}}}\) and \(b_{a_{y^{\prime}}}\) have been calculated as the mean values of data from the 2-hour range, and they are \(-0.02340\) m/s\({}^{2}\) and \(-0.63629\) m/s\({}^{2}\), respectively. Then the search for the scale-factor error functions \(S_{a_{y^{\prime}}}(\cdot)\) and \(S_{a_{x^{\prime}}}(\cdot)\) began. To find them, the constructed two-wheeled balancing robot has been deflected from the vertical and stopped at various angles to collect the values of \(\phi(\cdot)\). During this operation, the accelerations \(\tilde{a}_{y^{\prime}}(\cdot)\) and \(\tilde{a}_{x^{\prime}}(\cdot)\) have also been measured, in situations where no accelerations associated with the sensor movement were present. On this basis, the reference accelerations have been defined as:

\[\begin{split}a_{\mathrm{ref},x^{\prime}}(k)&=g\sin\left(\phi(k)\right),\\ a_{\mathrm{ref},y^{\prime}}(k)&=g\cos\left(\phi(k)\right).\end{split}\]

Next, by solving an optimisation task that searches for the function coefficients ensuring the minimum MSE between the corrected and the reference acceleration, the parameters of the sought functions have been found. For both axes, polynomials of degree 1 to 10 have been tested. It has been observed that increasing the polynomial degree above 5 brought degradation of the target function value. In all polynomials, a free term equal to zero has been assumed, because this value corresponds to the bias [19]. The scale-factor error depends on the real acceleration of the sensor; therefore, the argument of the proposed functions is the measurement of the acceleration corrected by the bias, i.e. \(\tilde{a}_{x^{\prime}\,\mathrm{or}\,y^{\prime}}(k)-b_{a_{x^{\prime}}\,\mathrm{or}\,a_{y^{\prime}}}\). In other words, the measurements of the acceleration are first corrected by the bias.
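Since the scale-factor polynomial is linear in its coefficients, the fitting step described above can be sketched as an ordinary least-squares problem. The paper solves it as an optimisation task in MATLAB; NumPy's `lstsq` minimises the same MSE here, and all names are assumptions:

```python
import numpy as np

def fit_scale_factor(a_meas, a_ref, bias, degree=5):
    """Fit the scale-factor error polynomial S (zero free term) so that
    the corrected acceleration (a_meas - bias) - S(a_meas - bias)
    matches the reference acceleration in the least-squares sense."""
    p = np.asarray(a_meas) - bias                  # bias-corrected argument
    V = np.column_stack([p**i for i in range(1, degree + 1)])
    coeffs, *_ = np.linalg.lstsq(V, p - np.asarray(a_ref), rcond=None)
    return coeffs  # S(p) = sum_i coeffs[i-1] * p**i
```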
Finally, the functions \(S_{a_{y^{\prime}}}(\cdot)\) and \(S_{a_{x^{\prime}}}(\cdot)\) achieving the smallest MSE have been selected (the optimisation has been carried out using a numerical optimisation routine in the MATLAB environment); they are of the form:

\[\begin{split} Let:\quad p_{\mathrm{x}}(k)&:=\tilde{a}_{x^{\prime}}(k)-b_{a_{x^{\prime}}},\\ S_{a_{x^{\prime}}}\left(p_{\mathrm{x}}(k)\right)&=0.04537p_{\mathrm{x}}(k)-0.00576p_{\mathrm{x}}(k)^{2}\\ &\quad-0.00143p_{\mathrm{x}}(k)^{3}+0.00005p_{\mathrm{x}}(k)^{4}\\ &\quad+0.00001p_{\mathrm{x}}(k)^{5},\end{split} \tag{53}\]

\[\begin{split} Let:\quad p_{\mathrm{y}}(k)&:=\tilde{a}_{y^{\prime}}(k)-b_{a_{y^{\prime}}},\\ S_{a_{y^{\prime}}}\left(p_{\mathrm{y}}(k)\right)&=0.12723p_{\mathrm{y}}(k)-0.05823p_{\mathrm{y}}(k)^{2}\\ &\quad+0.00930p_{\mathrm{y}}(k)^{3}-0.00068p_{\mathrm{y}}(k)^{4}\\ &\quad+0.00002p_{\mathrm{y}}(k)^{5}.\end{split} \tag{54}\]

The trajectories of the real values, of the measurements without the scale-factor error correction (bias only), and of the values after correction with the obtained functions are shown in Figs. 9 and 10 for the \(\mathrm{x}^{\prime}\) and \(\mathrm{y}^{\prime}\) axes, respectively. The MSE values between the uncorrected measurements and the real trajectories are \(0.04817\) m/s\({}^{2}\) and \(0.03465\) m/s\({}^{2}\) for the \(\mathrm{x}^{\prime}\) and \(\mathrm{y}^{\prime}\) axes, respectively, whereas between the corrected measurements and the real trajectories they are \(0.02720\) m/s\({}^{2}\) and \(0.02898\) m/s\({}^{2}\), respectively.

Figure 9: The trajectories of the acceleration in the \(\mathrm{x}^{\prime}\) axis

Figure 10: The trajectories of the acceleration in the \(\mathrm{y}^{\prime}\) axis

In turn, the time constants \(T_{\dot{\phi}}\) and \(T_{\mathrm{V}}\) of the low-pass filters have been selected by solving an optimisation task providing the minimum MSE between the reference trajectory of \(\phi(\cdot)\) and the calculated angular position \(\overline{\phi}(\cdot)\), for each sampling time separately (see Table 1). Next, a noise analysis of \(\overline{\phi}\) and \(\overline{\dot{\phi}}\) has been performed to check the performance of the correction part. The real angle of tilt of the two-wheeled balancing robot from the vertical axis, \(\phi\), has been subtracted from the corrected angular position measurement \(\overline{\phi}\). For the analysis of \(\overline{\dot{\phi}}\), the trajectory from Fig. 7 has been used, because the real angular velocity of the robot is known for this trajectory and is equal to zero. The obtained noise trajectories have been subjected to the fast Fourier transform to obtain the spectra presented in Fig. 11. The spectra of the noise calculated directly from the measurements (without correction) and of the real values are also shown in Fig. 11. Parts of the spectra of \(\dot{\phi}\) and of \(\operatorname{arctg}_{\mathrm{d}}\left(\frac{\tilde{a}_{x^{\prime}}}{\tilde{a}_{y^{\prime}}}\right)-\phi\) overlap, which indicates the influence of \(\dot{\phi}\) on \(\operatorname{arctg}_{\mathrm{d}}\left(\frac{\tilde{a}_{x^{\prime}}}{\tilde{a}_{y^{\prime}}}\right)-\phi\).
It can be noticed that the interference caused by changes in the angle of tilt of the robot has been rejected by the correction - there is no such overlap between \(\dot{\phi}\) and \(\overline{\phi}-\phi\). Thus, Fig. 11, together with Table 1, testifies to the positive influence of the correction part. Moreover, it can be deduced from Fig. 11 that the angular velocity measurement \(\overline{\dot{\phi}}\) is much less noisy than the angular position measurement \(\overline{\phi}\). The field under the angular velocity noise trajectory equals 1.13, while the field under the trajectory of the real angular velocity \(\dot{\phi}\) is equal to 0; the field under the angular position noise trajectory is 11.42, while the signal-to-noise ratio is 6.3 dB. However, basing the filtering system only or mainly on the angular velocity may lead to long or no convergence of the angle-of-tilt estimates \(\hat{\phi}\) to the real value in situations where the initial value of the angle of tilt is unknown and its current estimate is subject to significant error. The simplest and at the same time the least computationally expensive method of solving this problem is appropriate filter initialisation. During the experiments, the values of the initial angular position and velocity have been calculated directly from the measurements as \(\hat{\phi}(k=0)=\operatorname{arctg}_{\mathrm{d}}\left(\frac{\tilde{a}_{x^{\prime}}(k=0)}{\tilde{a}_{y^{\prime}}(k=0)}\right)\), \(\hat{\dot{\phi}}(k=0)=\overline{\dot{\phi}}(k=0)\), \(\hat{\ddot{\phi}}(k=0)=0\), and \(\hat{b}_{\dot{\phi}}(k=0)=b_{\dot{\phi}}\), whereas the correction parts have been initialised by \(\overline{\dot{\phi}}_{\mathrm{r,f}}(k=0)=\overline{\dot{\phi}}(k=0)\) and \(V_{\mathrm{t,f}}(k=0)=0\). The entire low-cost measurement system shown in Fig. 2 has been implemented in the research rig presented in Fig. 6. The results obtained are as follows. Table 1 shows the obtained values of the low-pass filter time constants (equations (17) and (20)) and the obtained MSE values before and after the correction part. The values of the parameters of the relevant filters and the obtained MSE values (equation (6)), both for the trajectory used in solving the optimisation task (further referred to as 'training') and for the verification trajectory, are shown in Table 2. To investigate real operating conditions and to demonstrate the system performance during dynamic changes and disturbances of the two-wheeled balancing robot's movement, the training and verification trajectories have been registered while the robot moved in both (dynamic) progressive and angular motions. In turn, in order to illustrate the performance of the proposed measurement system, the trajectories of the real and estimated angular position of the two-wheeled balancing robot obtained with the \(\alpha\)-\(\beta\)-WB filter are shown in Fig. 12, together with the trajectory of \(\operatorname{arctg}_{\mathrm{d}}\left(\frac{\tilde{a}_{x^{\prime}}}{\tilde{a}_{y^{\prime}}}\right)\). The \(\alpha\)-\(\beta\)-WB filter has been selected for presentation purposes due to its highest MSE value in verification (i.e., as the worst case among the considered filters) - see Table 2. Thus, the results obtained are satisfactory for the two-wheeled balancing robot stabilisation purposes. However, the performance of both considered Kalman filters is average at best.
It is because Kalman filter provides optimal estimates under certain assumptions. Hence, the tests carried out indicate that in the case under consideration these assumptions have not been met. It is particularly important in the body of numerous applications of the 'classical' Kalman filter, e.g., in engineering works in the task of angular position estimation from MEMS IMU measurements. On the other hand, considering either of the non-Kalman filters results in comparable estimate precision. ## 5 Conclusions In this paper, a low-cost measurement system using filtering of measurements for the two-wheeled balancing robot stabilisation purposes has been investigated. The proposed measurement system includes two layers. First, the physical layer consists of the gyroscopes and accelerometers in MEMS technology and the additional encoder. Second, the software layer containing the correction and filtration mechanisms. The measurements correction is based on the additional encoder, whereas the selected filters, i.e. Kalman, \(\alpha\)-\(\beta\) type, and complementary have been used as the filtration mechanism. The performance of the proposed measurement system has been successfully demonstrated in the experimental setting on the constructed two-wheeled balancing robot. Moreover, the quantitative assessment using a typical measure, i.e. mean square error of selected filters has been provided. Hence, in general, extensive knowledge about a low-cost measurement system using filtering of measurements for the two-wheeled balancing robot stabilisation purposes has been aggregated in this paper. It can be found interesting and useful for the relevant community, both in research and engineering applications. The future research may be the conditioning of measurement signals in such a way as to meet the assumptions \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(\Delta t\) [ms] & \(T_{\dot{\phi}}\) & \(T_{\mathrm{V}}\) & MSE for \(\dot{\phi}(k)=\mathrm{arctg}_{\mathrm{d}}\left(\frac{\dot{a}_{\phi}(k)}{\dot{a} _{\phi}(k)}\right)\) & MSE after correction \(\overline{\dot{\phi}}(k)\) \\ \hline 2 & 0.06874 & 0.04607 & 150.56951 & 72.52314 \\ 5 & 0.02392 & 0.02031 & 515.27065 & 222.05807 \\ 10 & 0.02557 & 0.02045 & 377.39749 & 174.45667 \\ 20 & 0.00774 & \(-\)0.00065 & 280.23692 & 110.55792 \\ \hline \end{tabular} \end{table} Table 1: The obtained values for the low-pass filters Figure 11: Spectrum of noises \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Filter name & \(\Delta t\) [ms] & Parameters & MSE \(-\) ”Training” & MSE \(-\) Verification \\ \hline \(a\)–\(\beta\) – WOB & 2 & \(a=0.00227\), \(\beta=1.58242\) & 1.98686 & 0.82071 \\ & 5 & \(a=0.00866\), \(\beta=1.12381\) & 6.18150 & 3.07579 \\ & 10 & \(a=0.00103\), \(\beta=1.67836\) & 1.60046 & 11.94604 \\ & 20 & \(a=0.00165\), \(\beta=1.84408\) & 2.32469 & 3.34761 \\ \hline \(a\)–\(\beta\) – WB & 2 & \(a=0.00185\), \(\beta=-0.00018\) & 1.93816 & 0.78603 \\ & 5 & \(a=0.00858\), \(\beta=-0.00007\) & 6.16623 & 3.05931 \\ & 10 & \(a=0.00080\), \(\beta=0\) & 1.73683 & 14.17050 \\ & 20 & \(a=0.00171\), \(\beta=0\) & 3.07329 & 4.05407 \\ \hline \(a\)–\(\beta\)–\(\theta\)–\(\gamma\) & 2 & \(a=0.00204\), \(\beta=-0.00001\), \(\theta=1.07026\), \(\gamma=-0.00013\) & 0.74852 & 1.33160 \\ & 5 & \(a=0.00668\), \(\beta=-0.00005\), \(\theta=1.05866\), \(\gamma=0.00007\) & 3.91003 & 2.21875 \\ & 10 & \(a=0.00088\), \(\beta=0\), \(\theta=1.05141\), \(\gamma=-0.00002\) & 0.61819 & 12.05841 \\ & 20 & \(a=0.00391\), \(\beta=-0.00406\), \(\theta=-0.04194\), 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Filter name & \(\Delta t\) [ms] & Parameters & MSE \(-\) ”Training” & MSE \(-\) Verification \\ \hline \(\alpha\)–\(\beta\) – WOB & 2 & \(\alpha=0.00227\), \(\beta=1.58242\) & 1.98686 & 0.82071 \\ & 5 & \(\alpha=0.00866\), \(\beta=1.12381\) & 6.18150 & 3.07579 \\ & 10 & \(\alpha=0.00103\), \(\beta=1.67836\) & 1.60046 & 11.94604 \\ & 20 & \(\alpha=0.00165\), \(\beta=1.84408\) & 2.32469 & 3.34761 \\ \hline \(\alpha\)–\(\beta\) – WB & 2 & \(\alpha=0.00185\), \(\beta=-0.00018\) & 1.93816 & 0.78603 \\ & 5 & \(\alpha=0.00858\), \(\beta=-0.00007\) & 6.16623 & 3.05931 \\ & 10 & \(\alpha=0.00080\), \(\beta=0\) & 1.73683 & 14.17050 \\ & 20 & \(\alpha=0.00171\), \(\beta=0\) & 3.07329 & 4.05407 \\ \hline \(\alpha\)–\(\beta\)–\(\theta\)–\(\gamma\) & 2 & \(\alpha=0.00204\), \(\beta=-0.00001\), \(\theta=1.07026\), \(\gamma=-0.00013\) & 0.74852 & 1.33160 \\ & 5 & \(\alpha=0.00668\), \(\beta=-0.00005\), \(\theta=1.05866\), \(\gamma=0.00007\) & 3.91003 & 2.21875 \\ & 10 & \(\alpha=0.00088\), \(\beta=0\), \(\theta=1.05141\), \(\gamma=-0.00002\) & 0.61819 & 12.05841 \\ & 20 & \(\alpha=0.00391\), \(\beta=-0.00406\), \(\theta=-0.04194\), \(\gamma=1.87665\) & 2.32142 & 3.34578 \\ \hline \(\alpha\)–\(\beta\)–\(\theta\) – WA-a & 2 & \(\alpha=0.00169\), \(\beta=1.21567\), \(\theta=0\) & 1.94261 & 0.86258 \\ & 5 & \(\alpha=0.00850\), \(\beta=1.12964\), \(\theta=0\) & 6.16275 & 3.07526 \\ & 10 & \(\alpha=0.00080\), \(\beta=1.67821\), \(\theta=0\) & 1.58494 & 13.92453 \\ & 20 & \(\alpha=0.00165\), \(\beta=1.84410\), \(\theta=0\) & 2.32469 & 3.34753 \\ \hline \(\alpha\)–\(\beta\)–\(\theta\) – WA-b & 2 & \(\alpha=0.00315\), \(\beta=0.28647\), \(\theta=0.00673\) & 1.87487 & 1.12021 \\ & 5 & \(\alpha=0.00911\), \(\beta=0.32710\), \(\theta=0.01188\) & 5.43600 & 2.81706 \\ & 10 & \(\alpha=0.00104\), \(\beta=0.69743\), \(\theta=0.06346\) & 1.33276 & 11.34403 \\ & 20 & \(\alpha=0.00168\), \(\beta=1.01622\), \(\theta=0.17281\) & 2.06774 & 3.07125 \\ \hline Kalman & 2 & \(q_{1}=0.01076\), \(q_{2}=0\), \(r=0.02792\) & 6.88674 & 11.08092 \\ & 5 & & 9.45858 & 6.80941 \\ & 10 & & 9.41660 & 6.79686 \\ & 20 & & 7.98366 & 12.38334 \\ \hline Kalman* & 2 & \(q_{1}=0.00001\), \(q_{2}=0\), \(r=2.30640\) & 1.94297 & 0.79206 \\ & 5 & \(q_{1}=0.00112\), \(q_{2}=0\), \(r=17.16979\) & 6.17602 & 3.03979 \\ & 10 & \(q_{1}=0\), \(q_{2}=0\), \(r=2.25847\) & 1.73928 & 13.73496 \\ & 20 & \(q_{1}=0\), \(q_{2}=0\), \(r=2.92997\) & 3.07025 & 4.10513 \\ \hline Complementary & 2 & \(T_{c}=1.06895\) & 2.01177 & 0.82619 \\ & 5 & \(T_{c}=0.60307\) & 6.39301 & 3.38819 \\ & 10 & \(T_{c}=9.74413\) & 1.92216 & 12.33891 \\ & 20 & \(T_{c}=12.40721\) & 3.31344 & 4.30363 \\ \hline \end{tabular} \end{table} Table 2: The obtained values for the selected filters

Figure 12: The trajectories of the real and estimated angular position of the two-wheeled balancing robot for the \(\alpha\)–\(\beta\) – WB filter with \(\Delta t=10\) ms

## Acknowledgement The research was done with funding from the Polish MEiN under the Young Researcher Support Program. The authors wish to express their thanks for the support.
2309.13277
Twisted calculus in several variables
Following the techniques and notions we defined in our previous article, we define the notion of twisted differential operator of finite radius. We show that this notion is independent of the choice of the endomorphisms. In the case of $q$-coordinates we obtain an equivalence between $\eta^\dagger$-convergent modules of finite type endowed with an integrable connection and the modules of finite type endowed with an action by $\underline{q}$-differential operators of the same radius.
Pierre Houédry
2023-09-23T06:32:02Z
http://arxiv.org/abs/2309.13277v1
# Twisted calculus in several variables ###### Abstract. Following the techniques and notions we defined in our previous article, we define the notion of twisted differential operator of finite radius. We show that this notion is independent of the choice of the endomorphisms. In the case of \(q\)-coordinates we obtain an equivalence between \(\eta^{\dagger}\)-convergent modules of finite type endowed with an integrable connection and the modules of finite type endowed with an action by \(\underline{q}\)-differential operators of the same radius. ###### Contents * 1 Twisted algebras * 2 Twisted calculus ## Introduction In [1] we defined some objects and developed some techniques to investigate twisted differential operators in several variables. We now wish to apply those techniques to study the phenomenon of \(p\)-adic confluence, following the work of André, Di Vizio, and Pulita. However, our framework is more general. We consider a Tate ring \(A\), that is, a Huber ring admitting an invertible topologically nilpotent element; it is used to define a submultiplicative norm on our ring. We assume that our ring is endowed with classical and symmetrical \(\underline{q}\)-coordinates so that the objects defined in our previous paper are well defined. For a positive real number \(\eta\) close enough to \(1\), we define the notion of \(\eta^{\dagger}\)-convergent \(\mathrm{D}^{(\infty)}_{\underline{q}}\)-module by requiring that the twisted Taylor series has a radius of convergence at least equal to \(\eta\). In a similar manner to [1], we introduce the notion of \(\eta\)-convergent twisted differential operator (and \(\eta^{\dagger}\)-convergent by going to the limit) and construct two rings \(\mathrm{D}^{(\eta)}_{\underline{q}}\) and \(\mathrm{D}^{(\eta^{\dagger})}_{\underline{q}}\). We then show an equivalence between the category of \(\mathrm{D}^{(\eta^{\dagger})}_{\underline{q}}\)-modules that are of finite type over \(A\) and the category of \(\eta^{\dagger}\)-convergent \(\mathrm{D}^{(\infty)}_{\underline{q}}\)-modules of finite type over \(A\). If there exist other \(R\)-linear continuous endomorphisms \(\underline{\tau}=(\tau_{1},\ldots,\tau_{d})\) of \(A\) that commute and are such that the coordinates \(\underline{x}\) are also classical and symmetrical \(\underline{\tau}\)-coordinates, we have, for \(\eta\) close enough to \(1\), an isomorphism \(\mathrm{D}^{(\eta)}_{\underline{q}}\simeq\mathrm{D}^{(\eta)}_{\underline{\tau}}\). As an application we can look at the case of \(\underline{q}\)-coordinates: if there exist elements \(\underline{q}=\{q_{1},\ldots,q_{d}\}\) of \(R\) such that \(\underline{x}\) are classical and symmetrical \(\underline{q}\)-coordinates, \[\forall i=1,\ldots,d;\ \forall n\in\mathbb{N},\ (n)_{q_{i}}\in R^{\times},\] and \(A\) is \(\eta^{\dagger}\)-convergent, then we have the following equivalence of categories \[\nabla^{\text{Int}}_{\underline{\sigma}}\text{-Mod}^{(\eta^{\dagger})}_{\text{tf}}(A/R)\simeq\text{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\text{-Mod}_{\text{tf}}(A/R),\] between the category \(\nabla^{\text{Int}}_{\underline{\sigma}}\text{-Mod}^{(\eta^{\dagger})}_{\text{tf}}(A/R)\) of \(\eta^{\dagger}\)-convergent modules of finite type over \(A\) endowed with an integrable twisted connection and the category \(\text{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\text{-Mod}_{\text{tf}}(A/R)\) of \(\text{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\)-modules of finite type over \(A\). We recover a result in the spirit of [1] and [11].
## 1. Twisted algebras ### Huber rings **Definition 1.1**.: A topological ring \(A\) is said to be a _Huber ring_ if there exists an open subring \(A_{0}\) of \(A\) and a finitely generated ideal \(I_{0}\) of \(A_{0}\) such that \(\{I_{0}^{n}\}_{n\in\mathbb{N}}\) is a fundamental system of neighborhoods of \(0\) in \(A_{0}\). The subring \(A_{0}\) is called a _ring of definition_ of \(A\) and the ideal \(I_{0}\) is called an _ideal of definition_ of \(A\). A _couple of definition_ \((A_{0},I_{0})\) is the couple given by a ring of definition and an ideal of definition. In the case where we can choose \(A\) as a ring of definition, the ring is said to be _adic_. Moreover, if \(\{0\}\) is an ideal of definition, the ring \(A\) is said to be _discrete_. **Examples**.: 1. Let \(A\) be a Huber ring. For every natural integer \(n\), the ring \(A[X_{1},\dots,X_{n}]\) can be endowed with a structure of Huber ring that induces the topology on \(A\), with \(A_{0}[X_{1},\dots,X_{n}]\) as a ring of definition and \(I_{0}A_{0}[X_{1},\dots,X_{n}]\) as an ideal of definition. 2. Let \(K\) be a _non-archimedean field_: by that we mean a topological field for a non-trivial non-archimedean absolute value \(|\cdot|\). There exists an element \(\pi\) of \(K\) such that \(0<|\pi|<1\). The field \(K\) is a Huber ring: the set \(\{|x|\leq 1\}\) is a ring of definition and \((\pi)\) is an ideal of definition. 3. Let \(d\) be a natural integer. In the previous situation, we consider the following ring \[A=\left\{\sum_{\underline{k}\in\mathbb{Z}^{d}}a_{\underline{k}}\underline{X}^{\underline{k}},a_{\underline{k}}\in K,|a_{\underline{k}}|\prod_{i=1}^{d}\max(1,|\pi|^{k_{i}})\to 0\text{ when }\underline{k}\rightarrow+\infty\right\},\] endowed with the topology given by the Gauss norm \[\left\|\sum_{\underline{k}\in\mathbb{Z}^{d}}a_{\underline{k}}\underline{X}^{\underline{k}}\right\|=\max\left\{\sup_{|\underline{k}|\geq 0}|a_{\underline{k}}|,\sup_{|\underline{k}|<0}|\pi|^{|\underline{k}|}|a_{\underline{k}}|\right\}.\] With this topology, this ring is a Huber ring. A couple of definition is given by the following subring and ideal \[A_{0}=\left\{\sum_{\underline{k}\in\mathbb{Z}^{d}}a_{\underline{k}}\underline{X}^{\underline{k}}\in A\text{ such that }|a_{\underline{k}}|\leq 1\text{ if }|\underline{k}|\geq 0,\ |\pi^{|\underline{k}|}a_{\underline{k}}|\leq 1\text{ if }|\underline{k}|<0\right\},\] \[I_{0}=\left\{\sum_{\underline{k}\in\mathbb{Z}^{d}}a_{\underline{k}}\underline{X}^{\underline{k}}\in A\text{ such that }|a_{\underline{k}}|<1\text{ if }|\underline{k}|\geq 0,\ |\pi^{|\underline{k}|}a_{\underline{k}}|<1\text{ if }|\underline{k}|<0\right\}.\] **Definition 1.2**.: Let \(A\) and \(B\) be two Huber rings. A _morphism of Huber rings_ \(u:A\to B\) is a morphism of rings which is continuous. This is equivalent to requiring that there exist a ring of definition \(A_{0}\) of \(A\) (resp. \(B_{0}\) of \(B\)) and an ideal of definition \(I_{0}\subset A_{0}\) (resp. \(J_{0}\subset B_{0}\)) such that \(u(A_{0})\subset B_{0}\) and \(u(I_{0})\subset J_{0}\). In the case where the ideal generated by \(u(I_{0})\) is an ideal of definition of \(B\), the morphism is said to be _adic_. **Definition 1.3**.: If \(R\to A\) is a morphism of Huber rings, \(A\) is said to be a _Huber \(R\)-algebra_. If the morphism is adic, \(A\) is said to be an _\(R\)-adic algebra_. **Remarks**.: 1. An \(R\)-adic algebra is not necessarily an adic ring. 2. Let \(A\) be a Huber ring with \((A_{0},I_{0})\) as a couple of definition.
If \(M\) is an \(A\)-module of finite type, we can endow it with the natural topology, that is, the universal topology for the continuous \(A\)-linear morphisms \(M\to N\) into topological \(A\)-modules. In that case, every \(A_{0}\)-submodule \(M_{0}\) of \(M\) that generates \(M\) is necessarily open in \(M\), and the topology of \(M_{0}\) is the \(I_{0}\)-adic topology. **Definition 1.4**.: A _Tate ring_ \(A\) is a Huber ring such that there exists a topologically nilpotent invertible element \(\pi\) of \(A\). **Proposition 1.5**.: _The topology of a Huber ring can always be defined by a submultiplicative semi-norm. In the case where the ring is Tate, the topology can be defined by a submultiplicative norm._ Proof.: We refer the reader to [10, Proposition 1.8]. **Definition 1.6**.: A submultiplicative norm \(\|\ \|\) over a ring \(A\) is said to be _contractive_ with respect to an endomorphism \(\varphi\) of \(A\) if \(\forall x\in A,\ \|\varphi(x)\|\leq\|x\|\). **Definition 1.7**.: Let \(R\) be a Huber ring, \(A\) an \(R\)-adic algebra and \(d\in\mathbb{N}\). The \(R\)-algebra \(A\) is said to be _twisted of order \(d\)_ if it is endowed with a sequence of \(R\)-linear continuous endomorphisms \(\underline{\sigma}=(\sigma_{A,1},\ldots,\sigma_{A,d})\) that commute. Set \(P_{A}=A\otimes_{R}A\). We extend \(\sigma_{i}\) to \(P_{A}\) by setting \(\sigma_{i}(a\otimes b)=\sigma_{i}(a)\otimes b\). If no confusion can arise, we will simply write \(\sigma_{i}\) instead of \(\sigma_{A,i}\). The data of the ring and the set of endomorphisms will be denoted in the form of a couple \((A,\underline{\sigma})\). ### Twisted principal parts In what follows we consider a Huber ring \(R\) and a twisted \(R\)-adic algebra \((A,\underline{\sigma})\) of order \(d\). We also consider some elements \(\underline{x}=(x_{1},\ldots,x_{d})\) of \(A\) on which we will later put various conditions. We will use the same notation \(\sigma_{i}\) for the endomorphism of the ring of polynomials \(A\left[\underline{\xi}\right]\) such that for every \(i,j\in\{1,\ldots,d\}\) \[\sigma_{i}(\xi_{j})=\begin{cases}\xi_{i}+x_{i}-\sigma_{i}(x_{i}),&\text{if $j=i$}\\ \xi_{j},&\text{otherwise}.\end{cases}\] **Definition 1.8**.: For every natural integer \(n\), the \(A\)-module \(P_{A,(n)_{\underline{\sigma},\underline{x}}}\) of _twisted principal parts of order \(n\)_ is the \(A\)-module defined by \[P_{A,(n)_{\underline{\sigma},\underline{x}}}:=A[\underline{\xi}]\Big{/}\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma},\underline{x}}}\text{ such that }|\underline{k}|=n+1\right)\] where for \(\underline{k}\in\mathbb{N}^{d}\) \[\underline{\xi}^{(\underline{k})_{\underline{\sigma},\underline{x}}}:=\xi_{1}^{(k_{1})_{\sigma_{1}}}\ldots\xi_{d}^{(k_{d})_{\sigma_{d}}}\text{ and }\xi_{i}^{(k_{i})_{\sigma_{i}}}:=\xi_{i}\sigma_{i}(\xi_{i})\ldots\sigma_{i}^{k_{i}-1}(\xi_{i}).\] This gives \[\underline{\xi}^{(\underline{k})_{\underline{\sigma},\underline{x}}}=\prod_{i=1}^{d}\prod_{j=0}^{k_{i}-1}\left(\xi_{i}+x_{i}-\sigma_{i}^{j}(x_{i})\right). \tag{1}\] In the case where \(d=1\), this is the module defined by Le Stum and Quirós in [13, Definition 1.5]. **Remark**.: As remarked in the examples following definition 1.1, the ring of polynomials \(A[\underline{\xi}]\) can be endowed with a structure of Huber ring inducing the topology on \(A\). The ring \(P_{A,(n)_{\underline{\sigma},\underline{x}}}\) endowed with the quotient topology is also a Huber ring. Here and subsequently we will put this topology on it.
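In the \(\underline{q}\)-coordinate case, where \(\sigma_{i}^{j}(x_{i})=q_{i}^{j}x_{i}\), formula (1) can be expanded symbolically. Below is a minimal sketch with sympy for \(d=1\); the function name is ours, not notation from the paper.

```python
import sympy as sp

def twisted_basis_element(k, q, x, xi):
    """Expand xi^{(k)_sigma} from formula (1) in the q-coordinate case,
    where sigma^j(x) = q**j * x, for a single variable (d = 1)."""
    return sp.expand(sp.prod([xi + x - q**j * x for j in range(k)]))

q, x, xi = sp.symbols('q x xi')
# xi^{(2)} = xi * (xi + x - q*x) = xi**2 + (1 - q)*x*xi
print(twisted_basis_element(2, q, x, xi))
```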
**Proposition 1.9**.: _For every natural integer \(n\), the composed morphism_ \[A[\underline{\xi}]_{\leqslant n}\hookrightarrow A[\underline{\xi}]\twoheadrightarrow P_{A,(n)_{\underline{\sigma},\underline{x}}}\] _is an isomorphism._ Proof.: We refer the reader to [11, Proposition 4.4]. **Definition 1.10**.: The elements \(x_{1},\ldots,x_{d}\) are \(\underline{\sigma}\)_-coordinates_ over \(A\) if, for every natural integer \(n\), there exists a unique adic morphism of \(R\)-algebras \[\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}\colon A\to P_{A,(n)_{\underline{\sigma},\underline{x}}},\quad x_{i}\mapsto x_{i}+\xi_{i}\] such that the composition of the projection on \(A\) with \(\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}\) is the identity. The morphism \(\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}\) is called the \(n\)_-th Taylor morphism_ of \(A\) with respect to \(\underline{\sigma}\) and \(\underline{x}\). **Remark**.: Let \(\underline{q}=(q_{1},\ldots,q_{d})\) be elements of \(R\). In the situation where \[\forall i,j\in\{1,\ldots,d\},\sigma_{i}(x_{j})=\begin{cases}q_{i}x_{i}&\text{ if }i=j\\ x_{j}&\text{ otherwise,}\end{cases}\] and \(\underline{x}\) are \(\underline{\sigma}\)-coordinates, we will say that they are \(\underline{q}\)-coordinates to emphasize the link with the elements \(q_{1},\ldots,q_{d}\). In the case \(d=1\), this is the situation described in [1]. Let \(\underline{x}\) be \(\underline{\sigma}\)-coordinates over \(A\). Consider, for a natural integer \(n\), the morphism \[\tilde{\Theta}_{A,(n)_{\underline{\sigma},\underline{x}}}:P_{A}\to P_{A,(n)_{\underline{\sigma},\underline{x}}},\quad a\otimes b\mapsto a\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}(b)\] that extends \(\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}\) to \(P_{A}\) by \(A\)-linearity. For \(i=1,\ldots,d\), \[\tilde{\Theta}_{A,(n)_{\underline{\sigma},\underline{x}}}(1\otimes x_{i}-x_{i}\otimes 1)=\xi_{i}.\] Proposition 1.9 ensures that this morphism is surjective. Denote \[I_{A}^{(n+1)_{\underline{\sigma},\underline{x}}}:=\ker\left(\tilde{\Theta}_{A,(n)_{\underline{\sigma},\underline{x}}}\right).\] By the first isomorphism theorem \[P_{A}/I_{A}^{(n+1)_{\underline{\sigma},\underline{x}}}\simeq P_{A,(n)_{\underline{\sigma},\underline{x}}}.\] ### Derivations We keep the hypothesis of the previous subsection: \(R\) is a Huber ring and \((A,\underline{\sigma})\) is a twisted \(R\)-adic algebra of order \(d\) (definition 1.7). Here and subsequently we assume that there exist some elements \(\underline{x}=(x_{1},\ldots,x_{d})\) of \(A\) that are \(\underline{\sigma}\)-coordinates (definition 1.10). **Definition 1.11**.: The \(A\)-module of _twisted differential forms_ of \(A\) over \(R\) is \[\Omega^{1}_{A,\underline{\sigma}}:=I^{(1)_{\underline{\sigma}}}_{A}/I^{(2)_{\underline{\sigma}}}_{A}.\] **Definition 1.12**.: Let \(1\leq i\leq d\). We recall from [15] that a \(\sigma_{i}\)_-derivation_ \(D\) of \(A\) is an \(R\)-linear morphism from \(A\) into an \(A\)-module \(M\) that verifies the twisted Leibniz rule: \[\forall x,y\in A,\ D(xy)=xD(y)+\sigma_{i}(y)D(x).\] **Remark**.: Denote by \((e_{i})_{i=1,\ldots,d}:=((1\otimes x_{i}-x_{i}\otimes 1)^{*})_{i=1,\ldots,d}\) the basis of \(\operatorname{Hom}_{A}\left(\Omega^{1}_{A,\underline{\sigma}},A\right)\) dual to the basis \((1\otimes x_{i}-x_{i}\otimes 1)_{i=1,\ldots,d}\) of \(\Omega^{1}_{A,\underline{\sigma}}\).
Set \[\operatorname{d}:A\to\Omega^{1}_{A,\underline{\sigma}},\ f\mapsto\Theta_{A,(1)_{\underline{\sigma}}}(f)-f.\] This map allows us to define \[\forall i\in\{1,\ldots,d\},\ \partial_{\underline{\sigma},i}:=e_{i}\circ\operatorname{d}.\] We will denote by \(\operatorname{Der}_{\underline{\sigma}}(A,A)\) the \(A\)-module generated by \(\partial_{\underline{\sigma},1},\ldots,\partial_{\underline{\sigma},d}\). This module is free over \(A\). **Definition 1.13**.: The \(\underline{\sigma}\)-coordinates \(\underline{x}\) are _classical_ if \[\forall i=1,\ldots,d,\ \forall f\in A,\ \sigma_{i}(f)=f+(\sigma_{i}(x_{i})-x_{i})\partial_{\underline{\sigma},i}(f).\] **Proposition 1.14**.: _The \(\underline{\sigma}\)-coordinates \(\underline{x}\) are classical coordinates if and only if \(\forall i\in\{1,\ldots,d\},\ \partial_{\underline{\sigma},i}\) is a \(\sigma_{i}\)-derivation._ Proof.: We refer the reader to [14, Proposition 7.4]. **Definition 1.15**.: The \(\underline{\sigma}\)-coordinates \(\underline{x}\) are said to be _symmetrical_ if \[\forall i\in\{1,\ldots,d\},\sigma_{i}(x_{i})\in R[x_{i}]\ \text{and}\] \[\forall n,m\in\mathbb{N},\forall f\in A,\ \delta_{n,m}\left(\Theta_{A,(n+m)_{\underline{\sigma}}}(f)\right)=1\otimes^{\prime}\Theta_{A,(m)_{\underline{\sigma}}}(f),\] where \(\otimes^{\prime}\) means that the action on the left is given by \(\Theta_{A,(n)_{\underline{\sigma}}}\). **Proposition 1.16**.: _Assume that the coordinates are symmetrical. Set_ \[\delta\colon P_{A}\to P_{A}\otimes_{A}P_{A},\quad a\otimes b\mapsto a\otimes 1\otimes 1\otimes b;\] _then,_ \[\delta\left(I^{(n+m+1)_{\underline{\sigma}}}_{A}\right)\subset I^{(n+1)_{\underline{\sigma}}}_{A}\otimes P_{A}+P_{A}\otimes I^{(m+1)_{\underline{\sigma}}}_{A}.\] Proof.: We refer the reader to [14, Proposition 7.11]. ### Twisted connection We keep the hypothesis of the previous subsection: \(R\) is a Huber ring and \((A,\underline{\sigma})\) is a twisted \(R\)-adic algebra of order \(d\) (definition 1.7). There exist some elements \(\underline{x}=(x_{1},\dots,x_{d})\) of \(A\) that are \(\underline{\sigma}\)-coordinates (definition 1.10). **Definition 1.17**.: When the coordinates \(\underline{x}\) are classical, a _twisted connection_ on an \(A\)-module \(M\) is the data of a map \[\nabla_{\underline{\sigma}}:M\to M\otimes_{A}\Omega^{1}_{\underline{\sigma}}\] verifying that the map \[\Theta_{M}:M\to M\otimes_{A}P_{A,(1)_{\underline{\sigma}}},\ s\mapsto s\otimes 1+\nabla_{\underline{\sigma}}(s)\] is \(\Theta_{A,(1)_{\underline{\sigma}}}\)-linear: \[\forall f\in A,\ \forall s\in M,\Theta_{M}(fs)=\Theta_{A,(1)_{\underline{\sigma}}}(f)\Theta_{M}(s).\] **Remark**.: For \(n\in\mathbb{N}^{*}\), set \(\Omega^{n}_{\underline{\sigma}}=\bigwedge^{n}\Omega^{1}_{\underline{\sigma}}.\) A twisted connection \(\nabla_{\underline{\sigma}}\) on an \(A\)-module \(M\) gives a sequence of \(A\)-linear maps \[\nabla_{n,\underline{\sigma}}:M\otimes_{A}\Omega^{n}_{\underline{\sigma}}\to M\otimes_{A}\Omega^{n+1}_{\underline{\sigma}}\] by setting \(\nabla_{n,\underline{\sigma}}(m\otimes w)=m\otimes\mathrm{d}(w)+\nabla_{\underline{\sigma}}(m)\wedge(-1)^{n}w\). **Definition 1.18**.: A twisted connection \(\nabla_{\underline{\sigma}}\) on an \(A\)-module \(M\) is said to be _integrable_ if \(\nabla_{2,\underline{\sigma}}\circ\nabla_{\underline{\sigma}}=0\).
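Proposition 1.14 links classical coordinates to \(\sigma_{i}\)-derivations. In the one-variable \(q\)-case, where \(\sigma(f)(x)=f(qx)\) and the derivation is the Jackson \(q\)-derivative, the twisted Leibniz rule of definition 1.12 can be checked numerically. The setup below is an illustration of ours, not the paper's notation.

```python
def q_shift(f, q):
    """sigma(f)(x) = f(q*x) in the one-variable q-coordinate case."""
    return lambda x: f(q * x)

def q_derivative(f, q):
    """Jackson q-derivative: (f(qx) - f(x)) / ((q - 1) * x)."""
    return lambda x: (f(q * x) - f(x)) / ((q - 1) * x)

q = 1.5
f = lambda x: x**2
g = lambda x: x**3 + 1.0
fg = lambda x: f(x) * g(x)

x0 = 2.0
lhs = q_derivative(fg, q)(x0)
rhs = f(x0) * q_derivative(g, q)(x0) + q_shift(g, q)(x0) * q_derivative(f, q)(x0)
print(abs(lhs - rhs) < 1e-9)  # True: D(fg) = f*D(g) + sigma(g)*D(f)
```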
**Remark**.: On an \(A\)-module \(M\), it is possible to construct the de Rham complex \[\mathrm{DR}(M)=\left[M\to M\otimes_{A}\Omega^{1}_{\underline{\sigma}}\to M\otimes_{A}\Omega^{2}_{\underline{\sigma}}\to\dots\right].\] The de Rham cohomology is the cohomology of this complex. We denote by \(\nabla_{\underline{\sigma}}\)-\(\mathrm{Mod}(A)\) the category of \(A\)-modules endowed with a twisted connection and by \(\nabla^{\mathrm{Int}}_{\underline{\sigma}}\)-\(\mathrm{Mod}(A)\) the category of \(A\)-modules endowed with an integrable twisted connection. **Proposition 1.19**.: _When the coordinates \(\underline{x}\) are classical, there exists an equivalence of categories_ \[\nabla_{\underline{\sigma}}\mathrm{-Mod}(A)\simeq\mathrm{T}_{A,\underline{\sigma}}\mathrm{-Mod}(A).\] Proof.: We refer the reader to [10, Proposition 7.14]. ### Twisted differential operators We keep the hypothesis of the previous subsection: \(R\) is a Huber ring, \((A,\underline{\sigma})\) is a twisted \(R\)-adic algebra of order \(d\) (definition 1.7) and there exist some elements \(\underline{x}=(x_{1},\dots,x_{d})\) of \(A\) that are \(\underline{\sigma}\)-coordinates (definition 1.10). In this section we will also assume that the coordinates \(x_{1},\dots,x_{d}\) are symmetrical (definition 1.15). This will be useful to define the composition of twisted differential operators. **Definition 1.20**.: Let \(M\) and \(N\) be \(A\)-modules. A _twisted differential operator of order at most \(n\)_ is an \(R\)-linear morphism \(\phi:M\to N\) whose \(A\)-linearization \(\tilde{\phi}\) factorizes through \(P_{A,(n)_{\underline{\sigma}}}\); in the corresponding factorization diagram, \(\otimes_{A}^{\prime}\) designates that the action on the left is given by \(\Theta_{A,(n)_{\underline{\sigma}}}\). This condition amounts to \(\tilde{\phi}\) being zero over \(I_{A}^{(n+1)_{\underline{\sigma}}}\). We denote by \(\operatorname{Diff}_{n,\underline{\sigma}}(M,N)\) the set of twisted differential operators of order at most \(n\). By definition \(\operatorname{Diff}_{n,\underline{\sigma}}(M,N)\simeq\operatorname{Hom}_{A}(P_{A,(n)_{\underline{\sigma}}}\otimes_{A}^{\prime}M,N)\). In the case where \(M=N\) we will simply write \(\operatorname{Diff}_{n,\underline{\sigma}}(M)\). We also set \[\operatorname{Diff}_{\underline{\sigma}}(M,N)=\varinjlim_{n}\operatorname{Hom}_{A}\left(P_{A,(n)_{\underline{\sigma}}}\otimes_{A}^{\prime}M,N\right)\text{ and }\operatorname{D}_{A,\underline{\sigma}}^{(\infty)}=\operatorname{Diff}_{\underline{\sigma}}(A).\] Recall that for an element \(q\) of \(R\) and every natural integer \(n\), we define the \(q\)-analogue of \(n\) as \[(n)_{q}:=1+q+\ldots+q^{n-1}.\] It verifies \(\lim_{q\to 1}(n)_{q}=n\). In a similar manner, we can define the \(q\)-analogue of the factorial of \(n\) by setting: \[(n)_{q}!:=(2)_{q}\ldots(n)_{q}.\] **Remarks**.: 1. A standard basis of \(\operatorname{Diff}_{n,\underline{\sigma}}(A)\) is given by the \(\partial_{\underline{\sigma}}^{[\underline{k}]}\), dual of the image of \(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\) in \(P_{A,(n)_{\underline{\sigma}}}\), for \(\underline{k}\in\mathbb{N}^{d}\) such that \(|\underline{k}|\leq n\). 2. Let \(\underline{q}=\{q_{1},\ldots,q_{d}\}\) be elements of \(R\). In the case where \(x_{1},\ldots,x_{d}\) are \(\underline{q}\)-coordinates, by corollary 6.2 of [11], we have \[\forall i\in\{1,\ldots,d\},\ \forall k\in\mathbb{N},\ \forall z\in A,\ \partial_{q_{i}}^{k}(z)=(k)_{q_{i}}!\,\partial_{q_{i}}^{[k]}(z).\] For \(\underline{k}=(k_{1},\ldots,k_{d})\), set \[\partial_{\underline{\sigma}}^{\underline{k}}:=\partial_{\underline{\sigma},1}^{k_{1}}\circ\ldots\circ\partial_{\underline{\sigma},d}^{k_{d}}.\] Thus, by the previous relation, \[\partial_{\underline{\sigma}}^{\underline{k}}=(k_{1})_{q_{1}}!\ldots(k_{d})_{q_{d}}!\,\partial_{\underline{\sigma}}^{[\underline{k}]}. \tag{2}\]
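The \(q\)-analogues just recalled are easy to compute. A minimal sketch follows; the function names are ours.

```python
from functools import reduce

def q_analogue(n, q):
    """(n)_q = 1 + q + ... + q**(n-1); tends to n as q -> 1."""
    return sum(q**i for i in range(n))

def q_factorial(n, q):
    """(n)_q! = (2)_q * (3)_q * ... * (n)_q, with (0)_q! = (1)_q! = 1."""
    return reduce(lambda acc, m: acc * q_analogue(m, q), range(2, n + 1), 1)

print(q_analogue(4, 2))   # 1 + 2 + 4 + 8 = 15
print(q_factorial(3, 2))  # (2)_2 * (3)_2 = 3 * 7 = 21
```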
**Proposition 1.21**.: _If there exist elements \(\underline{q}=\{q_{1},\ldots,q_{d}\}\) of \(R\) such that \(\underline{x}\) are classical \(\underline{q}\)-coordinates and_ \[\forall i=1,\ldots,d,\ \forall n\in\mathbb{N},\ (n)_{q_{i}}\in R^{\times},\] _then there exists an equivalence of categories_ \[\nabla_{\underline{\sigma}}^{\operatorname{Int}}\text{-Mod}(A)\simeq\operatorname{D}_{A,\underline{\sigma}}^{(\infty)}\text{-Mod}.\] Proof.: We refer the reader to [14, Proposition 8.5]. ## 2. Twisted calculus From now on we make the following assumptions: \(R\) is a Tate ring, \((A,\underline{\sigma})\) is a complete noetherian twisted \(R\)-adic algebra of order \(d\) (definition 1.7); in particular \(A\) is a Tate ring and so it is endowed with an ultrametric norm. There exist some elements \(\underline{x}=(x_{1},\dots,x_{d})\) of \(A\) that are classical and symmetrical \(\underline{\sigma}\)-coordinates (definitions 1.10, 1.13 and 1.15). We fix an ultrametric norm \(\|\ \|\) that defines the topology on \(A\). We will assume that this norm is contractive (definition 1.6) with respect to the endomorphisms \(\sigma_{1},\dots,\sigma_{d}\). **Remarks**.: 1. If \(A\) is a ring endowed with a submultiplicative norm \(\|\ \|\), then every \(A\)-module of finite type \(M\) can be endowed with a semi-norm. If we choose some generators \((e_{1},\dots,e_{m})\) of \(M\), we can define the semi-norm of \(M\) as follows \[\forall x\in M,\ \|x\|_{M}=\inf_{x=\sum_{i=1}^{m}a_{i}e_{i}}\left\{\sum_{i=1}^{m}\|a_{i}\|\right\}.\] We refer the reader to section 2 of [15] for more details. 2. In what follows it will be necessary to assume that finite modules over \(A\) are complete for some limits to exist. This is the case under the hypotheses we are working with. Another option would have been to require that there exists a noetherian ring of definition \(A_{0}\) of \(A\) such that \(A\) is of finite presentation over \(A_{0}\), or that the module \(M\) is flat of finite presentation over \(A\).
### Twisted differential principal parts of finite radius **Definition 2.1**.: The _\(\underline{x}\)-radius_ of \(\underline{\sigma}=\{\sigma_{1},\dots,\sigma_{d}\}\) is \[\rho(\underline{\sigma})=\max_{1\leqslant i\leqslant d}\|x_{i}-\sigma_{i}(x_{i})\|.\] **Lemma 2.2**.: _If we set \(\underline{\sigma}^{n}=\{\sigma_{1}^{n},\dots,\sigma_{d}^{n}\}\), then_ \[\rho(\underline{\sigma}^{n})\leqslant\rho(\underline{\sigma}).\] Proof.: By induction over \(n\), \[\sup_{1\leqslant i\leqslant d}\|x_{i}-\sigma_{i}^{n+1}(x_{i})\|\leqslant\max\left\{\sup_{1\leqslant i\leqslant d}\|x_{i}-\sigma_{i}^{n}(x_{i})\|,\sup_{1\leqslant i\leqslant d}\|\sigma_{i}^{n}(x_{i}-\sigma_{i}(x_{i}))\|\right\}\leqslant\max\left\{\rho(\underline{\sigma}^{n}),\rho(\underline{\sigma})\right\}=\rho(\underline{\sigma}).\] The last inequality is obtained using the fact that the norm is contractive with respect to the endomorphisms \(\sigma_{1},\dots,\sigma_{d}\). Let \(\eta\in\mathbb{R}\) be such that \(0<\eta<1\). Set \[A\left\{\underline{\xi}/\eta\right\}=\left\{\sum_{\underline{n}\in\mathbb{N}^{d}}z_{\underline{n}}\underline{\xi}^{\underline{n}},\ z_{\underline{n}}\in A\ \text{and}\ \|z_{\underline{n}}\|\eta^{|\underline{n}|}\to 0\ \text{when}\ |\underline{n}|\to\infty\right\}.\] This is a Banach algebra for the sup norm \[\left\|\sum_{\underline{n}\in\mathbb{N}^{d}}z_{\underline{n}}\underline{\xi}^{\underline{n}}\right\|_{\eta}=\max\|z_{\underline{n}}\|\eta^{|\underline{n}|}.\] We refer the reader to section 2.2 of [15] for more details. This ring is a Tate ring. **Lemma 2.3**.: _If \(\eta\geq\rho(\underline{\sigma})\) then the \(A\)-linear map_ \[\underline{\xi}^{\underline{n}}\mapsto\underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\] _is an isometric automorphism of the \(A\)-module \(A\left\{\underline{\xi}/\eta\right\}\)._ Proof.: Recall that for \(\underline{k}\in\mathbb{N}^{d}\), \[\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}=\prod_{i=1}^{d}\prod_{j=0}^{k_{i}-1}(\xi_{i}+x_{i}-\sigma_{i}^{j}(x_{i}))=\xi_{1}^{k_{1}}\ldots\xi_{d}^{k_{d}}+f_{\underline{k}}\] with \(f_{\underline{k}}\in A[\underline{\xi}]_{<|\underline{k}|}\). By lemma 2.2, for every natural integer \(j\) and every \(1\leq i\leq d\), \(\|x_{i}-\sigma_{i}^{j}(x_{i})\|\leq\eta\). It follows that \(\|f_{\underline{k}}\|_{\eta}\leq\eta^{|\underline{k}|}\) and that \(\left\|\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right\|_{\eta}=\eta^{|\underline{k}|}\). Hence, the unique \(A\)-linear endomorphism of \(A[\underline{\xi}]\) that sends \(\underline{\xi}^{\underline{k}}\) to \(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\) is an isometry that preserves the degree. Therefore, it extends in a unique manner to an isometry of \(A\left\{\underline{\xi}/\eta\right\}\) onto itself. **Definition 2.4**.: An _orthogonal Schauder basis_ of a normed \(A\)-module \(M\) is a family \(\left\{s_{n}\right\}_{n\in\mathbb{N}}\) of elements of \(M\) such that every element \(s\) of \(M\) can be uniquely written as \[s=\sum_{n=0}^{\infty}z_{n}s_{n}\] with \(z_{n}\in A\) and \(\|s\|=\sup\left\{\|z_{n}\|\|s_{n}\|\right\}\).
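On a truncated series, the sup norm of \(A\left\{\underline{\xi}/\eta\right\}\) is directly computable. A minimal sketch with illustrative data; the container layout is an assumption of ours.

```python
def eta_norm(coeff_norms, eta):
    """Sup norm ||sum_n z_n xi^n||_eta = max_n ||z_n|| * eta**|n| on a
    finitely supported series.

    coeff_norms: dict mapping multi-index tuples n to the norm ||z_n||.
    """
    return max(norm * eta ** sum(n) for n, norm in coeff_norms.items())

series = {(0, 0): 1.0, (1, 0): 4.0, (0, 2): 9.0}
print(eta_norm(series, 0.5))  # max(1.0, 4.0*0.5, 9.0*0.25) = 2.25
```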
**Proposition 2.5**.: _Let \(M\) be a normed \(A\)-module and \(\left\{s_{n}\right\}_{n\in\mathbb{N}}\) be a Schauder basis of \(M\). If \(N\) is an \(A\)-module of finite type, then every element \(f\) of \(N\otimes_{A}M\) can be written in the form_ \[f=\sum f_{i}\otimes s_{i}\] _where \(\left\{f_{i}\right\}_{i\in\mathbb{N}}\) is a family of elements of \(N\)._ Proof.: By hypothesis, for some natural integer \(n\) there exists a surjective morphism \[A^{n}\twoheadrightarrow N.\] We deduce a surjective morphism \[M^{n}\simeq(A\otimes_{A}M)^{n}\simeq(A^{n}\otimes_{A}M)\twoheadrightarrow N\otimes_{A}M.\] This concludes the proof. **Proposition 2.6**.: _When \(\eta\geq\rho(\underline{\sigma})\), \(\left\{\underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\right\}_{\underline{n}\in\mathbb{N}^{d}}\) is an orthogonal Schauder basis for the \(A\)-module \(A\left\{\underline{\xi}/\eta\right\}\)._ Proof.: We conclude from lemma 2.3 and the fact that the family \(\left\{\underline{\xi}^{\underline{k}}\right\}_{\underline{k}\in\mathbb{N}^{d}}\) is an orthogonal Schauder basis of \(A\left\{\underline{\xi}/\eta\right\}\). **Proposition 2.7**.: _If \(\eta\geq\rho(\underline{\sigma})\), then, for all \(n\in\mathbb{N}\), there exists an isomorphism of \(A\)-algebras_ \[A\left\{\underline{\xi}/\eta\right\}/\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\text{ with }\underline{k}\in\mathbb{N}^{d}\text{ such that }|\underline{k}|=n+1\right)\simeq P_{A,(n)_{\underline{\sigma}}}.\] Proof.: By proposition 1.9 \[P_{A,(n)_{\underline{\sigma}}}=A[\underline{\xi}]/\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\text{ with }\underline{k}\in\mathbb{N}^{d}\text{ such that }|\underline{k}|=n+1\right)\simeq A[\underline{\xi}]_{\leq n}.\] Set \(I_{n}:=\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\text{ with }\underline{k}\in\mathbb{N}^{d}\text{ such that }|\underline{k}|=n+1\right)\), the ideal generated by those elements in \(A[\underline{\xi}]\) and in \(A\left\{\underline{\xi}/\eta\right\}\). We can conclude by considering the map below \[A[\underline{\xi}]_{\leq n}\simeq A[\underline{\xi}]/I_{n}\to A\left\{\underline{\xi}/\eta\right\}/I_{n}\] which is bijective by proposition 2.6. **Corollary 2.8**.: _If \(\eta\geq\rho(\underline{\sigma})\), then, for every \(n\in\mathbb{N}\) and every \(A\)-module \(M\) of finite type, there exists a canonical injection_ \[M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\to M\otimes_{A}\widehat{P}_{\underline{\sigma}}.\] Proof.: By proposition 2.7, for all \(n\in\mathbb{N}\) there exists a map \[A\left\{\underline{\xi}/\eta\right\}\to A\left\{\underline{\xi}/\eta\right\}/\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}},\text{ }|\underline{k}|=n+1\right)\simeq P_{A,(n)_{\underline{\sigma}}}.\] Going to the limit and tensoring by \(M\), we obtain a map \(M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\to M\otimes_{A}\widehat{P}_{\underline{\sigma}}\). It remains to show that this map is injective. This can be translated by \[\forall\sum s_{i}\otimes f_{i}\in M\otimes_{A}A\left\{\underline{\xi}/\eta\right\},\ \forall\underline{n}\in\mathbb{N}^{d},\ \sum s_{i}\otimes f_{i}=0\text{ mod }\underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\Rightarrow\sum s_{i}\otimes f_{i}=0.\] This follows directly from the fact that \(\left\{\underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\right\}_{\underline{n}\in\mathbb{N}^{d}}\) is a Schauder basis of \(A\left\{\underline{\xi}/\eta\right\}\) and from proposition 2.5.
### Radius of convergence **Definition 2.9**.: Let \(M\) be a \(\mathrm{D}^{(\infty)}_{A,\underline{\sigma}}\)-module of finite type over \(A\) and \(\eta\in\mathbb{R}\). 1. Let \(s\in M\); then, 1. \(s\) is \(\eta\)_-convergent_ if \[\left\|\partial_{\underline{\sigma}}^{[\underline{k}]}(s)\right\|\eta^{|\underline{k}|}\to 0\text{ when }|\underline{k}|\rightarrow\infty,\] 2. the _radius of convergence_ of \(s\) is \[\text{Rad}(s)=\sup\left\{\eta,\text{ $s$ is $\eta$-convergent}\right\},\] 3. \(s\) is \(\eta^{\dagger}\)_-convergent_ if \(\text{Rad}(s)\geq\eta\). 2. \(M\) is \(\eta\)_-convergent_ if all elements of \(M\) are, 3. the _radius of convergence_ of \(M\) is \[\text{Rad}(M)=\inf_{s\in M}\text{Rad}(s),\] 4. \(M\) is \(\eta^{\dagger}\)_-convergent_ if all elements of \(M\) are. **Remarks**.: 1. We have the following formula \[\text{Rad}(M)=\inf_{s\in M}\limsup_{\underline{k}\rightarrow\infty}\left\|\partial_{\underline{\sigma}}^{[\underline{k}]}(s)\right\|^{-\frac{1}{|\underline{k}|}}.\] 2. If \(s\) (resp. \(M\)) is \(\eta\)-convergent then \(s\) (resp. \(M\)) is \(\eta^{\dagger}\)-convergent. **Lemma 2.10**.: _Let \(M\) be a \(\mathrm{D}^{(\infty)}_{A,\underline{\sigma}}\)-module of finite type over \(A\) and \(\eta\geqslant\rho(\underline{\sigma})\). An element \(s\) of \(M\) is \(\eta\)-convergent if and only if its twisted Taylor series_ \[\widehat{\theta}(s)=\sum_{\underline{k}\in\mathbb{N}^{d}}\partial_{\underline{\sigma}}^{[\underline{k}]}(s)\otimes\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\] _is an element of \(M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\subset M\otimes_{A}\widehat{P}_{\underline{\sigma}}\)._ Proof.: By proposition 2.6, \(\left(\underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\right)_{\underline{n}\in\mathbb{N}^{d}}\) is a Schauder basis of \(A\left\{\underline{\xi}/\eta\right\}\). We conclude by using proposition 2.5 again. **Proposition 2.11**.: _Let \(M\) be a \(\mathrm{D}^{(\infty)}_{A,\underline{\sigma}}\)-module of finite type over \(A\) and \(\eta\geqslant\rho(\underline{\sigma})\). The module \(M\) is \(\eta\)-convergent if and only if its twisted Taylor series factorizes through \(M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\)._ Proof.: It can be deduced directly from lemma 2.10. **Definition 2.12**.: The map \(\theta_{\eta}\) is called the _twisted Taylor map of radius \(\eta\)_. **Proposition 2.13**.: _The \(\eta\)-convergence of \(\mathrm{D}^{(\infty)}_{A,\underline{\sigma}}\)-modules of finite type over \(A\) is stable under quotients and subobjects if they are themselves of finite type over \(A\)._ Proof.: The proof is identical to the one of [13, Proposition 3.5].
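The limsup formula for the radius of convergence given in the remarks after definition 2.9 can be approximated from finitely many norms. A minimal numerical sketch with synthetic data follows; the decay profile is a test case of ours, not drawn from the paper.

```python
def radius_estimate(norms):
    """Approximate Rad(s) from the values ||d^[k](s)||**(-1/|k|).

    norms: dict mapping multi-index tuples k to ||d^[k](s)||.
    Uses a minimum over the available indices as a conservative,
    finite-stage stand-in for the limsup.
    """
    tail = [n ** (-1.0 / sum(k)) for k, n in norms.items() if sum(k) > 0]
    return min(tail)

eta0 = 0.5
# synthetic profile ||d^[k](s)|| = eta0**(-|k|)
norms = {(k1, k2): eta0 ** (-(k1 + k2)) for k1 in range(6) for k2 in range(6)}
print(radius_estimate(norms))  # recovers eta0 = 0.5 for this profile
```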
### Twisted differential operators of finite radius **Definition 2.14**.: The \(A\)-module structure of \(A\left\{\underline{\xi}/\eta\right\}\) induced by the twisted Taylor map of radius \(\eta\) is called the _right structure_. In that case we will write \(A\left\{\underline{\xi}/\eta\right\}\otimes^{\prime}-\) to indicate that we use the right structure on the left-hand side of the tensor product: \[f\otimes^{\prime}zs=\theta_{\eta}(z)f\otimes^{\prime}s.\] **Remark**.: If \(A\) is \(\eta\)-convergent, it is possible to linearize the twisted Taylor map of radius \(\eta\) to obtain an \(A\)-linear map \[\tilde{\theta}_{\eta}:P_{A}\to A\left\{\underline{\xi}/\eta\right\}\] which induces a commutative diagram (3), where \(\theta\) is defined as \[\theta:A\to P_{A},\ f\mapsto 1\otimes f.\] **Lemma 2.15**.: _Let \(M\) and \(N\) be two \(A\)-modules of finite type; the map_ \[M\to A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M,\ s\mapsto 1\otimes^{\prime}s\] _induces a \(P_{A}\)-linear injective map_ \[\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M,N)\to\operatorname{Hom}_{R-\operatorname{cont}}(M,N).\] Proof.: By diagram (3), the inclusion of \(A[\underline{\xi}]\) in \(A\left\{\underline{\xi}/\eta\right\}\) factorizes as \[A[\underline{\xi}]\to P_{A}\xrightarrow{\tilde{\theta}_{\eta}}A\left\{\underline{\xi}/\eta\right\}.\] The image of \(\tilde{\theta}_{\eta}\) is dense in \(A\left\{\underline{\xi}/\eta\right\}\). Moreover, for an \(A\)-linear continuous morphism \(\psi:A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M\to N\), the corresponding diagram is commutative by construction, where \(\bar{\psi}:=\psi\circ(\tilde{\theta}_{\eta}\otimes^{\prime}\operatorname{Id})\circ i\). Hence, by density of the image of \(\tilde{\theta}_{\eta}\) inside \(A\left\{\underline{\xi}/\eta\right\}\), the map \[\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M,N)\to\operatorname{Hom}_{R-\operatorname{cont}}(M,N),\ \psi\mapsto\psi\circ(\tilde{\theta}_{\eta}\otimes^{\prime}\operatorname{Id})\circ i\] is injective. **Definition 2.16**.: Let \(M\) and \(N\) be two \(A\)-modules of finite type; an \(R\)-linear map \(\varphi:M\to N\) is called a _twisted differential operator of radius \(\eta\)_ if it extends to a continuous \(A\)-linear map \(\tilde{\varphi}_{\eta}:A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M\to N\), called its \(\eta\)-linearization. We will denote by \(\operatorname{Diff}_{\underline{\sigma}}^{(\eta)}(M,N)\) the set of twisted differential operators of radius \(\eta\). **Remark**.: Let \(M\) and \(N\) be two \(A\)-modules of finite type. The set \(\operatorname{Hom}_{R-\operatorname{cont}}(M,N)\) of \(R\)-linear continuous morphisms from \(M\) to \(N\) is a \(P_{A}\)-module for the action defined by \[\forall\psi\in\operatorname{Hom}_{R-\operatorname{cont}}(M,N),\ \forall a,b\in A,\forall x\in M,\ ((a\otimes b)\cdot\psi)(x)=a\psi(bx).\] **Proposition 2.17**.: _Let \(M\) and \(N\) be two \(A\)-modules of finite type. The set \(\operatorname{Diff}_{\underline{\sigma}}^{(\eta)}(M,N)\) is a \(P_{A}\)-submodule of \(\operatorname{Hom}_{R-\operatorname{cont}}(M,N)\) containing \(\operatorname{Diff}_{\underline{\sigma}}^{(\infty)}(M,N)\) and_ \[\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M,N)\simeq\operatorname{Diff}_{\underline{\sigma}}^{(\eta)}(M,N).\] Proof.: It is possible to define the action of \(P_{A}\) over \(\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\}\otimes_{A}^{\prime}M,N)\).
For an element \(\varphi\) of \(\operatorname{Diff}_{\underline{\sigma}}^{(\eta)}(M,N)\), the corresponding diagram is commutative, where the canonical map \(\pi\) is \(P_{A}\)-linear. Hence, \((a\otimes b)\cdot\varphi=\pi((a\otimes b)\cdot\tilde{\varphi}_{\eta})\). This ensures that \(\operatorname{Diff}_{\underline{\sigma}}^{(\eta)}(M,N)\) is a \(P_{A}\)-submodule of \(\operatorname{Hom}_{R-\operatorname{cont}}(M,N)\). It contains \(\operatorname{Diff}_{\underline{\sigma}}^{(\infty)}(M,N)\) because we have a surjection of \(A\left\{\underline{\xi}/\eta\right\}\) onto \(P_{A,(n)_{\underline{\sigma}}}\) for every natural integer \(n\). The isomorphism can be directly deduced from the definition. In what follows, we endow \(A\left\{\underline{\xi}/\eta\right\}\widehat{\otimes}_{A}^{\prime}A\left\{\underline{\xi}/\eta\right\}\) with the norm defined by \[\forall f\in A\left\{\underline{\xi}/\eta\right\}\widehat{\otimes}_{A}^{\prime}A\left\{\underline{\xi}/\eta\right\},\ \|f\|:=\inf_{f=\sum a_{i}\otimes b_{i}}\max\|a_{i}\|_{\eta}\|b_{i}\|_{\eta}.\] **Proposition 2.18**.: _The map \(\delta_{\eta}\) defined by_ \[\delta_{\eta}:A\left\{\underline{\xi}/\eta\right\}\to A\left\{\underline{\xi}/\eta\right\}\widehat{\otimes}_{A}^{\prime}A\left\{\underline{\xi}/\eta\right\},\ \xi_{i}\mapsto\xi_{i}\otimes^{\prime}1+1\otimes^{\prime}\xi_{i}\] _makes the corresponding diagram commute for all natural integers \(m^{\prime}<m\) and \(n^{\prime}<n\). Moreover, \(\delta_{\eta}\) is a morphism of Huber \(R\)-algebras of norm \(1\)._ Proof.: Going to the limit, we obtain a commutative diagram. The uniqueness and the fact that \(\delta_{\eta}\) is a morphism of rings come from proposition 1.16 and the fact that the map \(A\left\{\underline{\xi}/\eta\right\}\to\widehat{P}_{\underline{\sigma}}\) is injective. For the existence, it suffices to set \(\delta_{\eta}(\xi_{i})=1\otimes^{\prime}\xi_{i}+\xi_{i}\otimes^{\prime}1\) and check that the map defined in such a way makes the diagram commute. It is well defined and of norm \(1\) because \(\|\delta_{\eta}(\xi_{i})\|_{\eta}=\eta\). **Proposition 2.19**.: _The \(A\)-module \(\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\},A)\) is a Banach \(R\)-algebra for the multiplication defined by_ \[\psi\phi:A\left\{\underline{\xi}/\eta\right\}\xrightarrow{\delta_{\eta}}A\left\{\underline{\xi}/\eta\right\}\widehat{\otimes}_{A}^{\prime}A\left\{\underline{\xi}/\eta\right\}\xrightarrow{\operatorname{Id}\otimes^{\prime}\phi}A\left\{\underline{\xi}/\eta\right\}\xrightarrow{\psi}A.\] **Proposition 2.20**.: _The twisted differential operators of radius \(\eta\) are stable under composition, and \(\|\varphi\circ\psi\|_{\eta}\leq\|\varphi\|_{\eta}\|\psi\|_{\eta}\)._ Proof.: Let \(\varphi:M\to N\) and \(\psi:L\to M\) be two twisted differential operators of radius \(\eta\). The corresponding diagram commutes, where the upper line is exactly \(\widehat{\phi\circ\psi}\). Hence the twisted differential operators of radius \(\eta\) are stable under composition.
By submultiplicativity of the norm, the following inequality is satisfied \[\|\phi\circ\psi\|_{\eta}\leqslant\|\widehat{(\phi\circ\psi)}_{\eta}\|\leqslant\|\tilde{\phi}_{\eta}\|\|\mathrm{Id}\otimes\tilde{\psi}_{\eta}\|\|\delta_{\eta}\|=\|\tilde{\phi}_{\eta}\|\|\tilde{\psi}_{\eta}\|=\|\phi\|_{\eta}\|\psi\|_{\eta}.\qed\] **Proposition 2.21**.: _Let \(M\) be an \(A\)-module of finite type and \(\theta_{\eta}:M\to M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\) be an \(A\)-linear map with respect to the right structure of \(A\left\{\underline{\xi}/\eta\right\}\). Then \(\theta_{\eta}\) is a twisted Taylor map of radius \(\eta\) if and only if the corresponding diagram is commutative._ Proof.: Assume the module \(M\) is endowed with a twisted Taylor map of radius \(\eta\). By proposition 2.11, the twisted Taylor map factorizes through \(M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\) and the map \(\theta_{\eta}\). Proposition 2.18 implies the commutativity of the diagram and allows us to conclude. Conversely, we can use the same propositions and follow the same path. **Definition 2.22**.: _The ring of twisted differential operators of radius \(\eta\) is_ \[\mathrm{D}^{(\eta)}_{\underline{\sigma}}:=\mathrm{Diff}^{(\eta)}_{A,\underline{\sigma}}(A,A).\] **Remark**.: We endow \(\mathrm{D}^{(\eta)}_{\underline{\sigma}}\) with a norm defined by: \[\|\varphi\|_{\eta}:=\|\tilde{\varphi}_{\eta}\|:=\sup_{f\neq 0}\frac{\|\tilde{\varphi}_{\eta}(f)\|}{\|f\|}.\] **Corollary 2.23**.: _There exists a canonical isomorphism of Banach \(A\)-modules_ \[\mathrm{D}^{(\eta)}_{\underline{\sigma}}\simeq\mathrm{Hom}_{A-\mathrm{cont}}(A\left\{\underline{\xi}/\eta\right\},A).\] **Remark**.: For \(n\in\mathbb{N}\), we denote by \(K^{[n]}\) the submodule generated by the \(\partial^{[\underline{k}]}_{\underline{\sigma}}\) such that \(|\underline{k}|\geq n\), and \[\widehat{\mathrm{D}}^{(\infty)}_{\underline{\sigma}}=\varprojlim\mathrm{D}^{(\infty)}_{A,\underline{\sigma}}/K^{[n]}.\] In general this is not a ring; however, we have the following description \[\widehat{\mathrm{D}}^{(\infty)}_{\underline{\sigma}}:=\left\{\sum_{\underline{k}\in\mathbb{N}^{d}}z_{\underline{k}}\partial^{[\underline{k}]}_{\underline{\sigma}},\ z_{\underline{k}}\in A\right\}.\] By duality, diagram (3) induces the following sequence of inclusions \[\mathrm{D}^{(\infty)}_{\underline{\sigma}}\to\mathrm{D}^{(\eta)}_{\underline{\sigma}}\to\widehat{\mathrm{D}}^{(\infty)}_{\underline{\sigma}},\] where the right map is explicitly given by \[\varphi\mapsto\sum_{\underline{k}\in\mathbb{N}^{d}}\tilde{\varphi}_{\eta}\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right)\partial^{[\underline{k}]}_{\underline{\sigma}}.\]
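The coefficient expansion just described underlies a growth criterion, made precise in proposition 2.24 below: the coefficients \(z_{\underline{k}}\) must satisfy \(\|z_{\underline{k}}\|\leq C\eta^{|\underline{k}|}\) for some \(C>0\). A minimal sketch over illustrative data; the container layout is an assumption of ours.

```python
def smallest_growth_constant(coeff_norms, eta):
    """Return the smallest C such that ||z_k|| <= C * eta**|k| holds for
    the finitely many coefficients given (cf. Proposition 2.24 below).

    coeff_norms: dict mapping multi-index tuples k to the norm ||z_k||.
    """
    return max(norm / eta ** sum(k) for k, norm in coeff_norms.items())

eta = 0.5
# a family built to satisfy ||z_k|| = 3 * eta**|k|
good = {(k1, k2): 3.0 * eta ** (k1 + k2) for k1 in range(5) for k2 in range(5)}
print(smallest_growth_constant(good, eta))  # 3.0: the bound holds with C = 3
```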
**Proposition 2.24**.: _The injective map \(\mathrm{D}^{(\eta)}_{\underline{\sigma}}\to\widehat{\mathrm{D}}^{(\infty)}_{\underline{\sigma}}\) induces an isometric isomorphism of Banach \(A\)-modules_ \[\mathrm{D}^{(\eta)}_{\underline{\sigma}}\to\left\{\sum\limits_{\underline{k}\in\mathbb{N}^{d}}z_{\underline{k}}\partial^{[\underline{k}]}_{\underline{\sigma}},\ \exists C>0,\ \forall\underline{k}\in\mathbb{N}^{d},\ \|z_{\underline{k}}\|\leq C\eta^{|\underline{k}|}\right\}\] _for the sup norm_ \[\left\|\sum\limits_{\underline{k}\in\mathbb{N}^{d}}z_{\underline{k}}\partial^{[\underline{k}]}_{\underline{\sigma}}\right\|_{\eta}=\sup\left\{\|z_{\underline{k}}\|\,/\eta^{|\underline{k}|}\right\}.\] Proof.: Let \(\varphi\in\mathrm{D}^{(\eta)}_{\underline{\sigma}}\); as \(\tilde{\varphi}_{\eta}\) is continuous, for all \(\underline{k}\in\mathbb{N}^{d}\) \[\left\|\tilde{\varphi}_{\eta}\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right)\right\|\leq\|\tilde{\varphi}_{\eta}\|\left\|\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right\|=\|\tilde{\varphi}_{\eta}\|\,\eta^{|\underline{k}|}.\] The injection map \(\mathrm{D}^{(\eta)}_{\underline{\sigma}}\to\widehat{\mathrm{D}}^{(\infty)}_{\underline{\sigma}}\) thus has its image contained in the considered set and its norm is at most \(1\). Conversely, consider an element \(\sum_{\underline{k}\in\mathbb{N}^{d}}w_{\underline{k}}\partial^{[\underline{k}]}_{\underline{\sigma}}\) of norm at most \(C\) on the right-hand side. By proposition 2.17, for every sequence \((z_{\underline{k}})_{\underline{k}\in\mathbb{N}^{d}}\) of elements of \(A\) satisfying \(\|z_{\underline{k}}\|\eta^{|\underline{k}|}\to 0\), there exists a unique \(\varphi\in\mathrm{D}^{(\eta)}_{\underline{\sigma}}\) such that \[\tilde{\varphi}_{\eta}\left(\sum\limits_{\underline{k}\in\mathbb{N}^{d}}z_{\underline{k}}\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right)=\sum\limits_{\underline{k}\in\mathbb{N}^{d}}z_{\underline{k}}w_{\underline{k}}\in A,\] following from the fact that \(\|z_{\underline{k}}w_{\underline{k}}\|\leq C\|z_{\underline{k}}\|\eta^{|\underline{k}|}\to 0\). It also implies that \(\|\varphi\|\leq C\). That defines an inverse map which is indeed an isometry. **Proposition 2.25**.: _The structure of \(\mathrm{D}^{(\infty)}_{\underline{\sigma}}\)-module over an \(\eta\)-convergent module of finite type over \(A\) extends canonically to a structure of \(\mathrm{D}^{(\eta)}_{\underline{\sigma}}\)-module._ Proof.: Consider the \(A\left\{\underline{\xi}/\eta\right\}\)-linear map \[M\otimes_{A}A\left\{\underline{\xi}/\eta\right\}\to\operatorname{Hom}_{A}(\operatorname{Hom}_{A}(A\left\{\underline{\xi}/\eta\right\},A),M),\quad s\otimes f\mapsto(\psi\mapsto\psi(f)s).\] The twisted Taylor map of radius \(\eta\) can be linearized to get a map \(A\left\{\underline{\xi}/\eta\right\}\otimes^{\prime}M\to M\otimes A\left\{\underline{\xi}/\eta\right\}\). By composition we obtain an \(A\left\{\underline{\xi}/\eta\right\}\)-linear map \[A\left\{\underline{\xi}/\eta\right\}\otimes^{\prime}_{A}M\to\mathrm{Hom}_{A}(\mathrm{D}^{(\eta)}_{\underline{\sigma}},M)\] or, in an equivalent manner, an \(A\left\{\underline{\xi}/\eta\right\}\)-linear map \[\mathrm{D}^{(\eta)}_{\underline{\sigma}}\to\mathrm{Hom}_{A}(A\left\{\underline{\xi}/\eta\right\}\otimes^{\prime}M,M)=\mathrm{D}^{(\eta)}_{\underline{\sigma}}(M,M).\] It is a morphism of rings by proposition 2.21. By construction, this map is compatible with the map \[\mathrm{D}^{(\infty)}_{\underline{\sigma}}\to\mathrm{D}^{(\infty)}_{\underline{\sigma}}(M,M)\] which gives the action of \(\mathrm{D}^{(\infty)}_{\underline{\sigma}}\) over \(M\). **Proposition 2.26**.: _If \(M\) is a \(\mathrm{D}^{(\eta)}_{\underline{\sigma}}\)-module of finite type over \(A\), then \(M\) is \(\eta^{\dagger}\)-convergent._ Proof.: We have to show here that \(M\) is \(\eta^{\prime}\)-convergent for every \(\eta^{\prime}<\eta\). By lemma 4.1.2 of [1], \(M\) is a \(\operatorname{D}_{\underline{\sigma}}^{(\eta)}\) topological module. In other words, for every element \(s\) of \(M\), the map \[\operatorname{D}_{\underline{\sigma}}^{(\eta)}\to M,\ \varphi\mapsto\varphi(s)\] is continuous and \(A\)-linear.
This means that there exists a constant \(C\) such that for every \(\varphi\in\operatorname{D}_{\underline{\sigma}}^{(\eta)}\), \(\|\varphi(s)\|\leq C\|\varphi\|\|s\|\). Hence, \(\forall\underline{k}\in\mathbb{N}^{d}\), \(\|\partial_{\underline{\sigma}}^{[\underline{k}]}(s)\|\leq C\|s\|/\eta^{|\underline{k}|}\) and so \(\|\partial_{\underline{\sigma}}^{[\underline{k}]}(s)\|\eta^{\prime|\underline{k}|}\leq C\|s\|\left(\eta^{\prime|\underline{k}|}/\eta^{|\underline{k}|}\right)\to 0\). We have shown that the module \(M\) is \(\eta^{\prime}\)-convergent for every \(\eta^{\prime}<\eta\). **Proposition 2.27**.: _Let \(\tau_{1},\ldots,\tau_{d}\) be \(R\)-linear continuous endomorphisms of \(A\) that commute. Assume that \(\underline{x}\) are classical and symmetrical \(\underline{\tau}\)-coordinates and that \(\eta\geq\rho(\underline{\tau})\). Then, if \(A\) is \(\eta\)-convergent with respect to \(\underline{\sigma}\), it is also \(\eta\)-convergent with respect to \(\underline{\tau}\), with the same twisted Taylor morphism as for \(\underline{\sigma}\)._ Proof.: We can consider the corresponding commutative diagram (where \(\theta_{\eta}\) is the twisted Taylor map of radius \(\eta\) with respect to \(\underline{\sigma}\)). The diagram being commutative, the composition of the vertical maps on the right does not depend on \(\underline{\sigma}\). Moreover, by hypothesis \(\eta\geq\rho(\underline{\tau})\); hence, the map \[A[\underline{\xi}]/\left(\underline{\xi}^{(\underline{k})_{\underline{\tau}}},\ |\underline{k}|=n+1\right)\to A\left\{\underline{\xi}/\eta\right\}/\left(\underline{\xi}^{(\underline{k})_{\underline{\tau}}},\ |\underline{k}|=n+1\right)\] is an isomorphism by proposition 2.7. Going to the limit, and by proposition 2.11, it follows that \(A\) is \(\eta\)-convergent with respect to \(\underline{\tau}\) and that \(\theta_{\eta}\) is the twisted Taylor map of radius \(\eta\) with respect to \(\underline{\tau}\). **Proposition 2.28**.: _The ring structure of \(\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\},A)\) does not depend on \(\underline{\sigma}\)._ Proof.: It follows from the multiplication defined on the \(A\)-module \(\operatorname{Hom}_{A-\operatorname{cont}}(A\left\{\underline{\xi}/\eta\right\},A)\) in proposition 2.19 and from proposition 2.27. **Theorem 2.29**.: _Let \(\tau_{1},\ldots,\tau_{d}\) be \(R\)-linear continuous endomorphisms of \(A\) that commute. Assume that \(\underline{x}\) are classical and symmetrical \(\underline{\tau}\)-coordinates and that \(\eta\geqslant\rho(\underline{\tau})\). Then, there exists an isometric \(A\)-linear isomorphism of \(R\)-algebras_ \[\mathrm{D}_{\underline{\sigma}}^{(\eta)}\simeq\mathrm{D}_{\underline{\tau}}^{(\eta)}\] _that only depends on \(\underline{x}\)._ Proof.: This results from the previous proposition and the fact that the isomorphism \[\mathrm{D}_{\underline{\sigma}}^{(\eta)}\simeq\mathrm{Hom}_{A-\mathrm{cont}}(A\left\{\underline{\xi}/\eta\right\},A)\] is \(A\)-linear.
In particular, it is possible to apply this theorem in the case where \[\forall i=1,\ldots,d,\ \tau_{i}=\mathrm{Id}_{A}.\] **Corollary 2.30**.: _If \(\underline{x}\) are étale coordinates, then there exists an isometric \(A\)-linear isomorphism of \(R\)-algebras_ \[\mathrm{D}_{\underline{\sigma}}^{(\eta)}\simeq\mathrm{D}_{\underline{\mathrm{Id}}}^{(\eta)}\] _where \(\mathrm{D}_{\underline{\mathrm{Id}}}^{(\eta)}\) designates the ring of twisted differential operators of radius \(\eta\) with \(\underline{\mathrm{Id}}=(\mathrm{Id}_{A},\ldots,\mathrm{Id}_{A})\)._ ### Confluence From now on we fix an \(\eta>\rho(\underline{\sigma})\) such that \(A\) is \(\eta^{\dagger}\)-convergent. **Definition 2.31**.: The ring of _twisted differential operators of radius \(\eta^{\dagger}\)_ is \[\mathrm{D}_{\underline{\sigma}}^{(\eta^{\dagger})}=\varinjlim_{\eta^{\prime}<\eta}\mathrm{D}_{\underline{\sigma}}^{(\eta^{\prime})}.\] **Proposition 2.32**.: _The category of \(\mathrm{D}_{\underline{\sigma}}^{(\eta^{\dagger})}\)-modules of finite type over \(A\) is equivalent to the subcategory of \(\eta^{\dagger}\)-convergent \(\mathrm{D}_{\underline{\sigma}}^{(\infty)}\)-modules of finite type over \(A\)._ Proof.: It is enough to apply propositions 2.25 and 2.26. **Theorem 2.33**.: _Let \(\tau_{1},\ldots,\tau_{d}\) be \(R\)-linear continuous endomorphisms of \(A\) such that \(\underline{x}\) are also \(\underline{\tau}\)-coordinates. If \(\eta>\rho(\underline{\tau})\), then the categories of \(\mathrm{D}_{\underline{\sigma}}^{(\infty)}\)-modules of finite type over \(A\) which are \(\eta^{\dagger}\)-convergent with respect to \(\underline{\sigma}\), respectively with respect to \(\underline{\tau}\), are equivalent._ Proof.: It is enough to apply theorem 2.29 and proposition 2.32. As in [1], in order to define the \(q\)-analogue of the exponential \(\sum\limits_{n\geqslant 0}\frac{x^{n}}{(n)_{q}!}\), we are interested in the case where the \(q\)-analogues of nonzero integers are invertible. **Definition 2.34**.: When there exist some elements \(\underline{q}=\{q_{1},\ldots,q_{d}\}\) of \(R\) such that \(\underline{x}\) are classical \(\underline{q}\)-coordinates and \[\forall i=1,\ldots,d;\ \forall n\in\mathbb{N},\ (n)_{q_{i}}\in R^{\times},\] then an \(A\)-module \(M\) of finite type endowed with a twisted connection \(\nabla_{\underline{\sigma}}\) is said to be \(\eta^{\dagger}\)-_convergent_ when its image in the category of \(\mathrm{D}_{\underline{\sigma}}^{(\infty)}\)-modules (see proposition 1.21) is \(\eta^{\dagger}\)-convergent. We denote by \(\nabla^{\text{Int}}_{\underline{\sigma}}\text{-Mod}^{(\eta^{\dagger})}_{\text{tf}}(A)\) the category of \(A\)-modules of finite type endowed with an integrable twisted connection that are \(\eta^{\dagger}\)-convergent, and by \(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\text{-Mod}_{\text{tf}}(A)\) the category of \(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\)-modules of finite type over \(A\). The cohomology of this category is the one given by \(\operatorname{Ext}_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}(A,\cdot)\).
We also use the notation \[\operatorname{SP}\left(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\right)=\operatorname{Hom}_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}\left(\operatorname{DR}\left(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\right),\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\right).\] **Theorem 2.35**.: _Assume there exist some elements \(\underline{q}=\{q_{1},\ldots,q_{d}\}\) of \(R\) such that \(\underline{x}\) are classical and symmetrical \(\underline{q}\)-coordinates and that the \(q_{i}\)-analogues \((n)_{q_{i}}\) of all natural integers are invertible in \(R\). Then, if moreover \(A\) is \(\eta^{\dagger}\)-convergent, we have the following equivalence of categories_ \[\nabla^{\text{Int}}_{\underline{\sigma}}\text{-Mod}^{(\eta^{\dagger})}_{\text{tf}}(A)\simeq\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\text{-Mod}_{\text{tf}}(A).\] _This equivalence is compatible with the cohomologies._ Proof.: By proposition 1.21, the category of finite \(A\)-modules endowed with an integrable connection \(\nabla_{\underline{\sigma}}\) is equivalent to the category of \(\operatorname{D}^{(\infty)}_{A,\underline{\sigma}}\)-modules of finite type over \(A\). We used here the hypothesis on the elements \(q_{1},\ldots,q_{d}\). The equivalence then follows directly from proposition 2.32. It remains to prove that this equivalence is compatible with the cohomologies on both sides. The sequence \(\partial_{\underline{\sigma},1},\ldots,\partial_{\underline{\sigma},d}\) is a regular sequence of \(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\). By proposition 1.4.3 of [13], \(\operatorname{SP}\left(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\right)\) is a free resolution of \(A\). Hence, \[\operatorname{Ext}_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}(A,M)=\operatorname{Hom}_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}\left(\operatorname{SP}(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}),M\right)=\operatorname{Hom}_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}\left(\operatorname{SP}(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}),\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\right)\otimes_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}M=\operatorname{DR}(\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}})\otimes_{\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}}M=\operatorname{DR}(M).\] We recover here a generalization in several variables of the result in [10]. This theorem is in the spirit of [1] and [17]. **Theorem 2.36**.: _Let \(K\) be a complete non-archimedean field of characteristic \(0\). Let \(A\) be a Huber \(K\)-algebra and \(\underline{q}=(q_{1},\ldots,q_{d})\) be elements of \(R\) such that for every \(i=1,\ldots,d\) the \(q_{i}\)-analogues of integers are invertible.
If \(A\) is \(\eta^{\dagger}\)-convergent with respect to etale classical symmetrical \(\underline{q}\)-coordinates \(\underline{x}=(x_{1},\ldots,x_{d})\), with \(\eta\geqslant\rho(\underline{\sigma})\), then we have an equivalence of categories_ \[\nabla^{\text{Int}}_{\underline{\sigma}}\text{-Mod}^{(\eta^{\dagger})}_{\text{tf}}(A)\simeq\nabla^{\text{Int}}_{\underline{\text{Id}}}\text{-Mod}^{(\eta^{\dagger})}_{\text{tf}}(A)\] _compatible with the cohomologies on both sides._ Proof.: We are in a situation where we can apply theorem 2.35. It suffices to show that we have an isomorphism \[\operatorname{D}^{(\eta^{\dagger})}_{\underline{\sigma}}\simeq\operatorname{D}^{(\eta^{\dagger})}_{\underline{\text{Id}}}.\] This follows from corollary 2.30.
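For the reader's convenience, we recall the standard conventions behind these \(q\)-analogues; this is only a reminder of the usual definitions (as in [1]), not a new statement: \[(n)_{q}=1+q+\cdots+q^{n-1}=\frac{q^{n}-1}{q-1}\quad(q\neq 1),\qquad(n)_{q}!=(1)_{q}(2)_{q}\cdots(n)_{q},\] so that the invertibility hypothesis of definition 2.34 is exactly what is needed for the twisted exponential \(\sum_{n\geqslant 0}x^{n}/(n)_{q}!\) to be defined over \(R\).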
2309.03702
DiffDefense: Defending against Adversarial Attacks via Diffusion Models
This paper presents a novel reconstruction method that leverages Diffusion Models to protect machine learning classifiers against adversarial attacks, all without requiring any modifications to the classifiers themselves. The susceptibility of machine learning models to minor input perturbations renders them vulnerable to adversarial attacks. While diffusion-based methods are typically disregarded for adversarial defense due to their slow reverse process, this paper demonstrates that our proposed method offers robustness against adversarial threats while preserving clean accuracy, speed, and plug-and-play compatibility. Code at: https://github.com/HondamunigePrasannaSilva/DiffDefence.
Hondamunige Prasanna Silva, Lorenzo Seidenari, Alberto Del Bimbo
2023-09-07T13:28:36Z
http://arxiv.org/abs/2309.03702v1
# DiffDefense: Defending against Adversarial Attacks via Diffusion Models ###### Abstract This paper presents a novel reconstruction method that leverages Diffusion Models to protect machine learning classifiers against adversarial attacks, all without requiring any modifications to the classifiers themselves. The susceptibility of machine learning models to minor input perturbations renders them vulnerable to adversarial attacks. While diffusion-based methods are typically disregarded for adversarial defense due to their slow reverse process, this paper demonstrates that our proposed method offers robustness against adversarial threats while preserving clean accuracy, speed, and plug-and-play compatibility. Keywords: Diffusion models, Adversarial defense, Adversarial attack. ## 1 Introduction The susceptibility of machine learning models to adversarial attacks is a major challenge in the field of artificial intelligence. While various techniques have been proposed to enhance the robustness of classifiers against such attacks, there is a pressing need for more effective and efficient solutions. In recent years, generative models such as Generative Adversarial Networks (GANs) [13] and Diffusion Probabilistic Models [17] have emerged as a promising approach to improve the resilience of machine learning models against adversarial attacks. Modern deep generative models share a common structural similarity: the generation of novel patterns is usually performed by transforming some random latent code \(z\). Sampling \(z\sim p(z)\), where \(p(z)\) is a known distribution (e.g., \(\mathcal{N}(0,I)\)), and then computing \(G(z)\), where \(G(\cdot)\) is a deep neural network, allows the generation of new data. Given a model \(G(\cdot)\) trained on clean data, we can assume that attacked samples \(x^{*}\) have a different distribution; therefore, finding a latent code \(z^{*}\) able to generate \(x^{*}\) should be hard. Our approach builds on the idea that given some attacked pattern \(x^{*}=x+\epsilon\), where \(x\) is an unknown clean pattern and \(\epsilon\) is a perturbation crafted to induce some classifier into a mistake, we should be able to find some latent code \(z^{*}\) for which \(G(z^{*})\) is closer to the unknown clean pattern \(x\) than to the attacked one \(x^{*}\). In Fig. 2 it can be seen how an attack can add subtle patterns (center) to a clean image (left) and how DiffDefense recovers a correctly classified example (right). In this paper, we present a novel approach that leverages Diffusion Models to enhance the resistance of machine learning classifiers to adversarial attacks. Our proposed method involves reconstructing the input image using the reverse process of a diffusion model (see Fig. 1 for details), which improves the model's ability to withstand adversarial attacks. We show that this approach offers comparable speed and robustness to other generative model-based solutions. Moreover, our proposed defense mechanism can be applied as a plug-and-play tool to any classifier without compromising its accuracy, provided that the diffusion model can generate high-quality images. Overall, our approach holds promise as a viable alternative to other models that are more complex to train, such as GANs, for defending against adversarial attacks on machine learning models, owing to the benefits offered by Diffusion Models. Our contribution is threefold: * We are the first to use recently successful Denoising Diffusion Probabilistic Models as a plug-in algorithm for reconstruction-based adversarial defense.
Differently from [26; 35], our approach is based on reconstruction, thus not requiring backward and forward passes for each optimization step. Moreover, DDPMs are more stable in training with respect to GANs, which have also been used as a reconstruction tool [30]. * Thanks to a superior reconstructive and representational power, DDPMs require fewer prototype embeddings and iterations to extract a clean pattern from the attacked one, leading to higher inference efficiency with respect to [26; 30; 35]. * Finally, our approach does not require training on adversarial patterns and can be used to detect attacked images. Figure 1: Overview of our approach. Adversarial attacks happen in image space by adding crafted noise to a pattern \(x\), shifting the classifier's output to a wrong class. DiffDefense starts by drawing a sample \(z_{T}^{1}\), to diffuse iteratively, for \(T\) steps, into a reconstruction \(x_{r}^{(i)}=z_{0}^{(i)}=G\left(z_{T}^{(i)}\right)\); we then optimize \(z_{T}^{(i)}\) so that the diffusion output for a given optimized pattern \(z_{T}^{(i+1)}\) lies closer to the original attacked sample. In the figure, we drop the diffusion step subscript for readability purposes. ## 2 Related Works We now cover the existing state of the art on modern generative models in adversarial machine learning, forming the base of our reconstruction-based adversarial defense approach. We then discuss recent methods for adversarial attack and defense. ### Generative Models Generative models [13; 17; 19; 31] have emerged as a powerful class of machine learning algorithms that can create new data samples with characteristics similar to a given dataset. Their central idea is to learn the underlying data distribution, which can then be used to generate new patterns via sampling. Interestingly, these models have also proven to be particularly effective in adversarial scenarios, showcasing their ability to create samples that can attack classifiers [34; 36]. The aforementioned models have also shown the capability to learn a semantically coherent latent embedding space. This property has been exploited, in adversarial scenarios, to remove attacks from patterns. By reconstructing [20; 30] or purifying [26] the perturbed sample, and by generating new samples to bolster adversarial training [16], these models can significantly enhance the security of machine learning systems. Recent research has highlighted the potential role of generative models in adversarial defense, as their primary objective is to produce fake data that closely resembles real data. In this paper we investigate the use of Diffusion Models to bolster the robustness of models against adversarial attacks. For a more thorough coverage of modern Deep Generative Models we refer the reader to [4]. Figure 2: Left: original example from MNIST. Center: result of the DeepFool attack (wrongly classified). Right: our DiffDefense reconstruction (correctly classified). ### Adversarial Attacks and Defense An adversarial attack is a process that aims at altering a classifier's input pattern in order to get the classifier to output a wrong prediction. Given an input \(x_{i}\) with corresponding label \(y_{i}\) and a classifier \(\mathcal{C}\), an attack method aims at obtaining an \(x^{*}\) such that \(\mathcal{C}(x^{*})\neq y_{i}\). Given this definition, any method completely replacing pattern \(x\) with a different pattern would suffice in making a classifier mistake the label.
For this reason, a constraint on the perturbation of the attacked pattern \(x^{*}\) is also required: the attacked pattern \(x^{*}\) must be close to the original one, \(||x-x^{*}||<\epsilon\). Adversarial attacks can work in white-box and black-box scenarios. In the white-box scenario the classifier is known to the attacker. Full knowledge of the classification method implies that, for example, for a neural network all weights are known, and the attacking method can leverage this knowledge. White-box approaches may exploit the computation of the gradient of the model loss with respect to its parameters for a specific input, such as the Fast Gradient Signed Method (FGSM) [14]. PGD [21, 22] improves over [14] by a refined attack generation obtained by iterative Projected Gradient Descent (PGD). Instead, DeepFool [23] attempts to find the closest decision boundary and then perturbs the input in that direction. Combinations of attacks have also been proposed in [11], combining parameter-free versions of PGD with Square Attack [2]. EOT+PGD combines the concepts of Expectation over Transformation (EOT) [3] and Projected Gradient Descent (PGD) to improve its effectiveness. Elastic Net [6] exploits a combination of L1 and L2 regularization terms to provide an optimal trade-off. In the black-box scenario, the attacking method does not have access to the classifier, which is the most realistic setting. When an attacker has no knowledge of a classifier's architecture and weights, they can employ a query-based approach, applying perturbations to the input without relying on gradients until the classifier changes its output. Depending on the model's feedback for a given query, an attack can be classified as a score-based [7] or hard-label [5] attack. In the score-based setting, the attacker exploits the model's output probabilities for each decision. Several attacks have been created with this approach, such as Square Attack [2], which selects localized square-shaped updates at random positions. Pixel Attack [32] shows that attacks are possible even with a single-pixel perturbation. In the hard-label approach, the attacker exploits the model's final decision output. Recently, SIGN-OPT [9] improved on previous work [8] by using fewer queries (20K) to attack, obtaining an error rate similar to white-box attacks while remaining much slower than them. However, a query-based approach may not be as efficient as a white-box attack. Instead, a black-box attack can be employed by using the transferability [27, 28] of perturbed images to attack the target model. The attacker can use a substitute model, of which they have full knowledge, to generate adversarial images using white-box attacks that can then be used to attack the target model. Further coverage of adversarial attack and defense techniques can be found in [1]. The issue of adversarial attacks in machine learning has prompted the development of various defense mechanisms; we can roughly identify three main approaches: adversarial training, adversarial reconstruction, and adversarial purification. Adversarial training methods improve the robustness of a model against adversarial attacks by augmenting its training set with adversarial examples. First introduced by [14], adversarial training has become one of the most successful defenses against adversarial attacks [21, 29, 15]; it can also be enhanced using generative models for data augmentation [16].
The main limitation of this approach is that it mainly protects classifiers against the attack methods used during adversarial training. Adversarial reconstruction approaches leverage the projection of patterns onto a learned latent manifold to regenerate the original input from its adversarial counterpart. Generative models [20, 30] are a natural choice to learn such latent representations and to obtain clean reconstructions out of attacked patterns. Other approaches leveraged super-resolution networks [24] or trained a reconstruction network to minimize the perceptual loss between the reconstruction of the attacked pattern and the clean image. Adversarial purification techniques perform a filtering of attacked patterns, removing adversarial perturbations while preserving their original features. Recently proposed denoising Diffusion Models have been used as a tool for purification [33, 35, 37]. Reconstructions based on GANs [30] are effective and generalize to unseen threats. However, the instability of GANs during training remains a challenge. Moreover, many source noise embeddings and multiple reconstruction iterations are required to obtain an effective defense. Defense-VAE is faster and as effective as Defense-GAN. However, to obtain effective reconstructions, the method [20] is fine-tuned on attacked images, making the approach less general and more prone to failure on unseen threats. Purification via Diffusion Models [33, 26, 35] exploits multiple forward/backward passes to obtain a reliable defense, which requires significant time to purify an image. With respect to [20, 30], our approach exploits powerful Diffusion Models as a reconstruction tool. Interestingly, our approach is more efficient than Defense-GAN, requiring fewer iterations and source embeddings. Unlike Defense-VAE [20], we do not need to train on adversarial examples. Current defense mechanisms exploiting Diffusion Models are less efficient, requiring as much as 5s on a V100 card [26], while our approach runs in 0.28s on a TitanXP card. ## 3 Methodology We propose a diffusion reconstruction method as a defense against adversarial attacks. The underlying idea is that adversarial attacks seek to deceive a deep neural network (DNN) by introducing a disturbance to the image while preserving its semantic meaning. Hence, the adversarial image ought to be situated close to the original, unperturbed image. Our approach is based on the idea that it is possible to induce a generative model \(G(\cdot)\) to produce a given image \(x^{*}\) by minimizing the distance in image space of the output pattern, getting \(\hat{z}\) as the result of such minimization: \[\hat{z}=\arg\min_{z}||G(z)-x^{*}|| \tag{1}\] We then obtain the reconstructed image \(x_{r}=G(\hat{z})\). Since \(G(\cdot)\) is learned on a clean dataset, the main assumption is that the outputs obtained by solving Eq. 1 are closer to clean examples than to corrupted ones. In our case \(G(z)\) is the result of a reverse diffusion process, each step of which is given by: \[z_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(z_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}\left(z_{t},t\right)\right)+\sigma_{t}n \tag{2}\] where \(\epsilon_{\theta}\) is the U-Net noise prediction model, \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), \(\{\beta_{t}\in(0,1)\}_{t=1}^{T}\), and \(\sigma_{t}=\sqrt{\beta_{t}}\). Our goal is to have the diffusion reverse process create a clean image that is as close as possible to the attacked input; a code sketch of this procedure is given below.
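To make the procedure concrete, the following is a minimal PyTorch-style sketch of the reverse process of Eq. 2 combined with the latent optimization of Eq. 1; the noise-prediction model `eps_model`, the linear noise schedule, and the step counts are illustrative placeholders rather than the exact configuration used in our experiments.

```python
import torch

def make_schedule(T, device="cpu"):
    # Linear beta schedule (an assumption; any DDPM schedule works here).
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    sigmas = torch.sqrt(betas)
    return alphas, alpha_bars, sigmas

def reverse_process(eps_model, z_T, schedule):
    """Reverse diffusion of Eq. 2: map a latent z_T to a reconstruction z_0."""
    alphas, alpha_bars, sigmas = schedule
    z = z_T
    for t in reversed(range(len(alphas))):
        n = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        t_batch = torch.full((z.shape[0],), t, device=z.device, dtype=torch.long)
        eps = eps_model(z, t_batch)  # U-Net noise prediction
        z = (z - (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t]) * eps) \
            / torch.sqrt(alphas[t]) + sigmas[t] * n
    return z

def diffdefense(eps_model, x_adv, T=50, L=5, eta=0.1, delta=0.1):
    """Optimize the source latent so the reconstruction matches x_adv (Eq. 1)."""
    schedule = make_schedule(T, device=x_adv.device)
    z_T = torch.randn_like(x_adv).requires_grad_(True)
    for _ in range(L):
        # Few diffusion steps keep the graph through the reverse pass manageable.
        x_r = reverse_process(eps_model, z_T, schedule)
        loss = torch.mean((x_r - x_adv) ** 2)       # MSE loss of Algorithm 1
        (grad,) = torch.autograd.grad(loss, z_T)
        eta = eta * delta ** (1.0 / (T + 0.8))      # decayed step size (assumed form)
        with torch.no_grad():
            z_T -= eta * grad
    return x_r.detach()                             # reconstruction fed to the classifier
```

The reconstruction returned by this loop is what gets passed, unchanged, to the protected classifier.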
To this end we must obtain a suitable noise vector \(z_{T}\). Therefore, we start from a random noise sample \(z_{T}^{1}\), and we iteratively generate an image using the reverse process of a diffusion model. We then optimize \(z\) to solve Eq. 1, as shown in Algorithm 1. In its general form, the proposed algorithm may also be run for multiple source embeddings, although we found that this only slightly increases accuracy. ``` 1:Given adversarial image \(x^{*}\) 2:\(z_{T}^{(1)}\sim\mathcal{N}(0,\,I)\) 3:for\(i=1,2,\ldots,L\)do 4:for\(t=T^{*},T^{*}-1,\ldots,0\)do 5:\(n\sim\mathcal{N}(0,\,I)\) 6:\(z_{t-1}^{(i)}=\frac{1}{\sqrt{\alpha_{t}}}\left(z_{t}^{(i)}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(z_{t}^{(i)},t)\right)+\sigma_{t}n\) (Eq. 2) 7:endfor 8:\(\eta^{i}=\eta^{i-1}\Delta^{\frac{1}{T+0.8}}\) 9:\(x_{r}^{(i)}=z_{0}^{(i)}\) 10:\(z_{T}^{(i+1)}=z_{T}^{(i)}-\eta^{i}\nabla_{z}\mathcal{L}(x_{r}^{(i)},x^{*})\) 11:endfor ``` **Algorithm 1** DiffDefense Reconstruction Algorithm. As the loss \(\mathcal{L}(x_{r}^{(i)},x^{*})\) we use the Mean Squared Error (MSE). \(T^{*}\) is the number of diffusion steps and \(L\) the number of gradient descent iterations, both treated as hyperparameters. \(\Delta=0.1\) is a decay rate. ### Implementation details The noise prediction U-Net \(\epsilon_{\theta}\) architecture consists of a contracting path, a bottleneck layer, and an expansive path. The contracting path involves repeating a block with layer normalization, 3x3 convolutions, and SiLU activation, followed by downsampling with a stride of 2. The number of feature channels is doubled at each of the three downsampling steps. The bottleneck layer consists of the same block as the contracting path, repeated three times. The expansive path starts by concatenating the corresponding feature map from the contracting path with an upsampled input using transpose convolution. It is then followed by a block with layer normalization, 3x3 convolutions, and SiLU activation. At each upsampling step, the number of feature channels is halved. This process is also repeated three times. Both the contracting and expansive paths include a time-embedding layer, which consists of two linear layers with a SiLU activation in between. This time-embedding layer is added at each block of the contracting and expansive paths. We employ two classifiers: the attacked classifier A and the surrogate classifier B. Classifier A is composed of two 5x5 convolutions with 64 output channels and strides of 2 and 1, respectively, using ReLU activations. It is then followed by a dropout layer (p = 0.25), a linear layer with 128 output features using ReLU activation, another dropout layer (p = 0.5), and finally a linear layer with 10 output features. We use classifier B to generate adversarial samples for black-box attacks. B consists of a dropout layer (p = 0.2), followed by three convolutions with respective filter sizes of 8x8, 6x6, and 5x5 and strides of 2, 2, and 1, using ReLU activations. Afterward, another dropout layer (p = 0.5) is applied, and the final layer is a linear layer with 10 output features. The Diffusion Model and the classifier are trained on the same clean dataset. Training the classifier on reconstructed data is unnecessary if the diffusion model generates high-fidelity images resembling the originals. To implement the adversarial attacks used to evaluate DiffDefense, we used the Adversarial Robustness Toolbox [25] and torchattacks [18]. ## 4 Experiments This section presents the experiments that evaluate the proposed method using both black-box and white-box attacks. First, we evaluate performance against three classic attacks in both settings [14, 22, 23].
In these experiments we seek optimal values for the number of gradient descent iterations \(L\), the embedding set size \(R\), and the diffusion step \(T^{*}\). Then, keeping these hyperparameters fixed, we test DiffDefense against unseen attacks [2, 6, 11, 38]. Finally, we use this method to detect adversarial samples. The experiments are conducted on two different datasets, MNIST [12] and KMNIST [10]. ### Results of white-box & black-box attacks We investigate DiffDefense's ability to withstand both white-box and black-box attacks. To this end, we subject it to three potent white-box attacks: FGSM, PGD, and DeepFool. Furthermore, we evaluate the performance of DiffDefense against these same attacks in the black-box setting, where we generate adversarial samples using an auxiliary classifier to attack the target classifier. In Tab. 1 we can see that all attacks are quite effective in both settings, except for DeepFool used as a black-box method. Using DiffDefense, we can recover a correct classification for almost all attacked examples. ### Defense against unseen threats One of the significant limitations of existing adversarial training defense methods is their inability to effectively address previously unseen threats. DiffDefense does not require observing adversarial patterns to work; nonetheless, the previous experiments were performed seeking the optimal values for the hyperparameters \(L,R,T^{*}\). \begin{table} \begin{tabular}{l|c c c c} \hline \hline Dataset & Attack & Type & Without defense & With defense \\ \hline \multirow{6}{*}{MNIST} & No attack & - & 99.14\% & 99.06\% \\ \cline{2-5} & DeepFool & White box & 0.95\% & 98.16\% \\ & & Black box & 97.17\% & 98.86\% \\ \cline{2-5} & PGD & White box & 5.81\% & 95.94\% \\ & & Black box & 51.28\% & 97.18\% \\ \cline{2-5} & FGSM & White box & 23.72\% & 89.95\% \\ & & Black box & 15.81\% & 91.28\% \\ \hline \multirow{6}{*}{KMNIST} & No attack & - & 95.18\% & 94.38\% \\ \cline{2-5} & DeepFool & White box & 2.93\% & 93.92\% \\ \cline{2-5} & & Black box & 92.16\% & 93.92\% \\ \cline{2-5} & PGD & White box & 26.83\% & 84.85\% \\ \cline{2-5} & FGSM & White box & 58.43\% & 91.49\% \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of DiffDefense against white-box & black-box attacks on the MNIST & KMNIST datasets. We report accuracy for each attack with and without defense. For black-box attacks, adversarial images have been crafted using a substitute classifier. \begin{table} \begin{tabular}{l l c c c} \hline \hline Attack & Type & W/O Defense & W/ Defense & Adv. Training \\ \hline FGSM \(\epsilon=0.3\) & White Box & 23.72\% & 89.95\% & 98.02\% \\ PGD \(\epsilon=0.3\) & White Box & 5.81\% & 95.94\% & 79.59\% \\ DeepFool & White Box & 0.95\% & 98.16\% & 5.81\% \\ EOT+PGD \(\epsilon=0.3\) & White Box & 24.57\% & 96.46\% & 89.22\% \\ Square Attack & Black Box & 43.31\% & 97.31\% & 93.09\% \\ AutoAttack & White Box & 1.26\% & 88.09\% & 45.86\% \\ Elastic Net & White Box & 0.75\% & 95.75\% & 0.62\% \\ \hline \hline \end{tabular} \end{table} Table 2: Robustness of DiffDefense against unseen threats on the MNIST dataset. Adversarial training uses adversarial samples crafted by the FGSM attack with \(\epsilon=0.3\). In order to assess the robustness of our proposed approach to such unseen attacks, we conducted evaluations using four different attack techniques: Square Attack [2], AutoAttack [11], EOT+PGD [38], and Elastic Net [6], without tuning hyperparameters. Results in Tab.
2 indicate that our method is robust against all four of these previously unseen attack methods. Here we also test the behavior of adversarial training with samples produced by FGSM with \(\epsilon=0.3\). Interestingly, DiffDefense obtains high accuracy even in cases in which adversarial training is not helping at all [6, 11, 23]. ### Ablation Studies To evaluate the effectiveness and speed of our proposed approach, we conducted an analysis of the three main hyperparameters: iteration number \(L\), embedding set size \(R\), and diffusion step \(T^{*}\). We found that the proposed method does not need the full number of diffusion steps but converges with fewer steps, as shown in Fig. 3. Moreover, in a comparison with Defense-GAN [30], the results of our experiments revealed that our method achieved convergence with fewer iteration steps and a smaller embedding set, while also requiring less time to converge than the GAN-based method. This is evident in the results presented in Tab. 3, which show the superiority of our proposed approach over Defense-GAN. The metrics used for comparison include robust accuracy, which measures the accuracy after applying the defense, and time, which indicates the duration needed to reconstruct a single image. ### Attack detection Interestingly, the results of our study indicate that non-perturbed images are reconstructed with greater ease in comparison to those subjected to adversarial attacks. This is expected, since the diffusion model and the classifier are trained on the same data, which makes the reverse diffusion process easier for an unperturbed image than for an adversarial one. This ease of reconstruction is reflected in significantly smaller reconstruction errors after an equal number of iterations. These findings suggest that the reconstruction error may serve as a potential indicator of the presence of an attack. In Fig. 4 we show ROC curves varying the diffusion step for [6, 23]; for all other methods [2, 11, 22, 38] we get AUC \(\in\) [.99, 1]. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Method & L & R & Time & Robust Acc \\ \hline \multirow{4}{*}{Defense-GAN [30]} & 25 & 10 & 0.086 & 79.98\% \\ & 100 & 1 & 0.273 & 50.11\% \\ & 100 & 10 & 0.338 & 89.11\% \\ & 200 & 10 & 0.675 & 91.55\% \\ \hline \multirow{2}{*}{Ours} & 5 & 1 & 0.280 & 87.78\% \\ & **5** & **5** & **0.280** & **89.95**\% \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with Defense-GAN [30]. Fewer iterations (L) and a smaller embedding set (R) in DiffDefense lead to faster convergence and reduced time. All tests made on MNIST using the white-box FGSM attack (\(\epsilon=0.3\)) and the same classifier as [30]. ## 5 Conclusion We proposed DiffDefense, a novel method that uses Diffusion Models for reconstruction, enhancing classifier robustness against attacks. Empirical evaluation demonstrated its efficacy, speed, and potential as an alternative to GAN-based methods and adversarial purification methods based on diffusion models. We also showed that our approach is effective against previously unseen attacks, highlighting its robustness to new threats. Additionally, we illustrated the usefulness of reconstruction as a tool for adversarial detection. Our findings suggest that diffusion-based adversarial defense by reconstruction is a promising path toward developing secure AI systems. We believe that future work may further improve our method by adopting better solvers for more accurate and faster reconstruction.
Figure 4: Attack detection ROC curves for DiffDefense. In our experiments FGSM, PGD, EOT+PGD, AutoAttack, and Square Attack yielded an AUC \(\in\) [.99, 1]. Figure 3: Accuracy analysis of the classifier after DiffDefense has been applied, for different diffusion steps, using L = 4 and R = 5.
2309.15160
Opportunities and limits of lunar gravitational-wave detection
A new era of lunar exploration has begun with participation of all major space agencies. This activity brings opportunities for revolutionary science experiments and observatories on the Moon. The idea of a lunar gravitational-wave detector was already proposed during the Apollo program. The key characteristic of the Moon is that it is seismically extremely quiet. It was also pointed out that the permanently shadowed regions at the lunar poles provide ideal conditions for gravitational-wave detection. In recent years, three different detector concepts were proposed with varying levels of technological complexity and science potential. In this paper, we confront the three concepts in terms of their observational capabilities based on a first more detailed modeling of instrumental noise. We identify important technological challenges and potential show-stoppers.
Andrea Cozzumbo, Benedetta Mestichelli, Marco Mirabile, Lavinia Paiella, Jacopo Tissino, Jan Harms
2023-09-26T18:00:13Z
http://arxiv.org/abs/2309.15160v2
# Opportunities and limits of lunar gravitational-wave detection ###### Abstract A new era of lunar exploration has begun with participation of all major space agencies. This activity brings opportunities for revolutionary science experiments and observatories on the Moon. The idea of a lunar gravitational-wave detector was already proposed during the Apollo program. The key characteristic of the Moon is that it is seismically extremely quiet. It was also pointed out that the permanently shadowed regions at the lunar poles provide ideal conditions for gravitational-wave detection. In recent years, three different detector concepts were proposed with varying levels of technological complexity and science potential. In this paper, we confront the three concepts in terms of their observational capabilities based on a first more detailed modeling of instrumental noise. We identify important technological challenges and potential show-stoppers. ## I Introduction Lunar gravitational-wave (GW) detection was originally proposed by Joseph Weber who was the lead scientist behind all the early efforts to build gravitational-wave detectors. His Lunar Surface Gravimeter was deployed on the Moon by the crew of the Apollo 17 mission in December 1972 [1]. The main science goal was to observe surface vibrations of the Moon caused by passing GWs. The instrument did not perform as expected, and today we also know that the sensitivity would not have been enough to detect GW signals. Nonetheless, the idea was valid since the Moon was already known at that time to be seismically extremely quiet [2]. Models of the lunar seismic background indicate that vibration amplitudes in the frequency band suitable for GW detection might be several orders of magnitude weaker than on Earth [3]. Furthermore, the permanently shadowed regions (PSRs) at the lunar poles provide thermally stable conditions, and they might very well be the coldest regions in our solar system [4]. Following opportunities to submit mission ideas to NASA and ESA calls, three new concepts were proposed in 2020: the Lunar GW Antenna (LGWA) [5], the Lunar Seismic and Gravitational Antenna (LSGA) [6], and the Gravitational-wave Lunar Observatory for Cosmology (GLOC) [7] soon followed by a study of a concept similar to GLOC called LION [8]. Another concept similar to LGWA was recently proposed using a different sensor technology [9]. The LGWA and LSGA concepts exploit the response of the Moon to GWs, while GLOC and LION work like the Virgo/LIGO detectors on Earth [10; 11]. In the following, we will refer to the GLOC/LION type concepts as long-baseline interferometers with suspended test masses (LBI-SUS), and to LSGA type concepts as long-baseline interferometers with ground optics (LBI-GND). A recent study found that sensitivity limits coming from the seismic background create a separation of the observation bands of the three concepts [12]. The order of concepts from low to high observation frequencies is LBI-GND, LGWA, and LBI-SUS. As we will show in this paper, none of the concepts can ever reach good sensitivity below \(1\,\mathrm{mHz}\), which must be left to space-based detectors like LISA [13], and there is no strong motivation to observe GWs above a few Hertz with lunar detectors since this is where terrestrial detectors can already achieve an excellent sensitivity [14]. Inside this designated band of lunar GW detection from \(1\,\mathrm{mHz}\) to a few Hertz lies the decihertz band, which plays a strategic role for GW science. 
A decihertz detector would be the _missing link_ between the observation bands of LISA and terrestrial detectors. The decihertz band offers immense opportunities for breakthrough science [15; 16; 17; 5; 18]. If the sensitivity targets of proposed space-based decihertz detectors like the Big Bang Observer [19] and DECIGO [20] can be reached, they would be able to observe primordial GWs even when assuming conservative slow-roll inflationary models [21; 22]. In this paper, we compare sensitivity models and observational capabilities of the three lunar GW detector concepts. So far, only the LGWA concept was presented with a detailed noise budget [23]. We present a first noise model for an LBI-GND concept and a more detailed analysis of the instrumental noise of an LBI-SUS concept. In contrast to the sensitivity targets proposed for GLOC and LION, we believe that such a detector needs to be more ambitious and cover the entire decihertz band to justify the immense cost and effort to build such a detector on the Moon. Proposed next-generation terrestrial detectors like Einstein Telescope [14] or Cosmic Explorer [24] will already reach down to a few Hertz with GW observations. In section II, we briefly review the LGWA concept and noise model. A more detailed analysis of instrument-noise models is provided in section III for long-baseline laser-interferometer concepts. We use these models to calculate noise budgets for the LBI-GND and LBI-SUS concepts, which we present in section IV. We then take the sensitivity models and compare the observational capabilities of all three concepts in section V. ## II Inertial acceleration measurement The LGWA was proposed as an array of 4 seismometers deployed in a PSR. Each seismometer reads out the lunar surface displacement produced by GWs with respect to an inertial reference mass. The array helps to distinguish between the seismic background and GW signals. The array diameter depends on the correlation length of the seismic field around \(0.3\,\mathrm{Hz}\), where LGWA is designed to have its peak sensitivity. Correlations between sensors must be high enough to coherently analyze the seismic background, but the sensors cannot be too close either, since differential effects between sensors strongly boost the efficiency of background-reduction methods. It is expected that the optimal diameter is about a kilometer, which is quite small; this is because the lunar regolith and the underlying geology do not have the solid structure typical of terrestrial geology. The requirement of 4 seismic stations comes from the idea that three stations are needed to uniquely identify propagation directions along the surface, and to use this information to predict (and then subtract) the seismic disturbance at a fourth station. Deployment inside a PSR is crucial to benefit from low temperatures and better thermal stability. Temperature changes were responsible for excess noise, especially in the horizontal seismic channels of the Apollo seismometers [25]. Each LGWA station will have two horizontal seismic channels for vibration measurements between \(1\,\mathrm{mHz}\) and \(10\,\mathrm{Hz}\). This means that LGWA has a total of 8 GW channels. The baseline design of the LGWA payload consists of a compact niobium Watt's linkage with a resonance frequency of \(0.25\,\mathrm{Hz}\) and an inertial reference mass of \(10\,\mathrm{kg}\) [23].
A Watt's linkage is a combination of a pendulum and an inverted pendulum allowing for a very low resonance frequency in a compact device [23; 26]; the specific design, which permits the fabrication of a (quasi) monolithic mechanical structure [27; 28], is adapted from an earlier prototype. The main sensitivity limitation below \(0.5\,\mathrm{Hz}\) comes from suspension thermal noise connected to structural damping of the niobium mechanical structure. While niobium quality factors of \(5\cdot 10^{7}\) were observed in the past at low temperatures [29], a conservative estimate of \(Q=10^{4}\) is chosen for the LGWA baseline design since it is expected that the electro-discharge machining especially of the sub-mm thick flexures of the Watt's linkage might degrade its mechanical quality. Displacements relative to the inertial reference mass will be read out with a compact laser interferometer. The readout noise will limit the sensitivity of the sensor above \(0.5\,\mathrm{Hz}\). The mechanical structure will be cooled down from \(\mathrm{{}_{j}50\,K}\) ambient temperatures of the PSR to \(4\,\mathrm{K}\) with a sorption cooler, which permits the use of superconducting coil actuators. An alternative sensor concept is under investigation, where the niobium material is substituted by silicon, and the laser-interferometric readout by a magnetic readout using superconducting coils and superconducting quantum-interference devices (SQUID) as amplifiers [23]. This concept would lead to sensitivity improvements over the entire observation band since it reduces suspension thermal noise (assuming a quality factor of \(Q=10^{6}\) for silicon) and readout noise (sub-femtometer) with respect to the baseline design. It would also reduce power consumption of the payload, which might be a strong asset at a deployment location where solar panels cannot easily be used to deliver energy. It should be noted though that even with niobium and laser-interferometric readout, it might be possible to reach this performance [29; 30]. The two sensitivity models are shown in figure 1. The lunar GW response model is another important ingredient for the evaluation of the detector sensitivity curve. The response below the decihertz band is best modeled by normal-mode simulations [31; 32; 5]. Above \(0.1\,\mathrm{Hz}\), topography and regional geology are expected to play an important role, because seismic waves above \(0.1\,\mathrm{Hz}\) are short enough to show significant interaction with these heterogeneous structures. More detailed simulations of such a system with appropriate tools are under preparation. For the time being, we match the decihertz response to known features. For example, Figure 1: Sensitivity targets for a single LGWA seismic channel (the full detector has 8 seismic channels). The sensitivity below \(0.3\,\mathrm{Hz}\) is limited by suspension thermal noise, where the niobium model assumes a quality factor \(Q=10^{4}\) and the silicon model \(Q=10^{6}\). The sensitivity above \(0.3\,\mathrm{Hz}\) is limited by readout noise assuming that the seismic background can be reduced sufficiently. A detailed noise budget was presented in [23]. starting from the simplified Dyson response, which states that the effective detector baseline of a homogeneous ground to GWs is the shear-wave length divided by \(\pi\), we include a correction from resonant amplification of the ground response. The quality factor of ground material even at relatively shallow depth is in the thousands [33]. 
Generally, the rock quality is only an indication of the achievable amplification factor, and the actual energy loss can be due to coupling into deep parts of the lunar interior and scattering. However, measurements of, e.g., decay times of moonquake coda (the diffusive waveform at the end of a moonquake) are consistent with quality factors in the thousands, which indicates an actual storage capability of seismic energy [34]. This would also mean an enhanced GW response. However, due to the strong scattering of the waves, at least when propagating over larger distances, it is unclear how much the amplification can be exploited to improve sensitivity to GW signals. As a conservative estimate, we assume amplification factors of a few hundred in the decihertz band. The LGWA sensitivity estimate could easily be off in either direction by an order of magnitude. ## III Instrument noise of long-baseline laser interferometers ### Quantum noise For the modeling of quantum noise, we will follow the formalism initially developed for terrestrial GW detectors [35; 36]. It is important here to distinguish between interferometers with suspended test masses and interferometers where the test masses are mounted to the ground. In the former case, there is additional noise at low frequencies caused by the quantum radiation-pressure fluctuations of the laser beam. The other important distinction is between a laser interferometer that has arm cavities and one that does not. Arm cavities are optical resonators formed by a pair of mirrors, which resonantly amplify the light power and the signal response. A detailed analysis of whether arm cavities are required is beyond the scope of this paper. We choose a system without arm cavities for LBI-GND, and a system with arm cavities for LBI-SUS. Arm cavities add complexity to the system, but they give an important handle to control and reduce noise couplings from auxiliary degrees of freedom, which might prove crucial for the ambitious LBI-SUS concept. The quantum noise of an LBI-GND detector is given by \[h_{\rm QN}^{\rm gnd}(f)=\sqrt{\frac{\hbar\omega_{0}}{P}}\frac{2c}{\omega_{0}L}, \tag{1}\] where \(L\) is the length of the interferometer arm, \(\omega_{0}\) is the angular frequency of the laser, \(c\) is the speed of light, \(P\) is the power of the laser beam in each arm, and \(\hbar\) is the reduced Planck constant. There is no significant radiation-pressure noise in this case since the optics are mounted to the ground.
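As an illustration, Eq. (1) is straightforward to evaluate numerically; the values below (1064 nm laser, 10 W of power per arm, a 35 km arm) are illustrative assumptions rather than a proposed design:

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant [J s]
c = 299792458.0          # speed of light [m/s]

def h_qn_gnd(P, L, lam=1064e-9):
    """Shot-noise-limited strain ASD of an LBI-GND detector, Eq. (1); white in f."""
    omega0 = 2.0 * np.pi * c / lam   # laser angular frequency
    return np.sqrt(hbar * omega0 / P) * 2.0 * c / (omega0 * L)

print(h_qn_gnd(P=10.0, L=35e3))      # on the order of 1e-21 / sqrt(Hz)
```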
If instead we consider arm cavities (optical resonators) and suspended test masses, then a set of transfer functions needs to be considered, which describe how the signal and noise fields propagate inside the detector. The round-trip propagation of a field inside an optical resonator of a detector arm (in the quadrature formalism [36]) is \[\mathcal{R}(f)=\exp(4\pi\mathrm{i}Lf/c)\begin{pmatrix}1&0\\ -\mathcal{K}(f)&1\end{pmatrix};\] the reflection of an optical field from the resonator is \[\mathcal{C}(f)=(\mathcal{R}(f)-\rho\mathbb{1})\cdot(\mathbb{1}-\rho\mathcal{R}(f))^{-1};\] the transmission of a field into or out of the resonator is \[\mathcal{T}(f)=\tau(\mathbb{1}-\rho\mathcal{R}(f))^{-1};\] the power spectral density of the quantum noise at the detector output (physical units are accounted for in equation (1)) is \[S(f)=\vec{\eta}^{\top}\cdot\Re(\mathcal{C}(f)\cdot\mathcal{C}(f)^{\dagger})\cdot\vec{\eta};\] the signal field at the detector output (in the quadrature formalism [36]; physical units are accounted for in equation (1)) is \[r(f)=\vec{\eta}^{\top}\cdot\mathcal{T}(f)\cdot\begin{pmatrix}0\\ 1\end{pmatrix};\] and the sensitivity limitation caused by quantum noise in units of GW strain is \[h_{\mathrm{QN}}^{\mathrm{sus}}(f)=h_{\mathrm{QN}}^{\mathrm{gnd}}(f)\frac{\sqrt{S(f)}}{r(f)}.\] Here, \(\mathcal{K}(f)=8\omega_{0}P/(mc^{2}(2\pi f)^{2})\), \(m\) is the suspended test mass, and \(\vec{\eta}\) is a unit vector that specifies the linear combination of amplitude and phase quadratures observed by the homodyne detector [36]. Here we choose \(\vec{\eta}=(0,1)\), which represents a measurement of the phase quadrature. The two arm cavities are assumed to be loss free except for the transmissivity \(\tau=\sqrt{1-\rho^{2}}\) of their input mirrors. The unit vector \((0,1)\) is chosen to minimize quantum noise at most frequencies, but it can in principle have other directions controlled by the homodyne detection system. ### Model of the ambient seismic field In order to model seismic noise in LBI-GND and LBI-SUS detectors, one first needs a model of ground vibrations. In general, not only the seismic displacement spectrum matters, but also ground tilt and spatial correlations. For the displacement spectra, we will use the study [3] by Lognonne et al, where time series of ground vibrations caused by meteoroid impacts were simulated. The simulations predicted a root-mean-square (rms) below \(75\,\mathrm{fm}\) for \(50\%\) of the time and below \(0.5\,\mathrm{pm}\) for \(90\%\) of the time. We would like to convert the rms into a power spectral density (PSD), which requires an assumption about the frequency dependence of the spectrum. We do not know at present what the background spectrum looks like. Models of meteoroid impacts indicate that it might not be approximated well by a simple power law [37]. Nevertheless, in this study, we adopt a simple model where the seismic displacement spectrum is proportional to \(1/f^{1/2}\). Considering that the rms in Lognonne et al was evaluated down to \(17\,\mathrm{mHz}\), the noise spectrum consistent with an rms of \(75\,\mathrm{fm}\) is \[\xi(f)=\sqrt{\frac{1\,\text{Hz}}{f}}\,10^{-14}\,\text{m/Hz}^{1/2}. \tag{3}\] It should be noted that this model predicts a background noise that is higher than the \(\text{fm/Hz}^{1/2}\) background on low-order normal-mode resonances estimated by Harms et al [5]. It is likely that an extrapolation of the PSD derived from simulations in the decihertz band towards lower frequencies is inaccurate.
The main background in the decihertz band is produced by the impact of many very small meteoroids, and these small impacts can hardly excite the global quadrupole modes at mHz frequencies. Additional numerical simulations are required, and upcoming lunar seismic experiments like the Farside Seismic Suite [38] will shed new light on the distribution and properties of seismic events. Another important aspect is the seismic correlation between points on the lunar surface. Correlations determine the LGWA array configuration, and they also influence the seismic noise of LBI-GND concepts, which measure ground strain. Strong correlations (with matching phase) between the ends of an LBI-GND detector mean that it sees a smaller seismic strain, which reduces the impact of the seismic background on the GW measurement. We call this effect _common-mode rejection_. Correlations are determined by seismic speeds, seismic scattering, source distribution, and the mode content of the seismic field. In figure 2, we show an estimate of the correlation length obtained from a simulated Rayleigh-wave dispersion curve. The layered geological model is taken from Garcia et al [33], and we use the gpdc tool of the Geopsy package to calculate the Rayleigh-wave dispersion curve. Assuming an LBI-GND arm length of \(35\,\)km, we find that above \(20\,\)mHz, ground displacement is uncorrelated. The lower bound on the common-mode rejection is set by the longest spatial scales of the displacement field. In theory, there are modes without gradient along great circles (all \(l=0\) modes), but we choose a more conservative value of \(R_{\text{moon}}/2\) since the actual correlation also depends on which modes dominate the hum around a mHz. One should also expect stronger seismic gradients because of topography and heterogeneous geology. Figure 2: Left: Plot of the correlation length of a fundamental Rayleigh-wave field estimated from seismic speeds using a layer model obtained from Garcia et al [33]. Seismic scattering, mode content and source distribution are not considered here, but might have an important influence on correlations. Right: Common-mode suppression of seismic displacement measured across a distance of \(L=35\,\)km calculated from the correlation model. ### Seismic isolation Seismic isolation is the defining feature of the LBI-SUS concept. Seismic isolation generally consists of active [39] and passive [40] elements. Active seismic isolation is a form of vibration control using sensors to stabilize a platform. A chain of spring or pendulum stages can then be suspended from this platform to filter out disturbances above its fundamental resonances, by which we mean the system of coupled resonances forming from the fundamental resonances of the individual stages. The final suspension stage holds the test mass. All forms of ground motion (tilt and displacement in any direction) can couple into motion of the test mass along the direction of the interferometer arm, leading to seismic noise in the GW measurement. Here we will focus on the modeling of seismic isolation with respect to ground displacement along the horizontal direction of the interferometer arm and along the vertical. We model the suspension system as a sequence of suspended masses \(m_{k},\,k=1,\ldots,n\), where \(n\) is the number of suspended stages and \(m_{n}\) is the test mass. Each suspension stage, e.g., a pendulum for horizontal filtering or a system of spring blades for vertical filtering, is characterized by a resonance frequency \(\omega_{k}\).
We also define \(m_{ij}=m_{i}+\ldots+m_{j}\) with \(i<j\), e.g., \(m_{13}=m_{1}+m_{2}+m_{3}\), and \(m_{ii}=m_{i}\). The equation of motion for mass \(m_{k}\) can then be cast into the form \[-m_{k}\omega^{2}\tilde{x}_{k}+m_{kn}\left(\tilde{x}_{k}-\tilde{x}_{k-1}\right)(1+i\phi_{k})\omega_{k}^{2}+m_{(k+1)n}\left(\tilde{x}_{k}-\tilde{x}_{k+1}\right)(1+i\phi_{k+1})\omega_{k+1}^{2}=\delta\tilde{F}_{k}, \tag{4}\] where \(\phi_{k}\) is the loss angle of a suspension stage. A loss angle quantifies the fraction of energy in the system lost due to dissipation. When an index in this equation takes a value smaller than 1 or larger than \(n\), the respective variable must be set to 0. Instead of considering a force \(\delta\tilde{F}_{k}\) acting on the mass \(m_{k}\), it can sometimes be more convenient to model the external disturbance as displacement noise. This is the case for the seismic noise coupling into the system through the topmost suspension point. In this case, the equivalent force acting on the top mass \(m_{1}\) takes the form \(\delta\tilde{F}_{1}=m_{1n}\omega_{1}^{2}\tilde{x}_{0}\), where \(\tilde{x}_{0}\) is the displacement of the suspension point of the first stage. A comparison of the isolation performance of suspension systems with a different number of stages is shown in figure 3; a numerical sketch of this calculation is given below. The equations of motion neglect coupling between displacements along orthogonal directions, which is a good approximation for the purpose of this paper. However, even if the mechanical system is engineered so accurately that cross couplings between orthogonal displacements can be ignored, it turns out that vertical displacement of the test mass couples into the GW measurement. This is because the vertical direction is defined by the direction of local gravity. This direction is different at the two ends of an interferometer arm, mostly due to the curvature of the Moon. If the interferometer has an arm length \(L\), the vertical-to-horizontal coupling is given by \(L/(2R)\), where \(R\) is the radius of the Moon. For a \(35\,\mathrm{km}\) long arm, we find a \(1\%\) vertical-to-horizontal coupling. This has important consequences for the seismic isolation system, as we explain in the following. It is unfeasible to achieve the \(20\,\mathrm{mHz}\) and \(1\,\mathrm{mHz}\) resonances with conventional pendulum stages or spring assemblies, which would all result in much higher resonance frequencies. Instead, a mechanical system would rely on spring-antispring dynamics [41]. Here, antispring dynamics means that the elastic restoring force has the opposite sign compared to a normal spring, as can, for example, be found in an inverted pendulum when the gravitational pull on the supported mass exceeds the elastic restoring force of the supporting bar [42]. While it is conceivable that \(20\,\mathrm{mHz}\) and maybe even \(1\,\mathrm{mHz}\) resonances can be achieved for horizontal filters [43], it is unthinkable today to achieve such values for the vertical isolation. This is because the vertical stages must also counteract the gravitational pull of the Moon, i.e., the challenge is to realize soft suspension dynamics on top of a strong force that counteracts gravity. Euler springs, assemblies of nearly vertically aligned cantilevers that provide a spring-like restoring force through lateral deformation (buckling), were identified as promising mechanical stages for vertical isolation, but with performance far from what is needed for an LBI-SUS [44].
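The transfer function from ground displacement \(\tilde{x}_{0}\) to test-mass displacement \(\tilde{x}_{n}\) follows directly from equation (4); below is a minimal numerical sketch that assembles the tridiagonal system and solves it at each frequency. The masses use the figure 3 values where quoted (the intermediate masses are assumptions), and the resonance frequencies and loss angles are the illustrative values from the text, not a settled design.

```python
import numpy as np

def suspension_tf(f, masses, f_res, loss_angles):
    """|x_n / x_0|: test-mass displacement per unit ground displacement, Eq. (4)."""
    n = len(masses)
    w2 = (2.0 * np.pi * f) ** 2
    # Complex stiffness terms (1 + i*phi_k) * omega_k^2 for each stage.
    wk2 = [(2.0 * np.pi * fr) ** 2 * (1.0 + 1j * p)
           for fr, p in zip(f_res, loss_angles)]
    mkn = [sum(masses[k:]) for k in range(n)]    # m_kn = m_k + ... + m_n
    M = np.zeros((n, n), dtype=complex)
    F = np.zeros(n, dtype=complex)
    for k in range(n):
        M[k, k] = -masses[k] * w2 + mkn[k] * wk2[k]
        if k > 0:
            M[k, k - 1] = -mkn[k] * wk2[k]
        if k + 1 < n:
            M[k, k] += mkn[k + 1] * wk2[k + 1]
            M[k, k + 1] = -mkn[k + 1] * wk2[k + 1]
    F[0] = mkn[0] * wk2[0]                       # seismic drive, x_0 = 1 at the top
    return abs(np.linalg.solve(M, F)[-1])

# Four-stage chain: three 20 mHz stages and a final 1 mHz stage.
masses = [800.0, 800.0, 1200.0, 1289.0]          # intermediate values are assumptions
f_res = [0.02, 0.02, 0.02, 0.001]
phis = [1e-3, 1e-3, 1e-3, 1e-9]
for f in (0.01, 0.1, 1.0):
    print(f, suspension_tf(f, masses, f_res, phis))
```

At frequencies well below all resonances the transfer function approaches 1 (the test mass simply follows the ground), which is a quick sanity check of the matrix assembly.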
There is an additional challenge concerning the suspension thermal noise of the lower suspension stages, which points to even more exotic suspension systems like superconducting magnetic levitation. In any case, we believe that vertical seismic isolation is a potential show-stopper for LBI-SUS concepts. In a real suspension system, the resonances would be damped using feedback control designed for this purpose, but since this does not play a role for our sensitivity studies, we use the undamped suspension model. Figure 3: Comparison of strain noises from horizontal ground displacement. The suspension systems consist of \(n-1\) 20 mHz stages and a final 1 mHz stage. Here we assume uncorrelated ground motion between the different LBI-SUS test masses, equation (3) for the PSD of ground displacement, and 35 km long interferometer arms. The suspended masses are 800 kg (top mass of 4-stage system), 800 kg (top mass of 3-stage system), 1200 kg (top mass of 2-stage system), and 1289 kg (test mass). ### Newtonian noise The lunar seismic field also produces fluctuations of the gravitational field. The corresponding noise in GW measurements is called Newtonian noise (NN). It only affects LBI-SUS concepts. In this study, we use a very simple model that only considers NN contributions from the displacement of the lunar surface. Contributions from the compression of the ground are neglected. We also assume for simplicity that the ambient seismic field is isotropic and homogeneous. In this case, the NN is given by [45] \[h_{\rm NN}(f)=\frac{2}{L}\frac{2\pi G\rho_{0}}{\sqrt{2}}\,\frac{\xi(f)}{(2\pi f)^{2}}, \tag{5}\] where \(\xi(f)\) is the vertical surface displacement, \(G\) is Newton's constant, and \(\rho_{0}\) is the density of the ground. We will see in section IV that NN is too weak to play an important role for lunar GW detection. However, there can be stronger NN transients produced by moonquakes. In this case, a cancellation of NN can be attempted, which requires the deployment of seismometer arrays around all test masses [46; 47]. ### Suspension thermal noise The coupling between the mechanical system and a heat bath of temperature \(T\) leads to thermal fluctuations described by the _fluctuation-dissipation_ theorem. Accordingly, the suspension thermal noise spectrum is determined by the real part of the complex admittance \(Y(\omega)=-\mathrm{i}\omega\tilde{x}(\omega)/\tilde{F}(\omega)\), with \(\tilde{x}(\omega)\) being the displacement amplitude of a suspended mass at frequency \(\omega\) and \(\tilde{F}(\omega)\) the force acting on it [48]. One can think of the real part of the admittance of a mechanical system as the ease with which it reacts to a probing force. A high-admittance system will experience larger displacements of its components than a low-admittance system under the same force. Given this, the PSD of the fluctuations can be written as: \[S(x;\omega)=\frac{4\mathrm{k_{B}}T}{\omega^{2}}\Re(Y(\omega)). \tag{6}\] To understand this result, let us take a simple pendulum as an example; we introduce a term of structural damping, parameterized by the so-called _loss angle_ \(\phi\), which determines a small imaginary part of the restoring force.
We can write the equation of motion of this pendulum of length \(L\) with suspended mass \(m\) in the frequency domain: \[\omega_{0}^{2}\Big{(}1+\frac{L_{\rm el}}{L}(1-\mathrm{i}\phi/2)\Big{)}\tilde{x}(\omega)-\omega^{2}\tilde{x}(\omega)=\delta\tilde{F}(\omega)/m \tag{7}\] with \(\omega_{0}^{2}=g/L\); the parameter \(L_{\rm el}\) is the bending length of the suspension fiber [49], and \(L_{\rm el}/L\) is the fraction of elastic to gravitational restoring force of the pendulum, also known as the dilution factor. This dilution factor enhances the quality factor of the pendulum mode with respect to the material quality factor. It does not exist for filter stages of vertical displacement, where the restoring force cannot get any assistance from gravity. The admittance is found to be: \[\tilde{Y}(\omega)=-\mathrm{i}\omega\frac{\tilde{x}}{\delta\tilde{F}}=\frac{-\mathrm{i}\omega}{m(\omega_{0}^{2}-\omega^{2}+\omega_{0}^{2}(L_{\rm el}/L)(1-\mathrm{i}\phi/2))} \tag{8}\] Calculating its real part and inserting the result in equation (6), we obtain the following PSD of the thermal fluctuations of the mass coordinate \(x\): \[S(x;\omega)=\frac{4\mathrm{k_{B}}T}{m\omega}\frac{\omega_{\rm el}^{2}\phi/2}{(\omega_{0}^{2}+\omega_{\rm el}^{2}-\omega^{2})^{2}+\omega_{\rm el}^{4}\phi^{2}/4} \tag{9}\] with \(\omega_{\rm el}^{2}=\omega_{0}^{2}(L_{\rm el}/L)\ll\omega_{0}^{2}\). Note that the quality factor effectively decreases when the resonance frequencies are reduced by spring-antispring dynamics [42], which means that a Q-factor requirement at a low resonance frequency is harder to meet than the same requirement at a higher resonance frequency [48]. For multi-stage suspension systems, the susceptibility in equation (8) can be calculated by solving the system of equations (4), applying a force \(\delta\tilde{F}_{n}\) to the test mass \(m_{n}\). The dominant thermal noise is produced by the final suspension stage. Thermal noise from the upper stages is filtered out to some extent by the suspension system, but can still be important if the quality factor of the upper stages is substantially lower than that of the final stage. Moreover, the vertical-to-horizontal coupling, which we already discussed in the context of seismic isolation, also plays an important role for the suspension thermal noise of LBI-SUS concepts. Since 1% of vertical motion couples into the GW measurement (assuming a 35 km long detector), the quality factor of the vertical isolation stages cannot be more than 4 orders of magnitude smaller than the diluted quality factor of the horizontal isolation stages. Otherwise, the vertical stages would dominate the suspension thermal noise. We have computed the suspension thermal noise in the case of two, three, and four suspension stages, and we show their trends in figure 4. The final stage has a 1 mHz resonance while all other stages have a 20 mHz resonance. We assumed a loss angle of \(10^{-9}\) for the final stage, and \(10^{-3}\) for all other stages. As we can see from this plot, the strain noise is smaller than \(10^{-23}\,\mathrm{Hz}^{-1/2}\) at \(\sim 200\) mHz in all three cases. These frequencies and loss angles were chosen to meet the requirements for decihertz GW detection. However, there is no known or envisioned technology that would enable such low loss factors combined with the very low resonance frequencies.
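For concreteness, equation (9) can be checked numerically for a single structurally damped stage; the sketch below uses the loss angle and final-stage resonance quoted in the text, while the mass, temperature, dilution, and strain conversion are illustrative assumptions:

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant [J/K]

def thermal_psd(f, m, T, f0, f_el, phi):
    """One-sided displacement PSD of Eq. (9) for a single stage [m^2/Hz]."""
    w = 2.0 * np.pi * f
    w0_2 = (2.0 * np.pi * f0) ** 2
    wel_2 = (2.0 * np.pi * f_el) ** 2          # omega_el^2 = omega_0^2 * (L_el / L)
    num = wel_2 * phi / 2.0
    den = (w0_2 + wel_2 - w**2) ** 2 + wel_2**2 * phi**2 / 4.0
    return 4.0 * kB * T / (m * w) * num / den

f = 0.2                                        # Hz, in the decihertz band
Sx = thermal_psd(f, m=1289.0, T=100.0, f0=1e-3, f_el=1e-4, phi=1e-9)
print(np.sqrt(Sx) / 35e3)                      # rough strain ASD for a 35 km arm
```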
Furthermore, as pointed out before, assuming similar resonance frequencies for the vertical suspension stages, the final stage would need a loss factor no larger than \(10^{-5}\).

Figure 4: Suspension thermal noise of systems with two (dashed green), three (red), and four (blue) stages of attenuation. The penultimate stage makes the main contribution to the noise up to 0.1 Hz, while the noise from the final stage dominates above 0.2 Hz.

### Mirror thermal noise

Mirror thermal noise is connected to dissipation mechanisms in the mirror substrate and coating layers. Mechanical dissipation leads to Brownian noise, which is the dominant contribution to mirror thermal noise in current GW detectors [50]. Thermal fluctuations can also lead to thermoelastic noise and thermorefractive noise [51]. Coatings are formed by multiple layers of materials with a typical optical thickness of \(\lambda/4\). Each coating layer is obtained from a deposition of atomic layers over the substrate's surface, e.g., through ion-beam sputtering. A full coating stack has a few tens of layers, and each layer typically has sub-micron thickness. The noise contribution of this inhomogeneous multi-layer coating can be calculated using Levin's method [52] of calculating thermal noise from the fluctuation-dissipation theorem written in equation (6). The Levin approach does not, however, answer how exactly the losses in the substrate and the multi-layer coating contribute to the overall loss; this answer was later provided through detailed studies of the heterogeneous structure [50; 53]. Since our coating-thermal-noise estimate does not need to be very accurate, we choose a simplified model that only takes into account the mechanical loss of substrate and coating, and we use the Nakagawa approximation [54],

\[S_{x}^{\rm MTN}(f)=\frac{2\mathrm{k}_{\rm B}T}{\pi^{3/2}f}\frac{1-\sigma^{2}}{Yw}\Big{(}\phi_{\rm sub}+\frac{2}{\pi^{1/2}}\frac{d}{w}\frac{1-2\sigma}{1-\sigma}\phi_{\rm coat}\Big{)}, \tag{10}\]

where we have introduced the Young's modulus \(Y\) and Poisson's ratio \(\sigma\) of the substrate, \(d\) is the thickness of the multi-layer coating, \(w\) is the beam radius on the mirror, and \(\phi_{\rm sub},\,\phi_{\rm coat}\) are the substrate and coating loss angles. If the interferometer has arm cavities of length \(L\) and with curvature radii \(R_{1},\,R_{2}\) of the input and end test masses, we can calculate the beam radius using

\[w(z)=w_{0}\sqrt{1+\Big{(}\frac{z}{z_{R}}\Big{)}^{2}}, \tag{11}\]

where \(z_{\rm R}=\pi w_{0}^{2}/\lambda_{0}\) is the Rayleigh range, \(z\) is the distance from the waist of the laser beam along the beam axis, and the waist is the location along the optical axis where the transverse beam radius is minimal. The waist is located at a distance \(z_{0}=Lg_{2}(1-g_{1})/(g_{1}+g_{2}-2g_{1}g_{2})\) from the input test mass, where \(g_{1}=1-L/R_{1}\) and \(g_{2}=1-L/R_{2}\) are the g-factors. This leads to a minimal beam radius of \(w_{0}=(\lambda_{0}L/\pi)^{1/2}(g_{1}g_{2}(1-g_{1}g_{2})/(g_{1}+g_{2}-2g_{1}g_{2})^{2})^{1/4}\). We can then find the beam radii on the two test masses by calculating \(w(z_{0})\) and \(w(L-z_{0})\).
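These resonator formulas can be chained together directly. The sketch below (our own illustration) propagates the LBI-SUS cavity geometry of table 2 through equation (11) to the beam radii on the test masses and then evaluates the Nakagawa approximation of equation (10); the silicon Young's modulus, Poisson ratio and coating thickness are assumed values, not quoted in the text.

```python
import numpy as np

kB = 1.380649e-23
T, lam, L = 100.0, 1500e-9, 35e3     # temperature [K], wavelength [m], arm length [m]
R1 = R2 = 17.7e3                     # mirror radii of curvature [m] (table 2)
g1, g2 = 1 - L / R1, 1 - L / R2      # cavity g-factors

# Waist size and position from the resonator formulas in the text
w0 = np.sqrt(lam * L / np.pi) * (g1 * g2 * (1 - g1 * g2) / (g1 + g2 - 2 * g1 * g2) ** 2) ** 0.25
z0 = L * g2 * (1 - g1) / (g1 + g2 - 2 * g1 * g2)
zR = np.pi * w0 ** 2 / lam
w_itm = w0 * np.sqrt(1 + (z0 / zR) ** 2)        # beam radius on the input test mass
w_etm = w0 * np.sqrt(1 + ((L - z0) / zR) ** 2)  # beam radius on the end test mass

# Nakagawa approximation, equation (10); material values are rough assumptions
Y, sigma = 155e9, 0.27               # Young's modulus, Poisson ratio (silicon, assumed)
d = 5e-6                             # multi-layer coating thickness (assumed)
phi_sub, phi_coat = 1e-8, 1e-5       # loss angles from table 2

def S_mtn(f, w):
    """Displacement PSD of mirror thermal noise for one mirror [m^2/Hz]."""
    return (2 * kB * T / (np.pi ** 1.5 * f)) * (1 - sigma ** 2) / (Y * w) * \
           (phi_sub + 2 / np.sqrt(np.pi) * (d / w) * (1 - 2 * sigma) / (1 - sigma) * phi_coat)

f = 0.2  # Hz, middle of the decihertz band
print(f"beam radii: {w_itm:.3f} m / {w_etm:.3f} m")
print(f"strain ASD from one mirror: {np.sqrt(S_mtn(f, w_itm)) / L:.2e} 1/sqrt(Hz)")
```

With the table 2 geometry, the beam radius on the mirrors comes out close to 0.3 m, consistent with the quoted test-mass radius of 0.4 m.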
The mirror design of an LBI detector inherits some of the known issues of Earth-based detectors working at cryogenic temperatures [55]. Materials need to be compatible with the cryogenic temperatures in a lunar PSR (\(T=40-100\,\mathrm{K}\)). This influences the choice of laser wavelength and the design of the coating stack. Not only are we interested in the loss angle, but also in achieving low absorption and scattering. Apart from the material properties, we also need to highlight that the size of the test masses needs to be very large (\(\mathcal{O}(\mathrm{ton})\)), which raises the question of whether it is possible to deposit the required high-quality coatings on such large substrates.

### Dust noise

Given the strict requirements on residual gas and dust contamination in current detectors, noise from lunar dust was pointed out as a potential show-stopper for LBI concepts on the Moon. Dust and micro-meteoroids have an extremely low density in space, and in fact, a study showed that noise introduced by inter-planetary particles can be neglected for the LISA detector [56]. However, meteoroids act as impactors on the lunar surface, which generates ejecta particles from the lunar regolith. There is a continuous bombardment of the lunar surface by meteoroids, which leads to a continuously present particle cloud around the Moon. The dust distribution around the Moon was measured with the Lunar Dust Experiment (LDEX) on the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission for grain sizes larger than 0.3 \(\mu\)m [57]. The measurement was done at altitudes between a few kilometers and 260 km, spanning selenographic latitudes of -23\({}^{\circ}\) to +24\({}^{\circ}\). The observed particle concentration was highest toward the apex direction (the direction of motion of the Earth-Moon system around the Sun), consistent with impactor sources in the equatorial plane. At lower altitudes, the observed particle density varied around a few \(10^{-3}\) m\({}^{-3}\). A model consistent with these observations was later developed to extend the distributions to the polar regions and lower altitudes [58]. This model predicts particle densities around \(10^{-3}\) m\({}^{-3}\) at the poles, and it estimates an average speed of impact ejecta around 660 m/s. Since the distributions are matched to the LDEX measurements, which did not reach altitudes below 1 km, this model might miss a near-surface, low-speed ejecta population. The mechanism by which dust particles produce phase noise is forward scattering. The strain noise is given by [56]

\[h_{\rm DN}(f)=|\Re(2(\tilde{m}-1))|\sqrt{\frac{1}{Lw_{0}\bar{v}n}}\exp\left(-\sqrt{2\pi\lambda L}f/\bar{v}\right), \tag{12}\]

where \(\Re(\cdot)\) is the real part, \(n\) is the particle number density, \(\bar{v}\) the average particle speed, and \(\tilde{m}\) is connected to the forward scattering amplitude \(S(0)\) of the dust particle,

\[\tilde{m}=1+{\rm i}\frac{2\pi n}{k^{3}}S(0), \tag{13}\]

and \(k\) is the wavenumber of the laser beam. Since the typical diameter \(a\) of dust particles is similar to the laser wavelength \(\lambda\), we are in the Mie-scattering regime. Before presenting an estimate of Mie-scattering amplitudes \(S(0)\), we briefly discuss the two simpler cases where the particles are much larger or much smaller than \(\lambda\). For homogeneous, non-absorbing, spherical particles with \(a\ll\lambda\), we obtain the equation for Rayleigh forward scattering [59]

\[S(0)={\rm i}k^{3}\alpha={\rm i}\frac{1}{3}(m^{2}-1)(ka)^{3} \tag{14}\]

where \(\alpha\) is the polarizability, and \(m\) is the refractive index of the particle. In general, the polarizability can have an imaginary part describing absorption. In the case of \(a\gg\lambda\) and if the particle is spherical, we have

\[S(0)=\frac{1}{2}(ka)^{2}. \tag{15}\]
Since this scattering amplitude is real-valued, it describes pure absorption. Concerning the phase noise produced by dust, the most interesting regime is Mie scattering. For spherical particles, we can expand the forward scattering amplitude into a sum according to [59]

\[S(0)=\frac{1}{2}\sum_{n=1}^{\infty}(2n+1)(a_{n}+b_{n}), \tag{16}\]

where the expansion coefficients are given by

\[\begin{split} a_{n}&=\frac{\psi^{\prime}_{n}(y)\psi_{n}(x)-m\psi_{n}(y)\psi^{\prime}_{n}(x)}{\psi^{\prime}_{n}(y)\zeta_{n}(x)-m\psi_{n}(y)\zeta^{\prime}_{n}(x)},\\ b_{n}&=\frac{m\psi^{\prime}_{n}(y)\psi_{n}(x)-\psi_{n}(y)\psi^{\prime}_{n}(x)}{m\psi^{\prime}_{n}(y)\zeta_{n}(x)-\psi_{n}(y)\zeta^{\prime}_{n}(x)}.\end{split} \tag{17}\]

These expressions depend on the spherical Bessel function and on the spherical Hankel function of the second kind:

\[\psi_{n}(z)=zj_{n}(z),\,\zeta_{n}(z)=zh_{n}^{(2)}(z), \tag{18}\]

where \(y=mka\) and \(x=ka\). We use this expression to evaluate the phase noise in LBI concepts. Figure 5 shows the scattering amplitudes for absorption and refraction.

Figure 5: Forward scattering amplitudes for Mie scattering. The orange line is the small-particle approximation (Rayleigh scattering) of equation (14). The red line is the large-particle approximation of equation (15). The blue and green curves are the real (absorption) and imaginary (refraction) parts of the Mie scattering amplitude including terms up to \(n=100\) in equation (16). The imaginary part determines the phase noise according to equation (12).
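The Mie sum in equation (16) converges quickly for size parameters of order unity and can be evaluated with standard special functions. The sketch below (our own illustration) computes \(S(0)\) from equations (16)-(18) and inserts it into the dust-noise estimate of equation (12); the grain radius and the beam waist are assumed values.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def S0(m, x, nmax=100):
    """Forward Mie scattering amplitude S(0), equations (16)-(18), real index m."""
    y = m * x
    S = 0.0 + 0.0j
    for n in range(1, nmax + 1):
        jx, jpx = spherical_jn(n, x), spherical_jn(n, x, derivative=True)
        jy, jpy = spherical_jn(n, y), spherical_jn(n, y, derivative=True)
        yx, ypx = spherical_yn(n, x), spherical_yn(n, x, derivative=True)
        psi_x, dpsi_x = x * jx, jx + x * jpx        # psi_n(z) = z j_n(z)
        psi_y, dpsi_y = y * jy, jy + y * jpy
        h2, dh2 = jx - 1j * yx, jpx - 1j * ypx      # spherical Hankel h_n^(2)
        zeta_x, dzeta_x = x * h2, h2 + x * dh2
        a_n = (dpsi_y * psi_x - m * psi_y * dpsi_x) / (dpsi_y * zeta_x - m * psi_y * dzeta_x)
        b_n = (m * dpsi_y * psi_x - psi_y * dpsi_x) / (m * dpsi_y * zeta_x - psi_y * dzeta_x)
        S += 0.5 * (2 * n + 1) * (a_n + b_n)
    return S

lam = 1064e-9                       # laser wavelength [m]
k = 2 * np.pi / lam
a = 0.5e-6                          # assumed dust grain radius, comparable to lam
n_dust, v_bar = 1e-3, 660.0         # particle density [1/m^3] and speed [m/s]
L, w0 = 35e3, 0.03                  # arm length [m] and assumed beam waist [m]

m_tilde = 1 + 1j * 2 * np.pi * n_dust / k ** 3 * S0(1.33, k * a)
f = 0.2                             # Hz
h_dn = abs(np.real(2 * (m_tilde - 1))) * np.sqrt(1 / (L * w0 * v_bar * n_dust)) \
       * np.exp(-np.sqrt(2 * np.pi * lam * L) * f / v_bar)
print(f"dust strain noise at {f} Hz: {h_dn:.2e} 1/sqrt(Hz)")
```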
## IV Long-Baseline, Laser-Interferometric GW Measurement

### Lbi-Gnd

The LBI-GND concept is a long-baseline laser interferometer to measure ground strain. This strain is produced by GWs as well as by the seismic background. A model of the elastic strain response of the Moon to GW strain is shown in figure 6. The Moon behaves like a fluid above a few \(10\,\mathrm{mHz}\), which means that its mass distribution follows the GW quadrupole field without significant elastic resistance, like an assembly of freely falling test masses. At lower frequencies, the response shows amplification on the resonances of the quadrupole normal modes. In this model, the low-frequency modes are assumed to have quality factors of about 100. Below the lowest-order quadrupole mode at about 1 mHz, the GW response of the Moon is strongly suppressed by its stiffness. This response function is used to convert instrument noises of the seismic strain measurement into equivalent noises of the GW measurement.

Figure 6: Lunar GW response in terms of seismic strain along the surface used for the LBI-GND concept. The Moon's response to GWs does not affect the LBI-SUS signal due to its seismic-isolation system.

The LBI-GND concept does not require seismic isolation systems, and it does not require suspended test masses. The optics are directly mounted to the ground. This also means that suspension thermal noise and radiation-pressure noise do not play a role. The noises we include in the model are quantum noise, mirror thermal noise, dust noise, and the noise produced by the seismic background. The parameter values of these models are summarized in table 1. As for LGWA, we assume that the LBI-GND detector would be deployed inside a PSR to avoid disturbances due to strong temperature variations. More precisely, the three stations would need to be deployed on the crater wall of a PSR to have a clear line of sight between the interferometer stations. In this case, the ambient temperature is below 100 K. There are substrate and coating materials with low mechanical loss at low temperatures, such as silicon [60], but mirror thermal noise is a very small contribution to the instrument noise for LBI-GND detectors. We choose fused silica with silica-tantala coatings in our model, taking into account the increased mechanical loss at low temperature [61]. The main instrument parameters were chosen so that they should not be too hard to achieve, while at the same time realizing a readout sensitivity such that the seismic background becomes the clearly dominant noise. As shown in figure 7, the seismic noise is stronger than the other instrument noises by a few orders of magnitude in our model.

\begin{table} \begin{tabular}{l c} \hline Parameter & Value \\ \hline \hline **Relevant geophysical parameters** & \\ \hline Temperature & \(100\,\mathrm{K}\) \\ Speed of dust particles & \(660\,\mathrm{m/s}\) \\ Dust particle density & \(10^{-3}\,\mathrm{m^{-3}}\) \\ Refractive index of dust particle & \(1.33\) \\ \hline **Detector parameters** & \\ \hline Arm length & \(35\,\mathrm{km}\) \\ Radius of end mirrors & \(0.4\,\mathrm{m}\) \\ Laser input power & \(10\,\mathrm{W}\) \\ Laser wavelength & \(1064\,\mathrm{nm}\) \\ Test mass material & Fused silica \\ Coating loss angle & \(10^{-3}\) \\ Substrate loss angle & \(10^{-4}\) \\ \hline \hline \end{tabular} \end{table}

Table 1: Summary of the model parameters of the LBI-GND configuration.

Figure 7: Some of the principal noises of a long-baseline laser-interferometric lunar GW detector with optics mounted on the ground. The DN model assumes a particle density of \(1\,\mathrm{km^{-3}}\) near the lunar surface.

While this LBI-GND concept seems feasible on paper, it must be emphasized that it is still a complex system whose deployment would face substantial challenges. Three stations would have to be connected to form a full Michelson interferometer. This is necessary to reduce the laser frequency noise in the GW measurement. The optics must be properly aligned to be able to form the interferometer. It might be possible to remove laser frequency noise as in LISA through electronic combinations of signals with appropriate time delays [62]. However, the much shorter arm length and higher laser power of the LBI-GND concept compared to LISA would require significant modifications. It is beyond the scope of this paper to analyze implementation details, but we do not see clear show-stoppers for the LBI-GND concept. We also point out that optical fibers were proposed to create the long-baseline interferometric readout, but this would come at a loss of sensitivity, e.g., due to fundamental thermal noise of the fiber and scattering [63]. A fiber interferometer is not suitable for GW measurements.

### Lbi-Sus

The LBI-SUS concept works similarly to LIGO-Virgo interferometers. It requires seismic isolation and suspension systems. Certainly, it is by far the most challenging concept to realize, and in fact, given our current experience with terrestrial GW detectors, it would be impossible to build and operate them on the Moon. Revolutionary advances in engineering would be required to deploy and commission the detectors without continuous intervention by humans. Nonetheless, here our attention is not on the engineering, but on the most basic aspects of the detector design. We consider an LBI-SUS model with 35 km arm length. We assume that arm cavities are required.
They add complexity to the system, but they provide an important handle to control and reduce noise couplings from auxiliary degrees of freedom, which might prove crucial for the ambitious LBI-SUS concept. Each test mass is suspended from a chain of passive isolation stages, e.g., pendula and spring blades for horizontal and vertical isolation. We consider a suspension system with 3 stages. The final stage cannot be mechanical since it would be impossible to meet the requirements on mechanical loss and resonance frequency. As a possible solution, we propose magnetic levitation with superconducting coils. The idea was already investigated for terrestrial GW detectors [64]. It might turn out to be equally impossible to realize this stage through levitation, but it cannot be ruled out as a possibility given our current level of understanding of superconducting magnetic levitation. None of the stages can be realized as simple springs or pendula, since achieving such low resonance frequencies would require enormous mechanical structures. Mechanical spring-antispring stages might be an option for all but the final stage [43, 44]. As for the other lunar GW detector concepts, we consider deployment in a PSR for improved temperature stability, and the deployment of the three interferometer stations would have to be on the crater walls to have a clear line of sight between the stations.

\begin{table} \begin{tabular}{l c} \hline Parameter & Value \\ \hline \hline **Relevant geophysical parameters** & \\ \hline Temperature & 100 K \\ Speed of dust particles & 660 m/s \\ Dust particle density & \(10^{-3}\) m\({}^{-3}\) \\ Refractive index of dust particle & 1.33 \\ \hline **Detector parameters** & \\ \hline Arm-cavity length & 35 km \\ Radius of TM & 0.4 m \\ Thickness of TM & 1.1 m \\ Chain of suspended masses & 800/1200/1289 kg \\ Loss angle of final stage & \(10^{-9}\) \\ Loss angles of upper two stages & \(10^{-3}\) \\ Resonance frequency final stage & 1 mHz \\ Resonance frequencies of upper two stages & 20 mHz \\ Amplitude transmissivity of ITM & 0.1 \\ Laser arm-input/arm-cavity power & 0.1 W/40 W \\ Laser wavelength & 1500 nm \\ Test mass material & Silicon \\ Coating loss angle & \(10^{-5}\) \\ Substrate loss angle & \(10^{-8}\) \\ ITM radius of curvature & 17.7 km \\ ETM radius of curvature & 17.7 km \\ TM pitch resonance frequency & 1 mHz \\ \hline \hline \end{tabular} \end{table}

Table 2: Overview of the LBI-SUS model parameters. The masses of the suspension system are listed from top mass to test mass.

The Moon's cryogenic temperatures set constraints on the choice of the test-mass substrate and coating materials. For our model, we choose silicon for the test mass, which has low mechanical loss at cryogenic temperatures [60]. This is a common choice also in the case of third-generation interferometers [55, 14]. The bulk loss of silicon is very low, and therefore the bulk contribution to the mirror thermal noise is negligible with respect to typical loss angles of the coating materials; see equation (10). Concerning the coating, the main characteristics affecting the noise budget are the coating deposition dimensions, the materials, and the layer-layer and layer-substrate interfaces, which ultimately determine the value of the loss angle [65]. Another important parameter is the beam radius on the test masses, which can be regulated to some extent by the mirrors' radii of curvature. The larger the beam, the lower the mirror thermal noise. The price to pay is reduced stability of the arm cavity [66].
Suitable coating materials like amorphous silicon - fused silica for cryogenic interferometers are already being investigated for next-generation GW detectors [67, 55]. They might provide a path to reach coating mechanical losses as low as \(\mathcal{O}(10^{-5})\) at cryogenic temperature. The choice of materials constrains the possible laser wavelengths. For silicon substrates, a laser wavelength of \(1500\,\mathrm{nm}\) can be chosen. The model parameter values are summarized in table 2, and the corresponding noise budget is shown in figure 8.

Figure 8: Sensitivity models of a long-baseline laser-interferometric lunar GW detector.

The noise budget does not contain noise from vertical filter stages. Since the vertical-to-horizontal coupling of a \(35\,\mathrm{km}\) long detector is about \(1\%\), the noise from the suspensions along the vertical displacement direction cannot be more than a factor of 100 larger than the displacement noise along the horizontal direction, which sets strong requirements on the vertical seismic isolation and on the loss angles of the vertical filters. We derive three potential show-stoppers of the LBI-SUS concept from our noise models:

* **Suspension thermal noise**. We do not know today how to realize a horizontal suspension stage with a resonance frequency of \(1\,\mathrm{mHz}\) and a quality factor of \(10^{9}\).
* **Vertical seismic isolation**. We do not know today how to realize a vertical filter with \(1\,\mathrm{mHz}\) resonance frequency and a quality factor of \(10^{5}\). There is no damping dilution in vertical filters.
* **Dust noise**. A pipe must be constructed to protect the beam from dust. Noise from stray light interacting with the pipe must be investigated.

Another technological challenge of the LBI-SUS concept, which does not appear in our modeled noise budget, is its angular controls, especially with regard to the Sidles-Sigg effect. This effect results from optomechanical dynamics involving the suspended optics and laser beam [68, 69]. The optomechanical coupling gives rise to two eigenmodes for each of the two angular degrees of freedom of the two test masses forming an arm cavity. These modes are commonly known as the soft and hard mode. The frequencies can be calculated using

\[\begin{split}\kappa&=\frac{2P_{\text{cav}}L}{c(g_{1}g_{2}-1)},\\ \tau_{\text{soft}}&=\frac{\kappa}{2}\left(g_{1}+g_{2}+\sqrt{(g_{1}-g_{2})^{2}+4}\right),\\ \tau_{\text{hard}}&=\frac{\kappa}{2}\left(g_{1}+g_{2}-\sqrt{(g_{1}-g_{2})^{2}+4}\right),\\ f_{\text{soft}}&=\frac{1}{2\pi}\sqrt{(2\pi f_{\text{mech}})^{2}+\tau_{\text{soft}}/\mathcal{I}},\\ f_{\text{hard}}&=\frac{1}{2\pi}\sqrt{(2\pi f_{\text{mech}})^{2}+\tau_{\text{hard}}/\mathcal{I}},\end{split} \tag{19}\]

where \(\mathcal{I}\) is the moment of inertia of the mirror around the rotation axis (e.g., a horizontal axis perpendicular to the direction of the arm in case of mirror pitch motion), and \(P_{\text{cav}}\) is the power of the laser beam inside the arm cavities. Assuming a mechanical angular mode of the suspended test mass at frequency \(f_{\text{mech}}=1\,\)mHz, the coupled system has eigenfrequencies at \(f_{\text{soft}}=2.9\,\)mHz and \(f_{\text{hard}}=4.8\,\)mHz. Both mode frequencies lie well below the decihertz band, which helps the angular controls, but it is still important to model the noise introduced by the angular controls and all auxiliary controls, since these systems are known to introduce important low-frequency noise in terrestrial GW detectors [70].
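Equation (19) can be evaluated directly from the parameters of table 2. The sketch below is our own illustration: the silicon density and the moment of inertia of a uniform cylinder rotating about a diameter are assumptions, so the resulting mode frequencies come out at the millihertz level but do not exactly reproduce the values quoted in the text, which may rest on different conventions.

```python
import numpy as np

c = 299792458.0                      # speed of light [m/s]
L, R = 35e3, 17.7e3                  # arm length and mirror curvature radius [m]
g1 = g2 = 1 - L / R                  # cavity g-factors
P_cav = 40.0                         # arm-cavity power [W] (table 2)
f_mech = 1e-3                        # mechanical pitch resonance [Hz]

# Cylindrical silicon test mass: radius 0.4 m, thickness 1.1 m (table 2)
rho_si, r, h = 2329.0, 0.4, 1.1      # silicon density assumed
m = rho_si * np.pi * r ** 2 * h
I = m * (3 * r ** 2 + h ** 2) / 12.0  # inertia about a diameter (assumption)

kappa = 2 * P_cav * L / (c * (g1 * g2 - 1))
root = np.sqrt((g1 - g2) ** 2 + 4)
tau_soft = 0.5 * kappa * (g1 + g2 + root)   # softens the pendulum (negative here)
tau_hard = 0.5 * kappa * (g1 + g2 - root)   # stiffens the pendulum
f_soft = np.sqrt((2 * np.pi * f_mech) ** 2 + tau_soft / I) / (2 * np.pi)
f_hard = np.sqrt((2 * np.pi * f_mech) ** 2 + tau_hard / I) / (2 * np.pi)
print(f"f_soft = {f_soft * 1e3:.2f} mHz, f_hard = {f_hard * 1e3:.2f} mHz")
```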
The suspension thermal noise requires a final suspension stage, unthinkable today, with a resonance frequency of \(1\,\)mHz (in the horizontal and vertical directions) and quality factors of \(Q=10^{9}\) for the horizontal and \(Q=10^{5}\) for the vertical stage.

## V Observational capabilities of lunar GW detectors

The computation of noise budgets for the lunar detectors considered in this paper allows us to estimate and compare their scientific capabilities. Several studies have already highlighted the importance of a decihertz observatory [7; 16; 18; 71; 72]. Considering all concepts, lunar GW detection might cover the band from \(1\,\mathrm{mHz}\) to a few Hz and open the decihertz band to GW observations [12]. It should be noted that, in addition to the concepts proposed in recent years, further considerations were made to enhance their science case. For example, antipodal pairs of GW detectors at the two lunar poles would form an ideal network for the search of stochastic GWs exploiting correlations between two detectors [73]. It was also pointed out that a distribution of vibration sensors over the surface of a sphere would enable detailed measurements of the GW polarization and interesting tests of general relativity [74; 75]. In this section, we will focus on the three main concepts as individual detectors, deployed at a single spot on the lunar surface, and leave the analysis of lunar GW detector networks to a future study. For the LBI-SUS analysis, we assume that the dust noise (section III.7) is removed by a shielding pipe. Without this shield, we think that the LBI-SUS sensitivity would not be good enough to justify the deployment of such a complex instrument on the Moon. Figure 9 shows the noise curves of all three concepts expressed in terms of the characteristic strain noise, \(h_{n}(f)=\sqrt{fS_{n}(f)}\). This dimensionless quantity is useful since it can be compared on a logarithmic axis with the characteristic signal strain, \(h_{s}(f)=2f|\widetilde{h}(f)|\), to yield the signal-to-noise ratio (SNR):

\[\text{SNR}^{2}=\int\frac{h_{s}^{2}(f)}{h_{n}^{2}(f)}\text{d}\log f\,. \tag{20}\]

Figure 9: Estimated sensitivities of the detector concepts discussed in this work, compared with the ones of the Laser Interferometer Space Antenna [76] and the Einstein Telescope [14]. The LBI-SUS sensitivity is plotted without the dust noise, assuming that a protective pipe is built around the laser beam.

LBI-SUS and LGWA are candidates for the decihertz band, while the LBI-GND concept is most sensitive in the millihertz band, and it would not provide an improvement in the decihertz band if LISA reaches its design sensitivity. The emission frequency of compact-object binaries is inversely proportional to their mass; the ones that have been detected so far by ground-based interferometers are all in the range of one to several tens of solar masses. The binaries merging in the decihertz band, therefore, would be on the order of thousands of solar masses: the very center of the Intermediate Mass Black Hole (IMBH) range, \(10^{2}\) to \(10^{5}M_{\odot}\) [18]. The amplitude of the emission also scales linearly with the mass, meaning that even if very distant, these binaries could still be detectable. In order to compute the distances to which these could be detected, shown in figure 10, we need to make an assumption on how the gravitational strain tensor \(h_{ij}\) maps onto the strain measured by the sensor, \(h(t)\).
We consider a linear response to these waves given by \(h=h_{ij}D_{ij}\), where the detection tensor is

\[D_{ij}^{\text{LBI}}=\frac{1}{2}\left(a_{i}a_{j}-b_{i}b_{j}\right) \tag{21}\]

for interferometric detectors, where \(a\) and \(b\) are two unit vectors along the interferometer's arms (which we assume to be orthogonal and located near a lunar pole), while

\[D_{ij}^{\text{LGWA}}=n_{i}b_{j} \tag{22}\]

for seismic inertial measurement [79], where \(n\) is the normal unit vector to the surface at the location of the sensor, while \(b_{j}\) is a tangent unit vector in the direction where the surface displacement is being measured. We assume four stations (resulting in an equivalent reduction of the PSD by a factor of 4) and two measurement channels per station in orthogonal directions.
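The contraction \(h=h_{ij}D_{ij}\) is easy to illustrate. The following minimal sketch (our own; the frame and the plus-polarized wave along the vertical are arbitrary choices) builds the detection tensors of equations (21) and (22) and applies them to a wave tensor.

```python
import numpy as np

# Unit vectors for an L-shaped interferometer near a lunar pole (assumed frame)
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])        # local surface normal for the LGWA channel

D_lbi = 0.5 * (np.outer(a, a) - np.outer(b, b))   # equation (21)
D_lgwa = np.outer(n, b)                           # equation (22), one channel

# A plus-polarized wave propagating along z in this frame (illustrative choice)
h_plus = np.array([[1.0, 0.0, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.0, 0.0, 0.0]])

print(np.tensordot(h_plus, D_lbi))    # LBI response: 1 (optimal orientation)
print(np.tensordot(h_plus, D_lgwa))   # LGWA channel response: 0 for this geometry
```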
For the signals we are considering (without effects such as eccentricity or precession), we can compute a one-to-one function relating frequency and time of arrival, shown schematically in figure 11. This allows us to compute the varying antenna pattern as a function of frequency. We assume a lifetime of 10 years for all detectors, and truncate the signals that would last longer than this at the low-frequency end; this is the optimal observation scenario, in which the signal comes in band as soon as the detector is turned on and ends right at the end of the detector lifetime.

Figure 11: Time to merger for a compact binary as a function of total mass (in the detector frame) and frequency. We highlight the decihertz band from 0.05 Hz to 2 Hz, where lunar detectors such as LGWA and LBI-SUS have an edge over space- and ground-based ones, and report for reference the total mass of the most and least massive events to date, GW190521 and GW170817 (for the latter, that is the total mass the binary would have had if the mass ratio was 1). These times are computed with a lowest-order post-Newtonian approximation, which is very accurate at low frequencies but imprecise at the high-frequency end.

There is a phase effect due to the motion of the Moon around the Sun, which in the frequency domain can be expressed as

\[\phi=-\vec{k}\cdot\vec{r}_{\rm Moon}\,, \tag{23}\]

where \(\vec{k}=(2\pi f/c)\hat{k}\) is the wave vector of the incoming gravitational wave (which we consider as fixed in our solar-centered International Celestial Reference System (ICRS) [80]), while \(\vec{r}_{\rm Moon}\) is the position of the detector, which we compute using the astropy [81] interface to jplephem [82].

Figure 10 shows the distances to which equal-mass black hole binaries could be detected if they were optimally oriented. For all detectors, these are cosmological distances, reaching the formation of the earliest galaxies.

Figure 10: Detection horizon for an optimally-oriented binary of black holes, at a signal-to-noise ratio of 10. The masses on the horizontal axis are in the source frame, but the waveform observed depends on the detector-frame mass, which is affected by redshift as \(M_{\text{det}}=(1+z)M_{\text{source}}\). The distance shown on the vertical axis is luminosity distance, defined in a cosmological context as the quantity \(d_{L}\) for which the flux scales as \(F\propto 1/d_{L}^{2}\). The relation between luminosity distance and redshift is computed assuming a cosmology consistent with the Planck mission's 2018 results [77].

Little is known about IMBHs, and a decihertz observatory would either detect their mergers or put an extremely strong constraint on their population. Solar-mass binaries are also a target of interest for LGWA and LBI-SUS, even though the distances to which they could be detected are not as large. These would be detected months to years before their merger, which means that during this time the Moon would cover a significant distance in its orbit around the Earth and, in turn, around the Sun. This corresponds to a large phase term in the observed waveform: the relevant quantities to compare in order to estimate it are the wavelength of the gravitational wave, ranging from 1 to 10 light-seconds in the decihertz band, and the Moon-Sun orbital diameter, which is around 1000 light-seconds. The phase due to the orbit of the Moon around the Earth is in general less significant, on the order of a few cycles at most at the upper range of the detector band, but it still gives a contribution due to its faster evolution. While a comprehensive analysis of the localization capabilities of such detectors has not yet been performed, we can give some estimates using the Fisher matrix formalism. Specifically, we can compute with GWFish [79] the Fisher integrand for the dependence of the Fourier-domain signal \(h(f)\) on one of the two angles defining the signal position, such as declination:

\[F_{\text{dec, dec}}=\int\frac{4}{S_{n}(f)}\left|\frac{\partial h}{\partial\text{dec}}\right|^{2}\text{d}f=\int\underbrace{\frac{4f}{S_{n}(f)}\left|\frac{\partial h}{\partial\text{dec}}\right|^{2}}_{\text{Fisher integrand}}\text{d}\log f\,. \tag{24}\]

This quantifies the extent to which each frequency region contributes to the estimation of the parameter, in the Gaussian approximation. In figure 12, we show the integrand in \(\text{d}\log f\) as opposed to the one in \(\text{d}f\), so that the Fisher matrix element is visually represented by the area of the curve shown on a logarithmic axis. We consider a signal with the same parameters as GW170817 as seen by the LGWA detector, where it would have an SNR of 32, very similar to the one it had for the LVK network. We can see modulations with both the period of the Earth-Moon system's orbit around the Sun and the period of the rotation of the Moon. The full Fisher analysis requires us to compute all the Fisher matrix elements and invert the matrix, thus accounting for correlations; it yields a 90% sky area of roughly 3 arcmin\({}^{2}\).

Figure 12: Fisher matrix integrand for the dependence of the signal on declination. The signal is a neutron star binary with parameters set to their median values for the GW170817 detection [78]. In the upper panel, we show the square modulus of the derivative of the waveform with respect to declination, multiplied by the frequency \(f\). This does not depend on the noise PSD, but only on the antenna pattern and the motion of the Moon. In the bottom panel, we show this quantity normalized to the estimated sensitivity of LGWA. For this analysis, we fix the orbital phase term at the upper edge of the detector band, \(\exp(-\mathrm{i}\vec{k}\cdot\vec{r})\), to zero; this is an arbitrary phase choice and has no impact on the final localization, but without it the integrand would have a large contribution at high frequency, which is however irrelevant due to being completely degenerate with the arrival-time parameter. With a 10-year observation, this signal would be localized to within 3 arcmin\({}^{2}\) (90% area). The oscillations in the Fisher contribution are monthly and yearly: the largest contributor to the localization is the last year.
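The declination element of equation (24) can be approximated with a simple finite difference. The sketch below is our own toy illustration with stand-in waveform and noise models; it does not reproduce the GWFish implementation or the GW170817 analysis.

```python
import numpy as np

def fisher_dec(h_of, dec, f, Sn, eps=1e-6):
    """Approximate the (dec, dec) Fisher element of equation (24) numerically."""
    dh = (h_of(f, dec + eps) - h_of(f, dec - eps)) / (2 * eps)
    integrand = 4 * np.abs(dh) ** 2 / Sn(f)
    # trapezoidal rule, written out to stay compatible across NumPy versions
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))

# Toy stand-ins: chirp-like amplitude with a declination-dependent antenna factor
h_toy = lambda f, dec: 1e-21 * (f / 0.1) ** (-7.0 / 6.0) * np.cos(dec) \
                       * np.exp(2j * np.pi * f * 100.0)
Sn_toy = lambda f: 1e-41 * ((f / 0.1) ** (-4) + 1.0)   # placeholder noise PSD

f = np.linspace(0.05, 2.0, 4000)                       # decihertz band [Hz]
F = fisher_dec(h_toy, dec=0.5, f=f, Sn=Sn_toy)
print(f"Fisher element: {F:.3e}, 1-sigma width: {1 / np.sqrt(F):.3e} rad")
```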
The upper panel of figure 12 shows the extent to which the rotation around the Sun can contribute to the localization of a binary that stays in band for a long time. It does not depend on the specifics of the lunar detector, as long as it is sensitive enough to observe the source. Repeating the same analysis for the LBI-SUS detector yields analogous results, and an even smaller sky-localization area due to the higher SNR. The decihertz band is crucial to this end, since the orbital contribution to the localization capabilities is the largest there; it would be even larger at millihertz frequencies, but stellar-mass binaries detected in that band will not merge for hundreds or thousands of years, meaning that while we could localize them, this would likely not be connected with any multimessenger observation since we could not follow them all the way to merger. Therefore, the decihertz band has a unique potential for precise localization, which can only be matched by several ground-based detectors with high SNR.

## VI Conclusion

We presented noise models of lunar GW detector concepts. This includes LGWA as well as long-baseline, laser-interferometric concepts. Laser interferometers can observe GWs either with optics mounted to the ground (LBI-GND) or with suspended test masses (LBI-SUS). We find that the LBI-GND concept can achieve excellent sensitivity in the millihertz band without technological show-stoppers, at least based on our study of the main instrumental noises. The LBI-SUS concept has the potential to become a ground-breaking decihertz GW observatory, but there are a few potential show-stoppers that must be addressed, which include dust noise, suspension thermal noise, and seismic isolation for vertical surface displacement. It is unclear whether an ambitious LBI-SUS concept can ever be realized on the Moon, or whether it has any crucial advantages over space-based decihertz concepts. However, the extremely low seismic background on the Moon is an enormous advantage compared to terrestrial detectors. One of the main show-stoppers for decihertz GW observations with terrestrial detectors, the gravitational background noise produced for example by the seismic field, would play a minor role for a lunar LBI-SUS detector. The LGWA is a much less complex system and relies mostly on technologies that are within reach. A potential role of the LBI-GND concept could be to realize a long-lived observatory in the millihertz band, albeit with much lower sensitivity than LISA. Both LBI-GND and LGWA would also be outstanding lunar geophysical observatories. All three technological approaches (LGWA, LBI-GND, LBI-SUS) have the potential to detect GW sources out to redshifts of \(z=30\) and beyond. Many publications have already emphasized the enormous potential for breakthrough science with GW observations in the decihertz band. The Moon certainly has the potential to become the third impactful platform for GW detectors, together with Earth and space.

###### Acknowledgements.

This work is the outcome of a lecture series at GSSI on lunar GW detection, and we would like to express our gratitude for the support of the GSSI administration and of the coordinator of the PhD lecture program. Also, JT would like to thank Tito Dal Canton for a fruitful discussion on the localization capabilities of a lunar detector.
2310.00368
Tame maximal weights, relative types and valuations
In this article, we obtain a class of tame maximal weights (Zhou weights). Using Tian functions (the function of jumping numbers with respect to the exponents of a holomorphic function or the multiples of a plurisubharmonic function) as a main tool, we establish an expression of relative types (Zhou numbers) to these tame maximal weights in integral form, which shows that the relative types satisfy tropical multiplicativity and tropical additivity. Thus, the relative types to Zhou weights are valuations (Zhou valuations) on the ring of germs of holomorphic functions. We use Tian functions and Zhou numbers to measure the singularities of plurisubharmonic functions, involving jumping numbers and multiplier ideal sheaves. Especially, the relative types to Zhou weights characterize the division relations of the ring of germs of holomorphic functions. Finally, we consider a global version of Zhou weights on domains in $\mathbb{C}^n$, which is a generalization of the pluricomplex Green functions, and we obtain some properties of them, including continuity and some approximation results.
Shijie Bao, Qi'an Guan, Zhitong Mi, Zheng Yuan
2023-09-30T13:21:21Z
http://arxiv.org/abs/2310.00368v3
# Tame maximal weights, relative types and valuations

###### Abstract.

In this article, we obtain a class of tame maximal weights (Zhou weights), whose relative types (Zhou numbers) satisfy tropical multiplicativity and tropical additivity and characterize the multiplier ideal sheaves of plurisubharmonic functions. In particular, the relative types to these weights are valuations (Zhou valuations) on the ring of germs of holomorphic functions, and they characterize the division relations of the ring. We consider a global version of these weights on domains in \(\mathbb{C}^{n}\), and obtain some properties of them, including continuity and some approximation results.

Key words and phrases: Lelong number, strong openness property, tame maximal weight, valuation, multiplier ideal sheaf, plurisubharmonic function

2020 Mathematics Subject Classification: 32U25, 14B05, 32S15, 32U35, 13A18

## 1. Introduction

The concept of the _Lelong number_ is an analytic analogue of the algebraic notion of multiplicity (see [11]), which was first introduced by Lelong in [33]. Let \(u\) be a plurisubharmonic function (see [12, 40, 41]) near the origin \(o\) in \(\mathbb{C}^{n}\). The Lelong number was defined by \[\nu(u,o):=\sup\big{\{}c\geq 0:u(z)\leq c\log|z|+O(1)\text{ near }o\big{\}},\] which was used to measure the singularity of \(u\) at \(o\) (see [12]). Some further developments can be seen in [34, 41, 7, 8, 9]. To give a more accurate insight into the singularity of plurisubharmonic functions, some generalized Lelong numbers were studied. Let \(a=(a_{1},\dots,a_{n})\in\mathbb{R}_{>0}^{n}\). Replacing \(\log|z|\) by \(\log\max_{1\leq j\leq n}|z_{j}|^{\frac{1}{a_{j}}}\) in the definition of Lelong numbers, the directional Lelong numbers \(\nu_{a}(u,o)\) (see [12], also known as _Kiselman numbers_) were introduced by Kiselman, which give more detailed information on the singularity of \(u\) at \(o\). Note that \(\nu_{a}(\cdot)\) is well suited to the tropical structure of the cone of plurisubharmonic functions (see [39, 12]): \[\nu_{a}(u+v,o)=\nu_{a}(u,o)+\nu_{a}(v,o)\text{ ({\it tropical multiplicativity})}\] and \[\nu_{a}\big{(}\max\{u,v\},o\big{)}=\min\big{\{}\nu_{a}(u,o),\nu_{a}(v,o)\big{\}}\text{ ({\it tropical additivity})}.\] These properties play an important role, for example, in the investigation of valuations on the ring of germs of holomorphic functions at \(o\) (see [16, 39]). With respect to more general plurisubharmonic weights \(\varphi\), a generalized Lelong number \[\nu_{\varphi}(u):=\lim_{t\to+\infty}\int_{\{\varphi<-t\}}dd^{c}u\wedge(dd^{c}\varphi)^{n-1}\] was introduced by Demailly (see [11]).
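As a small worked illustration of these tropical properties (a standard computation, added here and not part of the original text), the Kiselman number of a monomial can be computed by hand:

```latex
Let $u=\log|z^{\beta}|=\sum_{1\leq j\leq n}\beta_{j}\log|z_{j}|$ with
$\beta\in\mathbb{Z}_{\geq 0}^{n}$, and set $t_{j}=-\log|z_{j}|\geq 0$. The
condition $u\leq c\log\max_{1\leq j\leq n}|z_{j}|^{\frac{1}{a_{j}}}+O(1)$ near
$o$ becomes
\[
  \sum_{1\leq j\leq n}\beta_{j}t_{j}\;\geq\;c\,\min_{1\leq j\leq n}\frac{t_{j}}{a_{j}}
\]
for all large $t_{j}$; minimizing the left-hand side subject to
$\min_{j}(t_{j}/a_{j})=1$ (attained at $t_{j}=a_{j}$) gives
\[
  \nu_{a}\big(\log|z^{\beta}|,o\big)=\sum_{1\leq j\leq n}\beta_{j}a_{j},
\]
so that, e.g., $\nu_{a}(\log|z_{1}^{2}z_{2}|,o)=2a_{1}+a_{2}$, in line with
tropical multiplicativity.
```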
In this article, we obtain a class of tame maximal weights, which we call Zhou weights. Using Tian functions (the functions of jumping numbers with respect to the exponents of a holomorphic function or the multiples of a plurisubharmonic function) as a main tool, we establish an expression of the relative types (Zhou numbers) to these tame maximal weights in integral form, which shows that the relative types satisfy tropical multiplicativity and tropical additivity; thus, the relative types to Zhou weights are valuations (Zhou valuations) on the ring \(\mathcal{O}_{o}\) of germs of holomorphic functions. We use Tian functions and Zhou numbers to measure the singularities of plurisubharmonic functions, involving the jumping numbers \(c^{f}_{o}(\Phi_{o,\max})\) and the multiplier ideal sheaves: for any Zhou weight \(\Phi_{o,\max}\), we prove that \(c^{f}_{o}(\Phi_{o,\max})\) and \(\nu(f,\Phi_{o,\max})\) for all \((f,o)\in\mathcal{O}_{o}\) are linearly controlled by each other; we give a characterization of inclusion relations of multiplier ideal sheaves by using \(\sigma(\cdot,\Phi_{o,\max})\) for all Zhou weights \(\Phi_{o,\max}\), which shows that Theorem 1.1 also holds if the tame maximal weights are replaced by Zhou weights; as an application, the valuations \(\nu(\cdot,\Phi_{o,\max})\) of \(\mathcal{O}_{o}\) characterize the division relations in \(\mathcal{O}_{o}\).
Finally, we discuss _global Zhou weights_ on domains in \(\mathbb{C}^{n}\), and we present some of their properties, including continuity and some approximation results.

### Valuations related to local Zhou weights

Let \(f_{0}=(f_{0,1},\cdots,f_{0,m})\) be a vector, where \(f_{0,1},\cdots,f_{0,m}\) are holomorphic functions near \(o\). Denote \(|f_{0}|^{2}=|f_{0,1}|^{2}+\cdots+|f_{0,m}|^{2}\). Let \(\varphi_{0}\) be a plurisubharmonic function near \(o\), such that \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\).

**Definition 1.2**.: _We say that \(\Phi_{o,\max}^{f_{0},\varphi_{0}}\) (\(\Phi_{o,\max}\) for short) is a **local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\)** if the following three statements hold:_

_(1) \(|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2\Phi_{o,\max}}\) is integrable near \(o\) for large enough \(N_{0}\gg 0\);_

_(2) \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\Phi_{o,\max}}\) is not integrable near \(o\);_

_(3) for any plurisubharmonic function \(\varphi^{\prime}\geq\Phi_{o,\max}+O(1)\) near \(o\) such that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\varphi^{\prime}}\) is not integrable near \(o\), \(\varphi^{\prime}=\Phi_{o,\max}+O(1)\) holds._

Let \(\varphi\) be a plurisubharmonic function near \(o\). The following remark gives the existence of local Zhou weights.

**Remark 1.3**.: _Assume that \(|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2\varphi}\) is integrable near \(o\) for large enough \(N_{0}\gg 0\), and that \((f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})\) holds. Then there exists a local Zhou weight \(\Phi_{o,\max}\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\) such that \(\Phi_{o,\max}\geq\varphi\). Moreover, \(\Phi_{o,\max}\geq N\log|z|+O(1)\) near \(o\) for some \(N\gg 0\). We prove this remark in Section 2.4._

A plurisubharmonic function \(\varphi\) on a neighborhood \(U\) of \(o\) is said to be a _maximal weight_ if \(\varphi(o)=-\infty\), \(\varphi\) is locally bounded on \(U\backslash\{o\}\) and \((dd^{c}\varphi)^{n}=0\) on \(U\backslash\{o\}\) (see [39, 6]). Let \(\psi\) be any plurisubharmonic function near \(o\), and let \(\varphi\) be a maximal weight. Denote \[\sigma(\psi,\varphi):=\sup\big{\{}b:\psi\leq b\varphi+O(1)\text{ near }o\big{\}},\] which is called the _relative type_ of \(\psi\) with respect to \(\varphi\) (see [39, 6]). Note that for any local Zhou weight \(\Phi_{o,\max}\) near \(o\), there exists a maximal weight \(\varphi\) on a small enough neighborhood of \(o\) such that \(\varphi=\Phi_{o,\max}+O(1)\) near \(o\) (see Lemma 8.5 and Proposition 1.18). We therefore call the relative type \(\sigma(\psi,\Phi_{o,\max})\) of \(\psi\) with respect to a Zhou weight \(\Phi_{o,\max}\) the **Zhou number of \(\psi\) to \(\Phi_{o,\max}\)**. Note that for any \(b<\sigma(\psi,\Phi_{o,\max})\), \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\{\Phi_{o,\max},\frac{1}{b}\psi\}}\) is not integrable near \(o\); it then follows from the strong openness property of multiplier ideal sheaves (Theorem 2.8) that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\big{\{}\Phi_{o,\max},\frac{1}{\sigma(\psi,\Phi_{o,\max})}\psi\big{\}}}\) is not integrable near \(o\).
Note that \[\max\bigg{\{}\Phi_{o,\max},\frac{1}{\sigma(\psi,\Phi_{o,\max})}\psi\bigg{\}}\geq\Phi_{o,\max},\] hence \[\max\bigg{\{}\Phi_{o,\max},\frac{1}{\sigma(\psi,\Phi_{o,\max})}\psi\bigg{\}}=\Phi_{o,\max}+O(1),\] which implies that \[\Phi_{o,\max}\geq\frac{1}{\sigma(\psi,\Phi_{o,\max})}\psi+O(1),\] i.e., \[\psi\leq\sigma(\psi,\Phi_{o,\max})\Phi_{o,\max}+O(1).\] In fact, for any maximal weight \(\varphi\), Rashkovskii proved that \(\psi\leq\sigma(\psi,\varphi)\varphi+O(1)\) near \(o\) (see [39]).

**Remark 1.4**.: _Let \(\Phi_{o,\max}\) be a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\). Then for any small enough neighborhood \(U\ni o\), there exists a (unique) negative plurisubharmonic function \(\Phi_{o,\max}^{U}=\Phi_{o,\max}+O(1)\) on \(U\), such that for any negative plurisubharmonic function \(\psi\) on \(U\), the inequality_ \[\psi\leq\sigma(\psi,\Phi_{o,\max})\Phi_{o,\max}^{U} \tag{1.1}\] _holds on \(U\). Especially, if plurisubharmonic functions \(\psi_{j}\to\psi\) in \(L^{1}_{\rm loc}\) as \(j\to+\infty\), then_ \[\limsup_{j\to+\infty}\sigma(\psi_{j},\Phi_{o,\max})\leq\sigma(\psi,\Phi_{o,\max}). \tag{1.2}\] _We prove this remark in Section 2.4._

We give some examples of local Zhou weights, which show that the Zhou number is a generalization of the notion of directional Lelong number.

**Example 1.5**.: _Let \(\varphi=\log\max_{1\leq j\leq n}|z_{j}|^{a_{j}}\) on \(\Delta^{n}\subset\mathbb{C}^{n}\), where \(a_{j}>0\) for any \(j\) and \(\sum_{1\leq j\leq n}\frac{1}{a_{j}}=1\). It is clear that \(\mathcal{I}(\varphi)_{o}\) is the maximal ideal of \(\mathcal{O}_{o}\). For any plurisubharmonic function \(\tilde{\varphi}\) near \(o\) satisfying that \(\tilde{\varphi}\geq\varphi\) near \(o\) and \(e^{-2\tilde{\varphi}}\) is not integrable near \(o\), we have \(\tilde{\varphi}=\varphi+O(1)\) near \(o\) (see [23]). Thus, \(\varphi\) is a local Zhou weight related to \(1\) near \(o\)._

In the following, we give some properties of local Zhou weights and Zhou numbers.

**Proposition 1.6**.: _Let \(f=(f_{1},\cdots,f_{m^{\prime}})\) be a vector, where \(f_{1},\cdots,f_{m^{\prime}}\) are holomorphic functions near \(o\). Denote \(|f|:=(|f_{1}|^{2}+\cdots+|f_{m^{\prime}}|^{2})^{1/2}\). Let \(\Phi_{o,\max}\) be a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\). Then the following two statements hold:_

\((1)\) _for any \(\alpha>0\), \(\big{(}1+\alpha\sigma(\log|f|,\Phi_{o,\max})\big{)}\Phi_{o,\max}\) is a local Zhou weight related to \(|f|^{2\alpha}|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\);_

\((2)\) _\(\big{(}1+\sigma(\varphi_{0},\Phi_{o,\max})\big{)}\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}\)._

The following proposition gives a comparison result between Zhou weights.

**Proposition 1.7**.: _Let \(\varphi_{i}\) be a local Zhou weight related to \(|f_{0,i}|^{2}\), where \(i\in\{1,2\}\). Assume that_ \[\sigma(\log|f|,\varphi_{1})\leq\sigma(\log|f|,\varphi_{2}) \tag{1.3}\] _holds for any holomorphic function \(f\) near \(o\). Then \(\varphi_{1}\leq\varphi_{2}+O(1)\) near \(o\). Especially, "\(=\)" in inequality (1.3) holds if and only if \(\varphi_{1}=\varphi_{2}+O(1)\) near \(o\)._

We present a formula for calculating Zhou numbers.

**Theorem 1.8**.: _Let \(\Phi_{o,\max}\) be a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\).
Then for any plurisubharmonic function \(\psi\) near \(o\),_ \[\sigma(\psi,\Phi_{o,\max})=\lim_{t\to+\infty}\frac{\int_{\{\Phi_{o,\max}<-t\}}|f_{0}|^{2}e^{-2\varphi_{0}}(-\psi)}{t\int_{\{\Phi_{o,\max}<-t\}}|f_{0}|^{2}e^{-2\varphi_{0}}}.\]

It follows from Theorem 1.8 and the definition of Zhou numbers that Zhou numbers are tropically multiplicative and tropically additive.

**Corollary 1.9**.: _Let \(\psi_{1}\) and \(\psi_{2}\) be plurisubharmonic functions near \(o\). The following statements hold:_

(1) _for any \(c_{1}\geq 0\) and \(c_{2}\geq 0\), \(\sigma(c_{1}\psi_{1}+c_{2}\psi_{2},\Phi_{o,\max})=c_{1}\sigma(\psi_{1},\Phi_{o,\max})+c_{2}\sigma(\psi_{2},\Phi_{o,\max})\);_

(2) _\(\sigma(\log|f_{1}+f_{2}|,\Phi_{o,\max})\geq\min\big{\{}\sigma(\log|f_{1}|,\Phi_{o,\max}),\sigma(\log|f_{2}|,\Phi_{o,\max})\big{\}}\), where \(f_{1}\) and \(f_{2}\) are holomorphic functions near \(o\);_

(3) _\(\sigma(\max\{\psi_{1},\psi_{2}\},\Phi_{o,\max})=\min\big{\{}\sigma(\psi_{1},\Phi_{o,\max}),\sigma(\psi_{2},\Phi_{o,\max})\big{\}}\)._

Denote \[\nu(f,\Phi_{o,\max}):=\sigma(\log|f|,\Phi_{o,\max})\] for any \((f,o)\in\mathcal{O}_{o}\). Corollary 1.9 shows that \(\nu(\cdot,\Phi_{o,\max})\) is a _valuation_ of \(\mathcal{O}_{o}\) for any local Zhou weight \(\Phi_{o,\max}\), and we call it a **Zhou valuation**.

**Corollary 1.10**.: _For any local Zhou weight \(\Phi_{o,\max}\) near \(o\), \(\nu(\cdot,\Phi_{o,\max}):\mathcal{O}_{o}\to\mathbb{R}_{\geq 0}\) satisfies the following:_

(1) _\(\nu(fg,\Phi_{o,\max})=\nu(f,\Phi_{o,\max})+\nu(g,\Phi_{o,\max})\);_

(2) _\(\nu(f+g,\Phi_{o,\max})\geq\min\big{\{}\nu(f,\Phi_{o,\max}),\nu(g,\Phi_{o,\max})\big{\}}\);_

(3) _\(\nu(f,\Phi_{o,\max})=0\) for any \(f(o)\neq 0\)._

Let \(G\) be a holomorphic function near \(o\). Recall the definition of the _jumping number_ \[c_{o}^{G}(\Phi_{o,\max}):=\sup\big{\{}c:|G|^{2}e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}}\] (see [19, 20]). Denote \(c_{o}(\Phi_{o,\max}):=c_{o}^{1}(\Phi_{o,\max})\) (see [45, 11]). Given a local Zhou weight \(\Phi_{o,\max}\) near \(o\), the following theorem shows that the jumping number \(c_{o}^{G}(\Phi_{o,\max})\) and the Zhou valuation \(\nu(G,\Phi_{o,\max})\) for all \((G,o)\in\mathcal{O}_{o}\) are linearly controlled by each other.

**Theorem 1.11**.: _For any holomorphic function \(G\) near \(o\), we have the following relation between the jumping number \(c_{o}^{G}(\Phi_{o,\max})\) and the Zhou valuation \(\nu(\cdot,\Phi_{o,\max})\):_ \[\nu(G,\Phi_{o,\max})+c_{o}(\Phi_{o,\max})\leq c_{o}^{G}(\Phi_{o,\max})\leq\nu(G,\Phi_{o,\max})-\sigma(\log|f_{0}|,\Phi_{o,\max})+1+\sigma(\varphi_{0},\Phi_{o,\max}).\] _Especially, if \(|f_{0}|^{2}e^{-2\varphi_{0}}\equiv 1\), we have_ \[\nu(G,\Phi_{o,\max})+1=c_{o}^{G}(\Phi_{o,\max}).\]

We give a relation between the Zhou numbers and the multiplier ideal sheaves.
**Theorem 1.12**.: _Let \(u\), \(v\) be two plurisubharmonic functions near \(o\). Then the following three statements are equivalent:_

(1) _there exist two sequences of numbers \(\{t_{i,j}\}_{j\in\mathbb{Z}_{\geq 0}}\) (\(t_{i,j}\to+\infty\) when \(j\to+\infty\), \(i=1,2\)) such that \(\lim_{j\to+\infty}\frac{t_{1,j}}{t_{2,j}}=1\) and_ \[\mathcal{I}(t_{1,j}v)_{o}\subset\mathcal{I}(t_{2,j}u)_{o}\] _for any \(j\);_

(2) _for any plurisubharmonic function \(\varphi_{0}\) near \(o\) and any \(t>0\), we have_ \[\mathcal{I}(\varphi_{0}+tv)_{o}\subset\mathcal{I}(\varphi_{0}+tu)_{o};\]

(3) _for any local Zhou weight \(\Phi_{o,\max}\) near \(o\), we have_ \[\sigma(u,\Phi_{o,\max})\leq\sigma(v,\Phi_{o,\max}).\]

A plurisubharmonic function \(\varphi\) near \(o\) is called a _tame weight_ (see [6]) if \(\varphi\) has an isolated singularity at \(o\), \(e^{\varphi}\) is continuous, and there exists a constant \(C>0\) such that for any \(t>0\) and every \((f,o)\in\mathcal{I}(t\varphi)_{o}\), \[\log|f|\leq(t-C)\varphi+O(1)\] near \(o\). For any local Zhou weight \(\Phi_{o,\max}\), there exists a plurisubharmonic function \(\varphi\) on a neighborhood of \(o\) such that \(\varphi\) has an isolated singularity at \(o\), \(e^{\varphi}\) is continuous and \[\varphi=\Phi_{o,\max}+O(1)\] near \(o\) (using Lemma 8.5, Proposition 1.18 and Proposition 1.19), and Theorem 1.11 shows that there exists a constant \(C>0\) such that for any \(t>0\) and every \((f,o)\in\mathcal{I}(t\Phi_{o,\max})_{o}\), \[\log|f|\leq(t-C)\Phi_{o,\max}+O(1)\] near \(o\). Thus, local Zhou weights are all tame weights (up to a bounded function). Theorem 1.12 shows that the tame maximal weights in Theorem 1.1 can be replaced by local Zhou weights, as seen in the following remark. The proof of Theorem 1.12 is purely analytic and independent of Boucksom-Favre-Jonsson's result.

**Remark 1.13**.: _Theorem 1.12 shows that the following three statements are equivalent:_

(1) _there exist two sequences of numbers \(\{t_{i,j}\}_{j\in\mathbb{Z}_{\geq 0}}\) (\(t_{i,j}\to+\infty\) when \(j\to+\infty\), \(i=1,2\)) such that \(\lim_{j\to+\infty}\frac{t_{1,j}}{t_{2,j}}=1\) and \(\mathcal{I}(t_{1,j}v)_{o}=\mathcal{I}(t_{2,j}u)_{o}\) for any \(j\);_

(2) _for any plurisubharmonic function \(\varphi_{0}\) near \(o\) and any \(t>0\), \(\mathcal{I}(\varphi_{0}+tv)_{o}=\mathcal{I}(\varphi_{0}+tu)_{o}\);_

(3) _for any local Zhou weight \(\Phi_{o,\max}\) near \(o\), \(\sigma(u,\Phi_{o,\max})=\sigma(v,\Phi_{o,\max})\)._

Recall that a plurisubharmonic function \(u\) is said to have _analytic singularities_ near \(o\) if \[u=c\log\sum_{1\leq j\leq N}|f_{j}|^{2}+O(1)\] near \(o\), where \(c\in\mathbb{R}^{+}\) and \(\{f_{j}\}_{1\leq j\leq N}\) are holomorphic functions near \(o\). Theorem 1.12 implies the following corollary, which shows that the valuations \(\nu(\cdot,\Phi_{o,\max})\) characterize the division relations in \(\mathcal{O}_{o}\).

**Corollary 1.14**.: _Let \(u,\,v\) be two plurisubharmonic functions near \(o\).
Assume that \(u\) has analytic singularities near \(o\). Then the following three statements are equivalent:_

(1) _\(v\leq u+O(1)\) near \(o\);_

(2) _there exist two sequences of numbers \(\{t_{i,j}\}_{j\in\mathbb{Z}_{\geq 0}}\) (\(t_{i,j}\to+\infty\) when \(j\to+\infty\), \(i=1,2\)) such that \(\lim_{j\to+\infty}\frac{t_{1,j}}{t_{2,j}}=1\) and \(\mathcal{I}(t_{1,j}v)_{o}\subset\mathcal{I}(t_{2,j}u)_{o}\) for any \(j\);_

(3) _\(\sigma(u,\Phi_{o,\max})\leq\sigma(v,\Phi_{o,\max})\) holds for any local Zhou weight \(\Phi_{o,\max}\) near \(o\)._

_Especially, for any two holomorphic functions \(f\) and \(g\) near \(o\), the following two statements are equivalent:_

(1) _\(f=gh\) near \(o\), where \(h\) is a holomorphic function near \(o\);_

(2) _\(\nu(f,\Phi_{o,\max})\geq\nu(g,\Phi_{o,\max})\) holds for any local Zhou weight \(\Phi_{o,\max}\) near \(o\)._

In [30], Kim proved that statement (1) in Theorem 1.1 implies \(v\leq u+O(1)\) near \(o\) under the assumption that \(v\) has analytic singularities near \(o\). Theorem 1.1 and Theorem 1.12 show that statement (1) in Theorem 1.1 is equivalent to \(\sigma(u,\Phi_{o,\max})=\sigma(v,\Phi_{o,\max})\) for any local Zhou weight \(\Phi_{o,\max}\) near \(o\).

### Global Zhou weights

In this section, we discuss global Zhou weights. Let \(D\) be a domain in \(\mathbb{C}^{n}\) such that the origin \(o\in D\). Let \(f_{0}=(f_{0,1},\cdots,f_{0,m})\) be a vector, where \(f_{0,1},\cdots,f_{0,m}\) are holomorphic functions near \(o\). Denote \(|f_{0}|^{2}=|f_{0,1}|^{2}+\cdots+|f_{0,m}|^{2}\). Let \(\varphi_{0}\) be a plurisubharmonic function near \(o\), such that \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\).

**Definition 1.15**.: _We call a negative plurisubharmonic function \(\Phi_{o,\max}^{f_{0},\varphi_{0},D}\) (\(\Phi_{o,\max}^{D}\) for short) on \(D\) **a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\)** if the following statements hold:_

(1) _\(|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2\Phi_{o,\max}^{D}}\) is integrable near \(o\) for large enough \(N_{0}\);_

(2) _\(|f_{0}|^{2}e^{-2\varphi_{0}-2\Phi_{o,\max}^{D}}\) is not integrable near \(o\);_

(3) _for any negative plurisubharmonic function \(\tilde{\varphi}\) on \(D\) satisfying that \(\tilde{\varphi}\geq\Phi_{o,\max}^{D}\) on \(D\) and \(|f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\) is not integrable near \(o\), \(\tilde{\varphi}=\Phi_{o,\max}^{D}\) holds on \(D\)._

For the existence of global Zhou weights, we refer to Remark 8.2. For any \(w\in D\), denote \[L_{w}:=\big{\{}u\in\mathrm{PSH}(D):u<0\ \&\ \limsup_{z\to w}(u(z)-\log|z-w|)<+\infty\big{\}},\] where \(\mathrm{PSH}(D)\) denotes the set of plurisubharmonic functions on \(D\). If \(L_{w}\neq\emptyset\), the _pluricomplex Green function_ of \(D\) with a pole at \(w\) is defined as follows (see [5, 31]): \[G_{D}(w,\cdot):=\sup\{u:u\in L_{w}\}.\] When \(D\) is a bounded domain, \[\limsup_{z\to o}\big{|}G_{D}(o,z)-\log|z|\big{|}<+\infty,\] and then \(G_{D}(o,\cdot)\) is a global Zhou weight with respect to \(e^{-2(n-1)\log|z|}\) on \(D\) near \(o\) (by Example 1.5, Lemma 8.5 and the definition of \(G_{D}(o,\cdot)\)). For any global Zhou weight \(\Phi_{o,\max}^{D}\), we have \[\Phi_{o,\max}^{D}\geq NG_{D}(o,\cdot)\] on \(D\) for large enough \(N\gg 0\) (see Lemma 8.3). Denote \[\tilde{L}_{o}:=\big{\{}u\in L_{o}:u\in L_{\mathrm{loc}}^{\infty}(U\backslash\{o\})\ \text{for some neighborhood $U$ of $o$}\big{\}}.\] If \(D\) is bounded or hyperconvex, \(\tilde{L}_{o}\neq\emptyset\) (see Remark 8.4).
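In the model case of the polydisc, conditions (1) and (2) of Definition 1.15 can be checked by hand for the pluricomplex Green function; the following estimate is standard and added only for illustration.

```latex
On $D=\Delta^{n}$ one has $G_{D}(o,z)=\log\max_{1\leq j\leq n}|z_{j}|$. Take
$f_{0}\equiv 1$ and $\varphi_{0}=(n-1)\log|z|$. Since
$\max_{j}|z_{j}|\leq|z|\leq\sqrt{n}\max_{j}|z_{j}|$, we get near $o$
\[
  |z|^{-2(n-1)}e^{-2G_{D}(o,z)}\;\geq\;|z|^{-2n},
\]
which is not integrable near $o$ (in polar coordinates,
$\int_{0}^{1}r^{-2n}\,r^{2n-1}\,\mathrm{d}r=\int_{0}^{1}r^{-1}\,\mathrm{d}r=+\infty$),
so condition (2) holds; while for any $N_{0}\geq 1$,
\[
  |z|^{2N_{0}}\,|z|^{-2(n-1)}e^{-2G_{D}(o,z)}\;\leq\;n\,|z|^{2N_{0}-2n},
\]
which is integrable near $o$, giving condition (1).
```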
We recall the definitions of _hyperconvex domain_ and _strictly hyperconvex domain_ as follows. **Definition 1.16** (see [37]).: _A domain \(D\subset\mathbb{C}^{n}\) is said to be hyperconvex if there exists a continuous plurisubharmonic exhausted function \(\varrho:D\to(-\infty,0)\)._ _A bounded domain \(D\subset\mathbb{C}^{n}\) is said to be strictly hyperconvex if there exists a bounded domain \(\Omega\) and a function \(\varrho:\Omega\to(-\infty,1)\) such that \(\varrho\in C(\Omega)\cap\mathrm{PSH}(\Omega)\), \(D=\{z\in\Omega:\varrho(z)<0\}\), \(\varrho\) is exhaustive for \(\Omega\) and for any real number \(c\in[0,1]\), the open set \(\{z\in\Omega:\varrho(z)<c\}\) is connected._ It follows from Lemma 8.5, Remark 8.6 and Lemma 8.7 that if \(\tilde{L}_{o}\neq\emptyset\), there is a one-to-one correspondence between the global Zhou weights \(\Phi^{D}_{o,\max}\) and the local Zhou weights \(\Phi_{o,\max}\), which is given as follows: \[\Phi^{D}_{o,\max}(z)=\sup\big{\{}u(z):u\in \mathrm{PSH}^{-}(D),\,(f_{0},o)\not\in\mathcal{I}(\varphi_{o}+u)_{o}\] \[\&\ u\geq\Phi_{o,\max}+O(1)\ \mathrm{near}\ o\big{\}},\] where \(\mathrm{PSH}^{-}(D)\) denotes the set of negative plurisubharmonic functions on \(D\). We give some examples of global Zhou weights. **Example 1.17**.: _Let \(\varphi=\log\max_{1\leq j\leq n}|z_{j}|^{a_{j}}\) on \(\Delta^{n}\subset\mathbb{C}^{n}\), where \(a_{j}>0\) for any \(j\) satisfying that the equation \(\sum_{1\leq j\leq n}\frac{x_{j}}{a_{j}}=1\) has positive integer solution. We know that \(\varphi\) is a global Zhou weight with respect to \(|f_{0}|^{2}\) near \(o\) on \(\Delta^{n}\), thus \(\varphi\) is a local Zhou weight with respect to \(|f_{0}|^{2}\) near \(o\), where \(f_{0}=\prod_{1\leq j\leq n}z_{j}^{b_{j}-1}\) and \(b_{j}\) is a positive integer for any \(j\) satisfying \(\sum_{1\leq j\leq n}\frac{b_{j}}{a_{j}}=1\). In fact, for any negative plurisubharmonic function \(\tilde{\varphi}\geq\varphi\) on \(\Delta^{n}\) satisfying that \(|f_{0}|^{2}e^{-2\tilde{\varphi}}\) is not integrable near \(o\), we have_ \[\tilde{G}(t):=\inf\left\{\int_{\{\tilde{\varphi}<-t\}}|\tilde{f}|^{2}:\tilde {f}\in\mathcal{O}(\{\tilde{\varphi}<-t\})\ \&\ (\tilde{f}-f_{0},o)\in\mathcal{I}(\tilde{\varphi})_{o} \right\}\leq\int_{\{\tilde{\varphi}<-t\}}|f_{0}|^{2}.\] _By a direct calculation, we have_ \[G(t):=\inf\left\{\int_{\{\varphi<-t\}}|\tilde{f}|^{2}:\tilde{f}\in\mathcal{O} (\{\tilde{\varphi}<-t\})\ \&\ (\tilde{f}-f_{0},o)\in\mathcal{I}(\varphi)_{o} \right\}=\int_{\{\varphi<-t\}}|f_{0}|^{2}.\] _Note that \(\tilde{G}(0)=G(0)\), \(\tilde{G}(-\log r)\) is concave on \((0,1)\) (see [22]) and \(G(-\log r)\) is linear on \((0,1)\) (see [23, 24]), then \(\tilde{G}(t)=G(t)\) and \(\int_{\{\tilde{\varphi}<-t\}}|f_{0}|^{2}=\int_{\{\varphi<-t\}}|f_{0}|^{2}\) for any \(t\geq 0\), which implies that \(\tilde{\varphi}\equiv\varphi\). Thus, \(\varphi\) is a global Zhou weight with respect to \(|f_{0}|^{2}\) near \(o\) on \(\Delta^{n}\)._ Now, we give two properties of the global Zhou weights. **Proposition 1.18**.: _Assume that \(\tilde{L}_{o}\neq\emptyset\). Let \(\Phi^{D}_{o,\max}\) be a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\), then_ \[\Phi^{D}_{o,\max}\in\mathrm{PSH}(D)\cap L^{\infty}_{\mathrm{loc}}(D\backslash\{ o\})\] _and_ \[\big{(}dd^{c}\Phi^{D}_{o,\max}\big{)}^{n}=0\ \text{on}\ D\backslash\{o\}.\] The following proposition gives the continuity of \(\Phi^{D}_{o,\max}\). **Proposition 1.19**.: _Assume that \(D\) is a bounded hyperconvex domain. 
Let \(\Phi^{D}_{o,\max}\) be a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\), then \(e^{\Phi^{D}_{o,\max}}\) is continuous on \(D\) and \(\Phi^{D}_{o,\max}(z)\to 0\) when \(z\to\partial D\)._ Next, we discuss some approximations of global Zhou weights. For any \(m\in\mathbb{N}_{+}\), we define two compact subsets of \(\mathcal{O}(D)\) as follows: \[\mathscr{E}_{m}(D):=\big{\{}f\in\mathcal{O}(D):\sup_{z\in D}|f(z)|\leq 1,(f,o) \in\mathcal{I}(m\Phi^{D}_{o,\max})_{o}\big{\}},\] \[\mathscr{A}_{m}^{2}(D):=\big{\{}f\in\mathcal{O}(D):\|f\|_{D}\leq 1,(f,o)\in \mathcal{I}(m\Phi^{D}_{o,\max})_{o}\big{\}},\] where \(\|f\|_{D}^{2}:=\int_{D}|f|^{2}\). We also define two plurisubharmonic functions \(\phi_{m}\) and \(\varphi_{m}\) for any \(m\) by: \[\phi_{m}(z):=\sup_{f\in\mathscr{E}_{m}(D)}\frac{1}{m}\log|f(z)|,\ \forall z\in D,\] \[\varphi_{m}(z):=\sup_{f\in\mathscr{A}_{m}^{2}(D)}\frac{1}{m}\log|f(z)|,\ \forall z\in D.\] We obtain an approximation theorem for global Zhou weights \(\Phi^{D}_{o,\max}\). **Theorem 1.20**.: _If \(D\) is a bounded strictly hyperconvex domain, then_ 1. \[\lim_{m\to\infty}\phi_{m}(z)=\lim_{m\to\infty}\varphi_{m}(z)=\Phi^{D}_{o,\max}( z),\ \forall z\in D.\] 2. _There exists a constant_ \(\mathsf{C}\) _independent of_ \(m\)_, such that for any_ \(m\in\mathbb{N}_{+}\)_,_ \[1-\frac{\mathsf{C}}{m}\leq\sigma(\phi_{m},\Phi^{D}_{o,\max})\leq 1,\] _and_ \[1-\frac{\mathsf{C}}{m}\leq\sigma(\varphi_{m},\Phi^{D}_{o,\max})\leq 1.\] Theorem 1.20 shows that we can reconstruct the global Zhou weights \(\Phi^{D}_{o,\max}\) on \(D\) by the data of the multiplier ideals \(\mathcal{I}(m\Phi^{D}_{o,\max})_{o}\). A similar method to approximate the pluricomplex Green functions by \(\phi_{m}\) can be referred to [37]. We give some corollaries of Theorem 1.20 below. **Corollary 1.21**.: _If \(D\) is a bounded strictly hyperconvex domain, and \(\Phi^{D}_{o,\max}\) is a global Zhou weight related to some \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\) near \(o\), then for any \(w\in D\), we have_ \[\Phi^{D}_{o,\max}(w)=\sup\left\{\frac{\log|f(w)|}{\sigma(\log|f|,\Phi^{D}_{o, \max})}:f\in\mathcal{O}(D),\ \sup_{D}|f|\leq 1,\ f(o)=0,\ f\not\equiv 0\right\}.\] Note that for any non-constant holomorphic function \(f\) with \(f(o)=0\) near \(o\), we have \(\sigma(\log|f|,\Phi^{D}_{o,\max})>0\). In fact, since \(\Phi^{D}_{o,\max}(z)\geq N\log|z|+O(1)\) near \(o\) for some \(N>0\), we only need to show that there exists \(N^{\prime}>0\) such that \(\log|f|\leq N^{\prime}\log|z|+O(1)\) near \(o\). Since \(f(o)=0\) and \(f\not\equiv 0\), the Lelong number \(\nu(\log|f|,o)\in(0,+\infty)\), which implies the existence of \(N^{\prime}\). 
**Corollary 1.22**.: _Let \(D\) be a bounded strictly hyperconvex domain, and \(\Phi^{D}_{o,\max}\) be a global Zhou weight related to some \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\) near \(o\), then for any \(z\in D\), we have_ \[\Phi^{D}_{o,\max}(z)=\sup\big{\{}\phi(z):\phi\in\mathrm{PSH}^{-}(D),\ \phi\sim_{\mathcal{I}}\Phi^{D}_{o,\max}\text{ at }o\big{\}},\] _where we write '\(\phi\sim_{\mathcal{I}}\Phi^{D}_{o,\max}\) at \(o\)' if there exists real numbers \(\alpha\geq\beta\) independent of \(m\), such that for \(m\gg 1\),_ \[\mathcal{I}\big{(}(m+\alpha)\Phi^{D}_{o,\max}\big{)}_{o}\subset\mathcal{I}(m \phi)_{o}\subset\mathcal{I}\big{(}(m+\beta)\Phi^{D}_{o,\max}\big{)}_{o}.\] _In particular, we have_ \[\Phi^{D}_{o,\max}(z)=\sup\big{\{}\phi(z):\phi\in\mathrm{PSH}^{-}(D),\ \mathcal{I}(m \phi)_{o}=\mathcal{I}\big{(}m\Phi^{D}_{o,\max}\big{)}_{o},\ \forall m\in\mathbb{N}_{+}\big{\}}.\] One can also obtain Corollary 1.21 by Theorem 1.12, which will not be shown in this paper. Note that the pluricomplex Green function \(G_{D}(o,\cdot)\) is a global Zhou weight related to \(e^{-2(n-1)\log|z|}\) (with \(f_{0}\equiv 1\) and \(\varphi_{0}=(n-1)\log|z|\)) on \(D\) near \(o\). Then Theorem 1.20 recovers the following result in [37]. **Corollary 1.23** ([37]).: _Let \(D\) be a bounded strictly hyperconvex domain containing \(o\), then_ \[\lim_{m\to\infty}g_{m}(z)=G_{D}(o,z),\ \forall z\in D,\] _where_ \[g_{m}(z)=\sup_{f\in\mathcal{E}_{m}(D)}\frac{1}{m}\log|f(z)|,\ \forall z\in D,\] _and_ \[\mathcal{E}_{m}(D):=\left\{f\in\mathcal{O}(D):\sup_{w\in D}|f(w)|\leq 1,\ D^{( \nu)}f(o)=0,\ \forall\nu\in\mathbb{N}^{n}\ \text{with}\ |\nu|\leq m-1\right\}.\] ## 2. Preliminaries In this section, we give some preliminaries. ### Recall: \(L^{2}\) method In this section, we recall the following lemma, whose various forms already appeared in [25, 26, 22] etc.: **Lemma 2.1** (see [22]).: _Let \(B\in(0,+\infty)\) and \(t_{0}\geq T\) be arbitrarily given. Let \(D\subset\subset\mathbb{C}^{n}\) be a pseudoconvex domain. Let \(\psi<-T\) be a plurisubharmonic function on \(D\). Let \(\varphi\) be a plurisubharmonic function on \(D\). Let \(F\) be a holomorphic function on \(\{\psi<-t_{0}\}\), such that_ \[\int_{K\cap\{\psi<-t_{0}\}}|F|^{2}<+\infty\] _for any compact subset \(K\) of \(D\), and_ \[\int_{D}\frac{1}{B}\mathbb{I}_{\{-t_{0}-B<\psi<-t_{0}\}}|F|^{2}e^{-\varphi}<+\infty.\] _Then there exists a holomorphic function \(\tilde{F}\) on \(D\), such that,_ \[\int_{D}|\tilde{F}_{t_{0}}-(1-b(\psi))F|^{2}e^{-\varphi+v(\psi)}\] \[\leq \big{(}e^{-T}-e^{-t_{0}-B}\big{)}\int_{D}\frac{1}{B}\mathbb{I}_{ \{-t_{0}-B<\psi<-t_{0}\}}|F|^{2}e^{-\varphi}\] _where \(b(t)=\int_{-\infty}^{t}\frac{1}{B}\mathbb{I}_{\{-t_{0}-B<s<-t_{0}\}}ds\), \(v(t)=\int_{-t_{0}}^{t}b(s)ds-t_{0}\)._ It is clear that \(\mathbb{I}_{(-t_{0},+\infty)}\leq b(t)\leq\mathbb{I}_{(-t_{0}-B,+\infty)}\) and \(\max\{t,-t_{0}-B\}\leq v(t)\leq\max\{t,-t_{0}\}\). Let \(\varphi\) and \(\varphi_{0}\) be plurisubharmonic functions on \(\Delta^{n}\subset\mathbb{C}^{n}\), and \(f_{0}\) be a holomorphic function on \(\Delta^{n}\). **Lemma 2.2** (see [28]).: _Assume that \(|f_{0}|^{2}e^{-2(\varphi+\varphi_{0})}\) is not integrable near \(o\), and \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\). 
Then for any small enough neighborhood \(U\subset\subset\Delta^{n}\) of \(o\), there exists \(C>0\) such that_ \[e^{2t}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}>C\] _for any \(t\geq 0\)._ Proof.: Note that for small enough \(U\), (1) \(\sup_{U}\varphi\leq 0\); (2) \(\sup_{U}\varphi_{0}\leq 0\); (3) \(\int_{U}|f_{0}|^{2}e^{-2\varphi_{0}}<+\infty\) implies that \[\int_{U}|f_{0}|^{2}\leq e^{2\sup_{U}\varphi_{0}}\int_{U}|f_{0}|^{2}e^{-2 \varphi_{0}}<+\infty,\] and \[\int_{U}\mathbb{I}_{\{-2t_{0}-1<2\varphi<-2t_{0}\}}|f_{0}|^{2}e^{-2\varphi_{0 }-2\varphi}\leq\int_{U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{2t_{0}+1}<+\infty\] hold for any \(t_{0}\geq 0\). Then it follows from Lemma 2.1 (\(T\sim 0\), \(t_{0}\sim 2t_{0}\), \(B\sim 1\), \(\psi\sim 2\varphi\), \(\varphi\sim\varphi+\varphi_{0}\), here '\(\sim\)' means that the former is replaced by the latter, and the notation will be used throughout the paper) that there exists a holomorphic function \(\tilde{F}_{t_{0}}\) on \(U\) such that \[\int_{U}|\tilde{F}_{t_{0}}-\big{(}1-b(2\varphi)\big{)}f_{0}|^{2}e ^{-2(\varphi+\varphi_{0})+v(2\varphi)}\] \[\leq \Big{(}1-e^{-(2t_{0}+1)}\Big{)}\int_{U}\mathbb{I}_{\{-2t_{0}-1<2 \varphi<-2t_{0}\}}|f_{0}|^{2}e^{-2(\varphi+\varphi_{0})}\] \[\leq e^{2t_{0}+1}\int_{U}\mathbb{I}_{\{-2t_{0}-1<2\varphi<-2t_{0}\}} |f_{0}|^{2}e^{-2\varphi_{0}}\] \[\leq e^{2t_{0}+1}\int_{U}\mathbb{I}_{\{2\varphi<-2t_{0}\}}|f_{0}|^{2} e^{-2\varphi_{0}}\] \[= e^{2t_{0}+1}\int_{U}\mathbb{I}_{\{\varphi<-t_{0}\}}|f_{0}|^{2}e ^{-2\varphi_{0}}, \tag{2.1}\] where \(b(t)=\int_{0}^{t}\mathbb{I}_{\{-t_{0}-1<s<-t_{0}\}}ds\), \(v(t)=\int_{0}^{t}b(s)ds\). Inequality (2.1) shows that \[\int_{U}|\tilde{F}_{t_{0}}-\big{(}1-b(2\varphi)\big{)}f_{0}|^{2}e^{-2(\varphi +\varphi_{0})+v(2\varphi)}<+\infty.\] Note that \(v(2\varphi|_{U})\geq-(2t_{0}+1)\), then \[\int_{U}|\tilde{F}_{t_{0}}-\big{(}1-b(2\varphi)\big{)}f_{0}|^{2}e^{-2(\varphi +\varphi_{0})}<+\infty.\] Note that \((1-b(2\varphi))=1\) on \(\{2\varphi<-2t_{0}-1\}\), then \((\tilde{F}_{t_{0}}-f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\). As \((f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\), \((\tilde{F}_{t_{0}},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\), which implies that \[\liminf_{t_{0}\to+\infty}\int_{U}|\tilde{F}_{t_{0}}|^{2}>0. \tag{2.2}\] Inequality (2.2) could be proved by contradiction: if not, there exists a subsequence of \(\{\tilde{F}_{t_{0}}\}_{t_{0}}\) compactly converging to \(0\) (\(t_{0}\to+\infty\)), which implies that there exists a subsequence of \(\tilde{F}_{t_{0}}-f_{0}\) compactly converging to \(-f_{0}\), which contradicts the closedness of coherent sheaves (see [21]) by \((\tilde{F}_{t_{0}}-f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\) and \((-f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\). 
Note that \(\operatorname{Supp}\bigl{(}1-b(t)\bigr{)}\subset(-\infty,-2t_{0}]\), \(v(t)\geq t\), \[\left(\int_{U}\big{|}\tilde{F}_{t_{0}}-\bigl{(}1-b(2\varphi) \bigr{)}f_{0}\big{|}^{2}e^{-2\varphi_{0}}\right)^{1/2}\] \[\geq \left(\int_{U}\big{|}\tilde{F}_{t_{0}}-\bigl{(}1-b(2\varphi) \bigr{)}f_{0}\big{|}^{2}e^{-2\varphi_{0}}\right)^{1/2}\] \[\geq \left(\int_{U}|\tilde{F}_{t_{0}}|^{2}e^{-2\varphi_{0}}\right)^{1/ 2}-\left(\int_{U\cap\{2\varphi<-2t_{0}\}}|f_{0}|^{2}e^{-2\varphi_{0}}\right)^ {1/2},\] \[\geq \left(\int_{U}|\tilde{F}_{t_{0}}|^{2}\right)^{1/2}-\left(\int_{U \cap\{2\varphi<-2t_{0}\}}|f_{0}|^{2}e^{-2\varphi_{0}}\right)^{1/2}, \tag{2.3}\] and \[\lim_{t_{0}\to+\infty}\int_{U\cap\{2\varphi<-2t_{0}\}}|f_{0}|^{2}e^{-2\varphi_ {0}}=0, \tag{2.4}\] then the combination of inequality (2.1), equality (2.3), inequality (2.4) and inequality (2.2) proves the present Lemma. ### Some asymptotic properties related to integrability Let \(\varphi\) and \(\varphi_{0}\) be plurisubharmonic functions on \(\Delta^{n}\subset\mathbb{C}^{n}\) such that \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\). **Lemma 2.3**.: _Assume that \((f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\). Then for any neighborhood \(U\subset\subset\Delta^{n}\) of \(o\),_ \[\limsup_{t\to+\infty}\frac{-\log\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}}{2t}\leq 1.\] Proof.: As \((f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\), we have \[\liminf_{t\to+\infty}e^{2t}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}>0\] (see Lemma 2.2), which implies that \[\limsup_{t\to+\infty}\frac{-\log\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}}{2t}\leq 1.\] Assume that \(\varphi\geq N\log|z|+O(1)\) for large enough \(N\gg 0\). **Lemma 2.4**.: _Assume that \((f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\). Then for any neighborhood \(U\subset\subset\Delta^{n}\) of \(o\),_ \[\lim_{t\to+\infty}e^{2t}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0} }=0.\] Proof.: As \(\varphi\geq N\log|z|+O(1)\) for large enough \(N\gg 0\) and \((f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\), we have \[\limsup_{t\to+\infty}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}-2 \varphi}=0,\] which implies that \[\limsup_{t\to+\infty}e^{2t}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^ {-2\varphi_{0}}\] \[\leq \limsup_{t\to+\infty}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}-2\varphi}\] \[= 0.\] Theorem 2.8 shows that \((f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\) implies that there exists small enough \(\varepsilon>0\) such that \((f_{0},o)\in\mathcal{I}\big{(}(1+\varepsilon)\varphi+\varphi_{0}\big{)}_{o}\). **Lemma 2.5**.: _Assume that \((f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\). Then for some neighborhood \(U\subset\subset\Delta^{n}\) of \(o\) there exists small enough \(\varepsilon>0\) such that_ \[\lim_{t\to+\infty}e^{2t(1+\varepsilon)}\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2} e^{-2\varphi_{0}}=0.\] Lemma 2.5 implies the following result. **Lemma 2.6**.: _Assume that \((f_{0},o)\in\mathcal{I}(\varphi+\varphi_{0})_{o}\). Then for some neighborhood \(U\) of \(o\),_ \[\liminf_{t\to+\infty}\frac{-\log\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}}{2t}>1.\] It follows from Lemma 2.3 and Lemma 2.6 that the following lemma holds. **Lemma 2.7**.: _Assume that \((f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\), and \((f_{0},o)\in\mathcal{I}\big{(}(1-\varepsilon)\varphi+\varphi_{0}\big{)}_{o}\) for any \(\varepsilon\in(0,1)\). 
Then for some neighborhood \(U\subset\subset\Delta^{n}\) of \(o\),_ \[\lim_{t\to+\infty}\frac{-\log\int_{\{\varphi<-t\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}}{2t}=1.\] ### Strong openness property and related results Recall that the _strong openness property_ of the multiplier ideal sheaves is that \[\mathcal{I}(u)=\mathcal{I}_{+}(u):=\bigcup_{p>1}\mathcal{I}(pu)\] for any plurisubharmonic function \(u\), which was conjectured by Demailly in [10] and [11] (called _strong openness conjecture_). When \(\mathcal{I}(u)=\mathcal{O}\), Demailly's strong openness conjecture degenerates to the _openness conjecture_, which was posed by Demailly-Kollar in [14]. Favre-Jonsson [17] proved the \(2\)-dimensional case of the openness conjecture by using valuation theory. By studying asymptotic jumping numbers for graded sequences of ideals in valuation theory, Jonsson-Mustata [19] proved the \(2\)-dimensional case of the strong openness conjecture. For the higher dimension, the connection between valuation theory and the openness conjecture has been highlighted by the higher-dimensional framework in [6]. In [2], Berndtsson proved the openness conjecture by using a complex variant of the Brunn-Minkowski inequality. After that, Guan-Zhou [27] proved the strong openness conjecture by using the famous Ohsawa-Takegoshi \(L^{2}\) extension theorem [38]. **Theorem 2.8** ([27], see also [35] and [29]).: _Let \((\varphi_{j})\) be a sequence of plurisubharmonic functions, which is increasingly convergent to a plurisubharmonic function \(\varphi\). Then \(\mathcal{I}(\varphi)=\bigcup_{j}\mathcal{I}(\varphi_{j})\)._ Recently, Xu [47] completed the algebraic approach of solving the openness conjecture, which was conjectured by Jonsson-Mustata [19, 20]. Let \(\{\phi_{m}\}_{m\in\mathbb{N}^{+}}\) be a sequence of negative plurisubharmonic functions on \(\Delta^{n}\), which is convergent to a negative Lebesgue measurable function \(\phi\) on \(\Delta^{n}\) in Lebesgue measure. Let \(f\) be a holomorphic function near \(o\), and let \(I\) be an ideal of \(\mathcal{O}_{o}\). We denote \[C_{f,I}(U):=\inf\left\{\int_{U}|\tilde{f}|^{2}:(\tilde{f}-f,o)\in I\ \&\ \tilde{f}\in \mathcal{O}(U)\right\},\] where \(U\subseteq\Delta^{n}\) is a domain with \(o\in U\). Especially, if \(I=\mathcal{I}(\phi)_{o}\), we denote \(C_{f,\phi}(U):=C_{f,I}(U)\). In [28], Guan-Zhou presented the following lower semicontinuity property of plurisubharmonic functions with a multiplier. **Proposition 2.9** ([28]).: _Let \(f\) be a holomorphic function near \(o\). Assume that for any small enough neighborhood \(U\) of \(o\), the pairs \((f,\phi_{m})\)\((m\in\mathbb{N}^{+})\) satisfies_ \[\inf_{m}C_{f,\varphi_{0}+\phi_{m}}(U)>0. \tag{2.5}\] _Then \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\phi}\) is not integrable near \(o\)._ The Noetherian property of multiplier ideal sheaves (see [12]) shows that **Remark 2.10**.: _Assume that_ (1)_\(\phi_{m+1}\geq\phi_{m}\) holds for any \(m\);_ (2)_\(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\phi_{m}}\) is not integrable near \(o\) for any \(m\)._ _Then inequality (2.5) holds._ ### Proofs of Remark 1.3 and Remark 1.4 In this section, we give the proofs of Remark 1.3 and Remark 1.4. Firstly, we recall two useful results. 
**Lemma 2.11** (_Choquet's lemma_, see [12]).: _Every family \((u_{\alpha})\) of uppersemicon- tinuous functions has a countable subfamily \((v_{j})=(u_{\alpha(j)})\), such that its upper envelope \(v=\sup_{j}v_{j}\) satisfies \(v\leq u\leq u^{*}=v^{*}\), where \(u=\sup_{\alpha}u_{\alpha}\), \(u^{*}(z):=\lim_{\varepsilon\to 0}\sup_{\mathbb{B}^{n}(z,\varepsilon)}u\) and \(v^{*}(z):=\lim_{\varepsilon\to 0}\sup_{\mathbb{B}^{n}(z,\varepsilon)}v\) are the regularizations of \(u\) and \(v\)._ **Proposition 2.12** (see Proposition (4.24) in [12]).: _If all \((u_{\alpha})\) are subharmonic, the upper regularization \(u^{*}\) is subharmonic and equals almost everywhere to \(u\)._ Then, we prove Remark 1.3. Proof of Remark 1.3.: Theorem 2.8 shows that there exists \(\varepsilon>0\), such that \[|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2(1+\varepsilon)\varphi}\] is integrable near \(o\). We have \[\int_{U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\varphi}-|f_{0}|^{2}e^{-2 \varphi_{0}}e^{-2\max\big{\{}\varphi,\frac{N_{0}}{\varepsilon}\log|z|\big{\}}}\] \[\leq \int_{U\cap\{\frac{N_{0}}{\varepsilon}\log|z|\geq\varphi\}}|f_{0} |^{2}e^{-2\varphi_{0}}e^{-2\varphi}\] \[\leq \int_{U\cap\{\frac{N_{0}}{\varepsilon}\log|z|\geq\varphi\}}|f_{0} |^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2(1+\varepsilon)\varphi}\] \[< +\infty,\] where \(U\) is a neighborhood of \(o\). Then it suffices to consider that there exists \(N\) large enough such that \[N\log|z|\leq\varphi\] near \(o\). Let \(V\) be a small neighborhood of \(o\) such that \(\operatorname{Supp}\bigl{(}(\mathcal{O}/\mathcal{I}(\varphi))|_{V}\bigr{)}=\{o\}\) and let \((u_{\alpha})_{\alpha}\) be the negative plurisubharmonic functions on \(V\) such that \(u_{\alpha}\geq\varphi+O(1)\) near \(o\) and \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2u_{\alpha}}\) is not integrable near \(o\). _Zorn's Lemma_ shows that there exists \(\Gamma\) which is the maximal set such that for any \(\alpha,\alpha^{\prime}\in\Gamma\), \(u_{\alpha}\leq u_{\alpha^{\prime}}+O(1)\) or \(u_{\alpha^{\prime}}\leq u_{\alpha}+O(1)\) holds near \(o\), where \((u_{\alpha})\) are negative plurisubharmonic functions on \(V\). Let \(u(z):=\sup_{\alpha\in\Gamma}u_{\alpha}(z)\) on \(V\), and let \(u^{*}(z)=\lim_{\varepsilon\to 0}\sup_{\mathbb{B}^{n}(z,\varepsilon)}u\). Lemma 2.11 shows that there exists subsequence \((v_{j})\) of \((u_{\alpha})\) such that \((\max_{j}v_{j})^{*}=u^{*}\). Moreover one can choose \(v_{j}(:=\sup_{j^{\prime}\leq j}v_{j})\) increasing with respect to \(j\) such that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2v_{j}}\) is not integrable near \(o\). Proposition 2.12 shows that \((v_{j})\) is convergent to \(v^{*}\) with respect to \(j\) almost everywhere with respect to Lebesgue measure, and \(v^{*}\) is a plurisubharmonic function on \(V\). Proposition 2.9 (and Remark 2.10) shows that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2v^{*}}\) is not integrable near \(o\). In fact, the definition of \(u\) implies that \(u=v^{*}=u^{*}\). In the following part, we prove that \(v^{*}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\) by contradiction. If not, then there exists a plurisubharmonic function \(\tilde{v}\) near \(o\) such that \(\tilde{v}\geq v^{*}\), \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\tilde{v}}\) is not integrable near \(o\), and \[\limsup_{z\to o}\big{(}\tilde{v}(z)-v^{*}(z)\big{)}=+\infty.\] Note that \(v^{*}\geq\varphi\geq N\log|z|\). 
Then for small ball \(\mathbb{B}(o,\varepsilon)\), there exists \(M\ll 0\) such that \(\tilde{v}+M<N\log|z|<v^{*}\) near the boundary of \(\mathbb{B}(o,\varepsilon)\), which implies that \(\max\{\tilde{v}+M,v^{*}\}=v^{*}\) near the boundary of \(\mathbb{B}(o,\varepsilon)\). Let \[\tilde{\varphi}:=\left\{\begin{array}{ll}\max\{\tilde{v}+M,v^{*}\}&\text{ on }\mathbb{B}(o,\varepsilon),\\ v^{*}&\text{ on }V\setminus\mathbb{B}(o,\varepsilon).\end{array}\right.\] \(\tilde{v}\geq v^{*}\) implies that \(\tilde{\varphi}=\tilde{v}+O(1)\) near \(o\). Then \(\tilde{\varphi}\) is a plurisubharmonic function on \(V\) such that \[\limsup_{z\to o}(\tilde{\varphi}(z)-u^{*}(z))=\limsup_{z\to o}(\tilde{ \varphi}(z)-v^{*}(z))\geq\limsup_{z\to o}(\tilde{v}(z)+M-v^{*}(z))=+\infty,\] and \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\tilde{\varphi}}\) is not integrable near \(o\), which contradicts the definition of \(u^{*}\). This proves Remark 1.3. Finally, we prove Remark 1.4. Proof of Remark 1.4.: For \(U\ni o\) small enough, there exists a negative \(\Phi_{o,\max}\) on \(U\). The uniqueness of \(\Phi_{o,\max}^{U}\) is just follows from inequality (1.1), then it suffices to prove the existence. For any negative plurisubharmonic functions \(\varphi_{1}\geq\Phi_{o,\max}\) and \(\varphi_{2}\geq\Phi_{o,\max}\) on \(U\), such that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\varphi_{1}}\) and \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\varphi_{2}}\) are not integrable near \(o\), it follows from the definition of \(\Phi_{o,\max}\) that \(\varphi_{1}=\Phi_{o,\max}+O(1)\) and \(\varphi_{2}=\Phi_{o,\max}+O(1)\) near \(o\), which implies that \(\max\{\varphi_{1},\varphi_{2}\}=\Phi_{o,\max}+O(1)\) near \(o\). We consider the upper-envelop of \[\sup\big{\{}\varphi(z):\varphi\in\mathrm{PSH}^{-}(U)\ \&\ (f_{0},o)\notin \mathcal{I}(\varphi_{0}+\varphi)_{o}\ \&\ \varphi\geq\Phi_{o,\max}+O(1)\big{\}},\] denoted by \(\Phi_{o,\max}^{U}\). Lemma 2.11 shows that there exist negative plurisubharmonic functions \(\{\varphi_{j}\}_{j}\) on \(U\) such that (1) \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\varphi_{j}}\) is not integrable near \(o\); (2) \(\varphi_{j}\geq\Phi_{o,\max}+O(1)\); (3) the upper-envelop \((\sup_{j}\{\varphi_{j}\})^{*}=\Phi_{o,\max}^{U}\). Note that for any \(t_{0}>0\), \(\sup_{j<t_{0}}\{\varphi_{j}\}=\Phi_{o,\max}+O(1)\), and \(\sup_{j<t_{0}}\{\varphi_{j}\}\) is increasingly convergent to \(\Phi_{o,\max}^{U}\) almost everywhere on \(U\) when \(t_{0}\to+\infty\), then it follows from Proposition 2.9 (Remark 2.10) that (4) \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\Phi_{o,\max}^{U}}\) is not integrable near \(o\); (5) \(\Phi_{o,\max}^{U}\geq\Phi_{o,\max}+O(1)\), which implies that \(\Phi_{o,\max}^{U}=\Phi_{o,\max}+O(1)\). (5) shows that for any negative plurisubharmonic function \(\psi\) on \(U\), \[\psi\leq\sigma(\psi,\Phi_{o,\max})\Phi_{o,\max}^{U}+O(1),\] which implies that \(|f_{0}|^{2}e^{-\varphi_{0}}e^{-2\max\big{\{}\Phi_{o,\max}^{U}\cdot\overline{ \sigma(\psi,\Phi_{o,\max})}^{\psi}\big{\}}}\) is not integrable near \(o\). Note that \(\max\big{\{}\Phi_{o,\max}^{U},\frac{1}{\sigma(\psi,\Phi_{o,\max})}\psi\big{\}}<0\) on \(U\), then the definition of \(\Phi_{o,\max}^{U}\) shows that \[\Phi_{o,\max}^{U}\geq\max\bigg{\{}\Phi_{o,\max}^{U},\frac{1}{\sigma(\psi,\Phi _{o,\max})}\psi\bigg{\}},\] which implies that \[\frac{1}{\sigma(\psi,\Phi_{o,\max})}\psi\leq\Phi_{o,\max}^{U}.\] This proves inequality (1.1). In the following part, we prove inequality (1.2). 
Note that for small enough neighborhood \(U\ni o\), the following statements hold (1) \(\psi<0\) on \(U\); (2) there exists subsequence \(\psi_{j_{k}}\) of \(\psi_{j}\) convergent to \(\psi\) in the sense of \(L_{\mathrm{loc}}^{1}\) and almost everywhere with respect to the Lebesgue measure at the same time when \(k\to+\infty\); (3) \(\psi_{j_{k}}<0\) on \(U\) for any \(k\). Then inequality (1.1) shows that \(\psi_{j_{k}}\leq\sigma(\psi_{j_{k}},\Phi_{o,\max})\Phi_{o,\max}^{U}\) for any \(k\). Without loss of generality, we can assume that \[\lim_{k\to+\infty}\sigma(\psi_{j_{k}},\Phi_{o,\max})=\limsup_{j\to+\infty} \sigma(\psi_{j},\Phi_{o,\max})\] on \(U\). Note that \[\sup_{k\geq k_{0}}\{\psi_{j_{k}}\}\leq\sup_{k\geq k_{0}}\big{(}\sigma(\psi_{j_ {k}},\Phi_{o,\max})\Phi_{o,\max}^{U}\big{)}\] on \(U\), then \[\begin{split}&\lim_{k_{0}\to+\infty}\sup_{k\geq k_{0}}\{\psi_{j_{k}}\} \\ &\leq\lim_{k_{0}\to+\infty}\sup_{k\geq k_{0}}\big{(}\sigma(\psi_{j_{k }},\Phi_{o,\max})\Phi_{o,\max}^{U}\big{)}\\ &=\lim_{k\to+\infty}\sigma(\psi_{j_{k}},\Phi_{o,\max})\Phi_{o, \max}^{U}\\ &=\left(\limsup_{j\to+\infty}\sigma(\psi_{j},\Phi_{o,\max})\right) \Phi_{o,\max}^{U}\end{split} \tag{2.6}\] on \(U\). Lemma 2.11 shows that \[\lim_{k_{0}\to+\infty}\big{(}\sup_{k\geq k_{0}}\{\psi_{j_{k}}\}\big{)}^{*}\] is a plurisubharmonic function on \(U\) and \[\lim_{k_{0}\to+\infty}\big{(}\sup_{k\geq k_{0}}\{\psi_{j_{k}}\}\big{)}^{*}= \lim_{k_{0}\to+\infty}\big{(}\sup_{k\geq k_{0}}\{\psi_{j_{k}}\}\big{)}\] on \(U\) almost everywhere with respect to the Lebesgue measure. Note that \[\psi=\lim_{k_{0}\to+\infty}\sup_{k\geq k_{0}}\{\psi_{j_{k}}\}\] almost everywhere with respect to the Lebesgue measure on \(U\), then \[\psi=\lim_{k_{0}\to+\infty}(\sup_{k\geq k_{0}}\{\psi_{j_{k}}\})^{*}\] on \(U\). Combining with inequality (2.6), we obtain \[\psi\leq\limsup_{j\to+\infty}\sigma(\psi_{j},\Phi_{o,\max})\Phi_{o,\max}^{U}\] on \(U\), which is inequality (1.2). This proves Remark 1.4. ## 3. Concavity: real analysis Let \(D\) be a bounded domain in \(\mathbb{C}^{n}\), and the origin \(o\in D\). Let \(u\) and \(v\) be Lebesgue measurable functions on \(D\) with upper-bound near \(o\). Let \(g\) be a nonnegative Lebesgue measurable function on \(D\). **Lemma 3.1** (see [11]).: _Assume that \(g^{2}e^{2(l_{1}v-(1+l_{2})u)}\) is integrable near \(o\), where \(l_{1},l_{2}>0\). Then \(g^{2}e^{-2u}-g^{2}e^{-2\max\big{\{}u,\frac{l_{1}}{l_{2}}v\big{\}}}\) is integrable on a small enough neighborhood \(V_{o}\) of \(o\)._ Proof.: Recall (see [11]) that \[\int_{V_{o}}g^{2}\left(e^{-2u}-e^{-2\max\left\{u,\frac{l_{1}}{l_{2}}v \right\}}\right)\] \[= \int_{V_{o}\cap\{l_{2}u<l_{1}v\}}g^{2}\left(e^{-2u}-e^{-2\max \left\{u,\frac{l_{1}}{l_{2}}v\right\}}\right)\] \[\leq \int_{V_{o}\cap\{l_{2}u<l_{1}v\}}g^{2}e^{-2u}\] \[\leq \int_{V_{o}\cap\{l_{2}u<l_{1}v\}}g^{2}\left(e^{-2(l_{2}u-l_{1}v)} e^{-2u}\right)\] \[\leq \int_{V_{o}}g^{2}e^{-2(l_{2}u-l_{1}v)-2u}\] \[\leq \int_{V_{o}}g^{2}e^{2l_{1}v-2(1+l_{2})u}.\] This proves Lemma 3.1 for \(V_{o}\) small enough. Denote that \[A_{u,v}(t):=\sup\big{\{}c:g^{2}e^{2(tv-cu)}\text{ is integrable near }o\big{\}}.\] Note that \(u\) and \(v\) are local upper-bounded near \(o\), then \(A_{u,v}(t)\) is increasing with respect to \(t\) (maybe \(+\infty\) or \(-\infty\)). Assume that \(A_{u,v}(t)\in(0,+\infty]\) on \((t_{0}-\delta,t_{0}+\delta)\). It follows from Holder inequality that \(A_{u,v}(t)\) is concave on \((t_{0}-\delta,t_{0}+\delta)\). 
**Lemma 3.2**.: _Assume that \(A_{u,v}(t)\) is strictly increasing on \((t_{0}-\delta,t_{0}+\delta)\). Then_ \[A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=A_{u,v}(t_{0})\] _holds for any \(b\in\left(0,\lim_{\Delta t\to 0+0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\right)\)._ Proof.: As \(A_{u,v}(t)\) is strictly increasing, then \(g^{2}e^{2(t_{0}+\Delta t+\varepsilon)v}e^{-2A_{u,v}(t_{0}+\Delta t)u}\) is integrable for any pair \((\Delta t,\varepsilon)\) satisfying \(\Delta t\), \(\varepsilon>0\) and \(\Delta t+\varepsilon<+\delta\). Note that \[g^{2}e^{2(t_{0}+\Delta t+\varepsilon)v}e^{-2A_{u,v}(t_{0}+\Delta t )u}\] \[= g^{2}e^{2(t_{0}-\varepsilon)v}e^{2(\Delta t+2\varepsilon)v}e^{-2 \left(\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{A_{u,v}(t_{0})}+1\right)A _{u,v}(t_{0})u}.\] Then it follows from Lemma 3.1 \[\left(g^{2}\sim g^{2}e^{2(t_{0}-\varepsilon)v},\ l_{1}\sim\Delta t+2 \varepsilon,\ l_{2}\sim\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{A_{u,v}( t_{0})}\right)\] that \[g^{2}e^{2(t_{0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})u}-g^{2}e^{2(t_ {0}-\varepsilon)v}e^{-2\max\left\{A_{u,v}(t_{0})u,\frac{\Delta t+2\varepsilon }{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}v\right\}}\] \[= g^{2}e^{2(t_{0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})u}-g^{2}e^{2(t_ {0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})\max\left\{u,\frac{1}{\frac{A_{u,v}(t_{ 0}+\Delta t)-A_{u,v}(t_{0})}{\Delta t+2\varepsilon}}v\right\}}\] is integrable near \(o\). Note that \(A_{u,v}\) is strictly increasing on \((t_{0}-\delta,t_{0}+\delta)\), then \[g^{2}e^{2(t_{0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})u}\] is not integrable near \(o\), which implies that \[g^{2}e^{2(t_{0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})\max\big{\{}u,\frac{A(t_{0}+ \Delta t)-A(t_{0})}{\Delta t+2\varepsilon}v\big{\}}}\] is not integrable near \(o\). Note that for any \(b\in\Big{(}0,\lim_{\Delta t\to 0+0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\Big{)}\), there exists small enough \(\delta_{1},\delta_{2}>0\) such that for any \(\Delta t\in(0,\delta_{1})\) and \(\varepsilon\in(0,\delta_{2}\Delta t)\), \[\frac{A(t_{0}+\Delta t)-A(t_{0})}{\Delta t+2\varepsilon}>b,\] which implies that \[g^{2}e^{2(t_{0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})\max\{u,\frac{1}{b}v\}}\left( \geq g^{2}e^{2(t_{0}-\varepsilon)v}e^{-2A_{u,v}(t_{0})\max\big{\{}u,\frac{1}{ A(t_{0}+\Delta t)-A(t_{0})}v\big{\}}}\right)\] is not integrable near \(o\). It is clear that \(A_{\max\{u,\frac{1}{b}v\},v}(t_{0}-\varepsilon)\leq A_{u,v}(t_{0})\). For the arbitrariness of \(\varepsilon\big{(}\in(0,\delta_{2}\Delta t)\big{)}\), and the continuity of \(A_{\max\{u,\frac{1}{b}v\},v}(t)\) near \(t_{0}\), it is clear that \(A_{\max\{u,\frac{1}{b}v\},v}(t_{0})\leq A_{u,v}(t_{0})\). Note that \(\max\{u,\frac{1}{b}v\}\geq u\), then it follows that \(A_{\max\{u,\frac{1}{b}v\},v}(t_{0})\geq A_{u,v}(t_{0})\), which implies that \(A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=A_{u,v}(t_{0})\). **Lemma 3.3**.: _Assume that \(A_{u,v}(t)\) is strictly increasing on \((t_{0}-\delta,t_{0}+\delta)\). 
Then_ \[A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=A_{u,v}(t_{0})\] _holds for any \(b\in\Big{(}0,\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\Big{)}\)._ Proof.: The concavity of \(A_{u,v}\) shows that for any \(t\in(t_{0}-\delta,t_{0})\), \[\lim_{\Delta t\to 0+0}\frac{A_{u,v}(t+\Delta t)-A_{u,v}(t)}{\Delta t}\geq \lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{\Delta t}.\] Lemma 3.2 shows that for any \[b\in\left(0,\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\right)\subseteq\left(0,\lim_{\Delta t\to 0+0}\frac{A_{u,v}(t+\Delta t)-A_{u,v}(t)} {\Delta t}\right),\] we have that \[A_{\max\{u,\frac{1}{b}v\},v}(t)=A_{u,v}(t)\] holds for any \(t\in(t_{0}-\delta,t_{0})\). For the continuity of \(A_{\max\{u,\frac{1}{b}v\},v}\) and \(A_{u,v}\) at \(t_{0}\), the proof is done. **Lemma 3.4**.: _Assume that \(A_{u,v}(t)\) is strictly increasing on \((t_{0}-\delta,t_{0}+\delta)\). Then_ \[A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=A_{u,v}(t_{0}) \tag{3.1}\] _holds for any \(b\in\Big{(}0,\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\Big{]}\)._ Proof.: Let \[b_{0}=\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}.\] Lemma 3.3 shows that equality (3.1) holds for any \(b\in(0,b_{0})\). As \(u\) and \(v\) are local bounded above near \(o\), we assume that \(u<0\) and \(v<0\). Note that for any \(b\in(0,b_{0})\), \[u\leq\max\left\{u,\frac{1}{b_{0}}v\right\}\leq\max\left\{\frac{b}{b_{0}}u,\frac {1}{b_{0}}v\right\}\leq\frac{b}{b_{0}}\max\left\{u,\frac{1}{b}v\right\},\] then it is clear that \[A_{u,v}(t_{0})\leq A_{\max\{u,\frac{1}{b_{0}}v\},v}(t_{0})\] \[\leq A_{\max\{\frac{b}{b_{0}}u,\frac{1}{b_{0}}v\},v}(t_{0})\leq A_{\frac{b}{b_{0}}\max\{u,\frac{1}{b}v\},v}(t_{0})\] \[= \frac{b}{b_{0}}A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=\frac{b}{b_{0} }A_{u,v}(t_{0}).\] Letting \(b\to b_{0}\), we obtain equality (3.1). **Lemma 3.5**.: _Assume that \(A_{u,v}(t)\) is strictly increasing on \((t_{0}-\delta,t_{0}+\delta)\). Then_ \[A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=A_{u,v}(t_{0})\] _does not hold for any \(b>\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\)._ Proof.: We prove by contradiction: if not, then \(A_{\max\{u,\frac{1}{b}v\},v}(t_{0})=A_{u,v}(t_{0})\) holds for some \(b>\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+\Delta t)-A_{u,v}(t_{0})}{ \Delta t}\). Note that \[A_{\max\{u,\frac{1}{b}v\},v}(t)\geq A_{u,v}(t)\] holds for any \(t\in(t_{0}-\delta,t_{0}+\delta)\). Then \[\begin{split}&\lim_{\Delta t\to 0-0}\frac{A_{\max\{u,\frac{1}{b}v\},v} (t_{0}+\Delta t)-A_{\max\{u,\frac{1}{b}v\},v}(t_{0})}{\Delta t}\\ \leq&\lim_{\Delta t\to 0-0}\frac{A_{u,v}(t_{0}+ \Delta t)-A_{u,v}(t_{0})}{\Delta t}<b.\end{split} \tag{3.2}\] Note that \[\max\left\{u,\frac{1}{b}v\right\}\geq\frac{1}{b}v,\] which implies that \[A_{\max\{u,\frac{1}{b}v\},v}(t_{0}+\Delta t)\geq A_{\max\{u,\frac{1}{b}v\},v }(t_{0})+b\Delta t.\] Then we obtain \[\lim_{\Delta t\to 0+0}\frac{A_{\max\{u,\frac{1}{b}v\},v}(t_{0}+\Delta t)-A_{\max\{u, \frac{1}{b}v\},v}(t_{0})}{\Delta t}\geq b,\] which contradicts inequality (3.2). ## 4. Tian functions and Zhou numbers Let \(\varphi\), \(\psi\), \(\varphi_{0}\) be plurisubharmonic functions near \(o\), and let \(f_{0}=(f_{0,1},\ldots,f_{0,m})\) be a vector, where \(f_{0,1},\ldots,f_{0,m}\) are holomorphic functions near \(o\). 
Denote that \[c_{o}(\varphi,t\psi):=\sup\big{\{}c:|f_{0}|^{2}e^{-2\varphi_{0}}e^{2t\psi}e^{ -2c\varphi}\text{ is integrable near }o\big{\}},\] which is a generalization of the jumping number (see [19, 20]). Let \[A(t):=c_{o}(\varphi,t\psi)\] for any \(t\in\mathbb{R}\). The complex singularity exponent was firstly studied by Tian (see [45, 46, 11]), and the jumping number is a generalization of the notion of the complex singularity exponent. Thus, we call the function \(A(t)\)**Tian function**. Assume that the following three statements hold (1) \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\); (2) There exists integer \(N_{0}\gg 0\) such that \(|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2A(0)\varphi}\) is integrable near \(o\); (3) \(\nu(\psi,o)>0\) (i.e. there exists \(\nu_{0}>0\), such that \(\psi\leq\nu_{0}\log|z|+O(1)\) holds (see [12])), where \(\nu(\psi,o)\) is the Lelong number of \(\psi\) at \(o\). Theorem 2.8 shows that (2) implies that there exists \(\varepsilon>0\), such that \[|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{1}}e^{-2(1+\varepsilon)A(0)\varphi}\] is integrable near \(o\). Lemma 3.1 shows that \[|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2A(0)\varphi}-|f_{0}|^{2}e^{-2\varphi_{0}}e^{- 2\max\big{\{}A(0)\varphi,\frac{N_{1}}{\varepsilon}\log|z|\big{\}}}\] is integrable near \(o\). Then it follows from the definition of \(\Phi_{o,\max}\) that \[\Phi_{o,\max}\geq\max\bigg{\{}A(0)\varphi,\frac{N_{1}}{\varepsilon}\log|z| \bigg{\}}+O(1)\geq N\log|z|+O(1)\] near \(o\). The Holder inequality shows that \(A(t)\) is concave with respect to \(t\in(-\infty,+\infty)\). ### \(A(t)\) related to \(\varphi\) In this section, we discuss the derivatives of Tian functions \(A(t)\). We give the strictly increasing property of Tian functions in the following lemma. **Lemma 4.1**.: \(A(t)\) _is strictly increasing near \(0\)._ Proof.: Theorem 2.8 implies that there exists \(\varepsilon_{0}>0\) such that \[|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2(1+\varepsilon_{0})A(0)\varphi}\] is integrable near \(o\). Lemma 3.1 shows that \[|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\big{(}1+\frac{\varepsilon_{0}}{2}\big{)}A( 0)\varphi}-|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\big{\{}\big{(}1+\frac{ \varepsilon_{0}}{2}\big{)}A(0)\varphi,\frac{2+\varepsilon_{0}}{\varepsilon_{0 }}\log|z|^{N_{0}}\big{\}}}\] is integrable near \(o\). Note that for any \(t>0\), \[|f_{0}|^{2}e^{-2\varphi_{0}}\left(e^{2t\psi}e^{-2\big{(}1+\frac{\varepsilon_{ 0}}{2}\big{)}A(0)\varphi}-e^{2t\psi}e^{-2\max\big{\{}\big{(}1+\frac{ \varepsilon_{0}}{2}\big{)}A(0)\varphi,\frac{2+\varepsilon_{0}}{\varepsilon_{0 }}\log|z|^{N_{0}}\big{\}}}\right)\] is integrable near \(o\) and \[|f_{0}|^{2}e^{-2\varphi_{0}}e^{2t\psi}e^{-2\max\big{\{}\big{(}1 +\frac{\varepsilon_{0}}{2}\big{)}A(0)\varphi,\frac{2+\varepsilon_{0}}{ \varepsilon_{0}}\log|z|^{N_{0}}\big{\}}}\] \[\leq C|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2t\nu(\psi,o)}e^{-2\max\big{\{} \big{(}1+\frac{\varepsilon_{0}}{2}\big{)}A(0)\varphi,\frac{2+\varepsilon_{0}}{ \varepsilon_{0}}\log|z|^{N_{0}}\big{\}}}\] \[\leq C|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2t\nu(\psi,o)}e^{-2\frac{(2+ \varepsilon_{0})N_{0}}{\varepsilon_{0}}\log|z|}\] near \(o\), then it is clear that for any \(t>\frac{2+\varepsilon_{0}}{\varepsilon_{0}}\frac{N_{0}}{\nu(\psi,o)}\), \[|f_{0}|^{2}e^{-2\varphi_{0}}e^{2t\psi}e^{-2(1+\frac{\varepsilon_{0}}{2})A(0)\varphi}\] is integrable near \(o\), which shows \(A(t)>\big{(}1+\frac{\varepsilon_{0}}{2}\big{)}A(0)\) for any \(t>\frac{2+\varepsilon_{0}}{\varepsilon_{0}}\frac{N_{0}}{\nu(\psi,o)}\). 
Then the concavity of \(A(t)\) implies that \(A(t)\) is strictly increasing near \(0\) Denote that \(b_{0}:=\lim_{t\to 0+0}\frac{A(0)-A(-t)}{t}\). Lemma 3.4 shows that **Lemma 4.2**.: \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\{A(0)\varphi,\frac{1}{b_{0}}A(0)\psi\}}\) _is not integrable near \(o\)._ Lemma 3.5 shows that **Lemma 4.3**.: _For any \(b>b_{0}\), \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\{A(0)\varphi,\frac{1}{b}A(0)\psi\}}\) is integrable near \(o\)._ The combination of Lemma 4.2 and Lemma 4.3 shows **Lemma 4.4**.: \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\{A(0)\varphi,\frac{1}{b}A(0)\psi\}}\) _is not integrable near \(o\) if and only if \(b\leq b_{0}\)._ Using Lemma 4.4, considering \(\max\big{\{}A(0)\varphi,\frac{1}{b_{0}}A(0)\psi\big{\}}\) instead of \(\varphi\) in Remark 1.3, we obtain **Lemma 4.5**.: _There exists a local Zhou weight \(\Phi_{o,\max}\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) such that_ _(1) \(\sigma(\psi,\Phi_{o,\max})=\frac{b_{0}}{A(0)}\);_ _(2) \(\Phi_{o,\max}\geq\max\big{\{}A(0)\varphi,\frac{1}{b_{0}}A(0)\psi\big{\}}+O(1)\) near \(o\)._ The following property of Tian functions \(A(t)\) will be used in the proof of Theorem 1.8. **Proposition 4.6**.: _Assume that there exists \(N\gg 0\) such that \(\varphi\geq N\log|z|\) near \(o\). The following inequality holds_ \[\frac{1}{A(0)}\lim_{t\to 0+0}\frac{A(0)-A(t)}{-t}\] \[\leq \liminf_{t_{1}\to+\infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}(-2\psi)}{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[\leq \limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}(-2\psi)}{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[\leq \frac{1}{A(0)}\lim_{t\to 0+0}\frac{A(0)-A(-t)}{t}.\] Proof.: We prove Proposition 4.6 in two steps. **Step 1.** Theorem 2.8 shows that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2t\psi}\) is local integrable near \(o\) for small enough \(t>0\). Note that there exists \(N\gg 0\) such that \(\varphi\geq N\log|z|\) near \(o\), then there exists a neighborhood \(U\) of \(o\) such that for any small enough \(t>0\) and \(\varepsilon>0\), \[\limsup_{t_{1}\to+\infty}\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}e^{-2\big{(}t\psi+(1-\varepsilon)A(-t)\varphi\big{)}}=0,\] which implies that \[\limsup_{t_{1}\to+\infty}e^{2t_{1}}\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^ {2}e^{-2\varphi_{0}}e^{-2\big{(}t\psi+(1-\varepsilon)A(-t)\varphi+t_{1}\big{)} }=0.\] Then for large enough \(t_{1}>0\), \[\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\big{(}t \psi+(1-\varepsilon)A(-t)\varphi+t_{1}\big{)}}<e^{-2t_{1}},\] i.e. \[\log\left(\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2 \left(t\psi+(1-\varepsilon)A(-t)\varphi+t_{1}\right)}\right)<-2t_{1}.\] Combining with Lemma 2.7, we obtain \[\limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\log\frac{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\left(t\psi+(1- \varepsilon)A(-t)\varphi+t_{1}\right)}}{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{ 0}|^{2}e^{-2\varphi_{0}}}\] \[= \limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\log\int_{\{A(0)\varphi<-t _{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\left(t\psi+(1-\varepsilon)A(-t) \varphi+t_{1}\right)}\] \[-\lim_{t_{1}\to+\infty}\frac{1}{2t_{1}}\log\int_{\{A(0)\varphi<-t _{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}\] \[\leq 1-1=0. 
\tag{4.1}\] Jensen's inequality and the concavity of logarithm shows that \[\log\frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}e^{-2\left(t\psi+(1-\varepsilon)A(-t)\varphi+t_{1}\right)}}{\int_ {\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[\geq \frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}\log\left(e^{-2(t\psi+(1-\varepsilon)A(-t)\varphi)+t_{1}}\right)}{\int_{ \{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[= \frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}\big{(}-2(t\psi+(1-\varepsilon)A(-t)\varphi+t_{1})\big{)}}{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[\geq \frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}\big{(}-2(t\psi+(1-\varepsilon)A(-t)(-t_{1})\frac{1}{A(0)}+t_{1})\big{)}}{ \int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}.\] Combining with inequality (4.1), we obtain that \[\limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0)\varphi<-t_{1}\} \cap U}|f_{0}|^{2}e^{-2\varphi_{0}}\big{(}-2(t\psi+(1-\varepsilon)A(-t)(-t_{1} )\frac{1}{A(0)}+t_{1})\big{)}}{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2} e^{-2\varphi_{0}}}\leq 0.\] Letting \(\varepsilon\to 0+0\), we obtain \[\limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0)\varphi<-t_{1}\} \cap U}|f_{0}|^{2}e^{-2\varphi_{0}}(-2\psi)}{\int_{\{A(0)\varphi<-t_{1}\} \cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\leq\frac{1}{A(0)}\lim_{t\to 0+0} \frac{A(0)-A(-t)}{t}. \tag{4.2}\] **Step 2.** Theorem 2.8 shows that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2t\psi}\) is local integrable near \(o\) for small enough \(t>0\). Note that there exists \(N\gg 0\) such that \(\varphi\geq N\log|z|\) near \(o\), then there exists a neighborhood \(U\) of \(o\) such that for any small enough \(t>0\) and \(\varepsilon>0\), \[\lim_{t_{1}\to+\infty}\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}e^{-2\left(-t\psi+(1-\varepsilon)A(t)\varphi\right)}=0,\] which implies that \[\lim_{t_{1}\to+\infty}e^{2t_{1}}\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2} e^{-2\varphi_{0}}e^{-2\left(-t\psi+(1-\varepsilon)A(t)\varphi+t_{1}\right)}=0.\] Then for large enough \(t_{1}>0\), \[\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\left(-t \psi+(1-\varepsilon)A(t)\varphi+t_{1}\right)}<e^{-2t_{1}},\] i.e. \[\log\left(\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2 \left(-t\psi+(1-\varepsilon)A(t)\varphi+t_{1}\right)}\right)<-2t_{1}.\] Combining with Lemma 2.7, we obtain \[\limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\log\frac{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\left(-t\psi+(1- \varepsilon)A(t)\varphi+t_{1}\right)}}{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0 }|^{2}e^{-2\varphi_{0}}}\] \[= \limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\log\int_{\{A(0)\varphi<- t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\left(-t\psi+(1-\varepsilon)A(t) \varphi+t_{1}\right)}\] \[-\lim_{t_{1}\to+\infty}\frac{1}{2t_{1}}\log\int_{\{A(0)\varphi<- t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}\] \[\leq 1-1=0. 
\tag{4.3}\] Jensen's inequality and the concavity of logarithm shows that \[\log\frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}e^{-2\left(-t\psi+(1-\varepsilon)A(t)\varphi+t_{1}\right)}}{\int_ {\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[\geq \frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}\log\left(e^{-2(-t\psi+(1-\varepsilon)A(t)\varphi+t_{1})}\right)}{\int_ {\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[= \frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}\big{(}-2(-t\psi+(1-\varepsilon)A(t)\varphi+t_{1})\big{)}}{\int_{\{A(0) \varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}\] \[\geq \frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_ {0}}\big{(}-2(-t\psi+(1-\varepsilon)A(t)(-t_{1})\frac{1}{A(0)}+t_{1})\big{)}} {\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2\varphi_{0}}}.\] Combining with inequality (4.3), we obtain that \[\limsup_{t_{1}\to+\infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0)\varphi<-t_{1}\} \cap U}|f_{0}|^{2}e^{-2\varphi_{0}}\big{(}-2(-t\psi+(1-\varepsilon)A(t)(-t_{1} )\frac{1}{A(0)}+t_{1})\big{)}}{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e ^{-2\varphi_{0}}}\leq 0\] Letting \(\varepsilon\to 0+0\), we obtain \[\liminf_{t_{1}\to+\infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0)\varphi<-t_{1}\} \cap U}|f_{0}|^{2}e^{-2\varphi_{0}}(-2\psi)}{\int_{\{A(0)\varphi<-t_{1}\}\cap U }|f_{0}|^{2}e^{-2\varphi_{0}}}\geq\frac{1}{A(0)}\frac{A(0)-A(t)}{-t}. \tag{4.4}\] Letting \(t\to 0+0\), we obtain \[\frac{1}{A(0)}\lim_{t\to 0+0}\frac{A(0)-A(t)}{-t}\leq\liminf_{t_{1}\to+ \infty}\frac{1}{2t_{1}}\frac{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{- 2\varphi_{0}}(-2\psi)}{\int_{\{A(0)\varphi<-t_{1}\}\cap U}|f_{0}|^{2}e^{-2 \varphi_{0}}}. \tag{4.5}\] Combining inequality (4.5) and inequality (4.2), we obtain Proposition 4.6. ### Zhou numbers related to \(\Phi_{o,\max}\) Let \(\Phi_{o,\max}\) be a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\), and let Tian function \(A(t):=c_{o}(\Phi_{o,\max},t\psi)\). The definition of \(\Phi_{o,\max}\) shows that \(A(0)=1\). Note that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\{\Phi_{o,\max},\frac{1}{t}\psi\}}\) is not integrable near \(o\) if and only if \(b\leq\sigma(\psi,\Phi_{o,\max})\), then Lemma 4.4 shows **Proposition 4.7**.: _For any plurisubharmonic function \(\psi\) near \(o\),_ \[\sigma(\psi,\Phi_{o,\max})=\lim_{t\to 0+0}\frac{A(0)-A(-t)}{t}.\] As \(\psi\leq\sigma(\psi,\Phi_{o,\max})\Phi_{o,\max}+O(1)\) near \(o\), the definition of \(A(t)\) shows that \(A(t)\geq A(0)+\sigma(\psi,\Phi_{o,\max})t\) for any \(t>0\). Note that \(A(t)\) is concave, then \(A(t)\leq A(0)+\sigma(\psi,\Phi_{o,\max})t\) for any \(t>0\), which implies that **Proposition 4.8**.: _The Tian function \(A(t)\) is differentiable at \(t=0\), and_ \[A(t)=A(0)+\sigma(\psi,\Phi_{o,\max})t\] _holds for any \(t\geq 0\)._ ## 5. Proofs of Proposition 1.6, Proposition 1.7, Theorem 1.8 and Corollary 1.9 In this section, we prove Proposition 1.6, Proposition 1.7, Theorem 1.8 and Corollary 1.9. Firstly, we prove Proposition 1.6 by using Proposition 4.8. Proof of Proposition 1.6.: Firstly, we prove the statement (1) in Proposition 1.6. 
Denote \[A(t):=\sup\big{\{}c:|f_{0}|^{2}|f|^{2t}e^{-2\varphi_{0}-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}}.\] By Proposition 4.8, we have \(A(\alpha)=1+\sigma(\log|f|,\Phi_{o,\max})\alpha\), which implies that \[|f_{0}|^{2}|f|^{2\alpha}e^{-2\varphi_{0}-2(1+\alpha\sigma(\log|f|,\Phi_{o,\max }))\Phi_{o,\max}}\] is not integrable near \(o\) by Theorem 2.8. As \(\log|f|\leq\sigma(\log|f|,\Phi_{o,\max})\Phi_{o,\max}+O(1)\) and there exists \(N\gg 0\) such that \(|f_{0}|^{2}|z|^{2N}e^{-2\varphi_{0}-2\Phi_{o,\max}}\) is integrable near \(o\), we know that \[|f_{0}|^{2}|f|^{2\alpha}|z|^{2N}e^{-2\varphi_{0}-2\big{(}1+\alpha\sigma(\log| f|,\Phi_{o,\max})\big{)}\Phi_{o,\max}}\] is integrable near \(o\). Let \(\tilde{\varphi}\) be a subharmonic function near \(o\) satisfying that \[\tilde{\varphi}\geq\big{(}1+\alpha\sigma(\log|f|,\Phi_{o,\max})\big{)}\Phi_{o,\max}+O(1)\] and \(|f|^{2\alpha}|f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\) is not integrable near \(o\). Note that \[\log|f|\leq\sigma(\log|f|,\Phi_{o,\max})\Phi_{o,\max}+O(1),\] then \[\tilde{\varphi}\geq\frac{1+\alpha\sigma(\log|f|,\Phi_{o,\max})}{\sigma(\log|f |,\Phi_{o,\max})}\log|f|+O(1)\] and \[|f|^{2\alpha}|f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\leq C|f_{0}|^{2}e^ {-2\varphi_{0}}e^{-\frac{2}{1+\alpha\sigma(\log|f|,\Phi_{o,\max})}\tilde{ \varphi}}.\] As \(|f|^{2\alpha}|f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\) is not integrable near \(o\), we know that \[|f_{0}|^{2}e^{-2\varphi_{0}-\frac{2}{1+\alpha\sigma(\log|f|,\Phi_{o,\max})} \tilde{\varphi}}\] is not integrable near \(o\). Note that \(\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\). Then we obtain \[\tilde{\varphi}=(1+\alpha\sigma(\log|f|,\Phi_{o,\max}))\Phi_{o,\max}+O(1),\] which shows that \(\big{(}1+\alpha\sigma(\log|f|,\Phi_{o,\max})\big{)}\Phi_{o,\max}\) is a local Zhou weight related to \(|f|^{2\alpha}|f_{0}|^{2}e^{-2\varphi_{0}}\). Next, we give the proof of statement (2), which is similar to the proof of statement (1). Denote that \[A(t):=\sup\big{\{}c:|f_{0}|^{2}e^{-2\varphi_{0}}e^{2t\varphi_{0}}e^{-2c\Phi_{o, \max}}\text{ is integrable near }o\big{\}}.\] By Proposition 4.8, we have \(A(1)=1+\sigma(\varphi_{0},\Phi_{o,\max})\), which implies that \[|f_{0}|^{2}e^{-2\big{(}1+\sigma(\varphi_{0},\Phi_{o,\max})\big{)}\Phi_{o,\max}}\] is not integrable near \(o\). As \(\varphi_{0}\leq\sigma(\varphi_{0},\Phi_{o,\max})\Phi_{o,\max}+O(1)\) and there exists \(N\gg 0\) such that \(|f_{0}|^{2}|z|^{2N}e^{-2\varphi_{0}-2\Phi_{o,\max}}\) is integrable near \(o\), we know that \(|f_{0}|^{2}|z|^{2N}e^{-2\big{(}1+\sigma(\varphi_{0},\Phi_{o,\max})\big{)}\Phi_ {o,\max}}\) is integrable near \(o\). Let \(\tilde{\varphi}\) be a subharmonic function near \(o\) satisfying that \[\tilde{\varphi}\geq\big{(}1+\sigma(\varphi_{0},\Phi_{o,\max})\big{)}\Phi_{o, \max}+O(1)\] and \(|f_{0}|^{2}e^{-2\tilde{\varphi}}\) is not integrable near \(o\). 
It follows from \(\varphi_{0}\leq\sigma(\varphi_{0},\Phi_{o,\max})\Phi_{o,\max}+O(1)\) that \[|f_{0}|^{2}e^{-2\tilde{\varphi}} =e^{2\varphi_{0}}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\tilde{\varphi}}\] \[\leq Ce^{2\sigma(\varphi_{0},\Phi_{o,\max})\Phi_{o,\max}}|f_{0}|^ {2}e^{-2\varphi_{0}}e^{-2\tilde{\varphi}}\] \[\leq C_{1}|f_{0}|^{2}e^{-2\varphi_{0}}e^{\frac{\sigma(\varphi_{0},\Phi_{o,\max})}{1+\sigma(\varphi_{0},\Phi_{o,\max})\tilde{\varphi}}}e^{-2 \tilde{\varphi}}\] \[=C_{1}|f_{0}|^{2}e^{-2\varphi_{0}}e^{-\frac{2}{1+\sigma(\varphi_{ 0},\varphi)}\tilde{\varphi}}.\] As \(|f_{0}|^{2}e^{-2\tilde{\varphi}}\) is not integrable near \(o\), we know that \(|f_{0}|^{2}e^{-2\varphi_{0}-\frac{2}{1+\sigma(\varphi_{0},\Phi_{o,\max})} \tilde{\varphi}}\) is not integrable near \(o\). Note that \(\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\). Then we obtain \[\tilde{\varphi}=\big{(}1+\sigma(\varphi_{0},\Phi_{o,\max})\big{)}\Phi_{o,\max }+O(1),\] which shows that \(\big{(}1+\sigma(\varphi_{0},\Phi_{o,\max})\big{)}\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}\). The following lemma will be used in the proof of Proposition 1.7. **Lemma 5.1**.: _Let \(\varphi\) be a local Zhou weight related to \(|f_{0}|^{2}\) at \(o\) on a bounded pseudoconvex domain \(D\). Let \(\{f_{j}\}_{j=1,\cdots,N}\) be holomorphic functions on \(D\) such that \(\{(f_{j},o)\}_{j}\) generates \(\mathcal{I}(m\varphi)_{o}\). Then_ \[\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi)+1}\log\sum_{j}|f_{j}|+O(1)\geq \varphi\geq\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi)}\log\sum_{j}|f_{j}|+O(1).\] Proof.: As \(\varphi\) is a local Zhou weight related to \(|f_{0}|^{2}\), we have \[\log\sum_{j}|f_{j}|\leq\sigma\big{(}\log\sum_{j}|f_{j}|,\varphi)\varphi+O(1).\] Thus, it suffices to prove \[\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi)+1}\log\sum_{j}|f_{j}|+O(1)\geq\varphi.\] Denote \(A(t):=c_{o}(\varphi,t\log\sum_{j}|f_{j}|)\). As \(\{(f_{j},o)\}_{j}\) generates \(\mathcal{I}(m\varphi)_{o}\), we have \(A(1)\geq m\), which implies that \[\sigma(\log\sum_{j}|f_{j}|,\varphi)\geq\lim_{t\to 0+0}\frac{A(t)-1}{t}\geq m-1 \tag{5.1}\] by Proposition 4.7 and the concavity property of \(A(t)\). By Demailly's approximation theorem (see [11], see also Lemma 7.2), we have \[m\varphi\leq\log\sum_{j}|f_{j}|+O(1). \tag{5.2}\] Thus, we have \(\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi)+1}\log\sum_{j}|f_{j}|+O(1)\geq\varphi\). Now, we prove Proposition 1.7. Proof of Proposition 1.7.: Let \(\{f_{j}\}_{j=1,\cdots,N}\) be holomorphic functions on bounded pseudoconvex domain \(D\) such that \(\{(f_{j},o)\}_{j}\) generates \(\mathcal{I}(m\varphi_{1})_{o}\). By definition, we have \(\sigma(\log\sum_{j}|f_{j}|,\varphi_{i})=\sigma(\max_{j}\{\log|f_{j}|\}, \varphi_{i})=\min_{j}\{\sigma(\log|f_{j}|,\varphi_{i})\}\) for \(i\in\{1,2\}\) (see the proof of statement (3) of Theorem 1.8). Then we have \[\sigma(\log\sum_{j}|f_{j}|,\varphi_{1})\leq\sigma(\log\sum_{j}|f_{j}|,\varphi_ {2}).\] It follows from inequality (5.1) and (5.2) that \(\sigma(\log\sum_{j}|f_{j}|,\varphi_{1})\in[m-1,m]\). Then we have \[\varphi_{2} \geq\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi_{2})}\log\sum_{j} |f_{j}|+O(1)\] \[\geq\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi_{1})}\log\sum_{j} |f_{j}|+O(1)\] \[\geq\frac{m+1}{m-1}\frac{1}{\sigma(\log\sum_{j}|f_{j}|,\varphi_{1 })+1}\log\sum_{j}|f_{j}|+O(1)\] \[\geq\frac{m+1}{m-1}\varphi_{1}+O(1)\] near \(o\), where the last inequality follows from Lemma 5.1. 
Hence, we have \[\sigma(\varphi_{2},\varphi_{1})=1\] by letting \(m\to+\infty\). Note that \(\varphi_{2}\) is a local Zhou weight related to \(|f_{0,2}|^{2}\), then we have \[\varphi_{1}\leq\sigma(\varphi_{2},\varphi_{1})\varphi_{2}+O(1)=\varphi_{2}+O(1)\] near \(o\). Finally, we prove Theorem 1.8 and Corollary 1.9. Proof of Theorem 1.8.: It follows from Propositions 4.6, 4.7 and 4.8 that Theorem 1.8 holds. Proof of Corollary 1.9.: The statement (1) holds by Theorem 1.8. Denote \(a_{i}:=\sigma(\log|f_{i}|,\Phi_{o,\max})\) for \(i\in\{1,2\}\), then we have \(\log|f_{i}|\leq a_{i}\Phi_{o,\max}+O(1)\) for \(i\in\{1,2\}\), which implies that \[\log|f_{1}+f_{2}| \leq\log(|f_{1}|+|f_{2}|)\] \[\leq\log\big{(}e^{a_{1}\Phi_{o,\max}}+e^{a_{2}\Phi_{o,\max}}\big{)} +O(1)\] \[\leq\min\{a_{1},a_{2}\}\Phi_{o,\max}+O(1).\] Thus, \(\sigma(\log|f_{1}+f_{2}|,\Phi_{o,\max})\geq\min\big{\{}\sigma(\log|f_{1}|,\Phi _{o,\max}),\sigma(\log|f_{2}|,\Phi_{o,\max})\big{\}}\). Denote that \(b_{i}:=\sigma(\psi_{i},\Phi_{o,\max})\) for \(i\in\{1,2\}\), then we have \(\psi_{i}\leq b_{i}\Phi_{o,\max}+O(1)\) for \(i\in\{1,2\}\), which implies that \[\max\{\psi_{1},\psi_{2}\} \leq\max\{b_{1}\Phi_{o,\max},b_{2}\Phi_{o,\max}\}+O(1)\] \[=\min\{b_{1},b_{2}\}\Phi_{o,\max}+O(1).\] Thus, \(\sigma(\max\{\psi_{1},\psi_{2}\},\Phi_{o,\max})\geq\min\big{\{}\sigma(\psi_{1},\Phi_{o,\max}),\sigma(\psi_{2},\Phi_{o,\max})\big{\}}\). \(\max\{\psi_{1},\psi_{2}\}\geq\psi_{i}\) for \(i\in\{1,2\}\) implies \(\sigma(\max\{\psi_{1},\psi_{2}\},\Phi_{o,\max})\leq\min\big{\{}\sigma(\psi_{1 },\Phi_{o,\max}),\sigma(\psi_{2},\Phi_{o,\max})\big{\}}\). Then the statement (3) holds. ## 6. Jumping numbers and Zhou valuations: proof of Theorem 1.11 Let \(f_{0}=(f_{0,1},\cdots,f_{0,m})\) be a vector, where \(f_{0,1},\cdots,f_{0,m}\) are holomorphic functions near \(o\). Denote \(|f_{0}|^{2}=|f_{0,1}|^{2}+\cdots+|f_{0,m}|^{2}\). Let \(\Phi_{o,\max}\) be a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\). Let \(G\) be a holomorphic function on \(U\). Denote \[k_{\varphi_{0}}:=\sigma(\varphi_{0},\Phi_{o,\max}).\] Recall the definition of jumping number \[c_{o}^{G}(\Phi_{o,\max}):=\sup\big{\{}c:|G|^{2}e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}}\] (see [19, 20]). In this section, we discuss the relation between the jumping numbers and the Zhou valuations. The following remark shows that it suffices to consider the case \(\varphi\equiv 0\). **Remark 6.1**.: _It follows from Proposition 1.6 that \((1+k_{\varphi_{0}})\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}\). 
By definitions, we have_ \[\sigma\big{(}\log|G|,(1+k_{\varphi_{0}})\Phi_{o,\max}\big{)}=\frac{\nu(G,\Phi _{o,\max})}{1+k_{\varphi_{0}}}\] _and_ \[c_{o}^{G}\big{(}(1+k_{\varphi_{0}})\Phi_{o,\max}\big{)} :=\sup\big{\{}c:|G|^{2}e^{-2c(1+k_{\varphi_{0}})\Phi_{o,\max}} \text{ is integrable near }o\big{\}}\] \[=\sup\bigg{\{}\frac{c_{1}}{1+k_{\varphi_{0}}}:|G|^{2}e^{-2c_{1} \Phi_{o,\max}}\text{ is integrable near }o\bigg{\}}\] \[=\frac{1}{1+k_{\varphi_{0}}}\sup\big{\{}c:|G|^{2}e^{-2c\Phi_{o, \max}}\text{ is integrable near }o\big{\}}\] \[=\frac{1}{1+k_{\varphi_{0}}}c_{o}^{G}(\Phi_{o,\max}).\] _Hence when discussing the relation between the jumping number \(\nu(G,\Phi_{o,\max})\) and the Zhou valuation \(\sigma(\cdot,\Phi_{o,\max})\), it suffices to consider \((1+k_{\varphi_{0}})\Phi_{o,\max}\), where \((1+k_{\varphi_{0}})\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}\)._ Let \(\Phi_{o,\max}\) be any local Zhou weight related to \(|f_{0}|^{2}\). Denote \[k_{1}:=\sigma(\log|f_{0}|,\Phi_{o,\max})\] and \[k_{2}:=\nu(G,\Phi_{o,\max}).\] We introduce following notations. Let us consider the following Tian functions: \[A_{1}(t):=\sup\big{\{}c:|f_{0}|^{2t}e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}},\] \[A_{2}(t):=\sup\big{\{}c:|G|^{2t}e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}},\] and \[A_{3}(s,t):=\sup\big{\{}c:|G|^{2s}|f_{0}|^{2t}e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}}.\] Let \(c_{o}(\Phi_{o,\max}):=\sup\big{\{}c:e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\big{\}}\) be the complex singularity exponent of \(\Phi_{o,\max}\) at \(o\). It is easy to see that \(A_{3}(0,t)=A_{1}(t)\), \(A_{3}(s,0)=A_{2}(s)\), and \[A_{3}(0,0)=A_{1}(0)=A_{2}(0)=c_{o}(\Phi_{o,\max}).\] We note that \(A_{1}(t)\) is concave with respect to \(t\in\mathbb{R}\). By the definitions of \(A_{1}(t)\) and \(\Phi_{o,\max}\), we know \(A_{1}(1)=1\). It follows from Proposition 4.8 that, when \(t\geq 1\), \[A_{1}(t)=k_{1}t-k_{1}+1.\] We present an estimate for Tian function \(A_{2}(t)\) in the view of Zhou valuation. **Proposition 6.2**.: _When \(t\geq 0\), we have_ \[c_{o}(\Phi_{o,\max})+k_{2}t=A_{2}(0)+k_{2}t\leq A_{2}(t)\leq k_{2}t-k_{1}+1,\] _where \(k_{1}=\sigma(\log|f_{0}|,\Phi_{o,\max})\) and \(k_{2}=\nu(G,\Phi_{o,\max})\)._ We need the following Lemma 6.3 and 6.4 in the proof of Proposition 6.2. **Lemma 6.3**.: _Let \(m\geq 0\) be fixed, then we have_ \[A_{3}(s,m+1)=1+mk_{1}+k_{2}s, \tag{6.1}\] _holds for any \(s\geq 0\)._ Proof.: Recall that \(k_{1}:=\sigma(\log|f_{0}|,\Phi_{o,\max})\). It follows from Proposition 1.6 that \(\Phi_{o,\max}^{f_{0}^{m+1}}=(1+mk_{1})\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2m+2}\) near \(o\). Then we have \[\begin{split}& A_{3}(s,m+1)\\ =&\sup\big{\{}c:|G|^{2s}|f_{0}|^{2(m+1)}e^{-2c\Phi_{o,\max}} \text{ is integrable near }o\big{\}}\\ =&\sup\big{\{}c:|G|^{2s}|f_{0}|^{2m+2}e^{-2c\frac{1} {1+mk_{1}}\Phi_{o,\max}^{f_{0}^{m+1}}}\text{ is integrable near }o\big{\}}\\ =&(1+mk_{1})\sup\big{\{}c:|G|^{2s}|f_{0}|^{2m+2}e^{ -2c\Phi_{o,\max}^{f_{0}^{m+1}}}\text{ is integrable near }o\big{\}}.\end{split} \tag{6.2}\] Let \[c_{o}\big{(}\Phi_{o,\max}^{f_{0}^{m+1}},s\log|G|\big{)}:=\sup\big{\{}c:|G|^{2s }|f_{0}|^{2m+2}e^{-2c\Phi_{o,\max}^{f_{0}^{m+1}}}\text{ is integrable near }o\big{\}}.\] As \(\Phi_{o,\max}^{f_{0}^{m+1}}=(1+mk_{1})\Phi_{o,\max}\) and \(k_{2}=\nu(G,\Phi_{o,\max})\), we know that \(\nu(G,\Phi_{o,\max}^{f_{0}^{m+1}})=\frac{k_{2}}{1+mk_{1}}\). 
It follows from Proposition 4.8 that, when \(s\geq 0\), \[\begin{split} c_{o}(\Phi_{o,\max}^{f_{0}^{m+1}},s\log|G|)=& c_{o}(\Phi_{o,\max}^{f_{0}^{m+1}},0)+\nu(G,\Phi_{o,\max}^{f_{0}^{m+1}})s \\ =& 1+\frac{k_{2}}{1+mk_{1}}s.\end{split} \tag{6.3}\] Combining (6.2) and (6.3), when \(m\geq 0\) is fixed, we have that \[\begin{split} A_{3}(s,m+1)&=(1+mk_{1})\big{(}1+ \frac{k_{2}}{1+mk_{1}}s\big{)}\\ &=1+mk_{1}+k_{2}s\end{split}\] holds for any \(s\geq 0\). Lemma 6.3 is proved. **Lemma 6.4**.: _Let \(t_{1}\geq t_{2}\) be two nonnegative integers. Let \(H\) be a holomorphic function near \(o\). Let \(F\) be any holomorphic function near \(o\) and \(k:=\sigma(\log|F|,\Phi_{o,\max})\). Denote_ \[A_{F}(t):=\sup\{c:|F|^{2t}|H|^{2}e^{-2c\Phi_{o,\max}}\text{ is integrable near }o\ \}.\] _Then we have \(A_{F}(t_{1})\geq A_{F}(t_{2})+k(t_{1}-t_{2})\)._ Proof.: It follows from \(k=\sigma(\log|F|,\Phi_{o,\max})\) and the definition of \(\sigma(\log|F|,\Phi_{o,\max})\) that \[\log|F|\leq k\Phi_{o,\max}+O(1)\] holds near \(o\), which implies that \(|F|^{2(t_{1}-t_{2})}te^{-2k(t_{1}-t_{2})\Phi_{o,\max}}\leq C_{1}\) for some positive constant \(C_{1}\) on a neighborhood \(U\) of \(o\). Let \(c\) be any real number such that \(|F|^{2t_{2}}|H|^{2}e^{-2c\Phi_{o,\max}}\) is integrable near \(o\). Then we have \[\begin{split}|F|^{2t_{1}}|H|^{2}e^{-2k(t_{1}-t_{2})\Phi_{o,\max }}e^{-2c\Phi_{o,\max}}\\ =&|F|^{2t_{2}}|F|^{2(t_{1}-t_{2})}|H|^{2}e^{-2k(t_{1 }-t_{2})\Phi_{o,\max}}e^{-2c\Phi_{o,\max}}\\ \leq& C_{1}|F|^{2t_{2}}|H|^{2}e^{-2c\Phi_{o,\max}} \end{split}\] on \(U\). Hence \(|F|^{2t_{1}}|H|^{2}e^{-2k(t_{1}-t_{2})\Phi_{o,\max}}e^{-2c\Phi_{o,\max}}\) is integrable near \(o\), and \(A_{F}(t_{1})\geq c+k(t_{1}-t_{2})\). By the choice of \(c\), \(A_{F}(t_{1})\geq A_{F}(t_{2})+k(t_{1}-t_{2})\). Lemma 6.4 is proved. Now we prove Proposition 6.2. Proof of Proposition 6.2.: Note that \(A_{2}(0)=c_{o}(\Phi_{o,\max})\). It follows from Lemma 6.4 that \(A_{2}(t)\geq A_{2}(0)+k_{2}t=c_{o}(\Phi_{o,\max})+k_{2}t\). Hence the first inequality in Proposition 6.2 has been proved. We now prove the inequality \(A_{2}(t)\leq k_{2}t-k_{1}+1\) by contradiction. If not, there exists a positive real number \(t_{0}\) such that \(A_{2}(t_{0})>k_{2}t_{0}-k_{1}+1\). Let \(n=\lfloor t_{0}\rfloor\) be the largest integer less than or equal to \(t_{0}\). Then it follows from Lemma 6.4 that \(A_{3}(n+1,0)=A_{2}(n+1)\geq A_{2}(t_{0})+k_{2}(n+1-t_{0})>k_{2}t_{0}-k_{1}+1+k_ {2}(n+1-t_{0})>k_{2}(n+1)-k_{1}+1\). Note that \(k_{1}=\sigma(\log|f_{0}|,\Phi_{o,\max})\). Again, by Lemma 6.4, we have \[\begin{split} A_{3}(n+1,1)\geq& A_{3}(n+1,0)+k_{1}\\ >& k_{2}(n+1)-k_{1}+1+k_{1}\\ =& k_{2}(n+1)+1.\end{split} \tag{6.4}\] However, it follows from equality (6.1) that \(A_{3}(n+1,1)=k_{2}(n+1)+1\), which contradicts to inequality (6.4). Hence \(A_{2}(t)\leq k_{2}t-k_{1}+1\) holds when \(t\in[0,+\infty)\). Proposition 6.2 has been proved. Now, we prove Theorem 1.11. Proof of Theorem 1.11.: It follows from Remark 6.1 and Proposition 6.2 (taking \(t=1\)) that Theorem 1.11 holds. ## 7. Proofs of Theorem 1.12 and Corollary 1.14 In this section, we prove Theorem 1.12 and Corollary 1.14. Firstly, we consider the case of \(u,v\geq N\log|z|\) for \(N\gg 0\). **Lemma 7.1**.: _Let \(u,\,v\) be two plurisubharmonic functions near \(o\) satisfying that there exists \(N\gg 0\) such that \(u\geq N\log|z|\) and \(v\geq N\log|z|\) near \(o\). 
Assume that \(\sigma(u,\Phi_{o,\max})\leq\sigma(v,\Phi_{o,\max})\) for any local Zhou weight \(\Phi_{o,\max}\) near \(o\), then \(\mathcal{I}(tv)_{o}\subset\mathcal{I}(tu)_{o}\) for any \(t>0\)._ Proof.: Theorem 2.8 shows that for any holomorphic function \(f\) and plurisubharmonic function \(\varphi\) near \(o\), \((f,o)\in\mathcal{I}(\varphi)_{o}\) if and only if \(c_{o}^{f}(\varphi)>1\). We prove Lemma 7.1 by contradiction: if not, there exists a holomorphic function \(f\) near \(o\) such that \[t_{0}:=c_{o}^{f}(u)<c_{o}^{f}(v).\] Then we have \(|f|^{2}e^{-2t_{0}u}\) is not integrable near \(o\) and \(|f|^{2}|z|^{2t_{0}N}e^{-2t_{0}u}\) is integrable near \(o\) for \(u\geq N\log|z|\). By Remark 1.3, there exists a local Zhou weight \(\Phi_{o,\max}\) related to \(|f|^{2}\) near \(o\) satisfying that \[\Phi_{o,\max}\geq t_{0}u \tag{7.1}\] and \(|f|^{2}e^{-2\Phi_{o,\max}}\) is not integrable near \(o\). Noting that \(t_{0}u\leq\sigma(t_{0}u,\Phi_{o,\max})\Phi_{o,\max}+O(1)\) near \(o\), we have \(|f|^{2}e^{-2\frac{t_{0}}{\sigma(t_{0}u,\Phi_{o,\max})}u}\) is not integrable near \(o\). As \(c_{o}^{f}(u)=t_{0}\), we have \(\sigma(t_{0}u,\Phi_{o,\max})\leq 1\). Combining inequality (7.1), we have \[\sigma(t_{0}u,\Phi_{o,\max})=1.\] By the assumption of Lemma 7.1, we have \[\sigma(t_{0}v,\Phi_{o,\max})\geq 1,\] then \(|f|^{2}e^{-2t_{0}v}\) is not integrable near \(o\) which contradicts to \(c_{o}^{f}(v)>t_{0}\). Thus, Lemma 7.1 holds. Let us recall _Demailly's approximation theorem_. **Lemma 7.2** (see [11]).: _Let \(D\subset\mathbb{C}^{n}\) be a bounded pseudoconvex domain, and let \(\varphi\in\operatorname{PSH}(D)\). For any positive integer \(m\), let \(\{\sigma_{m,k}\}_{k=1}^{\infty}\) be an orthonormal basis of \(A^{2}(D,2m\varphi):=\big{\{}f\in\mathcal{O}(D):\int_{D}|f|^{2}e^{-2m\varphi}d \lambda<+\infty\big{\}}\). Denote that_ \[\varphi_{m}:=\frac{1}{2m}\log\sum_{k=1}^{\infty}|\sigma_{m,k}|^{2}\] _on \(D\). Then there exist two positive constants \(c_{1}\) (depending only on \(n\) and diameter of \(D\)) and \(c_{2}\) such that_ \[\varphi(z)-\frac{c_{1}}{m}\leq\varphi_{m}(z)\leq\sup_{|\tilde{z}-z|<r}\varphi (\tilde{z})+\frac{1}{m}\log\frac{c_{2}}{r^{n}} \tag{7.2}\] _for any \(z\in D\) satisfying \(\{\tilde{z}\in\mathbb{C}^{n}:|\tilde{z}-z|<r\}\subset\subset D\). Especially, \(\varphi_{m}\) converges to \(\varphi\) pointwisely and in \(L^{1}_{\rm loc}\) on \(D\)._ **Remark 7.3**.: _Let \((\tau_{l})\) be an orthonormal basis of a closed subspace \(H\) of the space \(A^{2}(\Omega,2m\varphi)\), then we can see that_ \[\sum_{l}|\tau_{l}(z)|^{2}=\sup\left\{|f(z)|^{2}:f\in H\ \&\ \int_{\Omega}|f|^{2}e^{-2m \varphi}\leq 1\right\}\] _for any \(z\in\Omega\)._ The following two lemmas will be used in the proofs of Theorem 1.12 and Corollary 1.14, and we prove them in Appendix. **Lemma 7.4**.: _For any plurisubharmonic function \(u\) near \(o\) and any holomorphic function \(f\) near \(o\), we have \(\lim_{N\to+\infty}c_{o}^{f}(\max\{\varphi,N\log|z|\})=c_{o}^{f}(\varphi)\)._ **Lemma 7.5**.: _Let \(u\), \(v\) be two plurisubharmonic functions near \(o\). Assume that \(u\) has analytic singularities near \(o\). If \(v\leq(1-\epsilon)u+O(1)\) for any \(\epsilon>0\), then \(v\leq u+O(1)\)._ Now, we prove Theorem 1.12. Proof of Theorem 1.12.: Note that \((2)\Rightarrow(1)\) is trivial, then we only prove that \((1)\Rightarrow(3)\) and \((3)\Rightarrow(2)\). We prove Theorem 1.12 in two steps. **Step 1**. \((1)\Rightarrow(3)\). Assume that the statement \((1)\) holds. 
Then there exist two sequences of numbers \(\{t_{i,j}\}_{j\in\mathbb{Z}_{\geq 0}}\) (\(t_{i,j}\to+\infty\) when \(j\to+\infty\), \(i=1,2\)) such that \(\lim_{j\to+\infty}\frac{t_{1,j}}{t_{2,j}}=1\) and \[\mathcal{I}(t_{1,j}v)_{o}\subset\mathcal{I}(t_{2,j}u)_{o}\] for any \(j\). Let \(\{f_{j,1},\ldots,f_{j,m_{j}}\}\) be the generators set of \(\mathcal{I}(t_{1,j}v)_{o}\). Then it follows from Demailly's approximation theorem (Lemma 7.2) that \[v\leq\frac{1}{2t_{1,j}}\log\sum_{1\leq l\leq m_{j}}|f_{j,l}|^{2}+O(1) \tag{7.3}\] near \(o\). Let \(\Phi_{o,\max}\) be any local Zhou weight near \(o\). Note that \(\mathcal{I}(t_{1,j}v)_{o}\subset\mathcal{I}(t_{2,j}u)_{o}\) and \[\int_{U}\left(\sum_{1\leq l\leq m_{j}}|f_{j,l}|^{2}\right)e^{-2t_{2,j}u}<+\infty\] for a neighborhood \(U\) of \(o\), then \[\int_{U}\left(\sum_{1\leq l\leq m_{j}}|f_{j,l}|^{2}\right)e^{-2t_{2,j}\sigma(u,\Phi_{o,\max})\Phi_{o,\max}}<+\infty, \tag{7.4}\] which implies that \[\sigma\left(\log\sum_{1\leq l\leq m_{j}}|f_{j,l}|,\Phi_{o,\max}\right)\geq t_ {2,j}\sigma(v,\Phi_{o,\max})-C_{1} \tag{7.5}\] by Theorem 1.11, where \(C_{1}\) is a constant independent of \(j\). Combining inequality (7.3), we have \[v\leq\frac{1}{2t_{1,j}}\log\sum_{1\leq l\leq m_{j}}|f_{j,l}|^{2}+O(1)\leq\left( \frac{t_{2,j}}{t_{1,j}}\sigma(v,\Phi_{o,\max})-\frac{C_{1}}{t_{1,j}}\right)\Phi _{o,\max}+O(1)\] near \(o\), which shows that \[\sigma(v,\Phi_{o,\max})\geq\frac{t_{2,j}}{t_{1,j}}\sigma(v,\Phi_{o,\max})- \frac{C_{1}}{t_{1,j}}.\] As \(\lim_{j\to+\infty}\frac{t_{1,j}}{t_{2,j}}=1\), we get \[\sigma(v,\Phi_{o,\max})\geq\sigma(u,\Phi_{o,\max})\] by letting \(j\to+\infty\). Thus, \((1)\Rightarrow(3)\) has been proved. **Step 2.**\((3)\Rightarrow(2)\). Using the statement \((3)\) and Corollary 1.9, we get that \[\sigma(\varphi_{0}+tu,\Phi_{o,\max})\] \[= \sigma(\varphi_{0},\Phi_{o,\max})+t\sigma(u,\Phi_{o,\max})\] \[\leq \sigma(\varphi_{0},\Phi_{o,\max})+t\sigma(v,\Phi_{o,\max})\] \[= \sigma(\varphi_{0}+tv,\Phi_{o,\max})\] holds for any local Zhou weight \(\Phi_{o,\max}\) near \(o\) and any \(t>0\), then it suffices to prove that: if \(\sigma(u,\Phi_{o,\max})\leq\sigma(v,\Phi_{o,\max})\) holds for any local Zhou weight \(\Phi_{o,\max}\) near \(o\), then \(\mathcal{I}(v)_{o}\subset\mathcal{I}(u)_{o}\). Note that Theorem 2.8 shows that for any holomorphic function \(f\) and plurisubharmonic function \(\varphi\) near \(o\), \((f,o)\in\mathcal{I}(\varphi)_{o}\) if and only if \(c_{o}^{f}(\varphi)>1\). For any \(N>0\), it follows from the statement \((3)\) and Corollary 1.9 that \[\sigma\big{(}\max\{u,N\log|z|\},\Phi_{o,\max}\big{)}\] \[= \min\big{\{}\sigma(u,\Phi_{o,\max}),\sigma(N\log|z|,\Phi_{o,\max })\big{\}}\] \[\leq \min\big{\{}\sigma(v,\Phi_{o,\max}),\sigma(N\log|z|,\Phi_{o,\max })\big{\}}\] \[= \sigma\big{(}\max\{v,N\log|z|\},\Phi_{o,\max}\big{)}\] for any local Zhou weight \(\Phi_{o,\max}\) near \(o\). Using Lemma 7.1, we have \[\mathcal{I}\big{(}t\max\{v,N\log|z|\}\big{)}_{o}\subset\mathcal{I}\big{(}t \max\{u,N\log|z|\}\big{)}_{o}\] for any \(t>0\) and any \(N>0\), which implies that \[c_{o}^{f}\big{(}\max\{u,N\log|z|\}\big{)}=c_{o}^{f}\big{(}\max\{v,N\log|z|\} \big{)}\] for any \(N>0\) and any holomorphic function \(f\) near \(o\). By Lemma 7.4, we have \[c_{o}^{f}(u)=c_{o}^{f}(v)\] for any holomorphic function \(f\) near \(o\), which implies \(\mathcal{I}(tu)_{o}=\mathcal{I}(tv)_{o}\) for any \(t>0\). Thus, Theorem 1.12 holds. 
**Remark 7.6**.: _Note that "inequality (7.4) \(\Rightarrow\) inequality (7.5)" also holds when replacing \(\Phi_{o,\max}\) with any tame maximal weight. Thus, the Step 1 in the above proof shows that statement \((1)\) implies \(\sigma(u,\varphi)\leq\sigma(v,\varphi)\) for any tame maximal weight \(\varphi\)._ Finally, we prove Corollary 1.14. Proof of Corollary 1.14.: It follows from Theorem 1.12 that it suffices to prove \((3)\Rightarrow(1)\). Assume that \(\sigma(u,\Phi_{o,\max})\leq\sigma(v,\Phi_{o,\max})\) holds for any local Zhou weight \(\Phi_{o,\max}\) near \(o\). Theorem 1.12 shows that \[\mathcal{I}(tv)_{o}\subset\mathcal{I}(tu)_{o}\] for any \(t>0\). Let \(D\) be a small enough neighborhood of \(o\), and let \(\{\sigma_{m,k}\}_{k=1}^{\infty}\) be an orthonormal basis of \(A^{2}(D,2mu):=\big{\{}f\in\mathcal{O}(D):\int_{D}|f|^{2}e^{-2mu}d\lambda<+ \infty\big{\}}\). Without loss of generality, assume that \(e^{u}\) is smooth on \(D\). Denote that \[u_{m}:=\frac{1}{2m}\log\sum_{k=1}^{\infty}|\sigma_{m,k}|^{2}\] on \(D\). Then it follows from Lemma 7.2 and \(\mathcal{I}(mv)_{o}\subset\mathcal{I}(mu)_{o}\) that \[v\leq u_{m}+O(1)\] near \(o\) and \[u_{m}(z)\leq\sup_{|\tilde{z}-z|<r}u(\tilde{z})+\frac{1}{m}\log\frac{c_{2}}{r^{ n}} \tag{7.6}\] for any \(z\in D\) satisfying \(\{\tilde{z}\in\mathbb{C}^{n}:|\tilde{z}-z|<r\}\subset\subset D\). As \(e^{u}\) is smooth on \(D\), we have \(\sup_{|\tilde{z}-z|<r}e^{u(\tilde{z})}\leq e^{u(z)}+Cr\). Taking \(r=e^{u(z)}\), inequality (7.6) implies that \[u_{m}\leq\left(1-\frac{n}{m}\right)u+O(1)\] near \(o\), which implies \(v\leq\left(1-\frac{n}{m}\right)u+O(1)\) near \(o\) for any \(m\). It follows from Lemma 7.5 that \[v\leq u+O(1)\] near \(o\). Thus, Corollary 1.14 holds. ## 8. Global Zhou weights In this section, we discuss the global Zhou weights, and prove Proposition 1.18 and Proposition 1.19. ### Some properties of the global Zhou weights Let \(D\) be a domain in \(\mathbb{C}^{n}\), such that the origin \(o\in D\). Let \(f_{0}=(f_{0,1},\cdots,f_{0,m})\) be a vector, where \(f_{0,1},\cdots,f_{0,m}\) are holomorphic functions near \(o\). Denote \(|f_{0}|^{2}=|f_{0,1}|^{2}+\cdots+|f_{0,m}|^{2}\). Let \(\varphi_{0}\) be a plurisubharmonic function near \(o\), such that \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\). Let us recall the definition of global Zhou weights. **Definition 8.1**.: _We call a negative plurisubharmonic function \(\Phi_{o,\max}^{f_{0},\varphi_{0},D}\) (\(\Phi_{o,\max}^{D}\) for short) on \(D\) a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) if the following statements hold:_ \((1)\)_\(|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2\Phi_{o,\max}^{D}}\) is integrable near \(o\) for large enough \(N_{0}\);_ \((2)\)_\(|f_{0}|^{2}e^{-2\varphi_{0}-2\Phi_{o,\max}^{D}}\) is not integrable near \(o\);_ \((3)\) _for any negative plurisubharmonic function \(\tilde{\varphi}\) on \(D\) satisfying that \(\tilde{\varphi}\geq\Phi_{o,\max}^{D}\) on \(D\) and \(|f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\) is not integrable near \(o\), \(\tilde{\varphi}=\Phi_{o,\max}^{D}\) holds on \(D\)._ The following remark shows the existence of the global Zhou weights. 
**Remark 8.2**.: _Assume that there exists a negative plurisubharmonic function \(\varphi\) on \(D\) such that \(|f_{0}|^{2}e^{-2\varphi_{0}-2\varphi}|z|^{2N_{0}}\) is integrable near \(o\) for large enough \(N_{0}\) and \((f_{0},o)\not\in\mathcal{I}(\varphi+\varphi_{0})_{o}\)._ _Then there exists a global Zhou weight \(\Phi^{D}_{o,\max}\) on \(D\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) such that \(\Phi^{D}_{o,\max}\geq\varphi\) on \(D\)._ Proof.: Let \((\varphi_{\alpha})_{\alpha}\) be the negative plurisubharmonic functions on \(V\) such that \(\varphi_{\alpha}\geq\varphi\) and \(|f_{0}|^{-2\varphi_{0}-2\varphi_{\alpha}}\) is not integrable near \(o\). Zorn's Lemma shows that there exists \(\Gamma\) which is the maximal set such that for any \(\alpha,\alpha^{\prime}\in\Gamma\), \(\varphi_{\alpha}\leq\varphi_{\alpha^{\prime}}\) or \(\varphi_{\alpha^{\prime}}\leq\varphi_{\alpha}\) holds on \(D\), where \((\varphi_{\alpha})\) are negative plurisubharmonic functions on \(D\). Let \(u(z)=\sup_{\alpha\in\Gamma}\varphi_{\alpha}(z)\), and let \(u^{*}(z)=\lim_{\varepsilon\to 0}\sup_{\mathbb{B}^{n}(z,\varepsilon)}u\). Lemma 2.11 shows that there exists increasing subsequence \((\varphi_{j})\) of \((\varphi_{\alpha})\) such that \((\lim_{j\to+\infty}\varphi_{j})^{*}=u^{*}\). Proposition 2.12 shows that \((\varphi_{j})\) is convergent to \(u^{*}\) with respect to \(j\) almost everywhere with respect to Lebesgue measure, and \(u^{*}\) is a plurisubharmonic function on \(D\). Proposition 2.9 (and Remark 2.10) shows that \(|f_{0}|^{2}e^{-2\varphi_{0}-2u^{*}}\) is not integrable near \(o\). In fact, the definition of \(u^{*}\) shows that \(\Phi^{D}_{o,\max}:=u^{*}\) is a global Zhou weight on \(D\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\). Denote that \[L_{w}:=\big{\{}u\in\mathrm{PSH}(D):u<0\ \&\ \limsup_{z\to w}(u(z)-\log|z-w|)<+ \infty\big{\}}.\] If \(L_{w}\neq\emptyset\), the pluricomplex Green function of \(D\) with a pole at \(w\) is defined as follows: \[G_{D}(w,\cdot):=\sup\{u:u\in L_{w}\}.\] We give a relation between global Zhou weights \(\Phi^{D}_{o,\max}\) and the pluricomplex Green functions \(G_{D}(o,\cdot)\). **Lemma 8.3**.: _Assume that \(L_{o}\neq\emptyset\). Then \(\Phi^{D}_{o,\max}\geq NG_{D}(o,\cdot)\) for some \(N\gg 0\)._ Proof.: As \(|f_{0}|^{2}e^{-2\varphi_{0}-2\Phi^{D}_{o,\max}}|z|^{2N_{0}}\) is integrable near \(o\), it follows from Theorem 2.8 that \[|f_{0}|^{2}e^{-2\varphi_{0}}|z|^{2N_{0}}e^{-2(1+\epsilon)\Phi^{D}_{o,\max}}\] is integrable near \(o\). Note that \(\sup_{z\in U}\big{(}G_{D}(o,z)-\log|z|\big{)}<+\infty\) for a neighborhood \(U\) of \(o\), then \[|f_{0}|^{2}e^{-2\varphi_{0}}e^{2N_{0}G_{D}(o,\cdot)}e^{-2(1+\epsilon)\Phi^{D}_ {o,\max}}\] is integrable near \(o\). Lemma 3.1 shows that \[|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\Phi^{D}_{o,\max}}-|f_{0}|^{2}e^{-2\varphi_{ 0}}e^{-2\max\big{\{}\Phi^{D}_{o,\max},\frac{N_{0}}{\epsilon}G_{D}(o,\cdot) \big{\}}}\] is integrable near \(o\). Then it follows from the definition of \(\Phi^{D}_{o,\max}\) that \[\Phi^{D}_{o,\max}\geq\frac{N_{0}}{\epsilon}G_{D}(o,\cdot).\] Lemma 8.3 has been proved. Denote that \[\tilde{L}_{o}:=\big{\{}u\in L_{o}:u\in L^{\infty}_{\mathrm{loc}}(U\backslash\{ o\})\ \text{for some neighborhood $U$ of $o$}\big{\}}.\] **Remark 8.4**.: _If \(D\) is bounded or hyperconvex, then \(\tilde{L}_{o}\neq\emptyset\)._ Proof.: If \(D\) is bounded, \(\log|z|-C\in L_{o}\) for large enough \(C>0\). 
Assume that \(D\) is hyperconvex, then there exists a continuous exhaustion plurisubharmonic function \(\varphi_{1}<0\) on \(D\). For \(0<c_{1}<c_{2}\) satisfying \(\{|z|<c_{2}\}\subset\subset D\), there exist two constants \(a>0\) and \(b\in\mathbb{R}\) such that \(a\varphi_{1}+b<\log|z|\) on \(\{|z|=c_{1}\}\) and \(a\varphi_{1}+b>\log|z|\) on \(\{|z|=c_{2}\}\). Let \[\tilde{\varphi}:=\left\{\begin{array}{ll}\log|z|-b&\mbox{ on }\{|z|\leq c_{1}\}, \\ \max\{a\varphi_{1},\log|z|-b\}&\mbox{ on }\{c_{1}<|z|<c_{2}\},\\ a\varphi_{1}&\mbox{ on }D\backslash\{|z|<c_{2}\}.\end{array}\right.\] Then \(\tilde{\varphi}\) is a plurisubharmonic function and \(\tilde{\varphi}\in\tilde{L}_{o}\cap L_{o}\). Given any local Zhou weight, we can obtain a global Zhou weight on \(D\) by using the following lemma. **Lemma 8.5**.: _Let \(\Phi_{o,\max}\) be a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\), and denote that_ \[L(\Phi_{o,\max}):= \big{\{}\tilde{\varphi}(z)\in\mathrm{PSH}(D):\tilde{\varphi}<0\] \[\ \&\ |f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\mbox{ is not integrable near }o\ \&\ \tilde{\varphi}\geq\Phi_{o,\max}+O(1)\mbox{ near }o\big{\}}.\] _If \(L(\Phi_{o,\max})\neq\emptyset\), then_ \[\Phi_{o,\max}^{D}(z):=\sup\big{\{}\tilde{\varphi}(z):\tilde{\varphi}\in L(\Phi _{o,\max})\big{\}},\ \forall z\in D \tag{8.1}\] _is a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\) satisfying that_ \[\Phi_{o,\max}^{D}=\Phi_{o,\max}+O(1)\] _near \(o\)._ Proof.: Choosing any \(\tilde{\varphi}_{1}\in L(\Phi_{o,\max})\), it follows from Remark 8.2 that there exists a global Zhou weight \(\tilde{\varphi}_{1}^{*}\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\) satisfying that \(\tilde{\varphi}_{1}^{*}\geq\tilde{\varphi}_{1}\) on \(D\) and \[\tilde{\varphi}_{1}^{*}\in L(\Phi_{o,\max}).\] As \(\Phi_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\), we have that \[\tilde{\varphi}=\Phi_{o,\max}+O(1)\] near \(o\) for any \(\tilde{\varphi}\in L(\Phi_{o,\max})\), which shows that \(|f_{0}|^{2}e^{-2\varphi_{0}}e^{-2\max\{\tilde{\varphi},\tilde{\varphi}_{1}^{*}\}}\) is not integrable near \(o\). By the definition of global Zhou weight, we have \(\tilde{\varphi}_{1}^{*}\geq\tilde{\varphi}\). Thus, \(\tilde{\varphi}_{1}^{*}=\Phi_{o,\max}^{D}:=\sup\big{\{}\tilde{\varphi}:\tilde {\varphi}\in L(\Phi_{o,\max})\big{\}}\) is a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\) satisfying that \(\Phi_{o,\max}^{D}=\Phi_{o,\max}+O(1)\) near \(o\). We give a sufficient and necessary condition for \(L(\Phi_{o,\max})\neq\emptyset\). **Remark 8.6**.: \(L(\Phi_{o,\max})\neq\emptyset\) _if and only if \(\tilde{L}_{o}\neq\emptyset\)._ Proof.: For any \(\tilde{\varphi}\in L(\Phi_{o,\max})\), there exists two positive constants \(N_{1}\) and \(N_{2}\) such that \(N_{2}\log|z|+O(1)\geq\tilde{\varphi}\geq N_{1}\log|z|+O(1)\) near \(o\), which implies that \(\tilde{L}_{o}\neq\emptyset\). For any \(u\in\tilde{L}_{o}\), note that \(\Phi_{o,\max}\geq N\log|z|+O(1)\geq Nu+O(1)\) near \(o\) for large enough \(N\gg 0\). Letting \(r>0\) small enough, there exists a constant \(C_{1}\) such that \(\Phi_{o,\max}+C_{1}<Nu\) on a neighborhood of \(\{|z|=r\}\). 
Define \[\tilde{\varphi}:=\left\{\begin{array}{ll}\max\big{\{}\Phi_{o,\max}+C_{1}, Nu\big{\}}&\mbox{ on }\{|z|<r\},\\ Nu&\mbox{ on }D\backslash\{|z|<r\}.\end{array}\right.\] Then \(\tilde{\varphi}\) is a negative plurisubharmonic function on \(D\) satisfying that \[\tilde{\varphi}=\Phi_{o,\max}+O(1),\] which shows that \(\tilde{\varphi}\in L(\Phi_{o,\max})\). Thus, Remark 8.6 holds. Remark 8.4, Lemma 8.5 and Remark 8.6 show that if \(D\) is bounded or hyperconvex, given any local Zhou weight, we can obtain a global Zhou weight by using equality (8.1). On the other hand, the following lemma shows that any global Zhou weight is also a local Zhou weight. **Lemma 8.7**.: _Let \(\Phi^{D}_{o,\max}\) be a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\). Then \(\Phi^{D}_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\) if and only if \(\tilde{L}_{o}\neq\emptyset\)._ Proof.: If \(\Phi^{D}_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\), then \(\Phi^{D}_{o,\max}\geq N\log|z|+O(1)\) for large enough \(N\gg 0\), i.e., \(\Phi^{D}_{o,\max}\in\tilde{L}_{o}\neq\emptyset\). In the following, assume that \(\tilde{L}_{o}\neq\emptyset\). By Remark 1.3, there exists a local Zhou weight \(\Phi_{o,\max}\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\) satisfying that \[\Phi_{o,\max}\geq\Phi^{D}_{o,\max}\] near \(o\). Remark 8.6 tells us that there exists a negative plurisubharmonic function \(\tilde{\varphi}\) on \(D\) satisfying that \(\tilde{\varphi}\geq\Phi_{o,\max}+O(1)\) near \(o\) and \(|f_{0}|^{2}e^{-2\varphi_{0}-2\tilde{\varphi}}\) is not integrable near \(o\). Hence, \[\tilde{\varphi}\geq\Phi_{o,\max}+O(1)\geq\Phi^{D}_{o,\max}+O(1),\] and \(|f_{0}|^{2}e^{-2\varphi_{0}-2\max\{\tilde{\varphi},\Phi^{D}_{o,\max}\}}\) is not integrable near \(o\). By the definition of \(\Phi^{D}_{o,\max}\), we have \[\Phi^{D}_{o,\max}=\max\left\{\tilde{\varphi},\Phi^{D}_{o,\max}\right\}=\tilde {\varphi},\] which implies that \(\Phi^{D}_{o,\max}=\Phi_{o,\max}+O(1)\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\). The following lemma gives a global inequality with the Zhou number \(\sigma(\cdot,\Phi^{D}_{o,\max})\). **Lemma 8.8**.: _Let \(\Phi^{D}_{o,\max}\) be a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\), and let \(\psi\) be any negative plurisubharmonic function \(D\). Then the inequality_ \[\psi\leq\sigma(\psi,\Phi^{D}_{o,\max})\Phi^{D}_{o,\max}\] _holds on \(D\)._ Proof.: For any \(p\in\left(0,\sigma(\psi,\Phi^{D}_{o,\max})\right)\), \(\psi\leq p\Phi^{D}_{o,\max}+O(1)\) near \(o\), which shows that \(|f_{0}|^{2}e^{-2\varphi_{0}-2\max\{\Phi^{D}_{o,\max},\frac{1}{\sigma(\psi, \Phi^{D}_{o,\max})}\psi\}}\) is not integrable near \(o\). Following from Theorem 2.8, \(|f_{0}|^{2}e^{-2\varphi_{0}-2\max\left\{\Phi^{D}_{o,\max},\frac{1}{\sigma( \psi,\Phi^{D}_{o,\max})}\psi\right\}}\) is not integrable near \(o\). Note that \(\max\left\{\Phi^{D}_{o,\max},\frac{1}{\sigma(\psi,\Phi^{D}_{o,\max})}\psi \right\}<0\) on \(D\), then the definition of \(\Phi^{D}_{o,\max}\) shows that \[\Phi^{D}_{o,\max}\geq\max\left\{\Phi^{D}_{o,\max},\frac{1}{\sigma(\psi,\Phi^{ D}_{o,\max})}\psi\right\},\] which implies that \[\psi\leq\sigma(\psi,\Phi^{D}_{o,\max})\Phi^{D}_{o,\max}\] holds on \(D\). We give a global Zhou weight on the sublevel set \(\{\Phi^{D}_{o,\max}<-t\}\). 
**Lemma 8.9**.: _Let \(\Phi^{D}_{o,\max}\) be a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\), then for any \(t>0\), \(\Phi^{D}_{o,\max}+t\) is a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(\{\Phi^{D}_{o,\max}<-t\}\)._ Proof.: By Remark 8.2, there exists a global Zhou weight \(\Phi^{\{\Phi^{D}_{o,\max}<-t\}}_{o,\max}\) related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(\{\Phi^{D}_{o,\max}<-t\}\) satisfying that \[\Phi^{\{\Phi^{D}_{o,\max}<-t\}}_{o,\max}\geq\Phi^{D}_{o,\max}+t\] on \(\{\Phi^{D}_{o,\max}<-t\}\). Let \[\phi:=\left\{\begin{array}{ll}\Phi^{\{\Phi^{D}_{o,\max}<-t\}}_{o,\max}-t& \mbox{ on }\{\Phi^{D}_{o,\max}<-t\},\\ \Phi^{D}_{o,\max}&\mbox{ on }\{\Phi^{D}_{o,\max}\geq-t\}.\end{array}\right.\] Then we have \(\phi\geq\Phi^{D}_{o,\max}\) on \(D\). As \(\Phi^{\{\Phi^{D}_{o,\max}<-t\}}_{o,\max}\) is plurisubharmonic on \(\{\Phi^{D}_{o,\max}<-t\}\) and \(\Phi^{D}_{o,\max}\) is plurisubharmonic on \(D\), we have \[\limsup_{\tilde{z}\to z}\phi(\tilde{z})\leq\phi(z),\ \forall z\in D \backslash\partial\{\Phi^{D}_{o,\max}<-t\}.\] For any \(z\in\partial\{\Phi^{D}_{o,\max}<-t\}\cap D\), \(\phi(z)=\Phi^{D}_{o,\max}(z)\geq-t\), then \[\limsup_{\tilde{z}\to z}\phi(\tilde{z}) =\max\left\{\limsup_{\{\Phi^{D}_{o,\max}<-t\}\ni\tilde{z}\to z} \phi(\tilde{z}),\limsup_{(D\backslash\{\Phi^{D}_{o,\max}<-t\})\ni\tilde{z} \to z}\phi(\tilde{z})\right\}\] \[\leq-t\leq\phi(z).\] Thus, \(\phi\) is an upper semicontinuous function on \(D\). For any \(z\in\partial\{\Phi^{D}_{o,\max}<-t\}\cap D\), as \(\phi\geq\Phi^{D}_{o,\max}\) on \(D\), we have \[\phi(z)=\Phi^{D}_{o,\max}(z)\leq\frac{1}{\lambda(\mathbb{B}(z,r))}\int_{B(z;r )}\Phi^{D}_{o,\max}\ d\lambda\leq\frac{1}{\lambda(\mathbb{B}(z,r))}\int_{B(z; r)}\phi\ d\lambda,\] where \(\mathbb{B}(z,r)\subset D\) is a ball with a radius of \(r\) and \(\lambda\) is the Lebesgue measure on \(\mathbb{C}^{n}\). Hence, \(\phi\) is a plurisubharmonic function on \(D\). As \(\Phi^{D}_{o,\max}\) is a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\), we get \(\Phi^{D}_{o,\max}\geq\phi\), which implies that \(\Phi^{\{\Phi^{D}_{o,\max}<-t\}}_{o,\max}=\Phi^{D}_{o,\max}+t\) on \(\{\Phi^{D}_{o,\max}<-t\}\). Lemma 8.9 has been proved. ### Proofs of Proposition 1.18 and Proposition 1.19 In this section, we prove Proposition 1.18 and Proposition 1.19. Firstly, we recall a lemma, which will be used in the proof of Proposition 1.18. **Lemma 8.10** (see [5], see also [1, 4]).: _Let \(\varphi\in\operatorname{PSH}(\Omega)\cap L^{\infty}_{\operatorname{loc}}(\Omega)\) on an open subset \(\Omega\) of \(\mathbb{C}^{n}\). If for any \(u\in\operatorname{PSH}(\Omega)\) such that \(\varphi\geq u\) outside a compact subset of \(\Omega\) we have \(\varphi\geq u\) on \(\Omega\), then \((dd^{c}\varphi)^{n}=0\) on \(\Omega\)._ Now, we prove Proposition 1.18. Proof of Proposition 1.18.: As \(\tilde{L}_{o}\neq\emptyset\), it follows from Lemma 8.7 that \(\Phi^{D}_{o,\max}\) is a local Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) near \(o\). Thus, \(\Phi^{D}_{o,\max}\geq N\log|z|+O(1)\) for large enough \(N\gg 0\). Then there exist \(r>0\) such that \[C:=\inf_{\{|z|=r\}}\Phi^{D}_{o,\max}>-\infty.\] Let \[\tilde{\varphi}:=\left\{\begin{array}{ll}\Phi^{D}_{o,\max}&\mbox{ on }\{|z|<r\},\\ \max\{\Phi^{D}_{o,\max},C-1\}&\mbox{ on }D\backslash\{|z|<r\}.\end{array}\right.\] Then \(\tilde{\varphi}\) is a negative plurisubharmonic function on \(D\). 
By the definition of \(\Phi^{D}_{o,\max}\), \(\Phi^{D}_{o,\max}=\tilde{\varphi}\) on \(D\), which implies that \(\Phi^{D}_{o,\max}\in L^{\infty}_{\rm loc}(D\backslash\{o\})\). Using Lemma 8.10, we have \(\big{(}dd^{c}\Phi^{D}_{o,\max}\big{)}^{n}=0\) on \(D\backslash\{o\}\). Finally, we prove Proposition 1.19. Proof of Proposition 1.19.: Note that there exists \(N_{0}\gg 0\) such that \(\Phi^{D}_{o,\max}\geq N_{0}\log|z|+O(1)\) near \(o\). As \(D\) is a hyperconvex domain, there exists a continuous exhaustion plurisubharmonic function \(\varphi_{1}<0\) on \(D\). Then there exist \(r_{1}>r_{2}>0\) and \(N_{1}>0\) such that \[\inf_{\{r_{2}<|z|<r_{1}\}}\Phi^{D}_{o,\max}(z)\geq N_{1}\sup_{\{r_{2}<|z|<r_{1} \}}\varphi_{1}(z).\] Let \[\varphi_{2}:=\left\{\begin{array}{ll}\Phi^{D}_{o,\max}&\mbox{ on }\{|z|<r_{1}\},\\ \max\{\Phi^{D}_{o,\max},N_{1}\varphi_{1}\}&\mbox{ on }{\rm D}\backslash\{|z|<r_{1}\}.\end{array}\right.\] Then \(\varphi_{2}\) is a negative plurisubharmonic function on \(D\) and \(\varphi_{2}\geq\Phi^{D}_{o,\max}\). By the definition of \(\Phi^{D}_{o,\max}\), we have \(\varphi_{2}=\Phi^{D}_{o,\max}\), i.e. \(\Phi^{D}_{o,\max}\geq N_{1}\varphi_{1}\) on \(D\backslash\{|z|<r_{1}\}\), which implies that \(\Phi^{D}_{o,\max}(z)\to 0\) when \(z\to\partial D\). For any positive integer \(m\), let \(\{\sigma_{m,k}\}_{k=1}^{\infty}\) be an orthonormal basis of \[A^{2}\big{(}D,2m\Phi^{D}_{o,\max}\big{)}:=\left\{f\in\mathcal{O}(D):\int_{D}| f|^{2}e^{-2m\Phi^{D}_{o,\max}}d\lambda<+\infty\right\}.\] Denote \[\varphi_{m}:=\frac{1}{2m}\log\sum_{k=1}^{\infty}|\sigma_{m,k}|^{2}\] on \(D\). Fixed any \(z\in D\), there exists a holomorphic function \(f_{z,m}\) on \(D\) such that \[\int_{D}|f_{z,m}|^{2}e^{-2m\Phi^{D}_{o,\max}}d\lambda=1\ \&\ \frac{1}{2m}\log|f_{z,m}(z)|^{2}= \varphi_{m}(z).\] For any \(t>0\) satisfying \(z\in\{\Phi^{D}_{o,\max}<-t\}\), since \(\Phi^{D}_{o,\max}(z)\to 0\) when \(z\to\partial D\) and \(\int_{D}|f_{z,m}|^{2}e^{-2m\Phi^{D}_{o,\max}}d\lambda=1\), there exists \(M\gg 0\) such that \(|f_{z,m}|<1\) on \(\{\Phi^{D}_{o,\max}<-t\}\) for any \(m>M\). It follows from Lemma 8.9 and Lemma 8.8 that \[\varphi_{m}(z)=\frac{1}{m}\log|f_{z,m}(z)|\leq\frac{\sigma\big{(}\log|f_{z,m} |,\Phi^{D}_{o,\max}\big{)}}{m}(\Phi^{D}_{o,\max}+t)(z)\] for \(z\in\{\Phi^{D}_{o,\max}<-t\}\). Note that \(c^{f_{z,m}}_{o}(\Phi^{D}_{o,\max})\geq m\), then Theorem 1.11 shows that \(\sigma\big{(}\log|f_{z,m}|,\Phi^{D}_{o,\max}\big{)}\geq m+c_{3}\), where \(c_{3}\) is a constant independent of \(m\). Hence, we have \[\frac{\varphi_{m}(z)}{1+\frac{c_{3}}{m}}-t\leq\Phi^{D}_{o,\max}(z). \tag{8.2}\] Using inequality (7.2) in Lemma 7.2, we get \[\Phi^{D}_{o,\max}\leq\varphi_{m}+\frac{c_{1}}{m} \tag{8.3}\] on \(D\). Note that \(e^{\varphi_{m}}\) is smooth on \(D\), then it follows from inequality (8.2) and inequality (8.3) that \(e^{\Phi^{D}_{o,\max}}\) is continuous on \(D\) ## 9. Approximations of global Zhou weights: proofs of Theorem 1.20, Corollary 1.21 and Corollary 1.22 In this section, we discuss the approximations of global Zhou weights, and prove Theorem 1.20, Corollary 1.21 and Corollary 1.22. ### Preparations Let \(D\) be a hyperconvex domain in \(\mathbb{C}^{n}\) such that the origin \(o\in D\). 
Let \(\Phi_{o,\max}^{f_{0},\varphi_{0},D}\) (\(\Phi_{o,\max}^{D}\) for short in this section) be a global Zhou weight related to some \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D\) near \(o\) defined in this section, where \(f_{0}\) is a holomorphic vector on \(D\), and \(\varphi_{0}\) is a plurisubharmonic function near \(o\) such that \(|f_{0}|^{2}e^{-2\varphi_{0}}\) is integrable near \(o\). Denote \(\sigma(\cdot,\Phi_{o,\max}^{D})\) be the Zhou number with respect to \(\Phi_{o,\max}^{D}\). For any \(m\in\mathbb{N}_{+}\), we recall two compact subsets of \(\mathcal{O}(D)\) as follows: \[\mathscr{E}_{m}(D):=\big{\{}f\in\mathcal{O}(D):\sup_{z\in D}|f(z)|\leq 1,(f,o) \in\mathcal{I}(m\Phi_{o,\max}^{D})_{o}\big{\}},\] \[\mathscr{A}_{m}^{2}(D):=\big{\{}f\in\mathcal{O}(D):\|f\|_{D}\leq 1,(f,o)\in \mathcal{I}(m\Phi_{o,\max}^{D})_{o}\big{\}},\] where \(\|f\|_{D}^{2}:=\int_{D}|f|^{2}\). The compactness of \(\mathscr{E}_{m}(D)\) and \(\mathscr{A}_{m}^{2}(D)\) is due to the closedness property of coherent sheaves. We also recall the definitions of \(\phi_{m}\) and \(\varphi_{m}\) for any \(m\): \[\phi_{m}(z):=\sup_{f\in\mathscr{E}_{m}(D)}\frac{1}{m}\log|f(z)|,\ \forall z\in D, \tag{9.1}\] \[\varphi_{m}(z):=\sup_{f\in\mathscr{A}_{m}^{2}(D)}\frac{1}{m}\log|f(z)|,\ \forall z\in D. \tag{9.2}\] Suppose that \(D\) is a strictly hyperconvex domain with the function \(\varrho\) defined in Definition 1.16. For any \(j\in\mathbb{N}_{+}\), denote \(D_{j}:=\{z\in\Omega:\varrho(z)<1/j\}\), which is a decreasing sequence of bounded hyperconvex domains. Define \[\Phi_{o,\max}^{D_{j}}(z):=\sup\bigg{\{}\phi(z):\phi\in\mathrm{PSH}^{-}(D_{j}),\ (f_{0},o)\notin\mathcal{I}(\varphi_{0}+\phi)_{o},\ \phi\geq\Phi_{o,\max}^{D}+O(1)\bigg{\}}\] for any \(z\in D_{j}\), where the inequality \(\phi\geq\Phi_{o,\max}^{D}+O(1)\) means that it holds near \(o\) (similar inequalities in this section all mean that they hold near \(o\)). Then \(\Phi_{o,\max}^{D_{j}}\) is a global Zhou weight related to \(|f_{0}|^{2}e^{-2\varphi_{0}}\) on \(D_{j}\) (see Lemma 8.5). **Lemma 9.1**.: _For any \(1\leq j<j^{\prime}\), the following statements hold:_ \[\Phi_{o,\max}^{D_{j}}\leq\Phi_{o,\max}^{D_{j^{\prime}}}\ \text{on}\ D_{j^{ \prime}}, \tag{1}\] _and_ \[\Phi_{o,\max}^{D_{j}}\leq\Phi_{o,\max}^{D}\ \text{on}\ D.\] \[\Phi_{o,\max}^{D_{j}}=\Phi_{o,\max}^{D}+O(1)\ \text{near}\ o. \tag{2}\] (3) _For any \(t>0\),_ \[\mathcal{I}\big{(}t\Phi_{o,\max}^{D_{j}}\big{)}_{o}=\mathcal{I}\big{(}t\Phi_{o,\max}^{D}\big{)}_{o}.\] (4) _There exists \(N>0\), which is independent of \(j\), such that_ \[\Phi_{o,\max}^{D_{j}}(z)\geq NG_{D_{j}}(o,z),\ \forall z\in D,\] _where \(G_{D_{j}}(o,\cdot)\) is the pluricomplex Green function of \(D_{j}\) with a pole at \(o\)._ Proof.: The statement (1) just follows from the definitions of \(\Phi^{D_{j}}_{o,\max}\) and \(\Phi^{D}_{o,\max}\). It shows that \(\big{(}\Phi^{D_{j}}_{o,\max}\big{)}_{j\geq 1}\) is an increasing sequence on \(D\). The statement (2) follows from that global Zhou weights are also local Zhou weights. The statement (3) is a direct result of the statement (2). To show the statement (4), let us look back to the proof of Lemma 8.3. We can see that the sufficiently large constant \(N>0\) is completely determined by the local property of the global Zhou weight \(\Phi^{D_{j}}_{o,\max}\) near \(o\). Thus by the statement (2), the constant \(N\) here can be chosen independent of \(j\). To simplify the proof of the next lemma (Lemma 9.3), we recall a lemma in [37] about the pluricomplex Green functions on \(D_{j}\). 
**Lemma 9.2** ([37]).: _The sequence of pluricomplex Green functions \(\big{(}G_{D_{j}}(o,\cdot)\big{)}_{j\geq 1}\) converges uniformly to \(G_{D}(o,\cdot)\) on \(\overline{D}\)._ **Lemma 9.3**.: _The sequence \(\big{(}\Phi^{D_{j}}_{o,\max}\big{)}_{j\geq 1}\) converges uniformly to \(\Phi^{D}_{o,\max}\) on \(\overline{D}\), where \(\Phi^{D}_{o,\max}|_{\partial D}\) is defined to be \(0\)._ Proof.: According to Lemma 9.1 and Lemma 9.2, we have that, restricted on the boundary of \(D\), the sequence \(\big{(}\Phi^{D_{j}}_{o,\max}\big{)}\) increasingly converges uniformly to \(\Phi^{D}_{o,\max}\), i.e. to \(0\). Denote \[C_{j}:=\inf_{z\in\partial D}\Phi^{D_{j}}_{o,\max}(z)\in(-\infty,0),\] then \(\lim_{j}C_{j}=0\). For any \(j\geq 1\), consider a function \(\Phi_{j}\) defined on \(D_{j}\) by \[\Phi_{j}(z):=\left\{\begin{array}{ll}\Phi^{D_{j}}_{o,\max}(z)&z\in D_{j} \setminus D,\\ \max\Big{\{}\Phi^{D_{j}}_{o,\max}(z),\Phi^{D}_{o,\max}(z)+C_{j}\Big{\}}&z\in \overline{D}.\end{array}\right.\] Then \(\Phi_{j}\in\mathrm{PSH}^{-}(D_{j})\), and continuous near \(\partial D\). In addition, \(\Phi_{j}\geq\Phi^{D_{j}}_{o,\max}\) on \(D_{j}\). On the other hand, we have \[\sigma\big{(}\Phi_{j},\Phi^{D_{j}}_{o,\max}\big{)}=\min\big{\{}\sigma\big{(} \Phi^{D_{j}}_{o,\max},\Phi^{D_{j}}_{o,\max}\big{)},\sigma\big{(}\Phi^{D}_{o, \max}+C_{j},\Phi^{D_{j}}_{o,\max}\big{)}\big{\}}=1,\] which implies \(\Phi_{j}\leq\Phi^{D_{j}}_{o,\max}\). Consequently, \(\Phi^{D}_{o,\max}+C_{j}\leq\Phi^{D_{j}}_{o,\max}\) on \(\overline{D}\). Thus, we get \[\Phi^{D_{j}}_{o,\max}\leq\Phi^{D}_{o,\max}\leq\Phi^{D_{j}}_{o,\max}-C_{j}\text{ on }\overline{D}.\] Since \(\lim_{j}C_{j}=0\), the lemma is proved. We also have the following property that shows the approximation of Zhou weights from inside domains. Suppose \(D\) is a bounded hyperconvex domain containing \(o\). Let \(\{\mathscr{D}_{l}\}_{l\geq 1}\) be an increasing sequence of hyperconvex domains containing \(o\) such that \(\bigcup_{l}\mathscr{D}_{l}=D\). Define \[\Phi^{\mathscr{D}_{l}}_{o,\max}(z):=\sup\left\{\phi(z):\phi\in\mathrm{PSH}^{-} (\mathscr{D}_{l}),\;(f_{0},o)\notin\mathcal{I}(\varphi_{0}+\phi)_{o},\;\phi \geq\Phi^{D}_{o,\max}+O(1)\right\}\] for any \(z\in\mathscr{D}_{l}\). **Lemma 9.4**.: _The sequence \(\big{(}\Phi^{\mathscr{D}_{l}}_{o,\max}\big{)}_{l\geq 1}\) converges on \(D\) to \(\Phi^{D}_{o,\max}\) pointwisely._ Proof.: Observe that \(\big{(}\Phi_{o,\max}^{\mathscr{D}_{l}}\big{)}_{l\geq 1}\) is a decreasing sequence of negative plurisubharmonic functions on \(D\), then the pointwise limit \(\varPhi:=\lim_{l}\Phi_{o,\max}^{\mathscr{D}_{l}}\) exists, and \(\varPhi\) is negative and plurisubharmonic on \(D\). In addition, we have \(\varPhi\geq\Phi_{o,\max}^{D}\) since \(\Phi_{o,\max}^{\mathscr{D}_{l}}\geq\Phi_{o,\max}^{D}\) on \(\mathscr{D}_{l}\) for every \(l\), and \((f_{0},o)\notin\mathcal{I}(\varphi_{0}+\varPhi)_{o}\) since \(\varPhi\leq\Phi_{o,\max}^{\mathscr{B}_{l}}\). Then according to the definition of \(\Phi_{o,\max}^{D}\), we get \(\varPhi=\Phi_{o,\max}^{D}\) on \(D\). The following lemma gives a relation between the plurisubharmonic functions \(\phi_{m}\) and \(\varphi_{m}\). **Lemma 9.5**.: _Let \(D\) be a bounded hyperconvex domain, and for any \(m\in\mathbb{N}_{+}\) let the functions \(\phi_{m}\) and \(\varphi_{m}\) be defined by equation (9.1) and (9.2). 
Then for any \(m\in\mathbb{N}_{+}\), we have_ (1)_\(\phi_{m}\) and \(\varphi_{m}\) are continuous and plurisubharmonic functions on \(D\) (may taking the value \(-\infty\)), and \(\phi_{m}\) takes values in \([-\infty,0)\);_ \[\phi_{m}(z)\leq\varphi_{m}(z)+\frac{1}{2m}\log\lambda(D),\ \forall z\in D, \tag{2}\] _where \(\lambda(D)\) is the Lebesgue measure of \(D\)._ Proof.: (1) For every \(f\in\mathscr{E}_{m}(D)\) or \(\mathscr{A}_{m}^{2}(D)\), \((\log|f|)/m\) is continuous on \(D\), so \(\phi_{m}\) and \(\varphi_{m}\) are lower-semicontinuous on \(D\). Since \(\mathscr{E}_{m}(D)\) and \(\mathscr{A}^{2}(D)\) are compact subsets of \(\mathcal{O}(D)\), using Montel's Theorem, we can see that \(\phi_{m}\) and \(\varphi_{m}\) are also upper-semicontinuous on \(D\). It follows that \(\phi_{m}\) and \(\varphi_{m}\) are continuous on \(D\). Now the plurisubharmonicity of \(\phi_{m}\) and \(\varphi_{m}\) is a consequence of the definitions of \(\phi_{m}\) and \(\varphi_{m}\). (2) Observe that for any \(f\in\mathscr{E}_{m}(D)\), we have \((\lambda(D))^{-1/2}f\in\mathscr{A}_{m}^{2}(D)\), which implies inequality (9.3). Now we suppose \(D\) is a bounded strictly hyperconvex domain in \(\mathbb{C}^{n}\) with the function \(\varrho\) defined in Definition 1.16 and \(D_{j}=\{z\in\Omega:\varrho(z)<1/j\}\). **Lemma 9.6**.: _For any \(j\geq 1\) and \(m\geq 1\), we have_ \[h_{m,D_{j}}(z)\leq\varphi_{m,D_{j}}(z)\leq\phi_{m}(z)+\frac{\log(c\delta(j)^{- n})}{m},\ \forall z\in D,\] _where \(c\) is a constant only depending on \(n\), \(\delta(j)=\operatorname{dist}(\overline{D},\partial D_{j})\), and for any \(z\in D_{j}\),_ \[h_{m,D_{j}}(z):=\sup\left\{\frac{1}{m}\log|f(z)|:f\in\mathcal{O}(D_{j}),\ \int_{D_{j}}|f|^{2}e^{-2m\Phi_{o,\max}^{D_{j}}}\leq 1 \right\},\] \[\varphi_{m,D_{j}}(z):=\sup\left\{\frac{1}{m}\log|f(z)|:f\in\mathcal{O}(D_{j}),\ \int_{D_{j}}|f|^{2}\leq 1,\ (f,o)\in\mathcal{I}\big{(}m\Phi_{o,\max}^{D_{j}} \big{)}_{o}\right\}.\] Proof.: Let \(f\in\mathcal{O}(D_{j})\cap L^{2}(D_{j})\) with \((f,o)\in\mathcal{I}\big{(}m\Phi_{o,\max}^{D_{j}}\big{)}_{o}=\mathcal{I}\big{(} m\Phi_{o,\max}^{D_{j}}\big{)}_{o}\) (by Lemma 9.1(3)), then there exists some \(w\in\overline{D}\) such that \(f(w)=\sup_{\overline{D}}|f|\). According to the mean value inequality applied to the plurisubharmonic function \(|f|^{2}\), we have \[|f(w)|^{2}\leq\frac{c^{2}}{\delta(j)^{2n}}\int_{B(w,\delta(j))}|f(z)|^{2}\leq \frac{c^{2}}{\delta(j)^{2n}}\int_{D_{j}}|f(z)|^{2}.\] Thus \[\varphi_{m,D_{j}}(z)\leq\phi_{m}(z)+\frac{\log(c\delta(j)^{-n})}{m},\ \forall z\in D.\] Since \(\Phi^{D_{j}}_{o,\max}\) is negative on \(D_{j}\), for any \(f\in A^{2}\big{(}D_{j},2m\Phi^{D_{j}}_{o,\max}\big{)}\), we have \(f\in\mathcal{O}(D_{j})\cap L^{2}(D_{j})\), \((f,o)\in\mathcal{I}\big{(}m\Phi^{D_{j}}_{o,\max}\big{)}_{o}\) and \[\int_{D_{j}}|f|^{2}\leq\int_{D_{j}}|f|^{2}e^{-2m\Phi^{D_{j}}_{o,\max}}.\] Thus we obtain \(h_{m,D_{j}}\leq\varphi_{m,D_{j}}\). For fixed \(z\in D\), let \(k\) be a negative integer with \(|k|\) sufficiently large such that \(z\in D_{k}:=\{z\in D:\varrho(z)<1/k\}\). Note that \(D_{k}\) is a bounded strictly hyperconvex domain and \(D_{k}\subset\subset D\) for \(k<0\). Denote \[\Phi^{D_{k}}_{o,\max}(w):=\sup\left\{\phi(w):\phi\in\mathrm{PSH}^{-}(D_{k}),\, (f_{0},o)\notin\mathcal{I}(\varphi_{0}+\phi)_{o},\;\phi\geq\Phi^{D}_{o,\max}+O (1)\right\}\] for \(w\in D_{k}\). 
Then \(\Phi^{D_{k}}_{o,\max}\) is a global Zhou weight related to \(|f_{0}|^{2}e^{-\varphi_{0}}\) on \(D_{k}\) near \(o\) satisfying that \(\Phi^{D_{k}}_{o,\max}=\Phi^{D}_{o,\max}+O(1)\) near \(o\) (see Lemma 8.5 and Lemma 8.7), and \[\Phi^{D}_{o,\max}(w^{\prime}):=\sup\left\{\phi(w^{\prime}):\phi\in\mathrm{PSH }^{-}(D),\,(f_{0},o)\notin\mathcal{I}(\varphi_{0}+\phi)_{o},\;\phi\geq\Phi^{D} _{o,\max}+O(1)\right\}\] for any \(w^{\prime}\in D\). Replacing \(D\) by \(D_{k}\) and \(\varrho\) by \(\varrho-1/k\) in Lemma 9.6, we get **Lemma 9.7**.: _For fixed \(z\in D\) and \(k<0\) with \(z\in D_{k}\), we have_ \[\varphi_{m}(z)\leq\phi_{m,D_{k}}(z)+\frac{\log(c\delta(k)^{-n})}{m},\,\,\forall m \geq 1,\] _where \(c\) is a constant only depending on \(n\), \(\delta(k)=\mathrm{dist}(\overline{D_{k}},\partial D)\), and for any \(w\in D_{k}\),_ \[\phi_{m,D_{k}}(w):=\sup\left\{\frac{1}{m}\log|f(w)|:f\in\mathcal{O}(D_{k}),\; \sup_{D_{k}}|f|\leq 1,\,\,(f,o)\in\mathcal{I}(m\Phi^{D_{k}}_{o,\max})_{o} \right\}.\] **Lemma 9.8**.: _If \(D\) is a bounded hyperconvex domain, then there exists a constant \(\mathsf{C}\) independent of \(m\) such that_ \[\phi_{m}\leq\frac{m-\mathsf{C}}{m}\Phi^{D}_{o,\max},\,\,\forall m\in\mathbb{N }_{+}.\] Proof.: Let \(f\in\mathscr{E}_{m}(D)\), then \(\log|f|\) is negative on \(D\) and \(c^{f}_{o}(\Phi^{D}_{o,\max})\geq m\). Theorem 1.11 shows that \[\sigma(\log|f|,\Phi^{D}_{o,\max})\geq m-\mathsf{C},\] where \(\mathsf{C}\) is a constant independent of \(f\) and \(m\). Now we get \[\log|f|\leq\sigma(\log|f|,\Phi^{D}_{o,\max})\Phi^{D}_{o,\max}\leq(m-\mathsf{C })\Phi^{D}_{o,\max},\,\,m>\mathsf{C}\] (the case \(m\leq\mathsf{C}\) is trivial), which implies \[\phi_{m}\leq\frac{m-\mathsf{C}}{m}\Phi^{D}_{o,\max},\forall m\in\mathbb{N}_{+}.\] ### Proofs of Theorem 1.20, Corollary 1.21 and Corollary 1.22 Now we give the proofs of Theorem 1.20, Corollary 1.21 and Corollary 1.22. Proof of Theorem 1.20.: Firstly, we prove the statement (2) in Theorem 1.20. It follows from Lemma 9.1(2), Lemma 7.2, Remark 7.3 and Lemma 9.6 that \[\phi_{m}\geq h_{m,D_{j}}+O(1)\geq\Phi_{o,\max}^{D_{j}}+O(1)=\Phi_{o,\max}^{D}+O (1)\] near \(o\). We also have \(\varphi_{m}\geq\Phi_{o,\max}^{D}+O(1)\) near \(o\) by Lemma 9.5. Thus, \[\sigma(\phi_{m},\Phi_{o,\max}^{D})\leq 1,\ \sigma(\varphi_{m},\Phi_{o,\max}^{D}) \leq 1.\] In addition, we get \[\sigma(\phi_{m},\Phi_{o,\max}^{D})\geq 1-\frac{\mathsf{C}}{m},\ \forall m\in \mathbb{N}_{+}\] according to Lemma 9.8 and \[\sigma(\varphi_{m},\Phi_{o,\max}^{D})\geq 1-\frac{\mathsf{C}}{m},\ \forall m\in \mathbb{N}_{+}\] according to Lemma 9.7 for some constant \(\mathsf{C}\) independent of \(m\). Next, we prove \[\lim_{m\to\infty}\phi_{m}(z)=\Phi_{o,\max}^{D}(z),\ \forall z\in D.\] According to Lemma 7.2 and Lemma 9.6, for any \(z\in D\) we have \[\Phi_{o,\max}^{D_{j}}(z)-\frac{C_{1}}{m}\leq h_{m,D_{j}}(z)\leq\phi_{m}+\frac {\log(c\delta(j)^{-n})}{m},\] where \(C_{1},c\) are constants independent of \(m\). 
Letting \(m\to\infty\), we deduce that \[\Phi_{o,\max}^{D_{j}}(z)\leq\liminf_{m\to\infty}\phi_{m}(z).\] Letting \(j\to\infty\) in the above inequality, by Lemma 9.3, we get \[\Phi_{o,\max}^{D}(z)\leq\liminf_{m\to\infty}\phi_{m}(z).\] On the other hand, letting \(m\to\infty\) in Lemma 9.8, we obtain \[\limsup_{m\to\infty}\phi_{m}(z)\leq\Phi_{o,\max}^{D}(z).\] Consequently, \[\lim_{m\to\infty}\phi_{m}(z)=\Phi_{o,\max}^{D}(z),\ \forall z\in D.\] Finally, we prove that for any \(z\in D\), \[\lim_{m\to\infty}\varphi_{m}(z)=\Phi_{o,\max}^{D}(z).\] Lemma 9.5 gives \[\liminf_{m\to\infty}\varphi_{m}(z)\geq\lim_{m\to\infty}\phi_{m}(z)=\Phi_{o, \max}^{D}(z).\] On the other hand, for \(k<0\) such that \(z\in D_{k}=\{\varrho<1/k\}\), by Lemma 9.7, \[\limsup_{m\to\infty}\varphi_{m}(z)\leq\liminf_{m\to\infty}\phi_{m,D_{k}}(z).\] It follows the previous result of this proof that \[\lim_{m\to\infty}\phi_{m,D_{k}}(z)=\Phi_{o,\max}^{D_{k}}(z),\] thus \[\limsup_{m\to\infty}\varphi_{m}(z)\leq\Phi_{o,\max}^{D_{k}}(z)\] for all \(k<0\) with \(|k|\gg 1\). Letting \(k\to-\infty\) and using Lemma 9.4, we obtain \[\limsup_{m\to\infty}\varphi_{m}(z)\leq\Phi^{D}_{o,\max}(z).\] Thus, \[\lim_{m\to\infty}\varphi_{m}(z)=\Phi^{D}_{o,\max}(z),\ \forall z\in D,\] which completes the proof. In the following, we prove Corollary 1.21 and Corollary 1.22. Proof of Corollary 1.21.: Denote \[\mathscr{E}(D):=\big{\{}f\in\mathcal{O}(D):\sup_{D}|f|\leq 1,\ f(o)=0,\ f\not \equiv 0\big{\}}.\] Then for any \(f\in\mathscr{E}(D)\), \(\log|f|\) is negative on \(D\). As \(\Phi^{D}_{o,\max}\) is a global Zhou weight, we have \[\frac{\log|f(w)|}{\sigma(\log|f|,\Phi^{D}_{o,\max})}\leq\Phi^{D}_{o,\max}(w),\] for any \(f\in\mathscr{E}(D)\) and any \(w\in D\). On the other hand, Theorem 1.20 gives \[\lim_{m\to\infty}\phi_{m}(w)=\Phi^{D}_{o,\max}(w),\ \forall w\in D.\] By the compactness of \(\mathscr{E}_{m}(D)\) and Montel's theorem, for fixed \(w\in D\), there exists \(F_{m,w}\in\mathscr{E}_{m}(D)\setminus\{0\}\subset\mathscr{E}(D)\) such that \[\phi_{m}(w)=\frac{1}{m}\log|F_{m,w}(w)|.\] Since \(c^{F_{m,w}}_{o}(\Phi^{D}_{o,\max})\geq m\), Theorem 1.11 shows that \[\sigma(\log|F_{m,w}|,\Phi^{D}_{o,\max})\geq m-\mathsf{C},\] where \(\mathsf{C}\) is a constant independent of \(m\). Thus \[\limsup_{m\to\infty}\frac{\log|F_{m,w}(w)|}{\sigma(\log|F_{m,w}|,\Phi^{D}_{o, \max})}\geq\limsup_{m\to\infty}\frac{m}{m-\mathsf{C}}\phi_{m}(w)=\Phi^{D}_{o, \max}(w),\] which completes the proof. Proof of Corollary 1.22.: Denote \[\mathcal{S}\big{(}\Phi^{D}_{o,\max}\big{)}:=\big{\{}\phi\in\mathrm{PSH}^{-}(D ):\phi\sim_{\mathcal{I}}\Phi^{D}_{o,\max}\text{ at }o\big{\}}.\] Then \(\Phi^{D}_{o,\max}\in\mathcal{S}\big{(}\Phi^{D}_{o,\max}\big{)}\). Take any \(\varphi\in\mathcal{S}\big{(}\Phi^{D}_{o,\max}\big{)}\). For any \(w\in D\), denote \[\tilde{\varphi}_{m}(w):=\sup\left\{\frac{1}{m}\log|f(w)|:f\in\mathcal{O}(D), \ \|f\|_{D}\leq 1,\ (f,o)\in\mathcal{I}(m\varphi)_{o}\right\}.\] Since \(\varphi\sim_{\mathcal{I}}\Phi^{D}_{o,\max}\) at \(o\), using Theorem 1.20, we can get \[\lim_{m\to\infty}\tilde{\varphi}_{m}(w)=\Phi^{D}_{o,\max}(w),\ \forall w\in D.\] Since \(\varphi\) is negative, we can see \[\tilde{\varphi}_{m}(w)\geq h_{m}(w),\ \forall w\in D,\] where \[h_{m}(w):=\sup\left\{\frac{1}{m}\log|f(w)|:f\in\mathcal{O}(D),\ \int_{D}|f|^{2}e^{-2m\varphi}\leq 1 \right\}.\] Lemma 7.2 gives \[\tilde{\varphi}_{m}(z)\geq h_{m}(z)\geq\varphi(z)-\frac{C_{1}}{m}\] for some \(C_{1}>0\) independent of \(m\). 
Letting \(m\to\infty\), we deduce that \(\Phi^{D}_{o,\max}(z)\geq\varphi(z)\) for any \(z\in D\). Thus we complete the proof. ## 10. Appendix: proofs of Lemma 7.4 and Lemma 7.5 We prove Lemma 7.4 and Lemma 7.5 in this section. Firstly, we present some lemmas. **Lemma 10.1**.: _Let \(\varphi\) be a plurisubharmonic function near \(o\), and let \(f\) be a holomorphic function near \(o\). Assume that \(c_{o}^{f}(\varphi)=1\), then \(\lim_{m\to+\infty}c_{o}^{f}(\varphi_{m})=1\), where_ \[\varphi_{m}:=\frac{1}{2m}\log\sum_{1\leq l\leq k_{m}}|f_{m,l}|^{2},\] _and \(\{f_{m,1},\ldots,f_{m,k_{m}}\}\) is the generators set of \(\mathcal{I}(m\varphi)_{o}\)._ Proof.: As \(\{f_{m,1},\ldots,f_{m,k_{m}}\}\) is the generators set of \(\mathcal{I}(m\varphi)_{o}\), there exists a neighborhood \(U\) of \(o\) such that \[\int_{U}e^{-2\varphi}-e^{-2\max\big{\{}\varphi,\frac{m+1}{m} \varphi_{m+1}\big{\}}}\] \[\leq \int_{U\cap\big{\{}\frac{m+1}{m}\varphi_{m+1}>\varphi\big{\}}}e^{ -2\varphi}\] \[\leq \int_{U}e^{2(m+1)\varphi_{m+1}-2(m+1)\varphi}\] \[= \int_{U}\sum_{1\leq l\leq k_{m+1}}|f_{m+1,l}|^{2}e^{-2(m+1)\varphi}\] \[<+\infty.\] Theorem 2.8 shows that \(|f|^{2}e^{-2\varphi}\) is not integrable near \(o\). Then \(|f|^{2}e^{-2\max\{\varphi,\frac{m+1}{m}\varphi_{m+1}\}}\) is not integrable near \(o\), i.e., \(c_{o}^{f}\big{(}\max\{\varphi,\frac{m+1}{m}\varphi_{m+1}\}\big{)}\leq 1\), which implies that \[c_{o}^{f}\left(\frac{m+1}{m}\varphi_{m+1}\right)\leq 1.\] It follows from Demailly's approximation theorem (Lemma 7.2) that \(\varphi_{m}\geq\varphi+O(1)\) near \(o\), which implies that \[c_{o}^{f}(\varphi_{m})\geq c_{o}^{f}(\varphi)=1\] for any \(m\). Thus, we have \(\lim_{m\to+\infty}c_{o}^{f}(\varphi_{m})=1\). The following lemma will be used in the proof of Lemma 10.4. **Lemma 10.2** (see [23]).: \((z_{1}^{\alpha_{1}}\cdots z_{n}^{\alpha_{n}},o)\in\mathcal{I}\big{(}\log\max \{|z_{1}|^{b_{1}},\ldots,|z_{n}|^{b_{n}}\}\big{)}_{o}\) _if and only if \(\sum\limits_{1\leq j\leq n}\frac{\alpha_{j}+1}{b_{j}}>1\)._ We recall the following desingularization theorem due to Hironaka. **Theorem 10.3** ([18], see also [3]).: _Let \(X\) be a complex manifold, and \(M\) be an analytic sub-variety in \(X\). Then there is a local finite sequence of blow-ups \(\mu_{j}:X_{j+1}\to X_{j}\)\((X_{1}:=X,j=1,2,...)\) with smooth centers \(S_{j}\) such that:_ \((1)\) _Each component of \(S_{j}\) lies either in \((M_{j})_{\rm sing}\) or in \(M_{j}\cap E_{j}\), where \(M_{1}:=M\), \(M_{j+1}\) denotes the strict transform of \(M_{j}\) by \(\mu_{j}\), \((M_{j})_{\rm sing}\) denotes the singular set of \(M_{j}\), and \(E_{j+1}\) denotes the exceptional divisor \(\mu_{j}^{-1}(S_{j}\cup E_{j})\);_ \((2)\) _Let \(M^{\prime}\) and \(E^{\prime}\) denote the final strict transform of \(M\) and the exceptional divisor respectively. Then:_ \((a)\) _The underlying point-set_ \(|M^{\prime}|\) _is smooth;_ \((b)\)__\(|M^{\prime}|\)_and_ \(E^{\prime}\) _simultaneously have only normal crossings._ The \((b)\) in the above theorem means that, local, there is a coordinate system in which \(E^{\prime}\) is a union of coordinate hyperplanes and \(|M^{\prime}|\) is a coordinate subspace. We prove Lemma 7.5 by using Theorem 10.3. Proof of Lemma 7.5.: Let \(D\) be a small neighborhood of \(o\). 
Using Theorem 10.3, there is a proper holomorphic map \(\mu:\widetilde{D}\to D\), which is local a finite composition of blow-ups with smooth centers, such that \[u\circ\mu(w)=c_{1}\log|w^{a}|+\tilde{u}(w)\] on \(W\), where \(a=(a_{1},\ldots,a_{n})\in\mathbb{Z}_{\geq 0}^{n}\) and \((W;w_{1},\ldots,w_{n})\) is a coordinate ball centered at \(\tilde{z}\in\widetilde{D}\) satisfying that \(W\Subset\widetilde{D}\). As \(v\circ\mu\leq(1-\epsilon)u\circ\mu+O(1)\) near \(\tilde{z}\) for any \(\epsilon>0\), it follows from Siu's decomposition theorem (see [41, 12]) that \[v\circ\mu\leq c_{1}\log|w^{a}|+O(1)=u\circ\mu+O(1)\] near \(\tilde{z}\), which implies that \(v\leq u+O(1)\) near \(o\) as \(\mu\) is proper. Thus, Lemma 7.5 holds. We prove a special case of Lemma 7.4. **Lemma 10.4**.: _Let \(\varphi:=c\log\sum_{1\leq j\leq m}|f_{j}|^{2}\) near \(o\), where \(c>0\) and \(f_{j}\) is a holomorphic function near \(o\) for any \(1\leq j\leq m\). Then_ \[\lim_{N\to+\infty}c_{o}^{f}\big{(}\max\{\varphi,N\log|z|\}\big{)}=c_{o}^{f}(\varphi)\] _for any holomorphic function \(f\) near \(o\)._ Proof.: Without loss of generality, assume that \(c_{o}^{f}(\varphi)<1\). Let \(D\) be a small neighborhood of \(o\), and \[Y:=\bigcap_{j}\big{\{}z\in D:f_{j}(z)=0\big{\}}\] be an analytic sub-variety in \(D\). Using Theorem 10.3, we get a proper holomorphic map \(\mu:\widetilde{D}\to D\) (local a finite composition of blow-ups with smooth centers), which satisfies that, local, there is a coordinate system in which the strict transform \(\tilde{Y}\) of \(Y\) is a coordinate hyperplane and the exceptional divisor \(E^{\prime}\) is a union of coordinate hyperplanes. Denote that \(X=\mu^{-1}(o)\). As \(c_{o}^{f}(\varphi)<1\), there exists \(\tilde{z}\in X\) and \(c_{1}<1\) such that \[\int_{U_{\tilde{z}}}\mu^{*}\big{(}|f|^{2}e^{-2c_{1}\varphi}\wedge_{1\leq j\leq n }\sqrt{-1}dz_{j}\wedge d\tilde{z}_{j}\big{)}=+\infty \tag{10.1}\] for any neighborhood of \(U_{\tilde{z}}\). Let \((W;w_{1},\ldots,w_{n})\) be a coordinate ball centered at \(\tilde{z}\) satisfying that \(W\Subset\tilde{M}\), \(w^{b}=0\) is the zero divisor of the Jacobian \(J_{\mu}\) (of \(\mu\)) and \[\varphi\circ\mu(w)=c_{2}\log|w^{a}|^{2}+\tilde{u}(w)\] on \(W\), where \(\tilde{u}\in\mathcal{C}^{\infty}(\overline{W})\), \(w^{a}:=\prod_{j=1}^{n}w_{j}^{a_{j}}\) and \(w^{b}:=\prod_{j=1}^{n}w_{j}^{b_{j}}\). Note that \(\mu^{*}(f)=\sum_{\alpha\in\mathbb{Z}_{\geq 0}^{n}}a_{\alpha}w^{\alpha}\) near \(\tilde{z}\). It follows from inequality (10.1) that there exists \(a_{\dot{\alpha}}\neq 0\) such that \[|w^{\dot{\alpha}+b}|^{2}e^{-2c_{1}\varphi\circ\mu}\] is not integrable near \(\tilde{z}\), which implies that there exists \(j_{0}\in\{1,\ldots,n\}\) (without loss of generality, assume that \(j_{0}=1\)) such that \[\hat{\alpha}_{1}+b_{1}+1\leq c_{1}c_{2}a_{1}.\] Noting that \(c_{1}<1\), there exists \(N\gg 0\) such that \[\frac{\hat{\alpha}_{1}+b_{1}+1}{c_{2}a_{1}}+\sum_{2\leq j\leq n}\frac{\hat{ \alpha}_{j}+b_{j}+1}{N}<1,\] then it follows from Lemma 10.2 that \[|w^{\hat{\alpha}+b}|^{2}e^{-2\log\max\{|w_{1}|^{c_{2}a_{1}},|w_{2}|^{N},|w_{3 }|^{N},\ldots,|w_{n}|^{N}\}}\] is not integrable near \(\tilde{z}\). Denote that \(U_{0}:=\{|w|<r\}\) (\(r>0\) is sufficiently small). 
We have

\[\int_{U_{0}}\mu^{*}\big(|f|^{2}e^{-2\max\{\varphi,N\log|z|\}}\big)|J_{\mu}|^{2}\geq C_{1}\int_{U_{0}}|w^{\hat{\alpha}+b}|^{2}e^{-2\max\{c_{2}\log|w^{a}|^{2},N\log|w|\}}\geq C_{2}\int_{U_{0}}|w^{\hat{\alpha}+b}|^{2}e^{-2\log\max\{|w_{1}|^{c_{2}a_{1}},|w_{2}|^{N},|w_{3}|^{N},\ldots,|w_{n}|^{N}\}}=+\infty,\]

which implies that \(|f|^{2}e^{-2\max\{\varphi,N\log|z|\}}\) is not integrable near \(o\), i.e.,

\[c_{o}^{f}\big(\max\{\varphi,N\log|z|\}\big)\leq 1.\]

Summarizing, we have shown that \(c_{o}^{f}(\varphi)<1\) implies the existence of \(N\gg 0\) such that

\[c_{o}^{f}\big(\max\{\varphi,N\log|z|\}\big)\leq 1.\]

Since \(t\varphi\) is again of the form considered in the lemma for any \(t>0\), applying this implication to \(t\varphi\) with \(t>c_{o}^{f}(\varphi)\) (so that \(c_{o}^{f}(t\varphi)=c_{o}^{f}(\varphi)/t<1\)) yields \(c_{o}^{f}\big(\max\{\varphi,N\log|z|\}\big)\leq t\) for \(N\) large enough. Noting that \(c_{o}^{f}\big(\max\{\varphi,N\log|z|\}\big)\geq c_{o}^{f}(\varphi)\), we obtain

\[\lim_{N\to+\infty}c_{o}^{f}\big(\max\{\varphi,N\log|z|\}\big)=c_{o}^{f}(\varphi)\]

for any holomorphic function \(f\) near \(o\). 

Now, we prove Lemma 7.4.

Proof of Lemma 7.4.: Let \(f\not\equiv 0\) be a holomorphic function near \(o\). If \(c_{o}^{f}(\varphi)=0\), then \(\varphi\equiv-\infty\) near \(o\), and Lemma 7.4 holds clearly. In the following, we assume that \(c_{o}^{f}(\varphi)=1\). Lemma 10.1 tells us that

\[\lim_{m\to+\infty}c_{o}^{f}(\varphi_{m})=1,\]

where \(\varphi_{m}:=\frac{1}{2m}\log\sum_{1\leq l\leq k_{m}}|f_{m,l}|^{2}\) and \(\{f_{m,1},\ldots,f_{m,k_{m}}\}\) is a set of generators of \(\mathcal{I}(m\varphi)_{o}\). Note that \(\varphi_{m}\geq\varphi+O(1)\) near \(o\) by Lemma 7.2, which shows that

\[c_{o}^{f}(\varphi_{m})\geq c_{o}^{f}(\varphi)=1\]

for any \(m\). For any \(\epsilon>0\), there exists \(m>0\) such that

\[c_{o}^{f}(\varphi_{m})\in[1,1+\epsilon);\]

then it follows from Lemma 10.4 that there exists \(N_{m}>0\) such that

\[c_{o}^{f}\big(\max\{\varphi_{m},N_{m}\log|z|\}\big)\in[1,1+2\epsilon).\]

As \(\varphi_{m}\geq\varphi+O(1)\) near \(o\), we have

\[c_{o}^{f}\big(\max\{\varphi,N_{m}\log|z|\}\big)\in[1,1+2\epsilon).\]

Thus, we have

\[\lim_{N\to+\infty}c_{o}^{f}\big(\max\{\varphi,N\log|z|\}\big)=1=c_{o}^{f}(\varphi).\]

Lemma 7.4 has been proved. 

_Acknowledgements._ The authors would like to thank Professor Xiangyu Zhou for helpful discussions and sincere encouragement. The second author would like to thank Professor Mattias Jonsson for helpful discussions, and Professor Chenyang Xu for sharing his recent work and for helpful discussions. The second author was supported by National Key R&D Program of China 2021YFA1003100, NSFC-11825101, NSFC-11522101 and NSFC-11431013, and by the National Science Foundation Grant No. DMS-163852 and the Ky Fan and Yu-Fen Fan Membership Fund. The fourth author was supported by China Postdoctoral Science Foundation BX20230402.
2309.16469
Functional Renormalization Group for fermions on a one dimensional lattice at arbitrary filling
A formalism based on the fermionic functional-renormalization-group approach to interacting electron models defined on a lattice is presented. One-loop flow equations for the coupling constants and susceptibilities in the particle-particle and particle-hole channels are derived in weak-coupling conditions. It is shown that lattice effects manifest themselves through the curvature of the spectrum and the dependence of the coupling constants on momenta. This method is then applied to the one-dimensional extended Hubbard model; we thoroughly discuss the evolution of the phase diagram, and in particular the fate of the bond-centered charge-density-wave phase, as the system is doped away from half-filling. Our findings are compared to the predictions of the field-theory continuum limit and available numerical results.
Lucas Désoppi, Nicolas Dupuis, Claude Bourbonnais
2023-09-28T14:35:13Z
http://arxiv.org/abs/2309.16469v4
**Functional renormalization group for fermions on a one-dimensional lattice at arbitrary filling**

## Abstract

**A formalism based on the fermionic functional-renormalization-group approach to interacting electron models defined on a lattice is presented. One-loop flow equations for the coupling constants and susceptibilities in the particle-particle and particle-hole channels are derived in weak-coupling conditions. It is shown that lattice effects manifest themselves through the curvature of the spectrum and the dependence of the coupling constants on momenta. This method is then applied to the one-dimensional extended Hubbard model; we thoroughly discuss the evolution of the phase diagram, and in particular the fate of the bond-centered charge-density-wave phase, as the system is doped away from half-filling. Our findings are compared to the predictions of the field-theory continuum limit and available numerical results.**

###### Contents

* 1 Introduction
* 2 FRG for the extended Hubbard Model at arbitrary filling
  * 2.1 One-dimensional Extended Hubbard Model
  * 2.2 One-loop flow equations
  * 2.3 Recovery of the \(g\)-ology continuum model
    * 2.3.1 Half-filling
    * 2.3.2 Away from half-filling
  * 2.4 Lattice effects and low-energy limit
* 3 Lattice model: results and discussion
  * 3.1 Half-filled case
  * 3.2 Away from half-filling
    * 3.2.1 Small doping
    * 3.2.2 Intermediate doping
    * 3.2.3 Large doping
* 4 Conclusions and perspectives
* A Flows of coupling constants
  * A.1 Finite-temperature, one-dimensional, single-band systems
  * A.2 Loop expressions
  * A.3 Choice of the regulator

## 1 Introduction

The theory of interacting fermions in one spatial dimension provides the best understood examples of models whose asymptotic low-energy behavior distinctively deviates from that of a Fermi liquid, as commonly found in Fermi systems in higher dimensions. The power-law decay of quasi-particle excitations and of correlation functions is governed by non-universal exponents characterized by very few hydrodynamic and interaction-dependent parameters, which separate into spin and charge bosonic entities for spin-\(\frac{1}{2}\) fermions [1, 2, 3, 4, 5, 6]. Such distinctive features form the basis of the Luttinger liquid (LL) fixed-point phenomenology [7]. This is asymptotically accurate in the low-energy (scaling) limit, namely when the fermion spectrum can be considered strictly linear around the Fermi points and when interactions projected on those points are considered momentum independent. These assumptions are well known to be at the core of the field-theory, or continuum, \(g\)-ology models of the interacting Fermi gas. The fixed-point behavior of a linear LL proves to be generic for gapless branches of excitations of most models of interacting fermions in one dimension. As one moves away in energy from the Fermi points, the spectrum in practice develops some curvature. Deviations from linearity, alongside the momentum dependence of interactions, although irrelevant in the renormalization-group (RG) sense [7], were shown to modify the finite-energy spectral properties predicted by the linear LL theory. Formulated in terms of an effective x-ray edge problem [8], the coupling of particles to a continuum of higher-energy states is found to alter the power-law profiles of spectral lines near their absorption edges. These non-linear LL effects could be rigorously checked in the case of integrable spinless-fermion models defined on a lattice [9, 10, 11, 12].
Noticeable limitations of the linear \(g\)-ology mappings of non-integrable lattice models could also be found in the calculation of the singular correlations that enter the determination of their phase diagrams. This has been best exemplified in the case of the one-dimensional extended Hubbard model (EHM) for spin-\(\frac{1}{2}\) fermions, which will serve here as the reference lattice model for the RG method developed in the present work. At half-filling, numerical calculations soon identified a shift of the continuous transition line connecting charge- and spin-density-wave states [13, 14], a line that the continuum \(g\)-ology theory predicts to be the gapless separatrix \(U=2V\), where \(U\) and \(V\) are the local and nearest-neighbor interaction parameters of the EHM. The origin of this alteration has resisted, at least in weak coupling, all attempts at an explanation formulated in the framework of the linear \(g\)-ology theory [15, 16]. Using exact diagonalizations, Nakamura showed later on that the shift reflects the incursion of a distinct phase, known as the bond-centered charge-density-wave (BOW) phase. The BOW phase is entirely gapped in both spin and charge sectors and extends across a finite region on both sides of the \(U=2V\) line of the phase diagram in weak coupling [17, 18]. This was subsequently confirmed numerically both by quantum Monte Carlo [19, 20] and density-matrix RG methods [21, 22, 23]. On analytical grounds, Tsuchiizu and Furusaki showed from perturbation theory that, by taking into account the momentum-dependent fermion-fermion scattering processes at high energy, that is, beyond the linear region, one can define, at some arbitrarily chosen lower energy, an effective weak-coupling linear \(g\)-ology model, but with a modified and enlarged set of input parameters [24]. The modification is such that it allows the emergence of a BOW phase in the \(U=2V\) weak-coupling sector of the phase diagram [25, 26]. Using a functional fermionic RG approach at the one-loop level, Tam _et al._ [27] pointed out that, by integrating out numerically all the scattering processes for a discrete set of fermion momenta along the tight-binding spectrum in the Brillouin zone, the existence of a BOW phase can be found in the \(U=2V\) weak-coupling region of the EHM phase diagram at half-filling. Menard _et al._ [28] thereafter formulated an RG transformation for half-filled tight-binding fermions in the Wilsonian scheme [29], in which irrelevant interaction terms can be classified from the momentum dependence of non-local scattering amplitudes away from the Fermi surface. Their impact on the low-energy RG flow has borne out the presence of the BOW ordered phase where it is expected in the EHM phase diagram at weak coupling, alongside shifts of some other transition lines where accidental symmetries are known to occur in the continuum \(g\)-ology limit. These RG results were, strictly speaking, limited to the EHM at half-filling. The main motivation of the present work is to propose a more general formulation of the weak-coupling RG method for the determination of quantum phases in lattice models of interacting fermions in one dimension away from half-filling. The method developed below can in principle apply to any form of non-linear spectrum and momentum-dependent interactions in models with fermion density away from half-filling. To achieve this program, we shall opt for the functional RG in the so-called one-particle-irreducible scheme [30, 31].
The one-loop RG equations for the momentum-dependent four-point vertices are expanded up to second order in the energy difference of the tight-binding spectrum from the Fermi level, which acts as the scaling variable in the power-counting classification of marginal and irrelevant interaction terms. From the calculation of the most singular susceptibilities, the phase diagram of the EHM can be mapped out. At half-filling, the results confirm previous RG calculations for the existence of a fully gapped BOW phase overlapping the \(U=2V\) line of the \(g\)-ology limit, and bear out the shift of other transition lines between different ground states, in agreement with numerical results [18]. In both situations, the role of the spectrum and of the irrelevant interaction terms in the qualitative change of initial conditions for an effective linear continuum theory in the low-energy limit can be confirmed. The method is then carried out away from half-filling: the region of the dominant gapped BOW state is found to gradually shrink in size and is ultimately suppressed as a function of doping. The whole phase diagram then evolves towards an incommensurate situation, but one in which noticeable modifications of the stability regions of quantum states, as predicted by the \(g\)-ology continuum model, are found. The integration of high-energy electronic states in the particle-hole-asymmetric, non-linear part of the spectrum reveals the existence of logarithmic screening effects, which act at lower energy as an important factor promoting singlet superconductivity or, inversely, antiferromagnetism against the charge-density-wave state. 

The paper is organized as follows. In Sec. 2 the fRG method is introduced and the flow equations of the couplings and of various susceptibilities are derived at the one-loop level. In this framework, known results of the EHM phase diagram in the limit of the continuum \(g\)-ology model, at and away from half-filling, are recovered. In Sec. 3, we broaden the formulation of the fRG to include the tight-binding spectrum and the momentum-dependent interactions of the EHM, as actually defined on a lattice. The one-loop flow equations are derived for the marginal scattering amplitudes and, up to second order, for the set of irrelevant ones. The phase diagrams at and away from half-filling are obtained, and their comparison with the \(g\)-ology limit is analyzed and critically discussed. We conclude this work in Sec. 4.

## 2 FRG for the extended Hubbard Model at arbitrary filling

### One-dimensional Extended Hubbard Model

The 1D extended Fermi-Hubbard model (EHM) is defined by the Hamiltonian (in this paper, units are taken such that \(k_{\mathrm{B}}=\hbar=1\) and the lattice constant \(a=1\))

\[\mathcal{H}=-t\sum_{i,\,\sigma}\bigl(c_{i,\,\sigma}^{\dagger}c_{i+1,\,\sigma}+\mathrm{H.c.}\bigr)+U\sum_{i}n_{i,\uparrow}n_{i,\downarrow}+V\sum_{i}n_{i}n_{i+1}, \tag{1}\]

describing electrons moving on a lattice with a hopping amplitude \(t>0\) and experiencing on-site and nearest-neighbor interactions with strengths \(U\) and \(V\), respectively. In Eq. (1), \(i\) denotes the site index, \(\sigma=\uparrow,\downarrow\) is the spin index, \(n_{i,\,\sigma}=c_{i,\,\sigma}^{\dagger}c_{i,\,\sigma}\) and \(n_{i}=n_{i,\uparrow}+n_{i,\downarrow}\) is the number of electrons at site \(i\).
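Although all results below are obtained with the fRG, the Hamiltonian (1) is simple enough to be constructed explicitly on a small chain, which is a convenient way of fixing conventions. The following Python sketch is our own illustration, not part of the paper's method; the chain length and coupling values are arbitrary choices:

```python
import numpy as np

# Exact construction of the EHM Hamiltonian, Eq. (1), on a small open chain.
# Basis: integer bitmasks; bits 0..L-1 hold spin-up, bits L..2L-1 spin-down.
L, t, U, V = 4, 1.0, 2.0, 1.0
dim = 4**L

def occ(state, site, spin):
    """Occupation number n_{site,spin} of a basis state."""
    return (state >> (site + spin * L)) & 1

H = np.zeros((dim, dim))
for s in range(dim):
    n = [occ(s, i, 0) + occ(s, i, 1) for i in range(L)]
    # Diagonal part: U n_up n_dn + V n_i n_{i+1}
    H[s, s] += U * sum(occ(s, i, 0) * occ(s, i, 1) for i in range(L))
    H[s, s] += V * sum(n[i] * n[i + 1] for i in range(L - 1))
    # Hopping -t (c^dag_{i,s} c_{i+1,s} + H.c.); for nearest neighbors and
    # this mode ordering the Jordan-Wigner sign is +1.
    for spin in range(2):
        for i in range(L - 1):
            b1, b2 = 1 << (i + spin * L), 1 << (i + 1 + spin * L)
            if (s & b1) and not (s & b2):   # particle hops i -> i+1
                s2 = s ^ b1 ^ b2
                H[s2, s] += -t
                H[s, s2] += -t              # Hermitian-conjugate process

print("ground-state energy:", np.linalg.eigvalsh(H)[0])
```

With this mode ordering, a nearest-neighbor hop crosses no other occupied mode within its spin sector, so no explicit fermionic string is needed; longer-range hopping would require one.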
The one-particle states have energies \(\varepsilon(k)=-2t\cos(k)\) with wave vector \(k\) of the tight-binding form, so that, measured from the Fermi level, they lie in the interval \(-2t-\mu\leqslant\xi=\varepsilon-\mu\leqslant 2t-\mu\), where \(\mu\) is the chemical potential. The tight-binding spectrum \(\varepsilon(k)\) is shown in Fig. 1. The corresponding density of states reads

\[\mathcal{N}(\xi)=\frac{\Theta(2t-|\mu+\xi|)}{2\pi\sqrt{t^{2}-(\xi+\mu)^{2}/4}}, \tag{2}\]

where \(\Theta(x)\) is the Heaviside step function. It will indeed be useful to write the density of states for an arbitrary value of \(\xi\), because in the RG flow the momentum shell corresponding to the integration of the degrees of freedom will be taken at equal distance from the Fermi level for the empty and the occupied states (see Fig. 1). By definition, the Fermi level is related to the Fermi wave vector \(k_{\mathrm{F}}\), defined such that \(\varepsilon(k_{\mathrm{F}})=\mu\). One can also define the Fermi velocity

\[\nu_{\mathrm{F}}=\left.\frac{\partial\varepsilon}{\partial k}\right|_{k_{\mathrm{F}}}=2t\sin(k_{\mathrm{F}}). \tag{3}\]

Let \(n\) be the fermion filling number; obviously, \(0\leqslant n\leqslant 2\). This number is directly given by an integration of the density of states up to the Fermi level:

\[n=2\int\limits_{-\infty}^{0}\mathrm{d}\xi\,\mathcal{N}(\xi)=2\int\limits_{-k_{\mathrm{F}}}^{+k_{\mathrm{F}}}\frac{\mathrm{d}k}{2\pi}=\frac{2k_{\mathrm{F}}}{\pi}, \tag{4}\]

which leads to the simple relations

\[k_{\mathrm{F}}=\frac{\pi}{2}n,\qquad\mu=-2\cos(\pi n/2), \tag{5}\]

where from now on \(\mu\) is expressed in units of \(t\). In reciprocal space, the Hamiltonian of the EHM is written as

\[H=\sum_{k,\,\sigma}\bigl(\varepsilon(k)-U/2\bigr)c_{k,\,\sigma}^{\dagger}c_{k,\,\sigma}+\frac{\pi\nu_{\mathrm{F}}}{2L}\sum_{\{k,\sigma\}}g_{k_{1},k_{2},k_{1}^{\prime}}c_{k_{1}^{\prime},\sigma_{1}}^{\dagger}c_{k_{2}^{\prime},\sigma_{2}}^{\dagger}c_{k_{2},\,\sigma_{2}}c_{k_{1},\sigma_{1}}\delta_{k_{1}+k_{2}-k_{1}^{\prime}-k_{2}^{\prime}}^{\mathrm{RL}}, \tag{6}\]

where \(\delta^{\mathrm{RL}}\) denotes the momentum-conservation condition on the lattice (RL stands for reciprocal lattice):

\[\delta_{k}^{\mathrm{RL}}=\sum_{n=-\infty}^{+\infty}\delta_{k,2\pi n}, \tag{7}\]

and the dimensionless coupling constants are given by

\[g_{k_{1},k_{2},k_{1}^{\prime}}=\frac{U}{\pi\nu_{\mathrm{F}}}+\frac{2V}{\pi\nu_{\mathrm{F}}}\cos(k_{1}-k_{1}^{\prime}). \tag{8}\]

Figure 1: The tight-binding spectrum of the EHM model. Here \(\Lambda_{0}\) is the initial cutoff (\(\Lambda_{0}=2t+|\mu|\)), i.e., half the bandwidth plus \(|\mu|\), and \(\Lambda_{1}\) is the cutoff value at some intermediate step of the RG flow. The right panel shows the density of states \(\mathcal{N}(\xi)\) as a function of energy, with the van Hove singularities at the band edges.

### One-loop flow equations

The EHM is studied with the functional RG.
We first recast the partition function of the model into a field-theory setting at finite temperature \(T=1/\beta\), by means of a functional integral over a Grassmannian field \(\varphi\):

\[\mathcal{Z}=\operatorname{Tr}\mathrm{e}^{-\beta(\mathcal{H}-\mu\mathcal{N})}=\int\mathcal{D}[\varphi]\mathrm{e}^{-\mathcal{S}[\varphi]}, \tag{9}\]

where the action \(\mathcal{S}[\varphi]\) takes the form

\[\mathcal{S}[\varphi]=-[G^{0}]^{-1}_{a^{\prime}a}\bar{\varphi}_{a^{\prime},\sigma}\varphi_{a,\sigma}+\frac{V_{a^{\prime}_{1}a^{\prime}_{2}a_{2}a_{1}}}{4!}\bar{\varphi}_{a^{\prime}_{1},\sigma_{1}}\bar{\varphi}_{a^{\prime}_{2},\sigma_{2}}\varphi_{a_{2},\sigma_{2}}\varphi_{a_{1},\sigma_{1}}, \tag{10}\]

where the index \(a\) carries all the relevant information about the momentum \(k\) and the fermionic Matsubara frequency \(\omega_{n}=(2n+1)\pi T\), and an implicit sum over repeated indices is assumed. In what follows, we will restrict ourselves to single-band models; hence we make the replacement

\[a\to(\omega_{n},k). \tag{11}\]

The first term in the action involves the free propagator \(G^{0}\), which is diagonal in reciprocal space,

\[\left[G^{0}\right]^{-1}_{a^{\prime}a}=\left(\mathrm{i}\omega_{n}-\xi(k)\right)\delta^{\mathrm{RL}}_{k^{\prime}-k}. \tag{12}\]

The second term describes two-body interactions and takes the following form:

\[\frac{1}{4!}V_{a^{\prime}_{1}a^{\prime}_{2}a_{2}a_{1}}=\frac{\pi\nu_{\mathrm{F}}T}{L}g_{k_{1},k_{2},k^{\prime}_{1}}\delta^{\mathrm{RL}}_{k^{\prime}_{1}+k^{\prime}_{2}-k_{2}-k_{1}}, \tag{13}\]

where \(L\) is the total number of points of the one-dimensional lattice. A quadratic term is added to the action,

\[\mathcal{S}[\varphi]\to\mathcal{S}[\varphi]+\frac{1}{2}\bar{\varphi}_{a^{\prime},\sigma}R_{\Lambda,a^{\prime}a}\varphi_{a,\sigma}, \tag{14}\]

which regularizes the functional integral by suppressing the low-energy fluctuations. An anticommuting source field \(\eta,\bar{\eta}\) coupled to the fermion field is also included in the action, where it enters as \(\bar{\eta}_{a,\sigma}\varphi_{a,\sigma}+\eta_{a,\sigma}\bar{\varphi}_{a,\sigma}\). This gives the regularized generating functional of correlation functions \(\mathcal{Z}_{\Lambda}[\eta]\). The regularized effective action \(\Gamma_{\Lambda}[\phi]\) is then defined as the modified Legendre transform of the generating functional of connected correlation functions \(\mathcal{W}_{\Lambda}[\eta]=\log\mathcal{Z}_{\Lambda}[\eta]\):

\[\Gamma_{\Lambda}[\phi]+\mathcal{W}_{\Lambda}[\eta]=\bar{\eta}_{a,\sigma}\phi_{a,\sigma}+\eta_{a,\sigma}\bar{\phi}_{a,\sigma}-\frac{1}{2}\bar{\phi}_{a^{\prime},\sigma}R_{\Lambda,a^{\prime}a}\phi_{a,\sigma}. \tag{15}\]

The regularized effective action \(\Gamma_{\Lambda}[\phi]\) satisfies the Wetterich equation [32, 33, 34]

\[\partial_{\Lambda}\Gamma_{\Lambda}[\phi]=\frac{1}{2}\operatorname{Tr}\left\{\partial_{\Lambda}R_{\Lambda}\left(\Gamma^{(2)}_{\Lambda}[\phi]+R_{\Lambda}\right)^{-1}\right\}, \tag{16}\]

where \(\Gamma^{(2)}_{\Lambda}[\phi]\) is the second functional derivative of the effective action with respect to the field and \(R_{\Lambda}\) is the regulator. Additional source fields \(J\) can be added to the effective action in order to generate flow equations for the response functions. The idea is then to decompose \(\Gamma_{\Lambda}[\phi,J]\) as a sum of monomials \(\Gamma^{[n,p]}_{\Lambda}[\phi,J]\sim\phi^{n}J^{p}\), and to make identifications on both sides of the flow equation.
At the one-loop level, this procedure leads to flow equations for the coupling constants \(g\), the three-leg vertices \(Z\), and the susceptibilities \(\chi\). These equations have the familiar schematic form

\[\Lambda\partial_{\Lambda}g\sim\int\mathcal{L}gg,\quad\Lambda\partial_{\Lambda}Z\sim\int\mathcal{L}Zg,\quad\Lambda\partial_{\Lambda}\chi\sim\int\mathcal{L}ZZ, \tag{17}\]

where \(\Lambda\) selects the degrees of freedom that are integrated at the step \(\Lambda\). The corresponding one-loop diagrammatic contributions to the flow equations are shown in Figs. 2, 3 and 4, respectively.

Figure 2: One-loop flow equations of the coupling constants in diagrammatic form. Here a slashed line refers to a fermion line in the outer energy shell.

Figure 3: Corrections at the one-loop level for the flow equations of the three-leg vertices for charge/spin-density-wave and singlet/triplet pairing susceptibilities.

Figure 4: Flow equations for the charge/spin-density-wave and singlet/triplet superconducting susceptibilities.

### Recovery of the \(g\)-ology continuum model

Before we take into account the full dependence on the spectrum and the irrelevant coupling constants, it is useful for later comparisons to recover the well-known \(g\)-ology electron gas model in the continuum limit, also known as the 1D electron gas model, for which lattice effects are mostly discarded. Thus, we linearize the tight-binding spectrum \(\xi(k)=\varepsilon(k)-\mu\) in the vicinity of the two Fermi points \(\pm k_{\rm F}\). We can write \(k=\eta k_{\rm F}+(k-\eta k_{\rm F})\), where \(\eta=\pm 1\) is the branch index, which gives

\[\xi(k)=\frac{\partial\varepsilon}{\partial k}\bigg{|}_{\eta k_{\rm F}}(k-\eta k_{\rm F})+\ldots=\nu_{\rm F}(|k|-k_{\rm F})+\ldots. \tag{18}\]

In the particular case of a half-filled band, \(k_{\rm F}=\pi/2\). Modulo a reciprocal lattice vector \(G=4k_{\rm F}=2\pi\), we can write \(\pm 3k_{\rm F}=\mp k_{\rm F}\). According to (5), away from half-filling \(G\neq 4k_{\rm F}\), and these relations no longer hold. The \(g\)-ology model is obtained when the momenta appearing in the coupling constants are evaluated on the Fermi points:

\[g_{1}\equiv g_{+\eta k_{\rm F},-\eta k_{\rm F},-\eta k_{\rm F}}=\big{(}U-2V(1-\mu^{2}/2)\big{)}/\pi\nu_{\rm F},\]
\[g_{2}\equiv g_{+\eta k_{\rm F},-\eta k_{\rm F},+\eta k_{\rm F}}=\big{(}U+2V\big{)}/\pi\nu_{\rm F},\]
\[g_{3}\equiv g_{+\eta k_{\rm F},+\eta k_{\rm F},-\eta k_{\rm F}}=\big{(}U-2V(1-\mu^{2}/2)\big{)}/\pi\nu_{\rm F},\]
\[g_{4}\equiv g_{+\eta k_{\rm F},+\eta k_{\rm F},+\eta k_{\rm F}}=\big{(}U+2V\big{)}/\pi\nu_{\rm F}, \tag{19}\]

where \(\eta=\pm 1\) and the initialization condition (8) has been used. The constants \(g_{2}\) and \(g_{4}\) correspond to forward scattering and \(g_{1}\) to backward scattering, while \(g_{3}\) describes umklapp processes [35].
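For orientation, the initial values (19) are trivial to evaluate numerically; a minimal helper (our own sketch, with \(t=1\) and the filling \(n\) entering through Eq. (5)) reads:

```python
import numpy as np

def gology_couplings(U, V, n):
    """Bare g-ology couplings of Eq. (19) for the EHM at filling n (t = 1)."""
    mu = -2.0 * np.cos(np.pi * n / 2.0)     # Eq. (5)
    vF = 2.0 * np.sin(np.pi * n / 2.0)      # Eq. (3) with kF = pi n / 2
    g1 = (U - 2.0 * V * (1.0 - mu**2 / 2.0)) / (np.pi * vF)
    g2 = (U + 2.0 * V) / (np.pi * vF)
    return g1, g2, g1, g2                   # g3 = g1 and g4 = g2 in Eq. (19)

# At half-filling (n = 1) the line U = 2V gives g1 = g3 = 0:
print(gology_couplings(U=1.0, V=0.5, n=1.0))
```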
In order to find the expression of the two-particle vertex \(\Gamma^{[4,0]}[\phi]\), we write its restriction to the two Fermi points, which is indicated with the bracket \([\cdot]_{\rm F}\):

\[\Big{[}\Gamma^{[4,0]}[\phi]\Big{]}_{\rm F}=\frac{\pi\nu_{\rm F}T}{8L}\big{[}g_{k_{1},k_{2},k^{\prime}_{1}}\big{]}_{\rm F}\,\delta^{\rm RL}_{k^{\prime}_{1}+k^{\prime}_{2}-k_{2}-k_{1}}\,\bar{\phi}_{k^{\prime}_{1},\sigma_{1}}\bar{\phi}_{k^{\prime}_{2},\sigma_{2}}\phi_{k_{2},\sigma_{2}}\phi_{k_{1},\sigma_{1}}\]
\[=\frac{\pi\nu_{\rm F}T}{L}\sum_{\eta}\sum_{\{k,\sigma\}}\bigg{\{}\big{(}g_{1}\delta_{\sigma_{1},\sigma_{3}}-g_{2}\delta_{\sigma_{2},\sigma_{4}}\big{)}\bar{\phi}_{-\eta k^{\prime}_{1},\sigma_{3}}\bar{\phi}_{+\eta k^{\prime}_{2},\sigma_{4}}\phi_{-\eta k_{2},\sigma_{2}}\phi_{+\eta k_{1},\sigma_{1}}\]
\[\qquad\qquad+\frac{g_{3}}{2}\big{(}\bar{\phi}_{+\eta k^{\prime}_{1},\sigma_{1}}\bar{\phi}_{+\eta k^{\prime}_{2},\sigma_{2}}\phi_{-\eta k_{2},\sigma_{2}}\phi_{-\eta k_{1},\sigma_{1}}+\mathrm{H.c.}\big{)}\]
\[\qquad\qquad+\frac{g_{4}}{2}\big{(}\bar{\phi}_{+\eta k^{\prime}_{1},\sigma_{1}}\bar{\phi}_{+\eta k^{\prime}_{2},\sigma_{2}}\phi_{+\eta k_{2},\sigma_{2}}\phi_{+\eta k_{1},\sigma_{1}}+\bar{\phi}_{-\eta k^{\prime}_{1},\sigma_{1}}\bar{\phi}_{-\eta k^{\prime}_{2},\sigma_{2}}\phi_{-\eta k_{2},\sigma_{2}}\phi_{-\eta k_{1},\sigma_{1}}\big{)}\bigg{\}}, \tag{20}\]

where momentum conservation is understood in the last equality. Umklapp processes require special care as a function of band filling, since when \(4k_{\rm F}\neq G\) scattered electrons may end up in unavailable quantum states, namely above the scaled energy \(\Lambda\) of integrated degrees of freedom. A smooth cutoff procedure is thus required to assess their presence. The idea is simple: if the energy of the electron scattered from, say, \(k_{\rm F}\) to \(3k_{\rm F}\) is not available, that is, located above the scaled cutoff \(\Lambda\), then \(|\xi(3k_{\rm F})|>\Lambda\). The cutoff procedure is realized with a smooth function \(f(x)\) such that \(f(x)\to 1\) for \(x\to 0\) and \(f(x)\to 0\) for \(x\to+\infty\); the variation of this function is supposed to happen for \(x\approx 1\). In the flow equations, we thus make the formal replacement

\[g_{3}\to g_{3}f_{\Lambda},\quad f_{\Lambda}=f\big{(}|\xi(3k_{\rm F})|/\Lambda\big{)}. \tag{21}\]

When this simplified vertex is inserted in the equations (53), we end up with the well-known \(g\)-ology flow equations

\[\Lambda\partial_{\Lambda}g_{1} =-\mathcal{L}_{\rm P}g_{1}^{2}+(\mathcal{L}_{\rm C}+\mathcal{L}_{\rm P})g_{1}g_{2}+\mathcal{L}_{\rm L}g_{1}g_{4},\]
\[\Lambda\partial_{\Lambda}g_{2} =\mathcal{L}_{\rm C}g_{1}^{2}/2+(\mathcal{L}_{\rm C}+\mathcal{L}_{\rm P})g_{2}^{2}/2+\mathcal{L}_{\rm P}f_{\Lambda}^{2}g_{3}^{2}/2+\mathcal{L}_{\rm L}g_{4}(g_{1}-2g_{2}),\]
\[\Lambda\partial_{\Lambda}g_{3} =-(\mathcal{L}_{\rm P}+\mathcal{L}_{\rm P^{\prime}})f_{\Lambda}g_{3}(g_{1}-2g_{2})/2+\mathcal{L}_{\rm C^{\prime}}f_{\Lambda}^{2}g_{3}(g_{2}+g_{4})/2,\]
\[\Lambda\partial_{\Lambda}g_{4} =\mathcal{L}_{\rm L}(g_{1}^{2}-2g_{2}^{2}+2g_{1}g_{2}+g_{4}^{2})/2+\mathcal{L}_{\rm C^{\prime}}(f_{\Lambda}^{2}g_{3}^{2}+g_{4}^{2})/2. \tag{22}\]

Here the \(\mathcal{L}_{\rm X}\) are the derivatives with respect to \(\Lambda\) of the bubbles associated with the particle-particle (p-p) and particle-hole (p-h) scattering channels, in which \(C\) and \(C^{\prime}\) refer to inter- and intra-branch Cooper pairings, and \(P\) and \(L\) refer to the Peierls and Landau channels. For this particular calculation a sharp cutoff is chosen, which allows one to compute the integrals in closed form. The resulting bubbles can be classified into two categories. The first consists of the two logarithmically divergent bubbles, the p-p one at zero pair momentum and the p-h one at momentum \(2k_{\rm F}\), which lead to the most important contributions to the flow equations:

\[\mathcal{L}_{\rm C} =\pi\nu_{\rm F}\,\Lambda\partial_{\Lambda}\int_{-\Lambda_{0}}^{\Lambda_{0}}\Theta(|\xi|-\Lambda)\,T\sum_{\omega_{n}}G^{0}(k_{\xi},\omega_{n})G^{0}(-k_{\xi},-\omega_{n})\,\mathrm{d}\xi,\]
\[\mathcal{L}_{\rm P} =-\pi\nu_{\rm F}\,\Lambda\partial_{\Lambda}\int_{-\Lambda_{0}}^{\Lambda_{0}}\Theta(|\xi|-\Lambda)\,T\sum_{\omega_{n}}G^{0}(k_{\xi}-2k_{\rm F},\omega_{n})G^{0}(k_{\xi},\omega_{n})\,\mathrm{d}\xi,\]
\[\mathcal{L}_{\rm P^{\prime}} =-\pi\nu_{\rm F}\,\Lambda\partial_{\Lambda}\int_{-\Lambda_{0}}^{\Lambda_{0}}\Theta(|\xi|-\Lambda)\,T\sum_{\omega_{n}}G^{0}(k_{\xi}+2k_{\rm F},\omega_{n})G^{0}(k_{\xi},\omega_{n})\,\mathrm{d}\xi, \tag{23}\]

where \(k_{\xi}=\arccos\big{(}-(\xi+\mu)/2\big{)}\). The last contribution, \(\mathcal{L}_{\rm P^{\prime}}\), is affected by the fact that the nesting relation is not perfect away from half-filling. As a consequence, \(\mathcal{L}_{\rm P}\) and \(\mathcal{L}_{\rm P^{\prime}}\) differ in general, except at half-filling where \(G=4k_{\rm F}\) [36]. The second category comes from the non-divergent bubbles of the p-p and p-h scattering channels when both particles belong to the same momentum branch. These take the form

\[\mathcal{L}_{\rm L,C^{\prime}}=-\pi\nu_{\rm F}\,\Lambda\partial_{\Lambda}\int_{-\Lambda_{0}}^{\Lambda_{0}}\Theta(|\xi_{k}|-\Lambda)\,T\sum_{\omega_{n}}G^{0}(\xi_{k+0^{+}},\omega_{n})G^{0}(\xi_{k},\omega_{n})\,\mathrm{d}\xi_{k} \tag{24}\]

and only take finite values within the thermal shell \(\Lambda\lesssim T\). To determine the phase diagram, we must further derive the flow equations for the three-leg vertices. In the case of the \(g\)-ology model, we limit ourselves to the vertices \(Z_{\eta k_{\rm F}}^{\rm x}(q)\) evaluated at \(q=\pm 2k_{\rm F}\) for the density waves, and at \(q=0\) for singlet/triplet superconductivity. When this is done, we define the vertices associated with the specific ground states we are looking for. First of all, it is possible to probe the density waves around \(2k_{\rm F}\). There are four such density waves in the \(2k_{\rm F}\) p-h or Peierls channel.
They correspond to the site-centered charge- and spin-density waves (CDW, SDW), and to the bond-centered charge- and spin-density waves (BOW, BSDW):

\[Z_{\rm CDW} =Z_{+k_{\rm F}}^{\rm ch.s}(-2k_{\rm F})=Z_{-k_{\rm F}}^{\rm ch.s}(+2k_{\rm F}),\quad Z_{\rm CDW}^{\prime}=Z_{+k_{\rm F}}^{\rm ch.s}(+2k_{\rm F})=Z_{-k_{\rm F}}^{\rm ch.s}(-2k_{\rm F}),\]
\[Z_{\rm BOW} =Z_{+k_{\rm F}}^{\rm ch.b}(-2k_{\rm F})=Z_{-k_{\rm F}}^{\rm ch.b}(+2k_{\rm F}),\quad Z_{\rm BOW}^{\prime}=Z_{+k_{\rm F}}^{\rm ch.b}(+2k_{\rm F})=Z_{-k_{\rm F}}^{\rm ch.b}(-2k_{\rm F}),\]
\[Z_{\rm SDW} =Z_{+k_{\rm F}}^{\rm sp.s}(-2k_{\rm F})=Z_{-k_{\rm F}}^{\rm sp.s}(+2k_{\rm F}),\quad Z_{\rm SDW}^{\prime}=Z_{+k_{\rm F}}^{\rm sp.s}(+2k_{\rm F})=Z_{-k_{\rm F}}^{\rm sp.s}(-2k_{\rm F}),\]
\[Z_{\rm BSDW} =Z_{+k_{\rm F}}^{\rm sp.b}(-2k_{\rm F})=Z_{-k_{\rm F}}^{\rm sp.b}(+2k_{\rm F}),\quad Z_{\rm BSDW}^{\prime}=Z_{+k_{\rm F}}^{\rm sp.b}(+2k_{\rm F})=Z_{-k_{\rm F}}^{\rm sp.b}(-2k_{\rm F}). \tag{25}\]

The vertices associated with singlet (SS) and triplet (TS) superconductivity are given by

\[Z_{\rm SS}=Z_{+k_{\rm F}}^{\rm s}(0)+Z_{-k_{\rm F}}^{\rm s}(0),\]
\[Z_{\rm TS}=Z_{+k_{\rm F}}^{\rm t}(0)-Z_{-k_{\rm F}}^{\rm t}(0). \tag{26}\]

The flow equations associated with the density-wave vertices are thus

\[\frac{{\rm d}Z_{\rm x}}{{\rm d}\ell} =\frac{1}{2}Z_{\rm x}\tilde{g}_{\rm x},\]
\[\tilde{g}_{\rm CDW} =(g_{2}-2g_{1})\mathcal{L}_{\rm P}-f_{\Lambda}g_{3}\mathcal{L}_{\rm P},\]
\[\tilde{g}_{\rm SDW} =g_{2}\mathcal{L}_{\rm P}+f_{\Lambda}g_{3}\mathcal{L}_{\rm P},\]
\[\tilde{g}_{\rm BOW} =(g_{2}-2g_{1})\mathcal{L}_{\rm P}+f_{\Lambda}g_{3}\mathcal{L}_{\rm P},\]
\[\tilde{g}_{\rm BSDW} =g_{2}\mathcal{L}_{\rm P}-f_{\Lambda}g_{3}\mathcal{L}_{\rm P}, \tag{27}\]

while those for singlet and triplet superconductivity are

\[\frac{{\rm d}Z_{\rm x}}{{\rm d}\ell} =\frac{1}{2}Z_{\rm x}\tilde{g}_{\rm x},\]
\[\tilde{g}_{\rm SS} =(-g_{1}-g_{2})\mathcal{L}_{\rm C},\]
\[\tilde{g}_{\rm TS} =(g_{1}-g_{2})\mathcal{L}_{\rm C}. \tag{28}\]

The initial conditions are \(Z_{\rm x}(\ell=0)=1\) for all channels x. The normalized susceptibility that stands for any of the above correlation channels is given by

\[\chi_{\rm x}(\ell)=\int_{0}^{\ell}Z_{\rm x}^{2}(\ell^{\prime})\,\mathcal{L}_{\rm x}(\ell^{\prime})\,{\rm d}\ell^{\prime}, \tag{29}\]

with \(\chi_{\rm x}(\ell=0)=0\) as the initial condition. The phase of the system is defined by the most singular susceptibility \(\chi_{\rm x}\), and therefore by the most singular \(Z_{\rm x}\). We shall limit ourselves to the phases bearing the most important singularities. These correspond to the \(2k_{\rm F}\) density-wave and zero-pair-momentum superconducting phases, which are governed by Eqs. (27) and (28). For these, the three-leg vertices can be expressed as a power law \(Z_{\rm x}(\ell)=\exp[\frac{1}{2}\gamma_{\rm x}(\ell)]\), with a scale-dependent exponent \(\gamma_{\rm x}(\ell)=\int_{0}^{\ell}\tilde{g}_{\rm x}(\ell^{\prime}){\rm d}\ell^{\prime}\). In this work we do not consider uniform \(q\to 0\) responses. It is useful in what follows to recall the main features of the one-loop flow equations of the continuum theory, both at and away from half-filling.

#### 2.3.1 Half-filling

We first consider the case of half-filling in the zero-temperature limit, where \(\mu=0\) (\(n=1\)) and \(\beta\to\infty\).
This gives for the bubble intensities (23)

\[\mathcal{L}_{\rm P,P^{\prime}}=-\mathcal{L}_{\rm C}=\tanh(\beta\,\Lambda/2)\to 1\]

and

\[\mathcal{L}_{\rm L,C^{\prime}}=-2\Lambda\partial_{\Lambda}n_{\rm F}(\Lambda)\to 0,\]

where \(n_{\rm F}\) is the Fermi distribution. From (22) one recovers the well-known \(g\)-ology flow equations at half-filling [37, 35, 2]:

\[\frac{\mathrm{d}g_{1}}{\mathrm{d}\ell} =-g_{1}^{2},\]
\[\frac{\mathrm{d}g_{2}}{\mathrm{d}\ell} =(g_{3}^{2}-g_{1}^{2})/2,\]
\[\frac{\mathrm{d}g_{3}}{\mathrm{d}\ell} =g_{3}(2g_{2}-g_{1}),\]
\[\frac{\mathrm{d}g_{4}}{\mathrm{d}\ell} =0, \tag{30}\]

where \(\ell\) is the so-called RG time, such that \(\Lambda=\Lambda_{0}\mathrm{e}^{-\ell}\). The flow of \(g_{1}(\ell)\), associated with the spin degrees of freedom, is decoupled from those of \(g_{3}(\ell)\) and \(2g_{2}(\ell)-g_{1}(\ell)\), linked to the charge ones. The latter combine to give the scale-invariant constant \(C=g_{3}^{2}(\ell)-\left(2g_{2}(\ell)-g_{1}(\ell)\right)^{2}\) [35]. Thus, for an initial attraction \(g_{1}<0\) (\(U<2V\)), the flow of \(g_{1}(\ell)\) scales to strong attractive coupling with a singularity that develops at a finite \(\ell_{\sigma}\), indicative of a spin gap \(\Delta_{\sigma}\sim\Lambda_{0}\mathrm{e}^{-\ell_{\sigma}}\); whereas for an initial repulsion \(g_{1}>0\), \(g_{1}(\ell)\) is marginally irrelevant and the spin degrees of freedom remain gapless. For the charge part, when \(g_{1}-2g_{2}\geq|g_{3}|\), umklapp scattering becomes marginally irrelevant; \(2g_{2}(\ell)-g_{1}(\ell)\) then scales to a non-universal value and the charge sector remains gapless. By contrast, when \(g_{1}-2g_{2}<|g_{3}|\), the umklapp term is marginally relevant and the flow leads to a singularity in both \(g_{3}(\ell)\) and \(2g_{2}(\ell)-g_{1}(\ell)\) at \(\ell_{\rho}\), implying a Mott gap \(\Delta_{\rho}\sim\Lambda_{0}\mathrm{e}^{-\ell_{\rho}}=\Lambda_{0}\mathrm{e}^{-1/\sqrt{|C|}}\) in the charge sector. Finally, at the one-loop level there are no logarithmic contributions to the flow of the intra-branch forward scattering \(g_{4}\), which remains scale invariant as long as \(\Lambda\) is outside the region of thermal excitations around the Fermi level (\(\Lambda(\ell)>T\)). 

Regarding the phase diagram as a function of \(U\) and \(V\): when \(U>\pm 2V\), so that \(g_{1}>0\) and \(g_{1}-2g_{2}<|g_{3}|\), the strongest singularity appears in \(\chi_{\mathrm{SDW}}\), \(\gamma_{\mathrm{SDW}}\) being the largest exponent of (27), with an SDW state having gapless spin excitations and a Mott gap. For \(V<\mp U/2\), so that \(g_{1}>0\) and \(g_{1}-2g_{2}>|g_{3}|\), (28) yields \(\gamma_{\mathrm{TS}}\) as the strongest exponent, and the dominant susceptibility is that of triplet superconductivity, with gapless excitations for both spin and charge. For \(U/2<V<0\), which implies \(g_{1}<0\) and \(g_{1}-2g_{2}>|g_{3}|\), it is in turn \(\gamma_{\mathrm{SS}}\) of (28) that is the strongest exponent, with a dominant singularity in the SS susceptibility and a spin gap. Finally, when \(U/2<V\) and \(V>0\), we have \(g_{1}<0\) and \(g_{1}-2g_{2}<|g_{3}|\), leading to a dominant \(\gamma_{\mathrm{CDW}}\) and a CDW phase, which is gapped for both spin and charge excitations.
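These statements are easy to reproduce numerically. The sketch below (ours, not part of the paper) integrates the half-filling equations (30) together with the exponents \(\gamma_{\rm x}\) of Eqs. (27)-(28), using \(\mathcal{L}_{\rm P}=\mathcal{L}_{\rm P^{\prime}}=-\mathcal{L}_{\rm C}=1\), \(\mathcal{L}_{\rm L}=\mathcal{L}_{\rm C^{\prime}}=0\) and \(f_{\Lambda}=1\), which hold at half-filling and \(T\to 0\); a divergence at finite \(\ell\) signals a gap:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(ell, y):
    g1, g2, g3, gCDW, gSDW, gBOW, gSS, gTS = y
    return [-g1**2,                  # Eq. (30); g4 is constant and omitted
            (g3**2 - g1**2) / 2.0,
            g3 * (2.0 * g2 - g1),
            (g2 - 2.0 * g1) - g3,    # d(gamma_CDW)/d(ell), Eq. (27)
            g2 + g3,                 # gamma_SDW
            (g2 - 2.0 * g1) + g3,    # gamma_BOW
            g1 + g2,                 # gamma_SS, Eq. (28) with L_C = -1
            g2 - g1]                 # gamma_TS

U, V, vF = 1.0, 0.2, 2.0             # weak repulsive couplings, t = 1
g1 = g3 = (U - 2.0 * V) / (np.pi * vF)
g2 = (U + 2.0 * V) / (np.pi * vF)
sol = solve_ivp(rhs, (0.0, 4.0), [g1, g2, g3, 0, 0, 0, 0, 0], rtol=1e-8)
gammas = dict(zip(["CDW", "SDW", "BOW", "SS", "TS"], sol.y[3:, -1]))
print(max(gammas, key=gammas.get), gammas)   # SDW dominates for U > 2V > 0
```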
Along the separatrix \(U=2V\), \(g_{1}=g_{3}=0\), corresponding to the gapless conditions of the Tomonaga-Luttinger model, with \(\gamma_{\mathrm{SDW}}=\gamma_{\mathrm{CDW}}\) and \(\chi_{\mathrm{CDW}}\) and \(\chi_{\mathrm{SDW}}\) equally singular at \(U>0\); whereas at \(U<0\), \(\gamma_{\mathrm{SS}}=\gamma_{\mathrm{TS}}\), and \(\chi_{\mathrm{TS}}\) and \(\chi_{\mathrm{SS}}\) are in turn equally singular. Finally, the symmetry line at \(U<0\) and \(V=0\), with \(g_{1}<0\) and \(g_{1}-2g_{2}>|g_{3}|\), leads to the equality \(\gamma_{\mathrm{CDW}}=\gamma_{\mathrm{SS}}\) and to coexisting CDW and SS phases of equal strength. The resulting well-known phase diagram of the continuum theory is given in Fig. 5 [37]. It is worth noting that in this framework \(\chi_{\mathrm{BOW}}\) never appears as a dominant singularity in the phase diagram, but only as a subdominant one in the SDW sector [37].

Figure 5: Weak-coupling phase diagram of the \(g\)-ology extended Hubbard model in the continuum limit (linear spectrum and momentum-independent interactions) at half-filling \(\mu=0\) (top panel) and small doping \(\mu=0.035\) (lower panel). \(U\) and \(V\) are expressed in units of the bare hopping \(t\). The subscripts \(\sigma/\rho\) of a given phase indicate the presence of a gap in the spin/charge excitations.

#### 2.3.2 Away from half-filling

We now turn to the main results for finite values of \(\mu\). From (22), the one-loop flow equations at finite doping in the low-temperature limit can be put in the form

\[\frac{\mathrm{d}g_{1}}{\mathrm{d}\ell}=-g_{1}^{2},\]
\[\frac{\mathrm{d}}{\mathrm{d}\ell}(2g_{2}-g_{1})=f_{\Lambda}^{2}g_{3}^{2}\mathcal{L}_{\rm P^{\prime}},\]
\[\frac{\mathrm{d}g_{3}}{\mathrm{d}\ell}=(1+\mathcal{L}_{\rm P^{\prime}})f_{\Lambda}g_{3}(2g_{2}-g_{1})/2,\]
\[\frac{\mathrm{d}g_{4}}{\mathrm{d}\ell}=0. \tag{31}\]

Except for the presence of the umklapp cutoff function \(f_{\Lambda}\), these equations correspond to the former results of Seidel _et al._ [36, 38] and are consistent with those of the bosonization representation in the weak-coupling limit [6, 39, 40]. We illustrate this situation at a finite but small doping, \(\mu=0.035\). The flow of \(g_{1}\), tied to the spin degrees of freedom, keeps the same form as before, namely \(g_{1}(\ell)=g_{1}(1+g_{1}\ell)^{-1}\), with a spin gap \(\Delta_{\sigma}\sim\Lambda_{0}\mathrm{e}^{-1/|g_{1}|}\) which, according to the initial value of \(g_{1}\) in (19), covers the region above the separatrix \(V=U/(2-\mu^{2})\), whose slope increases with \(\mu\), as shown in Fig. 5-(b). Concerning the charge degrees of freedom, however, the influence of a finite \(\mu\) affects the flows of \(2g_{2}(\ell)-g_{1}(\ell)\) and \(g_{3}(\ell)\) through the suppression of the logarithmic singularity of the particle-hole loop \(\mathcal{L}_{\rm P^{\prime}}\) when \(\Lambda(\ell)<\mu\). This is superimposed on the effect of the cutoff function \(f_{\Lambda}\) on the umklapp terms, which takes into account the reduction of the phase space available for this scattering at finite \(\mu\). Thus, at sufficiently small couplings, the flow of \(2g_{2}(\ell)-g_{1}(\ell)\) is no longer singular, so that no charge gap is possible. This introduces gapless regions for the charge sector in Fig. 5-(b), in which the most important singularities in (27) give either SDW or CDW phases. Upon increasing \(\mu\), the charge-gapped regions shrink in size and are pushed to higher couplings.
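The phase-space argument behind \(f_{\Lambda}\) can be made concrete: Eq. (21) only fixes the limits \(f(0)=1\) and \(f(\infty)=0\), so any smooth interpolating profile will do. A small sketch (with our own choice of profile) shows how quickly umklapp is switched off at low energy once the system is doped:

```python
import numpy as np

def f(x):                     # smooth profile with f(0)=1, f(inf)=0; our choice
    return 1.0 / (1.0 + x**2)

def f_umklapp(Lam, n, t=1.0):
    """Umklapp cutoff f_Lambda of Eq. (21) at filling n."""
    kF = np.pi * n / 2.0
    mu = -2.0 * t * np.cos(kF)                   # Eq. (5)
    xi_3kF = -2.0 * t * np.cos(3.0 * kF) - mu    # xi(k) = eps(k) - mu
    return f(abs(xi_3kF) / Lam)

for n in (1.0, 0.95, 0.9):        # half-filling and two dopings
    print(n, [round(f_umklapp(Lam, n), 3) for Lam in (1.0, 0.1, 0.01)])
# At n = 1, xi(3kF) = 0 and f_Lambda = 1 at every scale; away from half-filling
# f_Lambda -> 0 once Lambda drops below |xi(3kF)|.
```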
In the gapless-charge domains, umklapp scattering reduces to a simple renormalization of the combination \(2g_{2}-g_{1}\), which becomes scale invariant in weak coupling [see Eq. (31)]. In the \(g_{1}>0\) part of Fig. 5-(b), that is, for \(V<U/(2-\mu^{2})\), the detrimental effect of doping on umklapp is also apparent in the gapless region, where the most important power-law singularity, in \(\chi_{\mathrm{TS}}\), gains in importance against SDW. In the same figure, a similar effect takes place at \(g_{1}<0\), where the SS region, in which umklapp scattering is an irrelevant coupling, gains in importance against CDW with increasing \(\mu\). We shall examine next to what extent taking into account the lattice effects of the EHM can alter these results.

### Lattice effects and low-energy limit

Lattice effects are twofold. First, they are present in the one-body term of the Hamiltonian through the inter-site hopping of electrons, described by the nearest-neighbor hopping term \(t\) in (1). This leads to the tight-binding spectrum of Fig. 1, whose curvature grows as the energy moves away from the Fermi level and which becomes particle-hole asymmetric away from half-filling. Second, they appear in the coupling constants, which are spatially non-local. This is the case of the nearest-neighbor interaction \(V\), which introduces a dependence on wave vectors in momentum space. Both effects are linked, since the momentum dependence of the interactions generates a curvature of the spectrum through the one-particle self-energy corrections \(\Sigma_{k}\). At the one-loop level, these come from the Hartree-Fock contributions to the flow, which can be put in the diagrammatic form \(\partial_{\Lambda}\Sigma_{k}=\) [one-loop Hartree (tadpole) and Fock (exchange) diagrams].
The coupling constants are expanded around the Fermi points in powers of the energy distances from the Fermi level,

\[g_{k_{1},k_{2},k_{1}^{\prime}}\equiv g^{\bar{\eta}}(\bar{\xi})=\sum_{n_{1},n_{2},n_{1^{\prime}}=0}^{\infty}\frac{\xi_{1}^{n_{1}}\xi_{2}^{n_{2}}\xi_{1^{\prime}}^{n_{1^{\prime}}}}{n_{1}!\,n_{2}!\,n_{1^{\prime}}!}\,g_{\bar{n}}^{\bar{\eta}}, \tag{34}\]

where \(\bar{\eta}=(\eta_{1},\eta_{2},\eta_{1^{\prime}})\) collects the branch indices, \(\bar{n}=(n_{1},n_{2},n_{1^{\prime}})\) and \(\bar{\xi}=(\xi_{1},\xi_{2},\xi_{1^{\prime}})\); the umklapp amplitudes are regularized using the cutoff function \(f(x)\) defined earlier in (21). We now derive the general form of the flow equations when the \(\xi\) expansion is made explicit.
One first makes the change of variables \(k\to(\eta,\xi)\) and writes

\[\frac{\mathrm{d}g^{\bar{\eta}}(\bar{\xi})}{\mathrm{d}\ell}=-\Lambda\partial_{\Lambda}g^{\bar{\eta}}(\bar{\xi})=\sum_{\rm x}D_{\rm x}^{\bar{\eta}}(\bar{\xi}), \tag{37}\]

where the sum runs over all the Feynman graphs of Fig. 2, that is to say, \(\mathrm{x}\in\{\mathrm{pp},\mathrm{ph}1,\mathrm{ph}2,\mathrm{ph}3\}\). Furthermore, each diagram can be written in the form

\[D_{\rm x}^{\bar{\eta}}(\bar{\xi})=\sum_{p\in{\rm BZ1}}\mathcal{L}_{\rm x}^{\bar{\eta}}(p,\bar{\xi})\,\gamma_{\rm x1}^{\bar{\eta}}(p,\bar{\xi})\,\gamma_{\rm x2}^{\bar{\eta}}(p,\bar{\xi}), \tag{38}\]

where \(\mathcal{L}_{\rm x}^{\bar{\eta}}(p,\bar{\xi})\) is a bubble of the scattering channel x, while the \(\gamma_{\rm x1,2}^{\bar{\eta}}(p,\bar{\xi})\) are combinations of the coupling constants. For example, in the case \(\mathrm{x}=\mathrm{pp}\), one gets from Eqs. (55a) of the Appendix:

\[\mathcal{L}_{\rm pp}^{\bar{\eta}}(p,\bar{\xi}) =\mathcal{L}_{p,-p+k_{1}+k_{2}}^{\rm pp},\]
\[\gamma_{\rm pp,1}^{\bar{\eta}}(p,\bar{\xi}) =g_{k_{2},k_{1},-p+k_{1}+k_{2}},\]
\[\gamma_{\rm pp,2}^{\bar{\eta}}(p,\bar{\xi}) =g_{p,-p+k_{1}+k_{2},k_{1}^{\prime}}.\]

The corresponding expressions for the other channels \(\mathrm{x}=\mathrm{ph}1,\mathrm{ph}2\) and \(\mathrm{ph}3\) are given in Eqs. (55b), (55c) and (55d), respectively. It is then possible to make use of the expansion (34) for the couplings, and of a similar one for the bubbles. Once this is done, the flow equations are written as

\[\sum_{\bar{n}}\frac{\xi_{1}^{n_{1}}\xi_{2}^{n_{2}}\xi_{1^{\prime}}^{n_{1^{\prime}}}}{n_{1}!\,n_{2}!\,n_{1^{\prime}}!}\frac{\mathrm{d}g_{\bar{n}}^{\bar{\eta}}}{\mathrm{d}\ell}=\sum_{\bar{n}}\sum_{\bar{m}_{1}}\sum_{\bar{m}_{2}}\frac{\xi_{1}^{n_{1}}\xi_{2}^{n_{2}}\xi_{1^{\prime}}^{n_{1^{\prime}}}}{n_{1}!\,n_{2}!\,n_{1^{\prime}}!}\,\mathcal{L}_{\bar{n};\bar{m}_{1},\bar{m}_{2}}^{\bar{\eta};\bar{\eta}_{1},\bar{\eta}_{2}}\,g_{\bar{m}_{1}}^{\bar{\eta}_{1}}g_{\bar{m}_{2}}^{\bar{\eta}_{2}}, \tag{39}\]

with an implicit sum over the repeated branch indices \(\bar{\eta}_{1},\bar{\eta}_{2}\). Now it is useful to express the flow equations in a dimensionless form. The natural unit is the cutoff \(\Lambda\), so we introduce the dimensionless couplings

\[g_{\Lambda}^{\bar{\eta}}(\bar{\xi})=\Lambda^{[g]}\,\bar{g}_{\Lambda}^{\bar{\eta}}(\bar{\xi}/\Lambda)\iff\bar{g}_{\Lambda}^{\bar{\eta}}(\bar{\xi}/\Lambda)=\Lambda^{-[g]}g_{\Lambda}^{\bar{\eta}}(\bar{\xi}). \tag{40}\]

In this expression, \([g]\) denotes the engineering dimension of the coupling constant \(g\). For two-body interactions between fermions in one dimension, the couplings (40) are dimensionless, \([g]=0\). From the expansion (34), it is straightforward to determine the dimension of all the coefficients:

\[g_{\bar{n}}^{\bar{\eta}}=\Lambda^{-|\bar{n}|}\bar{g}_{\bar{n}}^{\bar{\eta}}, \tag{41}\]

where the notation \(|\bar{n}|=n_{1}+n_{2}+n_{1^{\prime}}\) has been introduced.
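In practice, this bookkeeping means replacing each coupling function by a finite set of Taylor coefficients. A minimal data-structure sketch (our own naming, with the truncation \(|\bar{n}|\leq 2\) used below) is:

```python
from itertools import product
from math import factorial

# Multi-indices n = (n1, n2, n1') kept in the truncated expansion, |n| <= 2.
ORDERS = [n for n in product(range(3), repeat=3) if sum(n) <= 2]

def empty_couplings():
    """One coefficient g_n^eta per branch triple eta = (eta1, eta2, eta1')."""
    return {(etas, n): 0.0
            for etas in product((+1, -1), repeat=3) for n in ORDERS}

def evaluate(g, etas, xi1, xi2, xi1p):
    """Reconstruct g^eta(xi) from its truncated expansion, Eq. (34)."""
    return sum(g[(etas, n)] * xi1**n[0] * xi2**n[1] * xi1p**n[2]
               / (factorial(n[0]) * factorial(n[1]) * factorial(n[2]))
               for n in ORDERS)
```

The flow (42) below then acts on this finite set of coefficients, the \(-|\bar{n}|\bar{g}\) term implementing the tree-level (engineering) scaling.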
The dimensionless flow equations for the coupling constants are then obtained by simple identification from (39):

\[\frac{\mathrm{d}\bar{g}_{\bar{n}}^{\bar{\eta}}}{\mathrm{d}\ell}=-\Lambda\partial_{\Lambda}\bar{g}_{\bar{n}}^{\bar{\eta}}=-|\bar{n}|\bar{g}_{\bar{n}}^{\bar{\eta}}-\sum_{\bar{m}_{1}}\sum_{\bar{m}_{2}}\tilde{\mathcal{L}}_{\bar{n};\bar{m}_{1},\bar{m}_{2}}^{\bar{\eta};\bar{\eta}_{1},\bar{\eta}_{2}}\,\bar{g}_{\bar{m}_{1}}^{\bar{\eta}_{1}}\bar{g}_{\bar{m}_{2}}^{\bar{\eta}_{2}}, \tag{42}\]

with \(\tilde{\mathcal{L}}_{\bar{n};\bar{m}_{1},\bar{m}_{2}}^{\bar{\eta};\bar{\eta}_{1},\bar{\eta}_{2}}=\Lambda^{|\bar{n}|-|\bar{m}_{1}|-|\bar{m}_{2}|}\mathcal{L}_{\bar{n};\bar{m}_{1},\bar{m}_{2}}^{\bar{\eta};\bar{\eta}_{1},\bar{\eta}_{2}}\). As a consequence, the expansion in \(\xi\) classifies the coupling constants by order of irrelevance through the value of \(|\bar{n}|\). In practice, we will restrict ourselves to quadratic order, i.e., \(|\bar{n}|\leq 2\). The different sets of interactions and their initial conditions can be expressed in terms of the coupling constants (8) of the original EHM:

\[g_{k_{1},k_{2},k_{1}^{\prime}}=\frac{U}{\pi\nu_{\mathrm{F}}}+\frac{2V}{\pi\nu_{\mathrm{F}}}\cos(k_{1}-k_{1}^{\prime}). \tag{43}\]

The expansion of the cosine in terms of the variables \((\xi,\eta)\) gives, up to second order in \(\xi\),

\[\cos(k_{1}-k_{1}^{\prime})=\ \eta_{1}\eta_{1^{\prime}}+(1-\eta_{1}\eta_{1^{\prime}})\frac{\mu^{2}}{4}+\frac{\mu}{4}(1-\eta_{1}\eta_{1^{\prime}})(\xi_{1}+\xi_{1^{\prime}})\]
\[\ \ -\eta_{1}\eta_{1^{\prime}}\bigg(\frac{1}{8}-\frac{\mu^{2}}{32(1-\mu^{2}/4)}\bigg)(\xi_{1}^{2}+\xi_{1^{\prime}}^{2})\]
\[\ +\bigg(\frac{1}{4}+\eta_{1}\eta_{1^{\prime}}\frac{\mu^{2}}{16(1-\mu^{2}/4)}\bigg)\xi_{1}\xi_{1^{\prime}}+\ldots. \tag{44}\]

Hence we obtain the following initial conditions for the coupling constants introduced in (34). For the marginal interactions, \(\mathcal{O}(\xi^{0})\) (\(\bar{n}=0\)), one has

\[g^{+\eta,-\eta,-\eta}_{0,0,0}=g_{1},\ \ \ \ \ g^{+\eta,-\eta,+\eta}_{0,0,0}=g_{2},\]
\[g^{+\eta,+\eta,-\eta}_{0,0,0}=g_{3},\ \ \ \ \ g^{+\eta,+\eta,+\eta}_{0,0,0}=g_{4}, \tag{45}\]

and the initial values coincide with those of the continuum theory in (19). From (44), at \(\mathcal{O}(\xi)\) (\(|\bar{n}|=1\)) the set of irrelevant interactions, labeled in terms of backward, forward and umklapp scattering amplitudes, together with their initial filling-dependent values, reads:

\[g^{+\eta,-\eta,-\eta}_{1,0,0}=\frac{V\mu}{\pi\nu_{\rm F}},\ \ \ g^{+\eta,-\eta,-\eta}_{0,0,1}=\frac{V\mu}{\pi\nu_{\rm F}},\ \ \ g^{+\eta,-\eta,+\eta}_{1,0,0}=0,\ \ \ g^{+\eta,-\eta,+\eta}_{0,0,1}=0,\]
\[g^{+\eta,+\eta,-\eta}_{1,0,0}=\frac{V\mu}{\pi\nu_{\rm F}},\ \ \ g^{+\eta,+\eta,-\eta}_{0,0,1}=\frac{V\mu}{\pi\nu_{\rm F}},\ \ \ g^{+\eta,+\eta,+\eta}_{1,0,0}=0,\ \ \ g^{+\eta,+\eta,+\eta}_{0,0,1}=0. \tag{46}\]

Likewise, the set of irrelevant couplings at \(\mathcal{O}(\xi^{2})\) (\(|\bar{n}|=2\)) and their initial values can be put in the form

\[g^{+\eta,-\eta,-\eta}_{2,0,0}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\ \ g^{+\eta,-\eta,-\eta}_{0,0,2}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\]
\[g^{+\eta,-\eta,-\eta}_{1,0,1}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\ \ g^{+\eta,-\eta,+\eta}_{2,0,0}=-\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\]
\[g^{+\eta,+\eta,-\eta}_{2,0,0}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\ \ g^{+\eta,+\eta,-\eta}_{0,0,2}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\]
\[g^{+\eta,+\eta,-\eta}_{1,0,1}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\ \ g^{+\eta,+\eta,+\eta}_{2,0,0}=-\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\]
\[g^{+\eta,+\eta,+\eta}_{0,0,2}=-\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}-\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg),\ \ g^{+\eta,+\eta,+\eta}_{1,0,1}=\frac{V}{\pi\nu_{\rm F}}\bigg(\frac{1}{2}+\frac{\mu^{2}}{8(1-\mu^{2}/4)}\bigg). \tag{47}\]

The same expansion procedure can in principle be applied to the vertex parts of the response functions:

\[Z^{\rm x}_{k}(q)=\sum_{n=0}^{\infty}\frac{\xi^{n}}{n!}Z^{\rm x}_{\eta,n}(q). \tag{48}\]

However, at variance with the coupling constants, the irrelevant contributions to all the \(Z_{\rm x}\) vanish at \(\ell=0\), so that their corrections to the flow will be negligible. In the following, we shall therefore evaluate \(Z^{\rm x}_{k}(q)\) at the lowest, marginal, order by retaining only \(Z^{\rm x}_{\eta,\,n=0}(q)\). Higher-order corrections are not expected to bring any qualitative modification to the phase diagram.
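The closed forms (45)-(47) can be cross-checked by differentiating the exact lattice coupling (43) numerically; the following sketch (ours; \(t=1\)) verifies, for instance, the first-order coefficient \(g^{+\eta,-\eta,-\eta}_{1,0,0}=V\mu/\pi\nu_{\rm F}\):

```python
import numpy as np

def g_exact(xi1, xi1p, eta1, eta1p, U, V, mu):
    """Lattice coupling (43) with k(xi) = eta * arccos(-(xi+mu)/2), t = 1."""
    vF = np.sqrt(4.0 - mu**2)                # 2 sin(kF), using cos(kF) = -mu/2
    k1 = eta1 * np.arccos(-(xi1 + mu) / 2.0)
    k1p = eta1p * np.arccos(-(xi1p + mu) / 2.0)
    return (U + 2.0 * V * np.cos(k1 - k1p)) / (np.pi * vF)

U, V, mu, h = 1.0, 0.5, -0.3, 1e-5
num = (g_exact(h, 0, +1, -1, U, V, mu)
       - g_exact(-h, 0, +1, -1, U, V, mu)) / (2.0 * h)
print(num, V * mu / (np.pi * np.sqrt(4.0 - mu**2)))   # both ~ -0.0241
```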
Thus for the site- and bond-density-wave channels at \(q=\pm 2k_{\rm F}\), the flow equations are respectively \[\Lambda\partial_{\lambda}Z^{\prime}_{\rm CDW} =\frac{1}{2}\mathcal{L}_{\rm P}(g_{2}-2g_{1})Z^{\prime}_{\rm CDW}- \frac{1}{2}\mathcal{L}_{\rm P}f_{\lambda}g_{3}Z_{\rm SDW},\] \[\Lambda\partial_{\lambda}Z_{\rm CDW} =\frac{1}{2}\mathcal{L}_{\rm P}(g_{2}-2g_{1})Z_{\rm CDW}-\frac{1} {2}\mathcal{L}_{\rm P}f_{\lambda}g_{3}Z^{\prime}_{\rm CDW},\] \[\Lambda\partial_{\lambda}Z^{\prime}_{\rm SDW} =\frac{1}{2}\mathcal{L}_{\rm P}g_{2}Z^{\prime}_{\rm SDW}+\frac{1} {2}\mathcal{L}_{\rm P}f_{\lambda}g_{3}Z_{\rm SDW},\] \[\Lambda\partial_{\lambda}Z_{\rm SDW} =\frac{1}{2}\mathcal{L}_{\rm P}g_{2}Z_{\rm SDW}+\frac{1}{2} \mathcal{L}_{\rm P}f_{\lambda}g_{3}Z^{\prime}_{\rm SDW}, \tag{49}\] and \[\Lambda\partial_{\lambda}Z^{\prime}_{\rm BOW} =\frac{1}{2}\mathcal{L}_{\rm P}(g_{2}-2g_{1})Z^{\prime}_{\rm BOW} -\frac{1}{2}\frac{\mathcal{L}_{\rm P}}{\cos(2k_{\rm F})}f_{\lambda}g_{3}Z_{ \rm BOW},\] \[\Lambda\partial_{\lambda}Z_{\rm BOW} =\frac{1}{2}\mathcal{L}_{\rm P}(g_{2}-2g_{1})Z_{\rm BOW}-\frac{1} {2}\mathcal{L}_{\rm P}\cdot\cos(2k_{\rm F})f_{\lambda}g_{3}Z^{\prime}_{\rm BOW},\] \[\Lambda\partial_{\lambda}Z^{\prime}_{\rm BSDW} =\frac{1}{2}\mathcal{L}_{\rm P}\cdot g_{2}Z^{\prime}_{\rm BSDW} +\frac{1}{2}\frac{\mathcal{L}_{\rm P}}{\cos(2k_{\rm F})}f_{\lambda}g_{3}Z_{ \rm BSDW},\] \[\Lambda\partial_{\lambda}Z_{\rm BSDW} =\frac{1}{2}\mathcal{L}_{\rm P}g_{2}Z_{\rm BSDW}+\frac{1}{2} \mathcal{L}_{\rm P}\cdot\cos(2k_{\rm F})f_{\lambda}g_{3}Z^{\prime}_{\rm BSDW}. \tag{50}\] In the superconducting channel at zero pair momentum, one has \[\Lambda\partial_{\lambda}Z_{\rm SS} =\frac{1}{2}\mathcal{L}_{\rm C}(g_{1}+g_{2})Z_{\rm SS},\] \[\Lambda\partial_{\lambda}Z_{\rm TS} =-\frac{1}{2}\mathcal{L}_{\rm C}(g_{1}-g_{2})Z_{\rm TS}. \tag{51}\] All the \(Z_{\rm x}\) equations are bound to the initial conditions \(Z_{\rm x}(\ell=0)=1\). From these the normalized susceptibilities \(\chi_{\rm x}\) in the channel x can be obtained from the definition (29) with the initial condition \(\chi_{\rm x}(\ell=0)=0\). The main differences with respect to the \(g\)-ology model are in the bubbles and in the fact that the marginal coupling constants are influenced by the irrelevant ones. ## 3 Lattice model: results and discussion In this section, we will discuss the consequences of taking into account the lattice effects coming from the non-linearity of the spectrum and the momentum dependence of interactions in the determination of most singular quantum phases of the EHM as a function of filling. The calculations are limited to the weak-coupling sector. ### Half-filled case Before considering non zero values of the chemical potential, let us examine as a benchmark of our method the extensively studied half-filling case. The tight-binding spectrum at \(\mu=0\) shows a non-vanishing curvature as one moves away from the Fermi points \(\pm k_{\rm F}\). On the boundaries and at the center of the Brillouin zone, the spectrum displays a vanishing slope, which causes the appearance of a van Hove singularity. At half-filling the progressive integration of degrees of freedom is then symmetric with respect to the exchange of occupied and empty states. From the integration of Eqs. (42) and (49-51), and by using the intial conditions (19) and (46-47) for the couplings, one obtains the half-filling EHM phase diagram shown in Fig. 6. Among the most striking modifications made to the continuum \(g\)-ology phase diagram of Fig. 
5-(a), we first note the phases located in the vicinity of the line \(U=2V\). Recall that in the \(g\)-ology framework, both \(g_{1}\) and \(g_{3}\) vanish along that line at half-filling, which leads to the conditions of the Tomonaga-Luttinger (TL) model. In this framework, crossing the line then corresponds to a change of sign of the \(g_{1}\) and \(g_{3}\) coupling constants (see Eq. (19)). If one considers the situation on the line \(U=2V>0\) in the repulsive part of the diagram, the gapless regime of the TL model with equally singular SDW and CDW susceptibilities is made unstable due to the presence of irrelevant couplings. Thus below but close to the line \(U=2V\), at C' in the phase diagram of Fig. 6, \(g_{3}\) evolves to positive values and then becomes relevant together with the combination \(2g_{2}-g_{1}\). Both diverge at some critical \(\ell_{\rho}\), indicative of a charge (Mott) gap. The fate of \(g_{1}\) is of particular interest: though repulsive initially, it evolves toward negative values, and its flow ultimately separates from those of \(g_{3}\) and \(2g_{2}-g_{1}\) at sufficiently large \(\ell\), where the influence of irrelevant terms in (42) at \(|\vec{n}|\neq 0\) becomes vanishingly small and can be ignored above some arbitrary value \(\ell^{*}\), or equivalently below an effective cutoff energy \(\Lambda^{*}=\Lambda e^{-\ell^{*}}\). One finally recovers the flow of the continuum-limit theory [Eq. (30)], implying \[g_{1}(\ell)=\frac{g_{1}(\ell^{*})}{1+g_{1}(\ell^{*})(\ell-\ell^{*})}\qquad(\ell\geq\ell^{*}), \tag{52}\] where \(g_{1}(\ell^{*})<0\). Typically, we have \(|g_{1}(\ell^{*})|\ll 1\), so that the singularity of (52) will invariably lead to a finite, though very small, gap \(\Delta_{\sigma}\sim\Lambda^{*}e^{-1/|g_{1}(\ell^{*})|}\) in the spin sector. Slightly above the \(U=2V\) line, at C in the phase diagram, both \(g_{1}\) and \(g_{3}\) are initially attractive. While \(g_{1}\) stays attractive and evolves to strong coupling with the formation of a spin gap \(\Delta_{\sigma}\), which is much stronger than at C', the coupling \(g_{3}\), though initially attractive, changes sign and becomes repulsive at the beginning of the flow due to its coupling to irrelevant terms. According to Fig. 8-(b), the flows of \(g_{3}\) and \(2g_{2}-g_{1}\) then evolve to strong coupling and lead to the formation of a charge gap \(\Delta_{\rho}\). The consequence of effective repulsive \(g_{3}\) and attractive \(g_{1}\) couplings on the nature of the most singular correlations is significant. On the \(U=2V\) line, instead of the coexistence of gapless CDW and SDW phases predicted by the TL model, a fully gapped BOW phase emerges. According to Figs. 8-(a),(c), the gapped BOW state extends on either side of the line, defining a fan-shaped region where it dominates over the SDW and CDW phases. These findings confirm previous RG results [25, 27, 28], and are consistent with those of numerical simulations in the weak-coupling region of the phase diagram [18, 19, 23].

Figure 6: Phase diagram of the EHM at half filling. The points A, B, C and C' are discussed in the text. The dashed lines correspond to the phase boundaries of the continuum limit of the model shown in Fig. 5-(a). The subscript \(\sigma/\rho\) indicates the presence of a gap in spin/charge excitations. \(U\) and \(V\) are expressed in units of bare hopping \(t\).

We now turn to the attractive sector surrounding the \(U=2V\) line, namely the region \(U<0\) in the phase diagram of Fig. 6.
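Before doing so, a brief numerical aside: the asymptotic solution (52) and the resulting gap scale \(\Delta_{\sigma}\sim\Lambda^{*}e^{-1/|g_{1}(\ell^{*})|}\) can be illustrated directly in a few lines of code. The following is a minimal sketch; the values of \(\ell^{*}\), \(g_{1}(\ell^{*})\) and \(\Lambda_{0}\) are purely illustrative placeholders, not values taken from the calculations of this paper.

```python
# Minimal numerical sketch of the marginal backscattering flow, Eq. (52):
#   g1(l) = g1(l*) / (1 + g1(l*) (l - l*)),   g1(l*) < 0,
# and the resulting spin-gap estimate Delta_sigma ~ Lambda* exp(-1/|g1(l*)|).
# All numbers below (g1_star, ell_star, Lambda0) are illustrative only.

import numpy as np

Lambda0 = 1.0          # bare cutoff (illustrative)
ell_star = 8.0         # scale below which irrelevant couplings are negligible
g1_star = -0.05        # small, effectively attractive backscattering at ell_star

Lambda_star = Lambda0 * np.exp(-ell_star)

def g1(ell):
    """One-loop solution (52), valid for ell >= ell_star."""
    return g1_star / (1.0 + g1_star * (ell - ell_star))

# The flow diverges when 1 + g1_star * (ell - ell_star) = 0:
ell_sigma = ell_star + 1.0 / abs(g1_star)
gap_sigma = Lambda_star * np.exp(-1.0 / abs(g1_star))

print(f"divergence scale ell_sigma = {ell_sigma:.2f}")
print(f"spin-gap estimate Delta_sigma ~ {gap_sigma:.3e}")
```

Because \(|g_{1}(\ell^{*})|\) enters exponentially, even a modest attractive value produces an extremely small but finite spin gap, which is the point made above.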
In the \(g\)-ology formulation of the EHM, the TL conditions \(g_{1}=g_{3}=0\) at \(U=2V\) will also be unstable due to the presence of irrelevant terms that couple spin and charge degrees of freedom at the beginning of the flow. Thus, in spite of \(g_{3}\) remaining irrelevant, \(g_{1}(\ell)\) becomes negative for \(\ell\geq\ell^{*}\), as shown in Fig. 7-(b); \(\ell^{*}\) being large, this leads to a small spin gap \(\Delta_{\sigma}\) [Eq. (52)]. As displayed in Fig. 6, this tips the balance in favor of SS as the stable phase, impinging on the region of TS stability found in the continuum \(g\)-ology theory (Fig. 5). The resulting growth of the SS region against the gapless TS one leads to a convex SS-TS boundary in the phase diagram that is consistent with previous weak-coupling RG calculations [28] and exact diagonalization results of Nakamura [18]. A related bending of the phase boundary is also found for the \(U=-2V\) line separating the Mott SDW and gapless TS phases in the TL model. The SDW state is then favored against TS at \(U>0\) and \(V<0\). This is illustrated in Fig. 7-(c),(d) for the point B of Fig. 6, where \(g_{2}\) changes sign at the beginning of the flow, so that umklapp scattering, known to be irrelevant on the \(U=-2V\) line in the continuum \(g\)-ology theory, becomes marginally relevant with a small but finite charge gap in the presence of irrelevant terms in (42), which enlarges the stability region of the Mott SDW state. Regarding the rest of the phase diagram of Fig. 6, only quantitative changes in the flow of coupling constants result from the presence of irrelevant terms generated by lattice effects. These results confirm those of Ref. [28] obtained by a different RG approach. We conclude that even if the lattice EHM model at half-filling is invariably described at sufficiently low energy by an effective continuum \(g\)-ology model, it is difficult to determine the initial conditions of this effective model without a careful analysis of the physics at high energy. Taking directly the continuum limit from the bare Hamiltonian may lead to wrong conclusions as to the nature of the ground state and in turn the structure of the phase diagram. These effects carry over away from half-filling for the EHM model, as we shall discuss next.

Figure 7: Flow of the three-leg vertices \(Z_{\mathbf{x}}\) of the susceptibilities [(a),(c)] and coupling constants [(b),(d)] for points A and B of the phase diagram of Fig. 6 at half-filling. A:(-1.69, -1.0), B:(1.84, -1.0).

Figure 8: Flow of the three-leg vertices \(Z_{\mathbf{x}}\) of the susceptibilities [(a),(c)] and coupling constants [(b),(d)] for points C’ and C of the phase diagram in Fig. 6, near the \(U=2V>0\) line at half-filling. C:(1.81, 1.03), C’:(1.43, 0.69).

### Away from half-filling

As far as the part played by the spectrum is concerned, we first note that away from half-filling, when \(\mu\neq 0\), the integration of degrees of freedom is no longer symmetric with respect to the Fermi level, except in the low-energy domain where \(\Lambda\ll\Lambda_{0}\) and the spectrum can be considered essentially linear, as generically depicted in Fig. 1. As a consequence, the RG flow can be divided into three regimes. In the first regime, the asymmetry between electrons and holes plays an important role. Typically, for \(\mu>0\), we can have \(\mathcal{N}(\xi>\Lambda)=0\), that is, no fermion states are available, whereas \(\mathcal{N}(\xi<-\Lambda)\neq 0\) (Fig. 1).
The contributions at the level of the bubbles \(\mathcal{L}^{\text{ph,pp}}\) will be affected accordingly. Thus, there will be no \(2k_{\text{F}}\) particle-hole excitations, and \(\mathcal{L}^{\text{ph}}\) will vanish in this regime (see Fig. 16). This contrasts with Cooper pair excitations contributing to \(\mathcal{L}^{\text{pp}}\), which are present for momentum pairs \((-k,k)\) where \(\mathcal{N}(\xi_{k})\neq 0\). It follows that \(\mathcal{L}^{\text{pp}}\) will be only halved in amplitude, the remaining part being still logarithmic. As we will see, this is responsible for a sizeable screening of interactions at the beginning of the flow, whose impact alters the structure of the phase diagram obtained in the continuum limit. This is reminiscent of the screening of Coulomb interactions by pairing fluctuations in the theory of conventional superconductivity [41]. The second regime corresponds to the region of \(\Lambda\) where we have \(\mathcal{N}(\xi)\approx\mathcal{N}(-\xi)\), but where the spectrum is still poorly approximated by a linear function. In this regime, the logarithmic singularity of the p-h channel is only partly restored while the one in the p-p channel is complete (see Fig. 16); this imbalance between the two scattering channels favors the screening effects of the Coulomb term. Finally, the last regime corresponds to the continuum limit at small \(\Lambda\), for which we can write \(\mathcal{N}(\xi)\approx\mathcal{N}(-\xi)\approx 1/\pi v_{\text{F}}\). This corresponds to the density of states used in the \(g\)-ology model for each fermion branch and both spin orientations. Besides these loop effects associated with the density of states, the chemical potential also has an impact on the Peierls loop \(\mathcal{L}_{\text{p}^{\prime}}\), in which the reciprocal lattice vector involved in momentum conservation spoils perfect nesting [36, 38]. As a consequence, \(\mathcal{L}_{\text{p}}\) survives but not \(\mathcal{L}_{\text{p}^{\prime}}\), so that the equations for the normal \(g_{1}\) and \(g_{2}\) processes become independent of umklapp processes at \(\Lambda(\ell)<v_{\text{F}}\mu\). Furthermore, as discussed at the level of Eq. (36), because of the reduction of phase space available for umklapp processes as a function of \(\Lambda\) at \(\mu\neq 0\), \(g_{3}\) is intrinsically suppressed, since \(4k_{\text{F}}\neq 2\pi\) away from half-filling.

#### 3.2.1 Small doping

One can now consider the phase diagram for a small departure from half-filling, namely at \(\mu=0.035\) (Fig. 9), integrating the flow equations (42), (49-50) and (51) with the initial conditions (19), (46) and (47). In the repulsive sector near the \(U=2V\) line, we see that the regions of fully gapped BOW and charge-gapped SDW phases shrink in size, and only exist above some threshold \(U_{\text{c}}\) in the interactions. Thus a finite region with incommensurate CDW, BOW and SDW phases, none of which has a gap in the charge sector, unfolds at small coupling. The putative gap is indeed suppressed by the energy scale \(v_{\text{F}}\mu\), which stops the flow of \(2g_{2}-g_{1}\) and \(g_{3}\) towards strong coupling when \(\Lambda(\ell)<v_{\text{F}}\mu\). The profile of the critical \(U_{\text{c}}\) shown in Fig. 10 for the onset of the gapped BOW phase as a function of doping \(\mu\) is well described by a power law \(U_{\text{c}}(\mu)\simeq 8.03\mu^{b}+c\), where \(b\simeq 0.53\).
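As a side note, the constants of such a power law can be extracted by a standard nonlinear least-squares fit. The sketch below is illustrative only: the data points are synthetic stand-ins generated from the parameter values quoted in the caption of Fig. 10, not the actual RG data behind that figure.

```python
# Illustrative sketch: extracting (a, b, c) in U_c(mu) = a * mu**b + c by
# nonlinear least squares. The "data" are synthetic, generated from the
# quoted fit parameters plus small noise.

import numpy as np
from scipy.optimize import curve_fit

def power_law(mu, a, b, c):
    return a * mu**b + c

rng = np.random.default_rng(0)
mu = np.linspace(0.005, 0.05, 20)
Uc = power_law(mu, 8.06, 0.53, 0.34) + rng.normal(0.0, 0.01, mu.size)  # synthetic

popt, pcov = curve_fit(power_law, mu, Uc, p0=(5.0, 0.5, 0.1))
a_fit, b_fit, c_fit = popt
print(f"a = {a_fit:.2f}, b = {b_fit:.2f}, c = {c_fit:.2f}")
```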
Here the constant \(c\to 0\) when the temperature goes to zero, indicating that in the ground state \(U_{\text{c}}\to 0\) as \(\mu\to 0\). At nonzero \(\mu\), a finite region of dominant BOW state with only a spin gap and gapless charge excitations forms in the phase diagram.

Figure 9: Same as Fig. 6 but away from half-filling: \(\mu=0.035\). The dashed lines refer to the phase boundaries of the continuum limit in Fig. 5-(b). The open circle corresponds to the threshold value \(U_{c}(\mu)\) for the onset of a gapped BOW state as a function of \(\mu\) (Fig. 10). The point D in the BOW\({}_{\sigma}\) charge-gapless region is discussed in the text and Fig. 11. The long-dashed lines indicate the boundary above which a charge gap is present in the continuum limit.

Figure 10: The critical coupling \(U_{c}\) is plotted as a power law \(U_{c}=a\mu^{b}+c\) of the chemical potential \(\mu\). The gapped BOW\({}_{\sigma,\rho}\) phase exists for all \(U\geqslant U_{c}\). Here \(b=0.53\), \(a=8.06\), and the constant \(c=0.34\) for a temperature of \(10^{-7}\) used in the calculations.

At point D in Fig. 9, for instance, the corresponding flow of the couplings displayed in Fig. 11 shows a growth followed by the leveling off of repulsive umklapp scattering. This is the signature that \(g_{3}\) becomes irrelevant beyond some finite value of \(\ell\). Nevertheless, this trajectory favors BOW correlations against CDW ones; it also initiates an incommensurate regime in which \(2g_{2}(\ell)-g_{1}(\ell)\) evolves toward a constant. Regarding the attractive backscattering amplitude \(g_{1}\), it will, according to (52), invariably lead to a small spin gap at large \(\ell\). Dominant BOW correlations away from half-filling but at finite \(U\) and \(V\) near the line \(U=2V\) have been observed numerically in quantum Monte Carlo simulations [19], in qualitative agreement with the present results. If one moves downward in the second quadrant of the phase diagram of Fig. 9, we see that a finite \(\mu\) suppresses the transition for the charge gap at the boundary between the SDW and TS phases which is present at half-filling. The SDW phase then becomes entirely gapless near the boundary. One finally notes that the frontier between CDW and SS in the fourth quadrant of the phase diagram of Fig. 9 has moved slightly upward, which is consistent with the results of the continuum limit, as already shown in the lower panel of Fig. 5. However, as we will see next, this boundary is noticeably affected at larger \(\mu\).

#### 3.2.2 Intermediate doping

The phase diagram at intermediate doping \(\mu=0.3\) is displayed in Fig. 12. Due to the weak effect of umklapp processes at this filling, there is no region of the phase diagram characterized by a charge gap. However, the influence of \(g_{3}\) at the beginning of the flow is still finite, which, together with the change of \(g_{1}\) to negative values due to irrelevant coupling terms, still defines a region of dominant BOW phase near the \(U=2V\) line. The characteristics of the flow of coupling constants in this BOW region, albeit much further reduced in their amplitudes, are similar to those portrayed in Fig. 11. In the phase diagram of Fig. 12, the SDW-TS boundary turns out to be relatively close to the prediction of the model in the continuum limit. Here only the weak impact of umklapp and irrelevant couplings, which preserves the sign of \(g_{2}\), restores the stability of TS compared to the situation at very small \(\mu\) (e.g., point B of Figs. 6 and 7-(c),(d)). In the fourth quadrant of Fig.
12, the deviations with respect to the prediction of the continuum model are particularly significant. One observes an expansion of the SS phase against CDW which goes well beyond its region of stability found in the continuum limit. The origin of this expansion resides in the sizable asymmetry of the spectrum with respect to the Fermi level. At the beginning of the flow, that is at large \(\Lambda\), all \(2k_{\rm F}\) particle-hole pair fluctuations coming from closed loops, vertex and ladder diagrams in Fig. 2 are strongly suppressed, a consequence of the lack of available density of states for either electrons or holes for this p-h pairing when the asymmetry is pronounced, as illustrated in Fig. 1. This regime is followed by a second one at relatively large \(\Lambda\) where these fluctuations are only partially restored. Thus there is a sizeable \(\Lambda\) interval where p-p ladder diagrams for pairing fluctuations (first row of Fig. 2) dominate (see e.g. Fig. 16 at finite \(\mu\)) and govern the flow of \(g_{1}\) and \(g_{2}\). At point E in Fig. 12, for instance, the coupling \(g_{2}\), though initially repulsive, is screened by pairing fluctuations, to the point where it changes sign and becomes attractive. This is shown in Fig. 13-(b). As a result, the SS phase is favored against CDW (Fig. 13-(a)). This effect is reminiscent of the screening of the Coulomb interaction by pairing fluctuations which favors phonon-induced singlet superconductivity in isotropic metals [41]. The strong reduction of the \(2k_{\rm F}\) particle-hole pair contribution at the beginning of the flow is also responsible for making umklapp processes irrelevant in the whole CDW region of the upper half of the phase diagram. This is why no charge gap is found, in contrast to the continuum-limit prediction (region above the spaced dashed line in Fig. 12).

Figure 11: Flow of the three-leg vertices \(Z_{\rm x}\) of (a) density-wave susceptibilities and (b) coupling constants at point D of the phase diagram of Fig. 9 (\(\mu=0.035\), \(n=1.01\)). D:(1.30, 0.63).

Figure 12: Same as Fig. 9 but for \(\mu=0.3\) (\(n=1.1\)). The point E is discussed in detail in the text.

Figure 13: Flow of the three-leg vertices \(Z_{\rm x}\) of (a) CDW and SS susceptibilities and (b) the coupling constants at point E of the diagram in Fig. 12 (\(\mu=0.3\), \(n=1.10\)). E:(-1.24, 1.06).

#### 3.2.3 Large doping

One now considers the phase diagram at the much higher doping level, \(\mu=1.03\), shown in Fig. 14. The whole diagram indicates that \(g_{3}\) has virtually no effect in this range of doping, reflecting an incommensurate situation for the electron system where no trace of dominant short-range BOW correlations is found. This coupling can then be safely ignored in the analysis. Only a spin gap can occur. In the continuum model, we have seen that it is governed by the flow of \(g_{1}(\ell)\) in Eq. (31) and the initial condition \(g_{1}\simeq U-V<0\) for attractive backward scattering, that is, above the dashed line \(U\simeq V\) in Fig. 14.

Figure 14: Same as Fig. 12 but for \(\mu=1.03\) (\(n=1.34\)). The point F is discussed in the text and Fig. 15.

According to the figure, the continuum-model result is however significantly altered by lattice effects, and important deviations are present on the repulsive side of the line. A significant region develops with gapless spin excitations although \(g_{1}\) is initially attractive. At point F, for instance, Fig.
15 shows that \(g_{1}\) indeed starts in the attractive domain but rapidly evolves towards the repulsive sector, to become in turn a marginally irrelevant variable at sufficiently large \(\ell\). This remarkable effect has its origin in the pronounced asymmetry of the spectrum which, as we have seen, suppresses most, if not all, contributions coming from \(2k_{\rm F}\) particle-hole loops at large \(\Lambda\); small-momentum pairing fluctuations coming from contributions of ladder Cooper diagrams to \(g_{1}\) in Fig. 2 largely dominate. Since for these terms the product \(g_{1}g_{2}\) in lowest order is initially negative, this makes these diagrams globally positive and then pushes the flow of \(g_{1}\) towards positive values. This can be seen as the counterpart of the screening discussed above for the enhancement of singlet superconductivity by pairing fluctuations. This counter-screening of \(g_{1}\) enlarges the stability region of the SDW phase and is detrimental to the CDW phase, which shrinks in size. Counter-screening is also present, but to a lesser extent, in the negative part of the \(U\simeq V\) line, where the TS phase with effective repulsive \(g_{1}\) is found slightly above the continuum \(U\simeq V\) line. If we turn our attention to the fourth quadrant of the phase diagram, we see that compared to the results shown in Fig. 12 the stability region of the SS phase is further broadened against CDW, reaching about twice the area predicted by the continuum model. The screening of \(g_{2}\) by pairing fluctuations from positive to negative values results from the non-linear spectrum in the first two regimes of the flow. This follows the pattern already displayed in Fig. 13, which is here magnified due to the more pronounced asymmetry of the spectrum. One can show that this trend is confirmed when increasing \(\mu\). Calculations carried out at higher doping, namely up to \(\mu=\sqrt{2}\) (3/4-filling), show roughly a similar phase diagram as in Fig. 14, except for the boundaries delimiting the CDW phase. By the same mechanism of screening, the CDW region monotonically shrinks in size to the benefit of the SS or SDW phase.1 This leads to the interesting conclusion that the asymmetry between electron and hole states in an incommensurately filled spectrum can act as an efficient mechanism to screen the repulsive part of long-range Coulomb interactions and thus promotes superconductivity.

Figure 15: Flow of the three-leg vertices \(Z_{\rm x}\) of the susceptibilities (a) and couplings (b) at point F of Fig. 14 at \(\mu=1.03\) (\(n=1.34\)). F:(1.30, 1.94).

## 4 Conclusions and perspectives

In this work we have developed a weak-coupling functional RG approach to lattice models of interacting fermions in one dimension. In the framework of the EHM, we have shown how lattice effects modify in a systematic way the initial conditions defining the effective continuum field theory which invariably emerges at sufficiently low energy. For repulsive couplings at half-filling, for instance, the impact of irrelevant interactions on marginal couplings, which couples spin and charge degrees of freedom, turns out to be a key factor in the emergence of the gapped BOW state that overlaps the \(U=2V>0\) gapless Tomonaga-Luttinger line of the continuum theory. We have also checked that qualitative changes in the nature of ground states are also manifest in the attractive sector of the EHM phase diagram at half-filling.
These changes are due to irrelevant terms affecting the flow of marginal couplings at high energy and introducing noticeable shifts in the transition lines of the continuum theory, altering the stability region of the gapless TS state in favour of the SS or SDW gapped states. These alterations of the continuum EHM phase diagram at weak coupling are consistent with previous numerical studies [17, 18]; they also confirm the results obtained from numerical fRG in the repulsive coupling sector [27], and more generally from a Wilsonian RG approach to the non-linearity of the spectrum and the momentum-dependent interactions of the EHM [28]. We have also carried out our fRG procedure away from half-filling. In this case, the particle-hole symmetry in the tight-binding spectrum is lost and the integration of degrees of freedom becomes asymmetric with respect to the Fermi level. This notably affects the influence of high-energy fermion states on the flow of scattering amplitudes and susceptibilities. An imbalance between the logarithmic screening of the p-p and \(2k_{\rm F}\) p-h scattering channels is introduced, which couples charge and spin degrees of freedom. In a finite energy interval at the beginning of the flow, the \(2k_{\rm F}\) density-wave part and, concomitantly, the magnification of umklapp commensurability are strongly reduced. This contrasts with the p-p scattering channel, which is weakly affected and sees its logarithmic singularity maintained. As the integration of degrees of freedom approaches the Fermi level, the imbalance, together with the irrelevant interactions, scales down to zero and the flow progressively evolves toward that of an effective continuum theory. However, the input parameters that govern the low-energy flow are not those of the naive continuum limit and alter sizable parts of the EHM phase diagram compared to the continuum \(g\)-ology predictions away from half-filling. This is particularly manifest for the CDW state, whose extent in the phase diagram at negative \(U\) and repulsive \(V\), for instance, is steadily reduced as a function of doping to the benefit of singlet superconductivity, which gains in importance. This feature bears comparison with the screening of Coulomb interactions by high-energy pairing fluctuations in ordinary metals, which is known to promote the existence of superconductivity from the retarded attractive coupling induced by electron-phonon interactions [41]. The approach developed in this paper can be easily transposed to other non-integrable models of interacting electrons defined on a lattice. This is the case for models with generalized non-local interactions [42, 43, 44, 45], for which numerical calculations are available at half-filling and known to deviate from the predictions of the \(g\)-ology approach in the field-theory continuum limit [18]. Another natural extension of the present work concerns the EHM in the quasi-one-dimensional case, where a weak but finite interchain hopping is taken into account. This may serve as a weak-coupling quasi-1D EHM for studying the sequence of ground states that can unfold in strongly anisotropic correlated systems as a function of doping. Some of these applications are currently under investigation. C. B. and L. D. thank the Natural Sciences and Engineering Research Council of Canada (NSERC), the Regroupement Quebecois des Materiaux de Pointe (RQMP) and the Institut Quantique of Universite de Sherbrooke for financial support. The authors thank E. Larouche and M.
Haguier for their support on various numerical aspects of this work.

## Appendix A Flows of coupling constants

### Finite-temperature, one-dimensional, single-band systems

In this first part of the Appendix, we detail the derivation of the flow equations for the scattering amplitudes at the one-loop level for both marginal and irrelevant couplings. To do so we first make the correspondence \(k\to(\eta,\xi)\) between the momentum and the energy \(\xi\) and its branch \(\eta\), so that \[g_{k_{1},k_{2},k^{\prime}_{1}}=g^{\bar{\eta}}(\bar{\xi}),\] where \(\bar{x}=(x_{1},x_{2},x_{1^{\prime}})\) for \(x=\xi,\eta\). From the diagrams of Fig. 2, the flow equations of the coupling constants at the one-loop level comprise a sum of contributions coming from the p-p and p-h scattering channels, which can be put in the form: \[\Lambda\partial_{\Lambda}g^{\bar{\eta}}(\bar{\xi}) =\sum_{\mathbf{x}}D_{\mathbf{x}}^{\bar{\eta}}(\bar{\xi})\] \[=D_{\text{pp}}^{\bar{\eta}}(\bar{\xi})+D_{\text{ph1}}^{\bar{\eta}}(\bar{\xi})+D_{\text{ph2}}^{\bar{\eta}}(\bar{\xi})+D_{\text{ph3}}^{\bar{\eta}}(\bar{\xi}), \tag{53}\] where the diagrams \[D_{\mathbf{x}}^{\bar{\eta}}(\bar{\xi})=\sum_{p}\mathcal{L}_{\mathbf{x}}^{\bar{\eta}}(\bar{\xi})\,\gamma_{x1}^{\bar{\eta}}(\bar{\xi})\,\gamma_{x2}^{\bar{\eta}}(\bar{\xi}) \tag{54}\] are expressed in terms of loops \(\mathcal{L}_{\mathbf{x}}^{\bar{\eta}}(\bar{\xi})\) and combinations of coupling constants \(\gamma_{x1}^{\bar{\eta}}(\bar{\xi})\) and \(\gamma_{x2}^{\bar{\eta}}(\bar{\xi})\) for each scattering channel \(\mathbf{x}\). They are respectively given by \[D_{\text{pp}}^{\bar{\eta}}(\bar{\xi}) =\sum_{p}\mathcal{L}_{p,-p+k_{1}+k_{2}}^{\text{pp}}\,g_{k_{2},k_{1},-p+k_{1}+k_{2}}\,g_{p,-p+k_{1}+k_{2},k^{\prime}_{1}}, \tag{55a}\] \[D_{\text{ph1}}^{\bar{\eta}}(\bar{\xi}) =\sum_{p}\mathcal{L}_{p,\,p-k^{\prime}_{1}+k_{2}}^{\text{ph}}\,g_{k_{1},p-k^{\prime}_{1}+k_{2},p}\,g_{k_{2},p,p-k^{\prime}_{1}+k_{2}},\] (55b) \[D_{\text{ph2}}^{\bar{\eta}}(\bar{\xi}) =-2\sum_{p}\mathcal{L}_{p,\,p+k^{\prime}_{1}-k_{1}}^{\text{ph}}\,g_{k_{1},p+k^{\prime}_{1}-k_{1},k^{\prime}_{1}}\,g_{k_{2},p,p+k^{\prime}_{1}-k_{1}},\] (55c) \[D_{\text{ph3}}^{\bar{\eta}}(\bar{\xi}) =\sum_{p}\mathcal{L}_{p,\,p+k^{\prime}_{1}-k_{1}}^{\text{ph}}(g_{k_{1},p+k^{\prime}_{1}-k_{1},p}\,g_{p,k_{2},p+k^{\prime}_{1}-k_{1}}+g_{k_{1},p+k^{\prime}_{1}-k_{1},k^{\prime}_{1}}\,g_{k_{2},p,p+k^{\prime}_{1}-k_{1}}). \tag{55d}\] As explained in the main text, \(g^{\bar{\eta}}(\bar{\xi})\) and \(D_{\mathbf{x}}^{\bar{\eta}}(\bar{\xi})\) on each side of (53) can be formally expanded in powers of \(\bar{\xi}\) to get the flow equations of the set of marginal and irrelevant couplings.
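To make this expansion step concrete, the toy sketch below extracts the components \(g_{n}=\partial^{n}g/\partial\xi^{n}|_{0}\) of a smooth one-variable stand-in for a coupling function, so that \(g(\xi)=\sum_{n}(\xi^{n}/n!)\,g_{n}\) with \(n=0\) the marginal part and \(n\geqslant 1\) the irrelevant parts. The function used is arbitrary and is not a coupling of the EHM.

```python
# Toy sketch of the expansion step: reduce a smooth coupling function g(xi)
# to its Taylor components g_n = d^n g / d xi^n |_0, so that
# g(xi) = sum_n xi^n / n! * g_n. The function g below is an arbitrary stand-in.

import numpy as np
from math import factorial

def g(xi):
    # stand-in for a band-dependent coupling; not a coupling of the EHM
    return 0.4 - 0.1 * xi + 0.25 * xi**2

xi = np.linspace(-0.1, 0.1, 21)   # small-xi window around the Fermi level
coeffs = np.polynomial.polynomial.polyfit(xi, g(xi), deg=4)

# polyfit returns c_n with g = sum_n c_n xi^n, hence g_n = n! * c_n
g_n = [factorial(n) * c for n, c in enumerate(coeffs)]
print("marginal part  g_0 =", round(g_n[0], 6))
print("irrelevant g_1, g_2 =", [round(v, 6) for v in g_n[1:3]])
```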
### Loop expressions

The expressions of the loop contributions for the diagrams of the p-p and p-h scattering channels are given by \[\mathcal{L}_{p,q}^{\text{ph}}=\Lambda\partial_{\Lambda}\Pi_{p,q+p}^{\text{ph}},\qquad\mathcal{L}_{p,q}^{\text{pp}}=\Lambda\partial_{\Lambda}\Pi_{p,q-p}^{\text{pp}}, \tag{56}\] with \[\Pi^{\text{ph}}_{p,\,q} =\frac{\pi\nu_{\text{F}}}{2L}\theta_{\Lambda}(p)\theta_{\Lambda}(p+q)\frac{n_{\text{F}}\big{(}\xi(p)\big{)}-n_{\text{F}}\big{(}\xi(p+q)\big{)}}{\xi(p)-\xi(p+q)},\] \[\Pi^{\text{pp}}_{p,\,q} =-\frac{\pi\nu_{\text{F}}}{2L}\theta_{\Lambda}(p)\theta_{\Lambda}(p+q)\frac{n_{\text{F}}\big{(}\xi(p)\big{)}-n_{\text{F}}\big{(}-\xi(p+q)\big{)}}{\xi(p)+\xi(p+q)}, \tag{57}\] where \(n_{\text{F}}(\xi)=(1+\mathrm{e}^{\beta\xi})^{-1}\) is the Fermi-Dirac distribution and \(\theta_{\Lambda}(k)\) is the regulator or cut-off function of the RG procedure. The latter is introduced explicitly in Sec. A.3 below. Let us discuss some limiting cases for these loops at vanishing external momentum; these enter the flow equations of the response functions. We can define the following intensities in each scattering channel. In the p-h channel, we have \[\mathcal{L}_{\text{p}} =\sum_{p\geq 0}\mathcal{L}^{\text{ph}}_{p,\,p-2k_{\text{F}}}=\sum_{p\geq 0}\mathcal{L}^{\text{ph}}_{-p,\,p+2k_{\text{F}}}, \tag{58a}\] \[\mathcal{L}_{\text{p}^{\prime}} =\sum_{p\geq 0}\mathcal{L}^{\text{ph}}_{p,\,p+2k_{\text{F}}}=\sum_{p\geq 0}\mathcal{L}^{\text{ph}}_{-p,\,-p-2k_{\text{F}}},\] (58b) \[\mathcal{L}_{\text{L}} =\sum_{p\geq 0}\mathcal{L}^{\text{ph}}_{p,\,p}=\sum_{p\geq 0}\mathcal{L}^{\text{ph}}_{-p,\,-p}, \tag{58c}\] which correspond respectively to the \(2k_{\text{F}}\) eh or Peierls loops without (\(\mathcal{L}_{\text{p}}\)) and with (\(\mathcal{L}_{\text{p}^{\prime}}\)) umklapp, and to the \(q=0\) eh or Landau loop. As for the p-p or Cooper loop at zero pair momentum, it is given by \[\mathcal{L}_{\text{C}}=\sum_{p\geq 0}\mathcal{L}^{\text{pp}}_{p,\,-p}=\sum_{p\geq 0}\mathcal{L}^{\text{pp}}_{-p,\,p}. \tag{59}\] These quantities are plotted in Fig. 16 as a function of the RG time \(\ell\) defined by \(\Lambda=\Lambda_{0}\mathrm{e}^{-\ell}\). We can observe the presence of the van Hove singularity located at the edge of the spectrum. At half-filling the amplitudes of the Cooper and Peierls bubble intensities are the same at all \(\ell\) but opposite in sign, and lead to maximum interference between certain classes of diagrams in Fig. 2. Away from half-filling, the Peierls intensity \(\mathcal{L}_{\text{p}^{\prime}}\), which involves umklapp scattering, sees its intensity suppressed as a function of \(\ell\), typically when \(\Lambda(\ell)<v_{\text{F}}\mu\). This differs from the normal part \(\mathcal{L}_{\text{p}}\) with no umklapp, which keeps its full intensity down to the thermal shell. We also note in the third panel of Fig. 16 that at sizeable doping all the Peierls intensities are zero at the beginning of the flow. This results from the particle-hole asymmetry of the spectrum, which suppresses the electron or hole states required for \(2k_{\text{F}}\) eh pairing. By contrast, the asymmetry of the spectrum suppresses only half of the states available for p-p pairing, so that the Cooper intensity is only halved and remains finite at the beginning of the flow.

### Choice of the regulator

The regulator \(r_{a}(x)\) is realized as a smooth step function, and depends on a rigidity parameter \(a\), such that \(r_{a=\infty}(x)=\Theta(x-1)\), where \(\Theta(x)\) is the Heaviside function.
Its expression is the following: \[r_{a}(x)=g(ax-a+1/2), \tag{60}\] where \[g(x) =\frac{f(x)}{f(x)+f(1-x)}, \tag{61}\] \[f(x) =\begin{cases}\mathrm{e}^{-1/x}\text{ if }x>0,\\ 0\text{ otherwise}.\end{cases} \tag{62}\]

Figure 16: Cooper, Peierls and Landau bubbles shown for different values of the chemical potential, in the case of a tight-binding spectrum. The first panel is at half-filling and the others at different fillings. In this figure, \(\ell\) is the RG time, defined by \(\Lambda=\Lambda_{0}\mathrm{e}^{-\ell}\). The peak is due to the van Hove singularity present in the density of states.

The regulator is shown in Fig. 17. It enters the flow equations through the function \(\theta_{\Lambda}(k)\) in the loop expressions (see Eqs. 57). This function only depends on the momentum \(k\) through the variable \(\xi(k)=\xi\), and is given by \[\theta_{\Lambda}(k)=r_{a}(|\xi|/\Lambda).\] Such a cutoff procedure is meant to reproduce the Wilsonian RG approach, which amounts to a progressive integration of the degrees of freedom. Here, the UV degrees of freedom are integrated first, and the RG flow leads to a low-energy effective theory. In the case of one-dimensional fermions, the low-energy theory corresponds to a model with a linear spectrum comprising two branches centered around the two Fermi points. This is of course in stark contrast to the bosonic case, for which the low-energy theory is described by modes of momenta \(k\approx 0\). Let us now clarify the structure of the bubbles \(\mathcal{L}_{p,q}^{\text{ph,pp}}\). Each bubble is made of two factors: the first one is proportional to the cutoff function, while the second is proportional to the derivative of this function with respect to the RG parameter \(\Lambda\) -- unslashed and slashed fermion lines, respectively, in the diagrams of Figs. 2, 3 and 4. Since the cutoff function is roughly a regularized step function, its derivative is a regularized Dirac function, whose effect is a selection of modes of energy \(\xi\thicksim\Lambda\), and hence reproduces Wilson's idea.

Van Hove singularity regularization. The regulator function \(r_{a}(x)\) can be used to regularize the van Hove singularity. The density of states has the schematic form: \[f(x)=\frac{\Theta(1-|x|)}{\sqrt{1-x^{2}}}, \tag{63}\] and is singular at \(x=\pm 1\). In order to regularize this function, we first define a regularized gate function: \[G_{a}(x)=r_{a}(x+2)\big{(}1-r_{a}(x)\big{)}, \tag{64}\] and then make the following replacement: \[f(x)\to f_{a}(x)=\frac{G_{a}(x)}{\sqrt{1-x^{2}G_{a}(x)}}. \tag{65}\] The regularized van Hove singularity is shown in Fig. 18. Such a regularization is advantageous, because it produces a smooth function, well suited for numerical evaluations. Furthermore, the error due to the regularization is restricted to small segments around the singular points. This is because the regulator is built out of functions whose support of variation is compact. The total number of states is recovered in the limit \(a\to\infty\).
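For reference, the regulator of Eqs. (60)-(62), the cutoff function \(\theta_{\Lambda}\), and the regularized density of states of Eqs. (64)-(65) admit a direct implementation. The following is a minimal sketch reproducing these definitions; the value of the rigidity \(a\) is a free choice of the scheme.

```python
# Minimal implementation of the regulator of Eqs. (60)-(62), the cutoff
# function theta_Lambda, and the regularized van Hove density of states
# of Eqs. (64)-(65). The rigidity a is a free parameter.

import numpy as np

def f(x):
    """f(x) = exp(-1/x) for x > 0, else 0 (Eq. 62)."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def g(x):
    """g(x) = f(x) / (f(x) + f(1 - x)) (Eq. 61)."""
    return f(x) / (f(x) + f(1.0 - x))

def r(x, a):
    """Smooth step r_a(x) = g(a x - a + 1/2) (Eq. 60); ~0 below 1, ~1 above 1."""
    return g(a * x - a + 0.5)

def theta(xi, Lam, a):
    """Cutoff function theta_Lambda = r_a(|xi| / Lambda) entering the loops."""
    return r(np.abs(xi) / Lam, a)

def gate(x, a):
    """Regularized gate G_a(x) = r_a(x + 2) (1 - r_a(x)) (Eq. 64)."""
    return r(x + 2.0, a) * (1.0 - r(x, a))

def dos(x, a):
    """Regularized van Hove density of states (Eq. 65)."""
    G = gate(x, a)
    return G / np.sqrt(1.0 - x**2 * G)

x = np.linspace(-1.5, 1.5, 7)
print(dos(x, a=10.0))   # smooth, finite at x = +/- 1; vanishes outside the band
```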
2309.10018
The $1$-Level Density for Zeros of Hecke $L$-Functions of Imaginary Quadratic Number Fields of Class Number $1$
Let $\mathbb{K} = \mathbb{Q}(\sqrt{-d})$ be an imaginary quadratic number field of class number $1$ and $\mathcal{O}_{\mathbb{K}}$ its ring of integers. We study a family of Hecke $L$-functions associated to angular characters on the non-zero ideals of $\mathcal{O}_{\mathbb{K}}$. Using the powerful Ratios Conjecture (RC) due to Conrey, Farmer, and Zirnbauer, we compute a conditional asymptotic for the average $1$-level density of the zeros of this family, including terms of lower order than the main term in the Katz-Sarnak Density Conjecture coming from random matrix theory. We also prove an unconditional result about the $1$-level density, which agrees with the RC prediction when our test functions have Fourier transforms with support in $(-1,1)$.
Kristian Holm
2023-09-18T17:16:25Z
http://arxiv.org/abs/2309.10018v1
# The \(1\)-level density for zeros of Hecke \(L\)-functions of imaginary quadratic number fields of class number \(1\)

###### Abstract

Let \(\mathbb{K}=\mathbb{Q}(\sqrt{-d})\) be an imaginary quadratic number field of class number \(1\) and \(\mathcal{O}_{\mathbb{K}}\) its ring of integers. We study a family of Hecke \(L\)-functions associated to angular characters on the non-zero ideals of \(\mathcal{O}_{\mathbb{K}}\). Using the powerful Ratios Conjecture (RC) due to Conrey, Farmer, and Zirnbauer, we compute a conditional asymptotic for the average \(1\)-level density of the zeros of this family, including terms of lower order than the main term in the Katz-Sarnak Density Conjecture coming from random matrix theory. We also prove an unconditional result about the \(1\)-level density, which agrees with the RC prediction when our test functions have Fourier transforms with support in \((-1,1)\).

_Keywords:_ 1-level density, Hecke \(L\)-functions, the Ratios Conjecture, imaginary quadratic number fields

_Mathematics Subject Classification:_ 11R42, 11M50 (Primary), 11R11, 11M41 (Secondary)

## 1 Introduction

Many problems in modern number theory await progress due to the difficulty of obtaining exact information about zeros of \(L\)-functions. Perhaps not unrelated to this difficulty, the study of the large scale statistics of such zeros has also become a topic of much interest, the underlying philosophy being that a collection of objects is often more regular and well-behaved than the objects themselves. This line of research began with the work of Montgomery in the 1970s, who famously conjectured [16] that the _pair correlation_ of normalized zeros of the Riemann zeta function, quantifying the "probability" of \(\zeta\) having two zeros within some prescribed distance of each other, is the same as the pair correlation for eigenvalues of random Hermitian matrices. Today, there is a large body of conjectures describing the links between \(L\)-functions and random matrices, or formulating properties that should hold for \(L\)-functions by analogy with random matrices. In this paper, we will focus on two of these: the Katz-Sarnak Density Conjecture and the \(L\)-functions Ratios Conjecture. The Katz-Sarnak conjecture is a statement about another statistic of the zeros of (a family of) \(L\)-functions, namely the \(1\)_-level density_, which was first studied by Ozluk and Snyder in [18]. For our purposes, it can be defined as follows: If \(\mathcal{F}=\{L_{k}:k\geqslant 1\}\) is a family of \(L\)-functions indexed by some parameter \(k\), and \(\mathcal{F}(K)=\{L_{k}:1\leqslant k\leqslant K\}\), let \[Z_{k}:=\{z:0\leqslant\operatorname{Re}(z)\leqslant 1,\,L_{k}(z)=0\}\] be the set of zeros of \(L_{k}\) in the critical strip. If \(\rho\in Z_{k}\), let \(\gamma(\rho)=-i(\rho-1/2)\). Thus, under the Riemann Hypothesis for the family \(\mathcal{F}(K)\), \(\gamma(\rho)\) is the imaginary part of the zero \(\rho\) of \(L_{k}\). Furthermore, let \(f:\mathbb{R}\to\mathbb{R}\) be an even Schwartz function with the property that its Fourier transform \(\widehat{f}\) has compact support. Then the \(1\)_-level density_ of the zeros of the family \(\mathcal{F}(K)\) is the number \[D(\mathcal{F}(K);f):=\frac{1}{K}\sum_{k=1}^{K}\sum_{\rho\in Z_{k}}f\bigg{(}\frac{\gamma(\rho)\log K}{\pi}\bigg{)}.\] Here, the scaling of \(\gamma(\rho)\) ensures that the average spacing between the zeros is approximately \(1\).
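For concreteness, the statistic \(D(\mathcal{F}(K);f)\) is straightforward to evaluate once the ordinates \(\gamma(\rho)\) are available. The sketch below does this for a Gaussian test function and randomly generated placeholder ordinates; genuine zeros of the \(L_{k}\) are of course not computed here.

```python
# Sketch of the statistic just defined: given (hypothetical) lists of
# ordinates gamma(rho) for each L_k, evaluate D(F(K); f).

import numpy as np

def f(x):
    return np.exp(-np.pi * x**2)   # even Schwartz test function (placeholder)

def one_level_density(ordinates, K):
    """D(F(K); f) = (1/K) sum_k sum_rho f(gamma(rho) * log(K) / pi)."""
    scale = np.log(K) / np.pi
    return sum(np.sum(f(np.asarray(g) * scale)) for g in ordinates) / K

K = 100
rng = np.random.default_rng(1)
# placeholder ordinates, NOT genuine zeros of the Hecke L-functions:
ordinates = [np.sort(rng.uniform(-5, 5, 40)) for _ in range(K)]
print(one_level_density(ordinates, K))
```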
Under the Grand Riemann Hypothesis, the \(1\)-level density measures the average density of the normalized zeros of the family \(\mathcal{F}(K)\) _in a weak sense_. That is, for zeros on the critical line \(\mathrm{Re}(z)=1/2\), and especially such zeros close to the real line, \(D(\mathcal{F}(K),\cdot)\) is a functional that sees their (scaled) distribution through the lens of a suitable test function. When taking more and more \(L\)-functions of the family and their zeros into account, one thus obtains a sequence of distributions. For the purposes of studying this sequence, and in particular its limit, such weak characterizations in fact give a complete picture of the distribution of the zeros of the family. (At least, this holds if one knows the weak characterizations of the distribution for a sufficiently large class of test functions, thanks to classic results in probability theory such as the Portmanteau theorem.) Regarding the limiting distribution of the zeros of a family of \(L\)-functions, Katz and Sarnak conjectured [14, 15] that when \(K\) tends to infinity, the functional \(D(\mathcal{F}(K),\cdot)\) converges weakly to some integral kernel that arises in the large-dimension limit of the \(1\)-level density of eigenangles of random unitary matrices, chosen uniformly at random with respect to Haar measure either from the full unitary group \(\mathrm{U}(N)\) or from one of the subgroups \(\mathrm{USp}(N)\) (when \(N\) is even), \(\mathrm{O}(N)\), or \(\mathrm{SO}(N)\). Specifically, if the family \(\mathcal{F}\) has so-called _unitary symplectic symmetry type_, the _Katz-Sarnak Density Conjecture_ states that \[\lim_{K\to\infty}D(\mathcal{F}(K);f)=\int_{\mathbb{R}}\!f(x)\!\left(1-\frac{\sin(2\pi x)}{2\pi x}\right)\mathrm{d}x \tag{1.1}\] for any even Schwartz function \(f\) whose Fourier transform has compact support. We emphasize that if the family \(\mathcal{F}(K)\) has a different symmetry type, the integral kernel which is conjectured to appear in the limit has a different form. Next, the very powerful \(L\)-functions Ratios Conjecture due to Conrey, Farmer, and Zirnbauer [5] asserts that averages of quotients of \(L\)-functions evaluated at certain parameters satisfy asymptotics that parallel those of quotients of characteristic polynomials of matrices. (See Section 3 for a more detailed statement.) Many authors have used the Ratios Conjecture to study statistical aspects of the zeros of \(L\)-functions, or various other aspects of such functions. For example, Conrey and Snaith [6] studied the pair correlation of the zeros of the Riemann zeta function. In the same paper, they also studied the \(1\)-level density for zeros of quadratic Dirichlet \(L\)-functions. Later, the Ratios Conjecture was used to study zeros of \(L\)-functions of a more general class of characters, namely _Hecke characters_ of a number field. The Hecke \(L\)-functions considered in [20] are those associated to angular characters of the Gaussian integers, and there Waxman used the Ratios Conjecture to compute the \(1\)-level density and identify lower-order terms (compared with the Katz-Sarnak heuristic) in this asymptotic. The goal of this paper is to follow [20] and carry out such a study for a general imaginary quadratic number field of class number \(1\). Thus, we consider a family \(\mathcal{F}(K)\) of \(L\)-functions associated to angular Hecke characters of such fields, which we will describe now.
By the Baker-Heegner-Stark Theorem, a complete list of imaginary quadratic number fields with class number \(1\) is given by \(\mathbb{K}:=\mathbb{K}_{d}:=\mathbb{Q}(\sqrt{-d})\), where \(d\) is one of the _Heegner numbers_, \[d=1,2,3,7,11,19,43,67,163.\] Since the case \(d=1\) has already been treated in [20], we will let \(d\) denote any of the eight remaining numbers on the list above. (This restriction will also make certain computations simpler, as the arguments involve several functions defined conditionally on the value of \(d\).) Moreover, we will also let \(N\geqslant 1\) denote any fixed positive multiple of \(\left|\mathcal{O}_{\mathbb{K}}^{\times}\right|\), where \(\left|\mathcal{O}_{\mathbb{K}}^{\times}\right|<\infty\) is the order of the group of units in the ring \(\mathcal{O}_{\mathbb{K}}\). Our family of \(L\)-functions is then given by \(\mathcal{F}=\{L_{k}(s):k\geqslant 1\}\) where \[L_{k}(s):=\sum_{\begin{subarray}{c}I\subset\mathcal{O}_{\mathbb{K}}\\ I\neq 0\end{subarray}}\frac{\psi_{k}(I)}{\mathbb{N}(I)^{s}},\qquad\psi_{k}(\langle\alpha\rangle)=\Big{(}\frac{\alpha}{\overline{\alpha}}\Big{)}^{Nk},\] when \(\operatorname{Re}(s)>1\). We note that such \(L\)-functions have been studied for arithmetic purposes on several occasions in the past. To give a few examples, Harman and Lewis considered the functions \(L(s,\Xi_{k})\), \(k\geqslant 1\), with the Hecke character \(\Xi_{k}\) given by \(\Xi_{k}(\alpha)=(\alpha/\overline{\alpha})^{2k}\) for \(\alpha\in\mathbb{Z}[i]\), and proved [10, Thm. 1] the existence of infinitely many rational primes \(p\) that have a Gaussian prime factor with a small argument (depending on the size of \(p\)). Later, in [19] Rudnick and Waxman considered the same family of \(L\)-functions and counted Gaussian primes in more general sectors of the complex plane, in a sense quantifying Hecke's classical theorem about the equidistribution of the angles of Gaussian primes on the circle ([11], [12]). In particular, the authors in [19] studied the variance of such smooth counts of Gaussian primes and conjectured an asymptotic ([19, Conjecture 1.2]) for this statistic based on a random matrix model and an analogue with similar counts over function fields. The asymptotic behaviour of this variance was, in fact, investigated quite recently from a different point of view in [3], with a particular point of interest being the nature of the lower order terms in the asymptotic. An important aspect of the work in [3] relied on a study of the Hecke \(L\)-functions \(L(s,\Xi_{k})\) discussed above in combination with the Ratios Conjecture. We now state our main results. In the formulations of these (and throughout the paper), \(D\) denotes the discriminant of our number field \(\mathbb{K}\), and \(\chi(n)=(-d/n)\) denotes the Dirichlet character coming from the Kronecker symbol (see Section 2.3). Moreover, \(\gamma\) denotes the Euler-Mascheroni constant. Theorem 1.1.: _Suppose that \(f:\mathbb{R}\to\mathbb{R}\) is an even Schwartz function with \(\operatorname{supp}\widehat{f}\subset(-1,1).\) Then we have_ \[D(\mathcal{F}(K);f)=\int_{\mathbb{R}}\!f(x)\bigg{(}1-\frac{\sin(2\pi x)}{2\pi x}\bigg{)}\;\mathrm{d}x+\frac{\ell_{0}\hat{f}(0)}{\log K}+O\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)},\] _where_ \[\ell_{0} =-\!\!\int_{1}^{\infty}\!t^{-2}\!\left(-t+\sum_{n\leqslant t}\!
\Lambda(n)\right)\,\mathrm{d}t-\frac{L^{\prime}(1,\chi)}{L(1,\chi)}-2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}\] \[\qquad+\log\sqrt{|D|}-\log 2\pi+\log N-2-\frac{\sqrt{d}\log d}{d-1}-\frac{2\log 2}{3}\cdot\mathbb{1}(d\neq 2,7),\] _and \(\Lambda(n)\) denotes the von Mangoldt function._ By assuming the Grand Riemann Hypothesis (GRH) and the Ratios Conjecture ([5]), we also prove the following result. Theorem 1.2.: _Suppose that \(f:\mathbb{R}\to\mathbb{R}\) is an even Schwartz function whose Fourier transform has compact support. Assume the GRH and the Ratios Conjecture. Then_ \[D(\mathcal{F}(K);f)=\int_{\mathbb{R}}\!f(x)\bigg{(}1-\frac{\sin(2\pi x)}{2\pi x}\bigg{)}\;\mathrm{d}x+\frac{\ell_{0}}{\log K}\Big{(}\hat{f}(0)-\hat{f}(1)\Big{)}+O\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)},\] _where \(\ell_{0}\) is as in the statement of Theorem 1.1._ Remarks. 1) Although we give an explicit value for \(L^{\prime}(1,\chi)/L(1,\chi)\) in Lemma 4.5, this expression is rather intricate, and we therefore decided to keep the notation \(L^{\prime}(1,\chi)/L(1,\chi)\) in the statements of the theorems. 2) We note that both Theorem 1.1 and Theorem 1.2 verify the Katz-Sarnak Density Conjecture, but to different extents: while Theorem 1.1 requires \(f\) to have a Fourier transform with very small support, Theorem 1.2 holds without any such assumption. 3) The appearance of \(\left|\mathcal{O}_{\mathbb{K}}^{\times}\right|\) in the exponents of the characters \(\psi_{k}\) is very natural, since \(\psi_{k}\) must satisfy a condition related to the units in order to define a Hecke character on the ideals of \(\mathcal{O}_{\mathbb{K}}\) (cf. Section 2.1). Compared to the setup in [20], we are considering a more general family of characters since we allow \(N\) to be any multiple of the order of the unit group. The reason why we are able to handle this more general case is that we formulate and prove a generalization (Theorem 6.1) of the result [19, Lemma 2.1] relating the arguments and norms of certain elements of \(\mathcal{O}_{\mathbb{K}}\). Our approach is based on [20]. We first prove Theorem 1.2; this is accomplished in Section 3, where we also describe the Ratios Conjecture in detail for our family \(\mathcal{F}\), and in Section 4. In Section 5, we use the explicit formula for our family \(\mathcal{F}\) to give an unconditional asymptotic for the 1-level density. Section 6 is then a comparison between this and the conditional asymptotic, which leads to a proof of Theorem 1.1. As mentioned, this comparison is facilitated by Theorem 6.1, which we also state and prove in Section 6. Acknowledgements. We are most grateful to Anders Sodergren for suggesting the problem considered in this article, and for many helpful discussions. Moreover, we are indebted to Victor Ahlugist for pointing out the symmetry between the coefficients of \(\widehat{f}(0)\) and \(\widehat{f}(1)\) in the statement of Theorem 1.2. Finally, we would like to thank Ezra Waxman, Daniel Fiorilli, Julia Brandes, and Michael Bjorklund for valuable discussions and comments in relation to this project.

## 2 Preliminaries

We will now introduce Hecke characters on imaginary quadratic number fields, describe our concrete family of \(L\)-functions in more detail, and mention various standard results that we will need later.
### Hecke Characters in Imaginary Quadratic Number Fields

An equivalent formulation of \(\mathbb{K}\) having class number \(1\) is that its ring of integers \(\mathcal{O}_{\mathbb{K}}\) is a principal ideal domain. Explicitly, we have \[\mathcal{O}_{\mathbb{K}}=\begin{cases}\mathbb{Z}[\sqrt{-d}]&\text{if }d\equiv 1,2\ (\text{mod }4),\\ \mathbb{Z}[(1+\sqrt{-d})/2]&\text{if }d\equiv 3\ (\text{mod }4).\end{cases} \tag{2.1}\] By using the fact that any unit in \(\mathcal{O}_{\mathbb{K}}\) must have norm \(1\), one may easily prove that \[\mathcal{O}_{\mathbb{K}}^{\times}\simeq\begin{cases}\mathbb{Z}/2\mathbb{Z}&\text{if }d=2\text{ or }d\geqslant 5,\\ \mathbb{Z}/6\mathbb{Z}&\text{if }d=3.\end{cases}\] Since we will later make use of the lattice structure of \(\mathcal{O}_{\mathbb{K}}\), we also describe these rings in the following way. Lemma 2.1.: _Under the identification \(\mathbb{C}\simeq\mathbb{R}^{2}\), we have_ \[\mathcal{O}_{\mathbb{K}}=2^{1/4}\begin{pmatrix}2^{-1/4}&0\\ 0&2^{1/4}\end{pmatrix}\mathbb{Z}^{2},\] _when \(d=2\); or, when \(d\geqslant 3\),_ \[\mathcal{O}_{\mathbb{K}}=d^{1/4}2^{-1/2}\begin{pmatrix}2^{-1/2}d^{-1/4}&0\\ 0&2^{1/2}d^{1/4}\end{pmatrix}\begin{pmatrix}1&0\\ 1/2&1\end{pmatrix}\mathbb{Z}^{2}.\] Proof.: When \(d=2\), this is simply a matter of expanding (2.1). When \(d\geqslant 3\), (2.1) shows that any \(\alpha\in\mathcal{O}_{\mathbb{K}}\) can be written as \[-a+b/2+b\sqrt{-d}/2=s/2+(s+2a)\sqrt{-d}/2\] with \(s=-2a+b\). The decomposition of \(\mathcal{O}_{\mathbb{K}}\) in this case now follows once we express \(\mathcal{O}_{\mathbb{K}}\) using the variables \(s\) and \(a\). When \(\mathbb{K}\) has class number \(1\), a _Hecke character_ \(\psi\) on \(\mathbb{K}\) corresponds to a unique pair \((\chi,\chi_{\infty})\) consisting of a generalized Dirichlet character \(\chi\) (modulo some ideal \(\mathfrak{m}\subset\mathcal{O}_{\mathbb{K}}\)) and a unitary character \(\chi_{\infty}\) on \(\mathbb{C}^{\times}\). Conversely, given a pair of such characters, their (pointwise) product is a Hecke character provided that \(\chi\cdot\chi_{\infty}\) is constant on \(\mathcal{O}_{\mathbb{K}}^{\times}\), cf. [13, eq. (3.80)]. Thus, in order for us to specify a Hecke character, it is enough to specify two characters \[\chi:\big{(}\mathcal{O}_{\mathbb{K}}/\mathfrak{m}\big{)}^{\times}\to\mathbb{C}^{\times},\qquad\chi_{\infty}:\mathbb{C}^{\times}\to S^{1}\] satisfying \(\chi(u)\chi_{\infty}(u)=1\) for all \(u\in\mathcal{O}_{\mathbb{K}}^{\times}\). We now describe our concrete family of Hecke characters. Let \(N\) be any positive integer multiple of \(\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}\). Since \(\mathcal{O}_{\mathbb{K}}\) is a principal ideal domain, for any \(k\geqslant 1\) we can define the unitary character \(\chi_{\infty,k}\) by \[\chi_{\infty,k}(I)=\chi_{\infty,k}(\alpha)=(\alpha/\bar{\alpha})^{Nk}\] whenever \(I=\langle\alpha\rangle\). This is well-defined since any two generators of \(I\) will differ by a factor in \(\mathcal{O}_{\mathbb{K}}^{\times}\) where \(\chi_{\infty,k}\) is identically equal to \(1\). To make this into a Hecke character, we also need to specify a Dirichlet character that is compatible with \(\chi_{\infty,k}\) in the above sense. However, we can simply take \(\chi\) to be the trivial generalized Dirichlet character of modulus \(\mathfrak{m}=\mathcal{O}_{\mathbb{K}}\).
In this way we obtain the family of Hecke characters given by \[\psi_{k}(\langle\alpha\rangle)=\psi_{k}(\alpha):=\chi(\alpha)\cdot\chi_{\infty,k}(\alpha)=\chi_{\infty,k}(\alpha)=\Big{(}\frac{\alpha}{\overline{\alpha}}\Big{)}^{Nk}\] for \(\alpha\in\mathcal{O}_{\mathbb{K}}\setminus\{0\}\). Since conjugation is an automorphism of \(\mathbb{C}\), we note the relation \(\overline{\psi_{k}}=\psi_{-k}\), which will be useful later on. In the literature it is common to write such unitary characters as \(\chi_{\infty}(\alpha)=\left(\alpha/|\alpha|\right)^{\ell}\) for a suitable integer \(\ell\) called the _frequency_. In the case of our character \(\psi_{k}\), we see that \(\psi_{k}(\alpha)=\left(\alpha/|\alpha|\right)^{2Nk}\), so that \(\psi_{k}\) has frequency \(2Nk\). We note that \(\psi_{k}\) can also be described explicitly as a function of the argument of \(\alpha\), which will be convenient at certain points in the paper. Namely, if we write \(\alpha=re^{i\theta_{\alpha}}\), we have \[\psi_{k}(\alpha)=\left(\frac{re^{i\theta_{\alpha}}}{re^{-i\theta_{\alpha}}}\right)^{Nk}=e^{2iNk\theta_{\alpha}}.\] We also wish to speak of the "argument of the ideal \(\langle\alpha\rangle\)." A priori, this is not well-defined since \(\langle u\alpha\rangle=\langle\alpha\rangle\) for any unit \(u\in\mathcal{O}_{\mathbb{K}}^{\times}\). However, since any unit has argument equal to a multiple of \(2\pi/|\mathcal{O}_{\mathbb{K}}^{\times}|\), the effect of multiplying \(\alpha\) with a unit \(u\) is to change \(\theta_{\alpha}\) by such a multiple. For this reason, by choosing \(u\) appropriately, we can always ensure that the argument of \(u\alpha\) lies in \(\left[0,2\pi/|\mathcal{O}_{\mathbb{K}}^{\times}|\right)\). Accordingly, the angle \(\theta_{\langle\alpha\rangle}\) of the ideal \(\langle\alpha\rangle\) is well-defined when taken in the interval \(\left[0,2\pi/|\mathcal{O}_{\mathbb{K}}^{\times}|\right)\).

### Hecke \(L\)-Functions and Their Zeros

To each of the characters \(\psi_{k}\) (\(k\geqslant 1\)) we can associate a _Hecke \(L\)-function_ given initially by the series and corresponding Euler product \[L_{k}(s)=L_{k}(s,\psi_{k}):=\sum_{\begin{subarray}{c}I\subset\mathcal{O}_{\mathbb{K}}\\ I\neq 0\end{subarray}}\frac{\psi_{k}(I)}{\mathbb{N}(I)^{s}}=\prod_{\mathfrak{p}}\frac{1}{1-\psi_{k}(\mathfrak{p})/\mathbb{N}(\mathfrak{p})^{s}},\qquad\text{Re}(s)>1.\] Let us immediately note that \(L_{k}=L_{-k}\). Indeed, if \(N(a,b)\) denotes the norm (see Section 2.3) of an element \[j(a,b)=\begin{cases}a+i\sqrt{2}b&\text{ if }d=2,\\ a+b(1+i\sqrt{d})/2&\text{ if }d\geqslant 3,\end{cases}\] we note that the map \(A_{d}=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)+\mathbb{1}(d\geqslant 3)\cdot\left(\begin{smallmatrix}0&1\\ 0&0\end{smallmatrix}\right)\) preserves the norm \(N(a,b)\) and satisfies \(j(A_{d}(a,b)^{\intercal})=\overline{j(a,b)}\). In particular, \(A_{d}\) defines a bijection on the set \(\left\{(a,b)\in\mathbb{Z}^{2}:N(a,b)\neq 0\right\}\). Therefore, for \(\text{Re}(s)>1\), the trivial identity [20, eq.
(2.1)] gives \[L_{-k}(s) =\frac{1}{|\mathcal{O}_{\mathbb{K}}^{\times}|}\sum_{N(a,b)\neq 0}\biggl{(}\frac{j(a,b)}{|j(a,b)|}\biggr{)}^{-2Nk}N(a,b)^{-s}\] \[=\frac{1}{|\mathcal{O}_{\mathbb{K}}^{\times}|}\sum_{N(a,b)\neq 0}\biggl{(}\frac{|j(a,b)|}{j(a,b)}\biggr{)}^{2Nk}N(a,b)^{-s}\] \[=\frac{1}{|\mathcal{O}_{\mathbb{K}}^{\times}|}\sum_{N(a,b)\neq 0}\biggl{(}\frac{\overline{j(a,b)}}{|j(a,b)|}\biggr{)}^{2Nk}N(a,b)^{-s}\] \[=\frac{1}{|\mathcal{O}_{\mathbb{K}}^{\times}|}\sum_{N(A_{d}(a,b)^{\intercal})\neq 0}\biggl{(}\frac{j(A_{d}(a,b)^{\intercal})}{|j(A_{d}(a,b)^{\intercal})|}\biggr{)}^{2Nk}N(A_{d}(a,b)^{\intercal})^{-s}\] \[=\frac{1}{|\mathcal{O}_{\mathbb{K}}^{\times}|}\sum_{N(a,b)\neq 0}\biggl{(}\frac{j(a,b)}{|j(a,b)|}\biggr{)}^{2Nk}N(a,b)^{-s}=L_{k}(s).\] By a theorem of Hecke [13, Theorem 3.8], if \(k\neq 0\) (so that \(\psi_{k}\) is not the trivial character), \(L_{k}\) admits an analytic continuation (which we will also denote by \(L_{k}\)) to the entire complex plane, and it satisfies the functional equation \[\Lambda(s,\psi_{k})=\frac{\tau(\psi_{k})}{i^{\ell}\sqrt{\mathbb{N}(\mathfrak{m})}}\Lambda\big{(}1-s,\overline{\psi_{k}}\big{)},\] where \(\mathfrak{m}=\mathcal{O}_{\mathbb{K}}\) is the modulus of \(\psi_{k}\), \(\ell=2Nk\) is the frequency, and \(\Lambda(s,\psi_{k})\) denotes the _completed \(L\)-function_ \[\Lambda(s,\psi_{k})=\Lambda_{k}(s):=L_{k}(s)\frac{\big{(}|D|\mathbb{N}(\mathfrak{m})\big{)}^{s/2}}{(2\pi)^{s}}\Gamma\big{(}s+|\ell|/2\big{)},\] and where \(\tau(\psi_{k})\) denotes the Gauss sum \[\tau(\psi_{k})=\psi_{k}(\gamma)\psi_{k}(\mathfrak{c})^{-1}\sum_{\alpha\in\mathfrak{c}/\mathfrak{c}\mathfrak{m}}\exp(2\pi i\mathrm{Tr}(\alpha/\gamma)),\] cf. [13, eq. (3.86)]. Here \(\gamma\in\mathcal{O}_{\mathbb{K}}\) and \(\mathfrak{c}\subset\mathcal{O}_{\mathbb{K}}\) are arbitrary except for the requirements that \(\mathfrak{c}\) should be an ideal, and that \(\gamma\) and \(\mathfrak{c}\) should satisfy \((\mathfrak{c},\mathfrak{m})=1\) and \(\mathfrak{c}\mathfrak{m}\mathfrak{d}=\langle\gamma\rangle\), where \(\mathfrak{d}\) is the _different_ of \(\mathbb{K}\). In our case, we have \(\tau(\psi_{k})=1\): Since we have \(\mathfrak{m}=\mathcal{O}_{\mathbb{K}}\) and \(\mathfrak{d}=\langle\sqrt{D}\rangle\), these conditions are satisfied with \(\mathfrak{c}=\mathcal{O}_{\mathbb{K}}\) and \(\gamma=\sqrt{D}\). In combination with the relation \(\overline{\psi_{k}}=\psi_{-k}\) and the fact that the frequency \(\ell\) of \(\psi_{k}\) is \(2Nk\equiv 0\pmod{4}\), this means that the root number of \(L_{k}\) is \(1\), and the functional equation assumes the simpler form \[\Lambda_{k}(s)=\Lambda_{-k}(1-s)=\Lambda_{k}(1-s), \tag{2.2}\] where also the completed \(L\)-function can be described in the simpler form \[\Lambda_{k}(s)=L_{k}(s)\frac{|D|^{s/2}}{(2\pi)^{s}}\Gamma\big{(}s+Nk\big{)}. \tag{2.3}\] Of course, it is also possible to recast the identity (2.2) as a statement about \(L_{k}\) that does not explicitly involve \(\Lambda_{k}\). Doing so, we find that \[L_{k}(s)=L_{k}(1-s)X_{k}(s), \tag{2.4}\] where \[X_{k}(s):=\frac{\Gamma(1-s+Nk)}{\Gamma(s+Nk)}|D|^{1/2-s}(2\pi)^{2s-1}. \tag{2.5}\] If \(K\geqslant 1\) is an integer, then as we mentioned in the introduction, we will use the notation \[\mathcal{F}(K):=\{L_{k}:1\leqslant k\leqslant K\}\] to denote our family of \(L\)-functions. We wish to normalize the zeros of this family so that they have mean spacing \(1\).
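As a quick sanity check on (2.4)-(2.5), the factor \(X_{k}\) should have modulus \(1\) on the critical line, since there \(1-s=\overline{s}\). The short numerical sketch below verifies this; the sample values of \(|D|\), \(Nk\), and \(t\) are arbitrary.

```python
# Numerical check of the functional-equation factor X_k in (2.5):
# on the critical line s = 1/2 + it one should have |X_k(s)| = 1.

import mpmath as mp

def X(s, Nk, absD):
    return (mp.gamma(1 - s + Nk) / mp.gamma(s + Nk)
            * mp.power(absD, mp.mpf(1) / 2 - s)
            * mp.power(2 * mp.pi, 2 * s - 1))

absD, Nk = 8, 2 * 3          # e.g. d = 2 (|D| = 8), N = 2, k = 3, so Nk = 6
for t in (0.5, 3.0, 12.0):
    s = mp.mpf(1) / 2 + 1j * t
    print(t, mp.fabs(X(s, Nk, absD)))   # each value should print as ~1.0
```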
This of course warrants an understanding of the asymptotic number \(N_{k}(T)\) of zeros of \(L_{k}\) in the critical strip \(0\leqslant\mathrm{Re}(z)\leqslant 1\) up to a given height \(T\) as \(k\to\infty\). Estimates of the count \(N_{k}(T)\) for general \(L\)-functions are abundant in the literature, but these usually provide an asymptotic as \(T\to\infty\). We therefore prove the following result, which gives an asymptotic expression for \(N_{k}(T)\) when it is not the height, but rather the size of our family that tends to infinity. Proposition 2.2.: _Let \(k\geqslant 1\), and assume the Riemann Hypothesis for \(L_{k}\). For \(T>0\), let_ \[N_{k}(T):=\#\Big{\{}z\in\mathbb{C}:L_{k}(z)=0,\,0\leqslant\mathrm{Re}(z)\leqslant 1,\,-T\leqslant\mathrm{Im}(z)\leqslant T\Big{\}}\] _be the number of zeros of \(L_{k}\) in the critical strip up to absolute height \(T\). Then as \(k\to\infty\),_ \[N_{k}(T)\sim\frac{2T\log k}{\pi}.\] Proof.: Let us write \(X:=1/2+Nk\). By [2, Thm. 5] and [2, Eq. (4.1)], the main term of \(N_{k}(T)\) comes from the integral \[\frac{1}{\pi}\!\int_{-T}^{T}\!\!\mathrm{Re}\!\left(\frac{\Gamma^{\prime}(X+it)}{\Gamma(X+it)}\right)\,\mathrm{d}t=\frac{1}{\pi}\mathrm{Re}\!\left(\!\int_{-T}^{T}\!\!\frac{\Gamma^{\prime}(X+it)}{\Gamma(X+it)}\,\mathrm{d}t\right)\!.\] Indeed, in the notation of that paper, we have \[L(z,\pi_{\infty})=\frac{|D|^{z/2}}{(2\pi)^{z}}\Gamma\big{(}z+Nk\big{)};\] and since the logarithmic derivative of the factor \(|D|^{z/2}(2\pi)^{-z}\) is constant, the integral of the logarithmic derivative of \(|D|^{(X+it)/2}(2\pi)^{-X-it}\) over the line \(-T\leqslant t\leqslant T\) is at most a constant times \(T\). Since \(-i\log\Gamma(X+it)\) is a primitive function for the logarithmic derivative in the integrand above, and since our domain of integration lies in the right half-plane, we can use Stirling's formula \(\log\Gamma(z)=z\log z-z-(\log z)/2+O(\max\{1,1/z\})\) to obtain \[\int_{-T}^{T}\!\!\frac{\Gamma^{\prime}(X+it)}{\Gamma(X+it)}\,\mathrm{d}t=-i\cdot X\log\frac{X+iT}{X-iT}+T\log\!\left(X^{2}+T^{2}\right)\qquad\qquad\qquad\qquad\qquad\qquad-2T+i\cdot\frac{1}{2}\log\frac{X+iT}{X-iT}+O(1).\] By taking the real part and using that \(T\log\!\left(X^{2}+T^{2}\right)=2T\log k+O_{T}(1)\), we therefore have \[\frac{1}{\pi}\!\int_{-T}^{T}\!\!\mathrm{Re}\!\left(\frac{\Gamma^{\prime}(X+it)}{\Gamma(X+it)}\right)\,\mathrm{d}t=\frac{2T\log k}{\pi}+O_{T}(1)\] as \(k\to\infty\). This concludes the proof. ### Splitting Behaviour of Rational Primes We now record some facts about how a prime \(p\in\mathbb{Z}\) behaves in the extension \(\mathbb{K}=\mathbb{Q}(\sqrt{-d})\), where we recall that \(d\) is one of the integers \(2,3,7,11,19,43\), \(67,163\). For a rational prime \(p\), we say that \(p\) is _ramified_ if \(p=q^{2}u\) for a prime \(q\in\mathcal{O}_{\mathbb{K}}\) and \(u\in\mathcal{O}_{\mathbb{K}}^{\times}\), _split_ if \(p=q\overline{q}u\) for a prime \(q\in\mathcal{O}_{\mathbb{K}}\) and \(u\in\mathcal{O}_{\mathbb{K}}^{\times}\), or _inert_ if \(p\) is a prime element in \(\mathcal{O}_{\mathbb{K}}\). Any rational prime belongs to exactly one of these categories. By using the norm map \(\mathcal{O}_{\mathbb{K}}\to\mathbb{Z}_{+}\), we immediately see that if \(p\) is ramified or split, then some prime element \(q\in\mathcal{O}_{\mathbb{K}}\) has norm \(p\).
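As a quick numerical illustration of Proposition 2.2, one can compare the gamma-factor integral from the proof with the predicted main term \(2T\log k/\pi\). The sketch below is again only illustrative; it assumes the mpmath library and the sample value \(N=1\):

```python
# Illustrative comparison of (1/pi) * int_{-T}^{T} Re(digamma(X + it)) dt with 2T log(k)/pi,
# where X = 1/2 + N*k as in the proof of Proposition 2.2 (assumed: N = 1).
from mpmath import mp, digamma, quad, re, pi, log

mp.dps = 15
N, T = 1, 5.0
for k in (10, 100, 1000):
    X = mp.mpf(1) / 2 + N * k
    integral = quad(lambda t: re(digamma(X + 1j * t)), [-T, T]) / pi
    print(k, integral, 2 * T * log(k) / pi)  # the ratio tends to 1 as k grows
```

Since \(\mathrm{Re}\,\Gamma^{\prime}/\Gamma(X+it)\approx\log X\) uniformly for \(|t|\leqslant T\), the integral grows like \((2T/\pi)\log(Nk)\), in agreement with the proposition.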
Thus, we can reduce certain aspects of the study of the splitting behaviour in \(\mathbb{K}\) of rational primes to studying which primes are represented over \(\mathbb{Z}\) by the quadratic forms defined by the norm. We now describe these forms in more detail. If \(d\equiv 3\) (mod 4), any integer in \(\mathbb{K}\) has the form \(a+b(1+\sqrt{-d})/2\), and its norm is \[\mathbb{N}\bigg{(}a+\frac{b(1+\sqrt{-d})}{2}\bigg{)}=\bigg{(}a+\frac{b}{2} \bigg{)}^{2}+\frac{db^{2}}{4}=a^{2}+ab+\frac{d+1}{4}b^{2}. \tag{2.6}\] Note that this form has integer coefficients precisely because of the congruence condition on \(d\). On the other hand, if \(d\equiv 1,2\) (mod 4), then any integer in \(\mathbb{K}\) has the form \(a+b\sqrt{-d}\), and its norm is \[\mathbb{N}\Big{(}a+b\sqrt{-d}\Big{)}=a^{2}+db^{2}. \tag{2.7}\] It is well-known that a rational prime \(p\) is ramified in \(\mathbb{K}\) if and only if \(p\mid D\), the discriminant of \(\mathbb{K}\). Since \[|D|=\begin{cases}d&\text{ if }d\equiv 3\text{ (mod 4)},\\ 4d&\text{ if }d\equiv 1,2\text{ (mod 4)},\end{cases}\] we see that \(p\) is ramified in \(\mathbb{K}\) if and only if \(p=d\). By [17, Prop I.8.5] we know that if \(p\neq 2\), then \(p\) splits in \(\mathbb{K}\) if and only if the Legendre symbol \((-d/p)\) equals \(1\). On the other hand, if \(p=2\) we can use (2.6) to see that \(p\) only splits in \(\mathbb{K}\) when \(d=7\). (This also implies that whenever \(d\neq 2,7\), the prime \(2\) is inert in \(\mathcal{O}_{\mathbb{K}}\).) Indeed, if \(d\equiv 3\) (mod 4) and \(2\) splits in \(\mathbb{K}\), there are rational integers \(a\) and \(b\) such that \(q=a+b\big{(}1+\sqrt{-d}\big{)}/2\) and \[\mathbb{N}(q)-2=a^{2}+ab+\frac{d+1}{4}b^{2}-2=0,\] which implies (by solving the quadratic equation in \(a\)) that \(8-db^{2}\) has to be a square number. Given that \(d\equiv 3\) (mod 4), this is only possible if \(d=7\) and \(b=\pm 1\). This shows that the element \(q=(1+\sqrt{-7})/2\) has norm \(2\). It is clear that \(2=q\overline{q}\) and that \(q\) and \(\overline{q}\) are not equivalent up to multiplication by units. In the remainder of the paper, we will also need to work with the "Legendre symbol modulo composite numbers." We therefore introduce the Kronecker symbol, which will also be convenient for the purpose of expressing the splitting behavior of a prime. If \(n=p_{1}^{e_{1}}\cdots p_{k}^{e_{k}}\) is (the prime factorization of) a positive integer and \(a\in\mathbb{Z}\), the _Kronecker symbol_\((a/n)\) is defined as \[\Big{(}\frac{a}{n}\Big{)}=\prod_{i=1}^{k}\bigg{(}\frac{a}{p_{i}}\bigg{)}^{e_{ i}},\] where the symbol \((a/p_{i})\) appearing on the right-hand side is the Legendre symbol if \(p_{i}\) is an odd prime, or otherwise given by \[\Big{(}\frac{a}{2}\Big{)}=\begin{cases}0&\text{ if }a\text{ is even,}\\ 1&\text{ if }a\equiv\pm 1\text{ (mod 8),}\\ -1&\text{ if }a\equiv\pm 3\text{ (mod 8).}\end{cases}\] We note that if \(a\not\equiv 3\) (mod 4), then the map \(n\mapsto(a/n)\) is a quadratic Dirichlet character, cf. [7, SS 5]. In particular, for \(d\) equal to any of the eight non-trivial Heegner numbers that we are considering, \(\chi(n):=(-d/n)\) is a Dirichlet character. We can now sum up the above discussion about the splitting behavior of rational primes in the ring \(\mathcal{O}_{\mathbb{K}}\) in the following lemma. Lemma 2.3.: _Let \(p\) be a rational prime. 
In the number field \(\mathbb{K}\),_ \[p\;\text{is}\begin{cases}\text{ramified}&\text{if}\;\chi(p)=0\;\text{(that is,}\;p=d),\\ \text{split}&\text{if}\;\chi(p)=1,\\ \text{inert}&\text{if}\;\chi(p)=-1.\end{cases}\] We end this section with the following elementary lemma whose proof we include for the sake of completeness. Lemma 2.4.: _Let \(\mathfrak{p}\subset\mathcal{O}_{\mathbb{K}}\) be a prime ideal. Then \(\mathfrak{p}\) lies over a rational prime \(p\) (that is, \(\mathfrak{p}\cap\mathbb{Z}=p\mathbb{Z}\)) if and only if \(\mathfrak{p}\mid p\mathcal{O}_{\mathbb{K}}\). Hence, the prime ideals with norm equal to a power of \(p\) are precisely the prime ideals dividing \(p\mathcal{O}_{\mathbb{K}}\)._ Proof.: Suppose that \(\mathfrak{p}\) lies over \(p\). Then clearly \(p\in\mathfrak{p}\), and by the ideal property of \(\mathfrak{p}\) we therefore have \(p\mathcal{O}_{\mathbb{K}}\subset\mathfrak{p}\), which means \(\mathfrak{p}\mid p\mathcal{O}_{\mathbb{K}}\). Conversely, if \(\mathfrak{p}\mid p\mathcal{O}_{\mathbb{K}}\), then \(\mathfrak{p}\supset p\mathcal{O}_{\mathbb{K}}\), and hence \(\mathfrak{p}\cap\mathbb{Z}\supset p\mathcal{O}_{\mathbb{K}}\cap\mathbb{Z} \supset p\mathbb{Z}\). Since \(p\mathbb{Z}\) is a maximal ideal in \(\mathbb{Z}\) and \(\mathfrak{p}\cap\mathbb{Z}\subset\mathbb{Z}\) must be a proper ideal, it follows that \(p\mathbb{Z}=\mathfrak{p}\cap\mathbb{Z}\). Therefore \(\mathfrak{p}\) lies over \(p\). ## 3 Implications of the Ratios Conjecture In this section, we describe the \(L\)-functions Ratios Conjecture which is due to Conrey, Farmer, and Zirnbauer ([5]), generalizing a conjecture of Farmer about the Riemann zeta function (see [8]). We show how the conjecture implies strong estimates for the \(1\)-level density of the zeros of the family \(\mathcal{F}(K)\) as \(K\to\infty\). The _Ratios Conjecture_ states that a sum of ratios of (products of) \(L\)-functions evaluated at certain parameters should obey a specific asymptotic estimate. We now describe the recipe from [5] for conjecturing such an asymptotic. However, for the sake of simplicity, we do not describe the most general case possible. In the case of two \(L\)-functions in the numerator and denominator, one considers \[Q(s,\boldsymbol{\alpha},\boldsymbol{\gamma};\chi)=\frac{L(s+ \alpha_{1},\chi)L(s+\alpha_{2},\chi)}{L(s+\gamma_{1},\chi)L(s+\gamma_{2},\chi )},\] where \(L(s,\chi)\) is the \(L\)-function associated to a character \(\chi\) and satisfying the functional equation \[L(s,\chi)=X(s,\chi)L(1-s,\overline{\chi}).\] The recipe is as follows: * _Approximate functional equation for_ \(L\) Replace each \(L\)-function in the numerator of \(Q(s,\boldsymbol{\alpha},\boldsymbol{\gamma};\chi)\) with the two main terms from its approximate functional equation, completely disregarding the remainder term. * _Infinite series for_ \(1/L\) Replace each reciprocal \(L\)-function with its expression as an infinite series involving a suitable Mobius function. * _Extend ranges and regroup factors_ Extending the ranges of all series to infinity and multiplying out the resulting expression, write each of the four resulting terms as \[\big{(}\text{product of root numbers }\varepsilon_{\chi}\big{)}\times\big{(}\text{ product of }X(\cdot,\chi)\text{-factors}\big{)}\times\big{(}\text{sum over }n_{1},n_{2},n_{3},n_{4}\big{)},\] where \(n_{1},n_{2},n_{3},n_{4}\) are the indexing variables from the two approximate functional equations and the two infinite series giving the reciprocal \(L\)-functions. 
* _Average factors over the family_ Replace each product of root numbers, each product of \(X(\cdot,\chi)\)-factors, and each summand in the last factor with their respective averages over the family \(\mathcal{F}=\{\chi\}\). Denote the resulting expression by \(M(s,\boldsymbol{\alpha},\boldsymbol{\gamma})\). * _Statement of the conjecture_ The conjecture now states that for any \(\varepsilon>0\), \[\sum_{\chi\in\mathcal{F}}\!\!Q(s,\boldsymbol{\alpha},\boldsymbol{ \gamma};\chi)w(q(\chi))=\Big{(}1+O\Big{(}e^{(-1/2+\varepsilon)q(\chi)}\Big{)} \Big{)}\!\sum_{\chi\in\mathcal{F}}\!\!M(s,\boldsymbol{\alpha},\boldsymbol{ \gamma})w(q(\chi)),\] where \(q(\chi):=|X^{\prime}(1/2,\chi)|\) denotes the _log conductor_ of \(\chi\), and \(w\) is a suitable weight function. A later addition to the Ratios Conjecture [6, Sect. 2] is that such an asymptotic is expected to hold provided that \(\alpha\) and \(\gamma\) satisfy the constraints \[-\frac{1}{4}<\text{Re}(\alpha)<\frac{1}{4},\quad\frac{1}{\log K} \ll\text{Re}(\gamma)<\frac{1}{4},\quad\text{Im}(\alpha),\,\text{Im}(\gamma) \ll_{\varepsilon}K^{1-\varepsilon}. \tag{3.1}\] ### Ingredients for the Recipe To follow the recipe outlined above in the case of the expression \[R_{K}(\alpha,\gamma)=\frac{1}{K}\!\sum_{k=1}^{K}\!\frac{L_{k}(1/2 +\alpha)}{L_{k}(1/2+\gamma)}, \tag{3.2}\] we first need to describe the approximate functional equation for \(L_{k}\) and obtain an expression for the reciprocal \(L_{k}^{-1}\) as an infinite series. We begin by describing the approximate functional equation, which involves writing \(L_{k}\) in a different way that is more reminiscent of classical Dirichlet \(L\)-functions. To this end, we observe that \[L_{k}(s)=\sum_{\begin{subarray}{c}I\subset\mathcal{O}\\ I\neq 0\end{subarray}}\!\!\frac{\psi_{k}(I)}{\mathbb{N}(I)^{s}}=\sum_{n\geqslant 1 }\!\left(\sum_{\mathbb{N}(I)=n}\!\!\psi_{k}(I)\right)n^{-s}=\sum_{n\geqslant 1 }\!\frac{A_{k}(n)}{n^{s}}\] for \(\text{Re}(s)>1\), with \[A_{k}(n):=\sum_{\mathbb{N}(I)=n}\!\!\psi_{k}(I),\qquad n\geqslant 1.\] We note that \(A_{k}\) is real-valued. Indeed, this follows from the definition of \(\psi_{k}\) and the fact that for any \(\alpha\in\mathcal{O}_{\mathbb{K}}\), \[\mathbb{N}(\langle\alpha\rangle)=\alpha\overline{\alpha}=\mathbb{N}(\langle \overline{\alpha}\rangle).\] Hence, any ideal that contributes to the sum defining \(A_{k}(n)\) is accompanied by its conjugate ideal, provided that these are different. (If they are not, then their contribution to \(A_{k}\) is clearly real.) Complex conjugation of \(A_{k}\) therefore amounts to permuting the terms in the sum, which of course leaves \(A_{k}\) unchanged. Using the series defining \(L_{k}\) and the functional equation (2.4), we can now describe the approximate functional equation of \(L_{k}\) as \[L(s,\psi_{k}) \approx\sum_{n<x}\frac{A_{k}(n)}{n^{s}}+X_{k}(s)\sum_{n<y}\frac{ \overline{A_{k}(n)}}{n^{1-s}}\] \[=\sum_{n<x}\frac{A_{k}(n)}{n^{s}}+X_{k}(s)\sum_{n<y}\frac{A_{k}(n )}{n^{1-s}}, \tag{3.3}\] where \(x\) and \(y\) are positive real parameters. As mentioned above, the recipe of the Ratios Conjecture also requires us to obtain a formula for the reciprocal function \(L_{k}^{-1}\). 
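Before doing so, we remark that the coefficients \(A_{k}(n)\) are easy to compute in concrete cases, which is convenient for testing the identities that follow. A minimal illustrative sketch (assumed: \(d=2\) and \(N=1\), so that non-zero ideals of \(\mathcal{O}_{\mathbb{K}}=\mathbb{Z}[i\sqrt{2}]\) correspond to lattice points \((a,b)\) up to the units \(\pm 1\)):

```python
# Illustrative computation of A_k(n) = sum_{N(I)=n} psi_k(I) (assumed: d = 2, N = 1).
import math

N, k = 1, 2  # sample parameters

def A(n):
    """Each ideal of Z[i*sqrt(2)] has exactly the two generators +-alpha, so we halve."""
    total = 0
    bound = math.isqrt(n) + 1
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            if a * a + 2 * b * b == n:
                alpha = complex(a, b * math.sqrt(2))
                total += (alpha / alpha.conjugate()) ** (N * k)  # psi_k(alpha)
    return total / 2

for n in (1, 2, 3, 9, 11, 33):
    print(n, A(n))  # real up to round-off, and e.g. A(3) * A(11) equals A(33)
```

The output illustrates both that \(A_{k}\) is real-valued, as argued above, and that it is multiplicative (Lemma 3.1 below).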
Taking the reciprocal of the Euler product and using the fact that \(\psi_{k}\) and the ideal norm are completely multiplicative, we see that \[\frac{1}{L_{k}(s)} =\prod_{\mathfrak{p}}\biggl{(}1-\frac{\psi_{k}(\mathfrak{p})}{\mathbb{N}(\mathfrak{p})^{s}}\biggr{)}\] \[=1-\sum_{I=\mathfrak{p}_{1}}\frac{\psi_{k}(\mathfrak{p}_{1})}{\mathbb{N}(\mathfrak{p}_{1})^{s}}+\sum_{I=\mathfrak{p}_{1}\mathfrak{p}_{2}}\frac{\psi_{k}(\mathfrak{p}_{1}\mathfrak{p}_{2})}{\mathbb{N}(\mathfrak{p}_{1}\mathfrak{p}_{2})^{s}}-\sum_{I=\mathfrak{p}_{1}\mathfrak{p}_{2}\mathfrak{p}_{3}}\frac{\psi_{k}(\mathfrak{p}_{1}\mathfrak{p}_{2}\mathfrak{p}_{3})}{\mathbb{N}(\mathfrak{p}_{1}\mathfrak{p}_{2}\mathfrak{p}_{3})^{s}}+\cdots\] \[=\sum_{\begin{subarray}{c}I\subset\mathcal{O}\\ I\neq 0\end{subarray}}\frac{\mu(I)\psi_{k}(I)}{\mathbb{N}(I)^{s}},\] where the sums run over products of distinct prime ideals and \[\mu(I)=\begin{cases}(-1)^{n}&\text{if $I$ is the product of $n$ distinct primes},\\ 0&\text{otherwise},\end{cases}\] is the natural analogue of the Mobius function. With \[\mu_{k}(n):=\sum_{\mathbb{N}(I)=n}\mu(I)\psi_{k}(I),\] we therefore obtain the formula \[\frac{1}{L_{k}(s)}=\sum_{n\geqslant 1}\frac{\mu_{k}(n)}{n^{s}},\qquad\operatorname{Re}(s)>1. \tag{3.4}\] Lemma 3.1.: _For any \(k\geqslant 1\), the functions \(\mu_{k}\) and \(A_{k}\) are multiplicative._ Proof.: It suffices to show that if \(p\neq q\) are rational primes and \(a,b\geqslant 1\) are rational integers, then \[\mu_{k}\Bigl{(}p^{a}q^{b}\Bigr{)}=\mu_{k}(p^{a})\mu_{k}\Bigl{(}q^{b}\Bigr{)},\qquad A_{k}\Bigl{(}p^{a}q^{b}\Bigr{)}=A_{k}(p^{a})A_{k}\Bigl{(}q^{b}\Bigr{)}.\] We prove the claim for \(\mu_{k}\) as the proof of the claim for \(A_{k}\) is analogous. Since \(\psi_{k}\) is completely multiplicative, we have \[\mu_{k}(p^{a})\mu_{k}\Big{(}q^{b}\Big{)}=\sum_{\begin{subarray}{c}\mathbb{N}(I_{1})=p^{a}\\ \mathbb{N}(I_{2})=q^{b}\end{subarray}}\mu(I_{1}I_{2})\psi_{k}(I_{1}I_{2}).\] It remains to show that all ideals \(I\) of norm \(p^{a}q^{b}\) have the form \(I_{1}I_{2}\) for ideals \(I_{1}\) and \(I_{2}\) of norm \(p^{a}\) and \(q^{b}\), respectively. However, this follows immediately from the unique factorization of ideals in \(\mathcal{O}_{\mathbb{K}}\) in combination with the properties of the norm map. ### Following the Recipe In accordance with the recipe outlined above, we now use (3.3) and (3.4) and compute \[\frac{L_{k}(1/2+\alpha)}{L_{k}(1/2+\gamma)}\approx\sum_{m,n\geq 1}\frac{\mu_{k}(m)A_{k}(n)}{m^{1/2+\gamma}n^{1/2+\alpha}}+X_{k}(1/2+\alpha)\sum_{m,n\geq 1}\frac{\mu_{k}(m)A_{k}(n)}{m^{1/2+\gamma}n^{1/2-\alpha}}, \tag{3.5}\] where we extended the ranges of summation to infinity in accordance with the recipe provided above. We now average the individual factors. We begin with the following lemma. Lemma 3.2.: _As \(K\to\infty\), we have_ \[\frac{1}{K}\underset{k=1}{\overset{K}{\sum}}X_{k}(1/2+\alpha)=\frac{1}{1-2\alpha}\Bigg{(}\frac{2\pi}{KN\sqrt{|D|}}\Bigg{)}^{2\alpha}+O_{\alpha}\bigg{(}\frac{1}{K}\bigg{)}+O_{\alpha}\bigg{(}\frac{1}{K^{1+2\alpha}}\bigg{)}. \tag{3.6}\] Proof.: We can argue exactly as in the proof of [20, Lemma 3.1]. The only thing we need to take into account is that we have \(\mathbf{k}=1/2+Nk\) in the notation of that paper. It remains for us to compute the average of the summands in (3.5). Here it is fruitful to rewrite the sums appearing there as products in order to take advantage of the multiplicative nature of the function \(\mu_{k}\).
To that end, with the help of Lemma 3.1 we note that \[\sum_{m,n\geq 1}\frac{\mu_{k}(m)A_{k}(n)}{m^{1/2+\gamma}n^{1/2+\alpha}}=\ \underset{\text{ prime}}{\prod}\sum_{m,n\geq 0}\frac{\mu_{k}(p^{m})A_{k}(p^{n})}{p^{m(1/2+ \gamma)}p^{n(1/2+\alpha)}}. \tag{3.7}\] At this point, we will describe the values of \(\mu_{k}\) and \(A_{k}\) on prime powers more precisely. Lemma 3.3.: _Let \(p\) be a rational prime. Then_ \[\mu_{k}(p^{m})=\begin{cases}1&\text{if $m=0$},\\ -A_{k}(p)&\text{if $m=1$},\\ -1&\text{if $m=2$ and $p$ is inert,}\\ 1&\text{if $m=2$ and $p$ splits,}\\ 0&\text{otherwise.}\end{cases}\] Proof.: The first two claims follow immediately from the definition of \(\mu_{k}\). As for the third claim, we have \(\mu_{k}\big{(}p^{2}\big{)}=-1\) since only the prime ideal \(\langle p\rangle\) has norm \(p^{2}\) by Lemma 2.4. Turning to the fourth claim, let us suppose that \(\langle p\rangle\) has prime factors \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{2}\). In this case, there is only one squarefree ideal of norm \(p^{2}\), namely \(\langle p\rangle\), since only the prime ideals dividing \(\langle p\rangle\) have norms equal to a power of \(p\) by Lemma 2.4. We now prove the fifth and final claim. We first examine the case where \(m=2\) and \(p\) is ramified in \(\mathcal{O}_{\mathbb{K}}\) with \(\langle p\rangle=\mathfrak{q}^{2}\) for some prime ideal \(\mathfrak{q}\). This immediately implies that \(\mu_{k}(p^{2})=0\) since the only ideal of norm \(p^{2}\) is \(\mathfrak{q}^{2}\) by Lemma 2.4. For the final case, we assume \(m\geqslant 3\). To prove the claim, it is enough to show that the norm of a product of distinct prime ideals lying over \(p\) is at most \(p^{2}\). However, this is clear from Lemma 2.4 once we consider the three possible splitting behaviours of \(p\). We also need the following description of the function \(A_{k}\) on prime powers. Lemma 3.4.: \(A_{k}\) _assumes the following values on prime powers (where we understand that the case \(n=0\) takes precedence over the remaining cases):_ \[A_{k}(p^{n})=\begin{cases}1&\text{if $n=0$},\\ (q^{n}/\overline{q}^{n})^{Nk}&\text{if $\langle p\rangle=\langle q\rangle^{2}$,} \\ 1&\text{if $p$ is inert and $n$ is even},\\ 0&\text{if $p$ is inert and $n$ is odd},\\ \sum_{j=-n/2}^{n/2}\psi_{k}(q)^{2j}&\text{if $\langle p\rangle=\langle q \rangle\langle\overline{q}\rangle$ and $n$ is even},\\ \sum_{j=-(n+1)/2}^{(n-1)/2}\psi_{k}(q)^{2j+1}&\text{if $\langle p\rangle=\langle q \rangle\langle\overline{q}\rangle$ and $n$ is odd}.\end{cases}\] Proof.: Since only \(\langle 1\rangle=\mathcal{O}_{\mathbb{K}}\) has norm \(1\), the first claim follows immediately from the definition. In the following we suppose that \(n\geqslant 1\). Suppose that \(p\) is ramified in \(\mathcal{O}_{\mathbb{K}}\) with \(\langle p\rangle=\mathfrak{q}^{2}\). Then Lemma 2.4 immediately implies that the only ideal of norm \(p^{n}\) is \(\mathfrak{q}^{n}\), which proves the claim. Suppose that \(p\) is inert. Then there is no prime ideal of norm \(p\). Moreover, the ideal \(\langle p\rangle\) is prime and has norm \(p^{2}\), and it is the only such prime ideal. It follows that if \(I\) is any ideal of norm \(p^{n}\), then \(n\) must be even, and in this case \(I=\langle p\rangle^{n/2}=\langle p^{n/2}\rangle\). This yields the third and fourth claims. Turning to the final claim, we assume that \(p\) splits as \(q\overline{q}\) in \(\mathcal{O}_{\mathbb{K}}\). Then clearly \(\langle q\rangle\) and \(\langle\overline{q}\rangle\) are the only prime ideals of norm \(p\). 
Moreover, as we assumed that \(p\) is not ramified, it is clear that \(\langle q\rangle\neq\langle\overline{q}\rangle\). Similarly as before, we also see that no prime ideals of norm \(p^{2}\) can exist. It follows that if \(I\) has norm \(p^{n}\), then \(I=\langle q\rangle^{j}\langle\overline{q}\rangle^{n-j}=\langle q^{j} \overline{q}^{n-j}\rangle\) for some \(j=0,\ldots,n.\) Thus, \[A_{k}(p^{n})=\sum_{j=0}^{n}\biggl{(}\frac{q^{j}\overline{q}^{n-j}}{\overline{ q}^{j}q^{n-j}}\biggr{)}^{Nk}=\sum_{j=0}^{n}\biggl{(}\frac{q^{2j-n}}{\overline{q}^{2j -n}}\biggr{)}^{Nk},\] and therefore \[A_{k}(p^{n})=\begin{cases}\sum_{j=-n/2}^{n/2}\psi_{k}(q)^{2j}& \text{if $n$ is even},\\ \sum_{j=-(n+1)/2}^{(n-1)/2}\psi_{k}(q)^{2j+1}&\text{if $n$ is odd}.\end{cases}\] This concludes the proof. In accordance with the Ratios Conjecture, we now compute the (asymptotic) averages of the function \(\mu_{k}(p^{m})A_{k}(p^{n})\) as \(k=1,\ldots,K\). We will denote this by \(\delta_{p}(m,n)\) so that \[\delta_{p}(m,n)=\underset{K\to\infty}{\lim}\frac{1}{K}{\sum}_{k=1}^{K}\mu_{k}(p ^{m})A_{k}(p^{n}).\] The existence of this limit will be clear from the consideration of the special cases of \(p\) (split, inert, or ramified). In anticipation of this, we furthermore write \[\delta_{p}(m,n)=\begin{cases}\delta_{\text{in}}(m,n)&\text{if $p$ is inert}\\ \delta_{\text{sp}}(m,n)&\text{if $p$ splits}\\ \delta_{\text{ram}}(m,n)&\text{if $p$ is ramified}.\end{cases}\] Lemma 3.5.: _We have_ \[\delta_{\text{in}}(m,n)=\begin{cases}0&\text{if $m=1$ or $m\geqslant 3$ or $n$ is odd,}\\ 1&\text{if $m=0$ (and $n$ is even),}\\ -1&\text{if $m=2$ (and $n$ is even),}\\ \end{cases}\] \[\delta_{\text{sp}}(m,n)=\begin{cases}1&\text{if $m=0$ and $n$ is even,}\\ -2&\text{if $m=1$ and $n$ is odd,}\\ 1&\text{if $m=2$ and $n$ is even,}\\ 0&\text{otherwise,}\end{cases}\] \[\delta_{\text{ram}}(m,n)=\begin{cases}1&\text{if $m=0$,}\\ -1&\text{if $m=1$,}\\ 0&\text{if $m\geqslant 2$.}\end{cases}\] Proof.: This follows immediately from Lemma 3.3 and Lemma 3.4. In the case of a ramified prime \(p\), we also use the fact that \(A_{k}(p^{n})=\left(q^{n}/\overline{q}^{n}\right)^{Nk}=1\), since in this case \(q/\overline{q}\in\mathcal{O}_{\mathbb{K}}^{\times}\). Using Lemma 3.5, we can describe the limiting average of the right-hand side of (3.7) as follows. If the prime \(p\) is inert, we write \[G_{\text{in}}(p;\alpha,\gamma):=\underset{m,n\geqslant 0}{\sum}\frac{\delta_{ \text{in}}(m,n)}{p^{m(1/2+\gamma)+n(1/2+\alpha)}},\] and we define \(G_{\text{sp}}(p;\alpha,\gamma)\) and \(G_{\text{ram}}(p;\alpha,\gamma)\) analogously. 
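Before evaluating these local sums, we note that the case analysis of Lemmas 3.3 and 3.4 can be double-checked mechanically: at each prime, the values of \(\mu_{k}\) on the powers of \(p\) must form the Dirichlet-convolution inverse of the values of \(A_{k}\). A small illustrative sketch for a split prime (assumed test data: \(d=2\), \(N=1\), \(k=2\), and \(p=3\) with \(q=1+i\sqrt{2}\)):

```python
# Illustrative local check that mu_k and A_k are Dirichlet-inverse at a split prime
# (Lemmas 3.3 and 3.4; assumed: d = 2, N = 1, k = 2, p = 3, q = 1 + i*sqrt(2)).
import math

N, k = 1, 2
q = complex(1, math.sqrt(2))          # element of norm 3; p = 3 splits when d = 2
psi = (q / q.conjugate()) ** (N * k)  # psi_k(<q>)

def A(n):
    """A_k(p^n) at a split prime, taken from Lemma 3.4."""
    if n % 2 == 0:
        return sum(psi ** (2 * j) for j in range(-n // 2, n // 2 + 1))
    return sum(psi ** (2 * j + 1) for j in range(-(n + 1) // 2, (n - 1) // 2 + 1))

mu = {0: 1, 1: -A(1), 2: 1}           # Lemma 3.3, split case; mu_k(p^m) = 0 for m >= 3
for j in range(1, 7):
    conv = sum(mu.get(m, 0) * A(j - m) for m in range(j + 1))
    print(j, abs(conv))               # every value vanishes up to round-off
```

The analogous checks at inert and ramified primes are immediate from the same lemmas.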
By using Lemma 3.5, we then deduce that \[G_{\text{in}}(p;\alpha,\gamma)=\underset{n\geqslant 0}{\sum}p^{-n(1+2\alpha)}-p^{-(1+2\gamma)}\underset{n\geqslant 0}{\sum}p^{-n(1+2\alpha)}=\Big{(}1-p^{-(1+2\gamma)}\Big{)}\Big{(}1-p^{-(1+2\alpha)}\Big{)}^{-1},\] \[G_{\text{sp}}(p;\alpha,\gamma)=\Big{(}1-2p^{-(1+\alpha+\gamma)}+p^{-(1+2\gamma)}\Big{)}\Big{(}1-p^{-(1+2\alpha)}\Big{)}^{-1},\] \[G_{\mathrm{ram}}(p;\alpha,\gamma)=\sum_{n\geqslant 0}\Bigl{(}p^{-n(1/2+\alpha)}-p^{-(1/2+\gamma)}p^{-n(1/2+\alpha)}\Bigr{)}=\Bigl{(}1-p^{-(1/2+\gamma)}\Bigr{)}\Bigl{(}1-p^{-(1/2+\alpha)}\Bigr{)}^{-1}.\] Moreover, we let \[F_{2}(\alpha,\gamma):=\begin{cases}1,&\text{if }d=2,\\ \bigl{(}1-2^{-(\alpha+\gamma)}+2^{-(1+2\gamma)}\bigr{)}\bigl{(}1-2^{-(1+2\alpha)}\bigr{)}^{-1}&\text{if }d=7,\\ \bigl{(}1-2^{-(1+2\gamma)}\bigr{)}\bigl{(}1-2^{-(1+2\alpha)}\bigr{)}^{-1}&\text{otherwise}.\end{cases}\] We now see that the product of \(G_{*}(p;\alpha,\gamma)\) over all rational primes equals \[G(\alpha,\gamma) =F_{2}(\alpha,\gamma)\,G_{\mathrm{ram}}(d;\alpha,\gamma)\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=1\end{subarray}}G_{\mathrm{sp}}(p;\alpha,\gamma)\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}G_{\mathrm{in}}(p;\alpha,\gamma)\] \[=\Bigl{(}1-d^{-(1/2+\gamma)}\Bigr{)}\Bigl{(}1-d^{-(1/2+\alpha)}\Bigr{)}^{-1}F_{2}(\alpha,\gamma)\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=1\end{subarray}}\Bigl{(}1-2p^{-(1+\alpha+\gamma)}+p^{-(1+2\gamma)}\Bigr{)}\] \[\qquad\times\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\Bigl{(}1-p^{-(1+2\gamma)}\Bigr{)}\prod_{\begin{subarray}{c}p\geqslant 3\\ p\neq d\end{subarray}}\Bigl{(}1-p^{-(1+2\alpha)}\Bigr{)}^{-1}\] \[=\Bigl{(}1-d^{-(1+2\alpha)}\Bigr{)}\Bigl{(}1-d^{-(1/2+\gamma)}\Bigr{)}\Bigl{(}1-d^{-(1/2+\alpha)}\Bigr{)}^{-1}\tilde{F}_{2}(\alpha,\gamma)\] \[\qquad\times\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=1\end{subarray}}\Bigl{(}1-2p^{-(1+\alpha+\gamma)}+p^{-(1+2\gamma)}\Bigr{)}\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\Bigl{(}1-p^{-(1+2\gamma)}\Bigr{)}\zeta(1+2\alpha), \tag{3.8}\] when \(\mathrm{Re}(\alpha)>0\). Here \(\tilde{F}_{2}\) is the function obtained by possibly removing the factor \(\bigl{(}1-2^{-(1+2\alpha)}\bigr{)}^{-1}\) from \(F_{2}\), i.e. \[\tilde{F}_{2}(\alpha,\gamma):=\begin{cases}1&\text{if }d=2,\\ 1-2^{-(\alpha+\gamma)}+2^{-(1+2\gamma)}&\text{if }d=7,\\ 1-2^{-(1+2\gamma)}&\text{otherwise}.\end{cases}\] It will be convenient to introduce convergence factors in the two Euler products and thus bundle together all singularities in a number of zeta functions and Dirichlet \(L\)-functions.
Doing so, and writing \(\chi(p)=(-d/p)\), we find that our expression (3.8) equals \[\frac{\zeta(1+2\alpha)L(1+2\gamma,\chi)}{\zeta(1+\alpha+\gamma)L(1+ \alpha+\gamma,\chi)}\Big{(}1+d^{-(1/2+\alpha)}\Big{)}\Big{(}1-d^{-(1/2+\gamma)} \Big{)}\Big{(}1-d^{-(1+\alpha+\gamma)}\Big{)}^{-1}H_{2}(\alpha,\gamma)\] \[\times\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=1\end{subarray}}\frac{\big{(}1-2p^{-(1+\alpha+\gamma)}+p^{-(1+2\gamma) }\big{)}\big{(}1-p^{-(1+2\gamma)}\big{)}}{\big{(}1-p^{-(1+\alpha+\gamma)} \big{)}^{2}}\] \[\times\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\big{(}1-p^{-(1+2\gamma)}\big{)}\big{(}1+p^{-(1+ 2\gamma)}\big{)}}{\big{(}1-p^{-(1+\alpha+\gamma)}\big{)}\big{(}1+p^{-(1+\alpha +\gamma)}\big{)}},\] where the function \(H_{2}\), defined by \[H_{2}(\alpha,\gamma)=\begin{cases}1&\text{if $d=2$,}\\ \big{(}1-2^{-(1+2\gamma)}\big{)}\big{(}1-2^{-(1+\alpha+\gamma)}\big{)}^{-2} \big{(}1-2^{-(\alpha+\gamma)}+2^{-(1+2\gamma)}\big{)}&\text{if $d=7$,}\\ \big{(}1-2^{-2(1+2\gamma)}\big{)}\big{(}1-2^{-2(1+\alpha+\gamma)}\big{)}^{-1}& \text{otherwise,}\end{cases}\] keeps track of the contributions from the prime \(p=2\) when \(d\neq 2\). Indeed; for example, in case of the \(L\)-function \(L(1+2\gamma,\chi)\), we see that all of its factors are cancelled out by reciprocal factors in the two Euler products, except for factors corresponding to the primes \(p=2\) and \(p=d\). If \(d=2\), then no such factor is missing as \(\chi(2)=0\), and if \(d\geqslant 3\), only the factor corresponding to \(p=2\) is missing, namely the factor \(1-\chi(2)2^{-(1+2\gamma)}\). If we let \[A_{1}(\alpha,\gamma) :=\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=1\end{subarray}}\frac{\big{(}1-2p^{-(1+\alpha+\gamma)}+p^{-(1+2\gamma) }\big{)}\big{(}1-p^{-(1+2\gamma)}\big{)}}{\big{(}1-p^{-(1+\alpha+\gamma)} \big{)}^{2}},\] \[A_{-1}(\alpha,\gamma) :=\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\big{(}1-p^{-(1+2\gamma)}\big{)}\big{(}1+p^{-(1+ 2\gamma)}\big{)}}{\big{(}1-p^{-(1+\alpha+\gamma)}\big{)}\big{(}1+p^{-(1+\alpha +\gamma)}\big{)}},\] \[F_{d}(\alpha,\gamma) :=\frac{\big{(}1+d^{-(1/2+\alpha)}\big{)}\big{(}1-d^{-(1/2+\gamma )}\big{)}}{1-d^{-(1+\alpha+\gamma)}},\] we therefore have \[G(\alpha,\gamma)=\frac{\zeta(1+2\alpha)L(1+2\gamma,\chi)}{\zeta(1+\alpha+ \gamma)L(1+\alpha+\gamma,\chi)}F_{d}(\alpha,\gamma)H_{2}(\alpha,\gamma)A_{1}( \alpha,\gamma)A_{-1}(\alpha,\gamma). \tag{3.9}\] Combining this with (3.6), we see that the prediction of the Ratios Conjecture is that with \(\alpha\) and \(\gamma\) subject to the conditions (3.1), the average \(R_{K}(\alpha,\gamma)\) is equal to \[R_{K}(\alpha,\gamma)=G(\alpha,\gamma)+\frac{1}{1-2\alpha}\Bigg{(}\frac{2\pi}{ KN\sqrt{|D|}}\Bigg{)}^{2\alpha}G(-\alpha,\gamma)+O_{\varepsilon}\Big{(}K^{-1/2+ \varepsilon}\Big{)}. \tag{3.10}\] ### The Logarithmic Derivative If we know that the effective estimate (3.10) is invariant under differentiation (in the sense that the derivatives of the main terms are equal, at least up to an error of roughly the same order of magnitude), we can use (3.10) to describe an asymptotic for the average of the logarithmic derivative of \(L_{k}\) over the family \(\mathcal{F}(K)\). Since the logarithmic derivative \(L_{k}^{\prime}/L_{k}\) is intimately connected with the zeros and poles of \(L_{k}\) by Cauchy's residue theorem, such an asymptotic estimate will allow us to study the asymptotics of the one-level density of the family \(\mathcal{F}(K)\). 
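In passing, we note that since (3.9) is an identity of Euler products, it can be spot-checked numerically one prime at a time. A brief illustrative sketch (assumed test data: \(d=11\) and small \(\alpha,\gamma>0\); the Legendre symbol is evaluated by Euler's criterion):

```python
# Illustrative prime-by-prime check of the local factors in (3.9) at odd primes p != d.
d, alpha, gamma = 11, 0.13, 0.07  # assumed test values

def chi(p):
    """Legendre symbol (-d/p) for an odd prime p not dividing d, via Euler's criterion."""
    ls = pow(-d % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

for p in (3, 5, 7, 13, 17):
    c = chi(p)
    x = p ** -(1 + 2 * alpha)      # local factor variable for zeta(1 + 2*alpha)
    y = p ** -(1 + 2 * gamma)      # ... for L(1 + 2*gamma, chi)
    z = p ** -(1 + alpha + gamma)  # ... for the two denominator L-functions
    lhs = (1 - 2 * z + y) / (1 - x) if c == 1 else (1 - y) / (1 - x)  # G_sp or G_in
    afac = ((1 - 2 * z + y) * (1 - y) / (1 - z) ** 2 if c == 1
            else (1 - y * y) / (1 - z * z))                           # A_1 or A_{-1} factor
    rhs = afac * (1 - z) * (1 - c * z) / ((1 - x) * (1 - c * y))
    print(p, c, abs(lhs - rhs))  # zero up to round-off
```

Such spot checks do not replace the derivation, but they guard against bookkeeping slips in the local factors.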
To see that (3.10) is, in fact, invariant under differentiation in the sense above, let \(\Omega\subset\mathbb{C}\) be any open set where the function \[\alpha\mapsto f(\alpha):=R_{K}(\alpha,\gamma)-G(\alpha,\gamma)-\frac{1}{1-2 \alpha}\Bigg{(}\frac{2\pi}{KN\sqrt{|D|}}\Bigg{)}^{2\alpha}G(-\alpha,\gamma)\] is holomorphic. For \(\alpha_{0}\in\Omega\), choose \(\delta>0\) so that \(\Omega\) contains the circle \(C\) centered at \(\alpha_{0}\) with radius \(\delta.\) Then by Cauchy's integral formula, \[|f^{\prime}(\alpha_{0})|\leqslant\frac{1}{2\pi}\int_{C}\biggl{|}\frac{f(z)}{ (z-\alpha_{0})^{2}}\biggr{|}\,|\mathrm{d}z|\leqslant\frac{1}{2\pi}\frac{1}{ \delta^{2}}\cdot 2\pi\delta\cdot\sup_{z\in C}|f(z)|=\frac{1}{\delta}\cdot O_{ \varepsilon}\Big{(}K^{-1/2+\varepsilon}\Big{)},\] and the claim follows by linearity of differentiation. Since only the numerators in the sum defining \(R_{K}(\alpha,\gamma)\) depend on \(\alpha\), (3.10) and the estimate obtained by differentiating (3.10) therefore imply that \[\begin{split}\frac{1}{K}\sum_{k=1}^{K}&\frac{L_{k}^ {\prime}(1/2+r)}{L_{k}(1/2+r)}=\frac{\partial}{\partial\alpha}R_{K}(\alpha, \gamma)\bigg{|}_{\alpha=\gamma=r}\\ &\approx\frac{\partial}{\partial\alpha}G(\alpha,\gamma)\bigg{|}_ {\alpha=\gamma=r}+\frac{\partial}{\partial\alpha}\frac{1}{1-2\alpha}\Bigg{(} \frac{2\pi}{KN\sqrt{|D|}}\Bigg{)}^{2\alpha}G(-\alpha,\gamma)\bigg{|}_{\alpha= \gamma=r}\end{split} \tag{3.11}\] for any \(r\in\mathbb{C}\) satisfying the conditions \[\frac{1}{\log K}\ll\mathrm{Re}(r)<1/4,\qquad\mathrm{Im}(r)\ll_{\varepsilon} K^{1-\varepsilon}. \tag{3.12}\] By going through a computation identical to that in the proof of [20, Lemma 3.4] and observing that \(F_{d}(r,r)=H_{2}(r,r)=A_{1}(r,r)=A_{-1}(r,r)=1\), we now see that \[\begin{split}\frac{\partial}{\partial\alpha}G(\alpha,\gamma) \bigg{|}_{\alpha=\gamma=r}&=\frac{\zeta^{\prime}(1+2r)}{\zeta(1+ 2r)}-\frac{L^{\prime}(1+2r,\chi)}{L(1+2r,\chi)}-\frac{d^{r+1/2}\log d}{d^{2r+1 }-1}\\ &+H_{2}^{\prime}(r)-2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{4r+2}-1},\end{split}\] where \[H_{2}^{\prime}(r):=\frac{\partial}{\partial\alpha}H_{2}(\alpha,\gamma)\bigg{|} _{\alpha=\gamma=r}=\begin{cases}0&\text{if $d=2,\,7$}\\ -2\log 2\big{(}2^{2(2r+1)}-1\big{)}^{-1}&\text{otherwise.}\end{cases}\] It remains to compute the other partial derivative in (3.11). However, in the resulting sum only the term coming from differentiating the quotient of zeta- and \(L\)-functions survives on account of the pole of \(\zeta(s)\) at \(s=1\). Thus, with \[F_{d}(-r,r) =1+\frac{d^{1/2+r}-d^{1/2-r}}{d-1},\] \[H_{2}(-r,r) =\begin{cases}1&\text{if }d=2,\\ 2^{1-2r}\big{(}1-2^{-1-2r}\big{)}&\text{if }d=7,\\ \frac{4}{3}\big{(}1-2^{-2(2r+1)}\big{)}&\text{otherwise},\end{cases}\] \[A_{1}(-r,r) =\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=1\end{subarray}}\frac{\big{(}1-2p^{-1}+p^{-(1+2r)}\big{)}\big{(}1-p^{-( 1+2r)}\big{)}}{(1-p^{-1})^{2}},\] \[A_{-1}(-r,r) =\prod_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{1-p^{-2(1+2r)}}{1-p^{-2}},\] we conclude that \[J(r) :=\frac{\partial}{\partial\alpha}\frac{1}{1-2\alpha}\Bigg{(} \frac{2\pi}{KN\sqrt{|D|}}\Bigg{)}^{2\alpha}G(-\alpha,\gamma)\bigg{|}_{\alpha= \gamma=r}\] \[=-\frac{\zeta(1-2r)L(1+2r,\chi)}{L(1,\chi)}\frac{1}{1-2r}\Bigg{(} \frac{2\pi}{KN\sqrt{|D|}}\Bigg{)}^{2r}F_{d}(-r,r)H_{2}(-r,r)A_{1}(-r,r)A_{-1} (-r,r).\] We sum up this discussion in the following proposition. 
Proposition 3.6.: _The Ratios Conjecture implies that for any \(r\in\mathbb{C}\) satisfying (3.12), we have_ \[\frac{1}{K}\underset{k=1}{\overset{K}{\sum}}\frac{L_{k}^{\prime}(1/2+r)}{L_{k}(1/2+r)} =\frac{\zeta^{\prime}(1+2r)}{\zeta(1+2r)}-\frac{L^{\prime}(1+2r,\chi)}{L(1+2r,\chi)}-\frac{d^{r+1/2}\log d}{d^{2r+1}-1}+H_{2}^{\prime}(r)\] \[\qquad-2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{4r+2}-1}+J(r)+O_{\varepsilon}\Big{(}K^{-1/2+\varepsilon}\Big{)}.\] We now let \(f:\mathbb{R}\to\mathbb{R}\) be an even Schwartz function with \(\operatorname{supp}\widehat{f}\) compact. By the argument given in the beginning of [20, Sect. 4], the above result allows us to express the \(1\)-level density \(D(\mathcal{F}(K);f)\) (conditionally on the Ratios Conjecture) as \[\frac{1}{2\pi i}\!\int_{(c)}\!\frac{1}{K}\Bigg{(}\sum_{k=1}^{K}\!\bigg{(}2\cdot\frac{L_{k}^{\prime}(1/2+r)}{L_{k}(1/2+r)}-\frac{X_{k}^{\prime}(1/2+r)}{X_{k}(1/2+r)}\bigg{)}\Bigg{)}f\bigg{(}\frac{ir\log K}{\pi}\bigg{)}\;\mathrm{d}r\] \[\qquad=S_{X}+S_{\zeta}+S_{L}+S_{A^{\prime}}+S_{J}+S_{d}+S_{H}+O_{\varepsilon}\Big{(}K^{-1/2+\varepsilon}\Big{)}, \tag{3.13}\] where \(c\) is any real number satisfying \(1/\log K<c<1/4\), and \[S_{X}:=-\frac{1}{2K\log K}\!\int_{(C)}\!\underset{k=1}{\overset{K}{\sum}}\frac{X_{k}^{\prime}(1/2+\pi i\tau/\log K)}{X_{k}(1/2+\pi i\tau/\log K)}f(\tau)\;\mathrm{d}\tau, \tag{3.14}\] \[S_{\zeta}:=\frac{1}{\log K}\!\int_{(C)}\!\!\frac{\zeta^{\prime}(1+2\pi i\tau/\log K)}{\zeta(1+2\pi i\tau/\log K)}f(\tau)\;\mathrm{d}\tau, \tag{3.15}\] \[S_{L}:=-\frac{1}{\log K}\!\int_{(C)}\!\!\frac{L^{\prime}(1+2\pi i\tau/\log K,\chi)}{L(1+2\pi i\tau/\log K,\chi)}f(\tau)\;\mathrm{d}\tau, \tag{3.16}\] \[S_{A^{\prime}}:=-\frac{2}{\log K}\!\int_{(C)}\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{4\pi i\tau/\log K+2}-1}f(\tau)\;\mathrm{d}\tau, \tag{3.17}\] \[S_{J}:=\frac{1}{\log K}\!\int_{(C)}\!\!J\!\left(\frac{\pi i\tau}{\log K}\right)\!f(\tau)\;\mathrm{d}\tau, \tag{3.18}\] \[S_{d}:=-\frac{\log d}{\log K}\!\int_{(C)}\!\!\frac{d^{\pi i\tau/\log K+1/2}}{d^{2\pi i\tau/\log K+1}-1}f(\tau)\;\mathrm{d}\tau, \tag{3.19}\] \[S_{H}:=\frac{1}{\log K}\!\int_{(C)}\!H_{2}^{\prime}\!\left(\frac{\pi i\tau}{\log K}\right)\!f(\tau)\;\mathrm{d}\tau, \tag{3.20}\] where we denoted by \((C)\) the set of all \(\tau\) with \(\mathrm{Im}(\tau)=-c\log K/\pi\). In order to determine explicitly the prediction of the one-level density \(D(\mathcal{F}(K);f)\) offered by the Ratios Conjecture, our next goal will be to provide estimates of each of these integrals in terms of \(f\) and the relevant parameters of our family \(\mathcal{F}(K)\). This is the point of the next section. ## 4 Computations of the Integrals (3.14)-(3.20) In the following we will allow all implicit constants to depend on the test function \(f\). We start by recalling and adapting some results from [20]. Lemma 4.1.: _Let \(B\) be a positive integer. As \(K\to\infty\), we have_ \[S_{\zeta}=-\frac{f(0)}{2}-\sum_{j=1}^{B}\frac{c_{j}\hat{f}^{(j-1)}(0)}{(\log K)^{j}}+O_{B}\!\left(\frac{1}{(\log K)^{B+1}}\right)\!,\] _where \(c_{1},c_{2},c_{3},\ldots\) are numbers defined in [20, eq. (5.4), (5.5)].
In particular,_ \[c_{1}=1+\int_{1}^{\infty}\!\frac{\psi(t)-t}{t^{2}}\;\mathrm{d}t=-\gamma,\] _where \(\gamma\) denotes the Euler-Mascheroni constant and \(\psi(t)=\sum_{n\leqslant t}\Lambda(n)\) is the second Chebyshev function._ Proof.: The asymptotic expression for \(S_{\zeta}\) is simply [20, Lemma 5.2], so we only need to prove that \[\int_{1}^{\infty}\!\frac{\psi(t)-t}{t^{2}}\;\mathrm{d}t=-\gamma-1. \tag{4.1}\] We begin by rewriting the integral as \[\int_{1}^{\infty}\frac{\psi(t)-t}{t^{2}}\;\mathrm{d}t=\lim_{T\to\infty}\int_{1}^{T}\frac{\psi(t)-t}{t^{2}}\;\mathrm{d}t=\lim_{T\to\infty}\int_{1}^{T}\frac{\psi(t)}{t^{2}}\;\mathrm{d}t-\log T. \tag{4.2}\] To evaluate the integral of \(\psi(t)/t^{2}\), we let \(\{q_{k}:k\geqslant 1\}\) denote the sequence of prime powers in increasing order, and we write \(p_{k}:=\exp(\Lambda(q_{k}))\) for the unique prime dividing \(q_{k}\). It will also be convenient to write \(\Pi_{k}:=p_{1}\cdot p_{2}\cdots p_{k}\). Then, using that \(\psi(t)\) is constantly equal to \(\log\Pi_{k}=\psi(q_{k})\) on the interval \([q_{k},q_{k+1})\), we compute that for any large prime power \(q_{M}\), \[\int_{1}^{q_{M+1}}\frac{\psi(t)}{t^{2}}\;\mathrm{d}t =\sum_{k=1}^{M}\log\Pi_{k}\!\!\int_{q_{k}}^{q_{k+1}}\!\!\frac{1}{t^{2}}\;\mathrm{d}t=\sum_{k=1}^{M}\log\Pi_{k}\!\left(\frac{1}{q_{k}}-\frac{1}{q_{k+1}}\right)\] \[=\frac{\log\Pi_{1}}{q_{1}}-\frac{\log\Pi_{M}}{q_{M+1}}+\sum_{k=2}^{M}\!\frac{1}{q_{k}}\!\left(\log\Pi_{k}-\log\Pi_{k-1}\right)\] \[=-\frac{\log\Pi_{M}}{q_{M+1}}+\sum_{k=1}^{M}\!\frac{\log p_{k}}{q_{k}}=-\frac{\psi(q_{M})}{q_{M+1}}+\sum_{n=1}^{q_{M}}\!\frac{\Lambda(n)}{n}.\] Inserting this into (4.2) and taking \(T=q_{M+1}\), we obtain from the prime number theorem that \[\int_{1}^{\infty}\frac{\psi(t)-t}{t^{2}}\;\mathrm{d}t =\lim_{M\to\infty}-\frac{\psi(q_{M})}{q_{M}}\cdot\frac{q_{M}}{q_{M+1}}+\sum_{n=1}^{q_{M}}\!\frac{\Lambda(n)}{n}-\log q_{M}+\log\frac{q_{M}}{q_{M+1}}\] \[=-1+\gamma+\sum_{n=1}^{\infty}\!\frac{\Lambda(n)-1}{n},\] where we used the familiar asymptotic for the harmonic numbers and noted that \(\psi(q_{M})/q_{M}\to 1\) and \(q_{M}/q_{M+1}\to 1\) as \(M\to\infty\). Finally, comparing the Laurent expansions of \(-\zeta^{\prime}/\zeta\) and \(\zeta\) at \(s=1\) shows that \[\sum_{n=1}^{\infty}\frac{\Lambda(n)-1}{n}=\lim_{s\to 1^{+}}\!\left(-\frac{\zeta^{\prime}(s)}{\zeta(s)}-\zeta(s)\right)=-2\gamma,\] so that the integral equals \(-1+\gamma-2\gamma=-\gamma-1\), as claimed. Although our Dirichlet \(L\)-function \(L(s,\chi)\) is defined in terms of the character \(\chi(\cdot)=(-d/\cdot)\) and not the non-principal character modulo \(4\) as in [20, Lemma 5.4], we note that this result, including the statement about \(S_{A^{\prime}}\), remains valid in our case with exactly the same proof. We record these two results in the following two separate lemmas. Lemma 4.2 ([20, Lemma 5.4]).: _As \(K\to\infty\), we have_ \[S_{L}=-\frac{\hat{f}(0)}{\log K}\frac{L^{\prime}(1,\chi)}{L(1,\chi)}+O\!\left(\frac{1}{(\log K)^{2}}\right)\!.\] Lemma 4.3 ([20, Lemma 5.4]).: _As \(K\to\infty\), we have_ \[S_{A^{\prime}}=-\frac{2\hat{f}(0)}{\log K}\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}+O\!\left(\frac{1}{(\log K)^{2}}\right)\!.\] Moreover, we can adapt [20, Lemma 5.1] to our situation and obtain the following result.
Lemma 4.4.: _As \(K\to\infty\), we have_ \[S_{X}=\hat{f}(0)\biggl{(}1+\frac{\log\sqrt{|D|}-\log 2\pi+\log N-1}{\log K}\biggr{)}+O\biggl{(}\frac{1}{K}\biggr{)}.\] Proof.: We have \[\frac{X_{k}^{\prime}(s)}{X_{k}(s)}=\frac{\mathrm{d}}{\mathrm{d}s}\log\Gamma(1-s+Nk)-\frac{\mathrm{d}}{\mathrm{d}s}\log\Gamma(s+Nk)-\log|D|+2\log 2\pi.\] The contribution of the constants to the integral (3.14) is \[\frac{1}{\log K}\widehat{f}(0)\Bigl{(}\log\sqrt{|D|}-\log 2\pi\Bigr{)}.\] As for the contribution of the gamma functions, we recall that, in the notation of [20], we have \(\mathbf{k}=1/2+Nk\). It then follows immediately from [20, Eq. (5.1)] that this contribution equals \[\frac{1}{K\log K}\widehat{f}(0)\sum_{k=1}^{K}\log\mathbf{k}+O\biggl{(}\frac{1}{K}\biggr{)}.\] By arguing as in [20, Eq. (5.2)], we find that \[\sum_{k=1}^{K}\log\,\mathbf{k} =\sum_{k=1}^{K}\log(1/2+Nk)=K\log N+\log K!+O(\log K)\] \[=K\log N+K\log K-K+O(\log K)\] \[=K\log K+K(\log N-1)+O(\log K).\] This proves the claim. We now turn to the integral (3.18). As Lemma 4.6 below will show, the asymptotic expression for this integral will involve a special value of the logarithmic derivative of \(L(s,\chi)\). In anticipation of this, we will first compute this special value. Lemma 4.5.: _Let \(\eta\) denote Dedekind's eta function,_ \[\eta(\tau)=e^{\pi i\tau/12}\prod_{n\geqslant 1}\bigl{(}1-e^{2\pi in\tau}\bigr{)},\qquad\mathrm{Im}(\tau)>0,\] _and let \(\gamma\) denote the Euler-Mascheroni constant. Then we have_ \[\frac{L^{\prime}(1,\chi)}{L(1,\chi)}=C(d)-\gamma=\gamma-\log 2-(\log|D|)/2-\log\mathrm{Im}(\tau_{0})-4\log|\eta(\tau_{0})|,\] _where \(C(d)=2\gamma-\log 2-(\log|D|)/2-\log\mathrm{Im}(\tau_{0})-4\log|\eta(\tau_{0})|\) is the constant appearing in the expansion (4.3) below, and_ \[\tau_{0}=\begin{cases}i\sqrt{2}&\text{ if }d=2,\\ (-1+i\sqrt{d})/2&\text{ if }d\neq 2.\end{cases}\] Proof.: If \(\zeta_{\mathbb{K}}(s)\) denotes the Dedekind zeta function of \(\mathbb{K}\), [4, Prop. 10.5.5] gives the factorization \(\zeta_{\mathbb{K}}(s)=\zeta(s)L(s,\chi)\). Indeed, this is clear if \(d\geqslant 3\) since then \(D=-d\), whereas for \(d=2\) and \(n\geqslant 1\), the fact that \((-2/n)\) can only assume the values \(0,\pm 1\) immediately shows that \[\bigg{(}\frac{-2}{n}\bigg{)}=\bigg{(}\frac{-2}{n}\bigg{)}^{3}=\bigg{(}\frac{-8}{n}\bigg{)},\] so that \((-d/n)=(D/n)\) even in this case. On the other hand, since any non-zero ideal \(\mathfrak{m}\subset\mathcal{O}_{\mathbb{K}}\) has exactly \(\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}\) generators, we also have \[\zeta_{\mathbb{K}}(s)=\frac{1}{\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}}\sum_{N(a,b)\neq 0}\frac{1}{N(a,b)^{s}},\qquad\mathrm{Re}(s)>1,\] where \(N(a,b)\) denotes the norm of the element \(j(a,b)\) as defined in Section 2.2. It now follows from this and [4, Corollary 10.4.8] that \[\zeta(s)L(s,\chi)=\frac{1}{\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}}\sum_{N(a,b)\neq 0}\frac{1}{N(a,b)^{s}}=\frac{2}{\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}}\frac{\pi}{\sqrt{|D|}}\bigg{(}\frac{1}{s-1}+C(d)+O(s-1)\bigg{)}. \tag{4.3}\] Furthermore, by writing \(L\) and \(\zeta\) as Laurent series around \(s=1\), we find that \[L(s,\chi) =L(1,\chi)+L^{\prime}(1,\chi)(s-1)+O\big{(}(s-1)^{2}\big{)},\] \[\zeta(s) =\frac{1}{s-1}+\gamma+O(s-1),\] and hence \[\zeta(s)L(s,\chi)=\frac{L(1,\chi)}{s-1}+L^{\prime}(1,\chi)+\gamma L(1,\chi)+O(s-1).
\tag{4.4}\] Together, the two different expressions (4.3) and (4.4) for \(\zeta(s)L(s,\chi)\) now force an equality of coefficients, namely \[L(1,\chi)=\frac{2\pi}{\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}\sqrt{|D|}},\qquad L^{\prime}(1,\chi)=\frac{2\pi(C(d)-\gamma)}{\big{|}\mathcal{O}_{\mathbb{K}}^{\times}\big{|}\sqrt{|D|}}.\] By computing the quotient \(L^{\prime}(1,\chi)/L(1,\chi)\), we obtain the claim. Lemma 4.6.: _As \(K\to\infty\), we have_ \[S_{J}=\frac{f(0)}{2}-\frac{1}{2}\!\int_{\mathbb{R}}\!\hat{f}(\tau)\mathds{1}_{[-1,1]}(\tau)\;\mathrm{d}\tau+\frac{\hat{f}(1)}{\log K}\ell_{1}+O_{d}\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)},\] _where_ \[\ell_{1}=\frac{L^{\prime}(1,\chi)}{L(1,\chi)}+2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}+\log\frac{2\pi e}{N\sqrt{|D|}}+\frac{\sqrt{d}\log d}{d-1}-\gamma-\frac{2a\log 2}{3},\] \(a=-\mathbb{1}(d\neq 2,7)\)_, and \(L^{\prime}(1,\chi)/L(1,\chi)\) is given in Lemma 4.5._ Proof.: We proceed as in the proof of [20, Lemma 5.5], which relies on the methods of [9]. That is, we will replace the domain of integration \((C)\) with the union of the compact and non-compact contours \(C_{1}\cup C_{\eta}\) and \(C_{0}\), respectively, where \[\begin{split}C_{0}&:=\{\tau\in\mathbb{C}:\mathrm{Im}(\tau)=0,\,|\mathrm{Re}(\tau)|\geqslant(\log K)^{\varepsilon}\},\\ C_{1}&:=\{\tau\in\mathbb{C}:\mathrm{Im}(\tau)=0,\,\eta\leqslant|\mathrm{Re}(\tau)|\leqslant(\log K)^{\varepsilon}\},\\ C_{\eta}&:=\Big{\{}\eta e^{i\theta}:-\pi\leqslant\theta\leqslant 0\Big{\}}.\end{split}\] The utility of such a decomposition is two-fold: On the non-compact part \(C_{0}\) we can bound the integrand using the rapid decay of our test function \(f\). On the other hand, on the compact part we can estimate the individual factors in the integrand with the first few terms of their Taylor expansions. Moreover, the fact that \(C_{1}\cup C_{\eta}\) becomes symmetric when we let \(\eta\to 0\) means that we do not have to take into account contributions from any odd, positive powers of \(\tau\) in these expansions, as the integrals of \(\tau f(\tau)\), \(\tau^{3}f(\tau)\), \(\tau^{5}f(\tau)\),... over this set vanish. We exploit this fact to get rid of any occurrences of \(\varepsilon\) in the error terms. We now proceed to the details. Initially, we recall the following elementary estimates: If \(\gamma\) denotes the Euler-Mascheroni constant, then as \(K\to\infty\), \[\zeta\bigg{(}1-\frac{2\pi i\tau}{\log K}\bigg{)} =-\frac{\log K}{2\pi i\tau}+\gamma+O\bigg{(}\frac{|\tau|}{\log K}\bigg{)}, \tag{4.5}\] \[\frac{L(1+2\pi i\tau/\log K,\chi)}{L(1,\chi)} =1+\frac{L^{\prime}(1,\chi)}{L(1,\chi)}\frac{2\pi i\tau}{\log K}+O\bigg{(}\frac{|\tau|^{2}}{(\log K)^{2}}\bigg{)}, \tag{4.6}\] and \[\begin{split}A_{1}&\bigg{(}-\frac{\pi i\tau}{\log K},\frac{\pi i\tau}{\log K}\bigg{)}A_{-1}\bigg{(}-\frac{\pi i\tau}{\log K},\frac{\pi i\tau}{\log K}\bigg{)}\\ &=1+\left(2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}\right)\frac{2\pi i\tau}{\log K}+O\bigg{(}\frac{|\tau|^{2}}{(\log K)^{2}}\bigg{)}.\end{split} \tag{4.7}\] We now obtain Taylor expansions of the other factors in \(J\).
First of all, \[\begin{split}&\frac{1}{1-2r}\Bigg{(}\frac{2\pi}{KN\sqrt{|D|}}\Bigg{)}^{2r}\bigg{|}_{r=\frac{\pi i\tau}{\log K}}\\ &=\exp\Bigg{(}\frac{2\pi i\tau}{\log K}\log\frac{2\pi}{N\sqrt{|D|}}-2\pi i\tau\Bigg{)}\bigg{(}1+\frac{2\pi i\tau}{\log K}+O\bigg{(}\frac{|\tau|^{2}}{(\log K)^{2}}\bigg{)}\bigg{)}\\ &=e^{-2\pi i\tau}\Bigg{(}1+\frac{2\pi i\tau}{\log K}\log\frac{2\pi}{N\sqrt{|D|}}+O\bigg{(}\frac{|\tau|^{2}(\log|D|)^{2}}{(\log K)^{2}}\bigg{)}\bigg{)}\bigg{(}1+\frac{2\pi i\tau}{\log K}+O\bigg{(}\frac{|\tau|^{2}}{(\log K)^{2}}\bigg{)}\bigg{)}\\ &=e^{-2\pi i\tau}+e^{-2\pi i\tau}\frac{2\pi i\tau}{\log K}\log\frac{2\pi e}{N\sqrt{|D|}}+O\bigg{(}\frac{|\tau|^{2}(\log|D|)^{2}}{(\log K)^{2}}\bigg{)},\end{split} \tag{4.8}\] whenever \(K\) is so large that \[\bigg{|}\frac{2\pi i\tau}{\log K}\bigg{|}<1,\qquad\bigg{|}\frac{2\pi i\tau}{\log K}\log\frac{2\pi}{N\sqrt{|D|}}\bigg{|}<1.\] (Note that this is certainly satisfied for \(\tau\in C_{1}\cup C_{\eta}\).) Next, we see that \[F_{d}\biggl{(}-\frac{\pi i\tau}{\log K},\frac{\pi i\tau}{\log K}\biggr{)}=1+\frac{\sqrt{d}\log d}{d-1}\frac{2\pi i\tau}{\log K}+O_{d}\biggl{(}\frac{|\tau|^{2}}{(\log K)^{2}}\biggr{)}. \tag{4.9}\] Finally, we also record the bound \[H_{2}\biggl{(}-\frac{\pi i\tau}{\log K},\frac{\pi i\tau}{\log K}\biggr{)}=1-\frac{2a\log 2}{3}\frac{2\pi i\tau}{\log K}+O\biggl{(}\frac{|\tau|^{2}}{(\log K)^{2}}\biggr{)}. \tag{4.10}\] By taking the product of all the Taylor expansions (4.5)-(4.10) and disregarding all those resulting terms which have order at least \(\tau/\log K\), we obtain that \[J\biggl{(}\frac{\pi i\tau}{\log K}\biggr{)} =\biggl{(}\frac{1}{x}-\gamma+O(|x|)\biggr{)}\biggl{(}e^{-2\pi i\tau}+e^{-2\pi i\tau}x\biggl{(}\frac{L^{\prime}(1,\chi)}{L(1,\chi)}+2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}\] \[\qquad+\log\frac{2\pi e}{N\sqrt{|D|}}+\frac{\sqrt{d}\log d}{d-1}-\frac{2a\log 2}{3}\biggr{)}+O_{d}(|x|^{2})\biggr{)}\] \[\qquad=\frac{e^{-2\pi i\tau}}{x}+\,e^{-2\pi i\tau}\biggl{(}-\gamma+\frac{L^{\prime}(1,\chi)}{L(1,\chi)}+2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}\] \[\qquad+\log\frac{2\pi e}{N\sqrt{|D|}}+\frac{\sqrt{d}\log d}{d-1}-\frac{2a\log 2}{3}\biggr{)}+O_{d}(|x|),\] where we wrote \(x=2\pi i\tau/\log K\) for simplicity. Thus, we obtain \[S_{J} =\int_{C_{1}\cup C_{\eta}}\!\!f(\tau)\frac{e^{-2\pi i\tau}}{2\pi i\tau}\;\mathrm{d}\tau\] \[\quad+\frac{1}{\log K}\Biggl{(}-\gamma+\frac{L^{\prime}(1,\chi)}{L(1,\chi)}+2\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}+\log\frac{2\pi e}{N\sqrt{|D|}}+\frac{\sqrt{d}\log d}{d-1}-\frac{2a\log 2}{3}\Biggr{)}\] \[\quad\times\int_{C_{1}\cup C_{\eta}}\!\!f(\tau)e^{-2\pi i\tau}\;\mathrm{d}\tau+\frac{1}{\log K}\!\int_{C_{0}}\!\!J\biggl{(}\frac{\pi i\tau}{\log K}\biggr{)}\!\!f(\tau)\;\mathrm{d}\tau+O_{d}\biggl{(}\frac{1}{(\log K)^{2}}\biggr{)}, \tag{4.11}\] where we made use of the evenness of \(f\) as described earlier. By arguing exactly as in [20, Lemma 5.5], we relate the integrals above to special values of the Fourier transform \(\hat{f}\), namely \[\int_{C_{1}\cup C_{\eta}}\!\!f(\tau)e^{-2\pi i\tau}\;\mathrm{d}\tau =\hat{f}(1)+O\biggl{(}\frac{1}{(\log K)^{3}}\biggr{)}, \tag{4.12}\] \[\int_{C_{1}\cup C_{\eta}}\!\!f(\tau)\frac{e^{-2\pi i\tau}}{2\pi i\tau}\;\mathrm{d}\tau =\frac{f(0)}{2}-\frac{1}{2}\!\int_{-1}^{1}\!\!\hat{f}(\tau)\;\mathrm{d}\tau+O\biggl{(}\frac{1}{(\log K)^{3}}\biggr{)}.
\tag{4.13}\] Since the rapid decay of \(f\) on \(\mathbb{R}\) shows that the integral over \(C_{0}\) in (4.11) is at most a constant times \((\log K)^{-2}\) (for example), it now follows from (4.12) and (4.13) that \[S_{J} =\frac{f(0)}{2}-\frac{1}{2}\!\int_{-1}^{1}\!\!\hat{f}(\tau)\;\mathrm{d}\tau+\frac{\hat{f}(1)}{\log K}\Bigg{(}\frac{L^{\prime}(1,\chi)}{L(1,\chi)}+2\sum_{\begin{subarray}{c}p\geq 3\\ (-d/p)=-1\end{subarray}}\frac{\log p}{p^{2}-1}\] \[\qquad+\log\frac{2\pi e}{N\sqrt{|D|}}+\frac{\sqrt{d}\log d}{d-1}-\gamma-\frac{2a\log 2}{3}\Bigg{)}+O_{d}\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)},\] which completes the proof. We now turn to the final two integrals (3.19) and (3.20). Lemma 4.7.: _As \(K\to\infty\), we have the estimate_ \[S_{d}=-\frac{\log d}{\log K}\frac{\sqrt{d}}{d-1}\hat{f}(0)+O_{d}\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)}.\] Proof.: The integrand only has poles when \(\mathrm{Im}(\tau)=(\log K)/2\pi>0\), so analogously to the proof of Lemma 4.6, we use Cauchy's residue theorem and the rapid decay of \(f\) to move the contour \((C)\) to the real line without changing the value of the integral. As before, we partition this set into a compact and a non-compact part in order to, respectively, use the (even-indexed terms of the) Taylor expansion of the integrand and bound the integral using the decay of the test function. Concretely, we write \(\mathbb{R}=C_{0}\cup C_{1}\) with \[C_{0} :=\{\tau\in\mathbb{C}:\mathrm{Im}(\tau)=0,\,|\mathrm{Re}(\tau)|>(\log K)^{\varepsilon}\},\] \[C_{1} :=\{\tau\in\mathbb{C}:\mathrm{Im}(\tau)=0,\,|\mathrm{Re}(\tau)|\leqslant(\log K)^{\varepsilon}\}.\] As in the proof of Lemma 4.6, we note that \[\int_{C_{0}}\frac{d^{\pi i\tau/\log K+1/2}}{d^{2\pi i\tau/\log K+1}-1}f(\tau)\;\mathrm{d}\tau\ll_{d}\frac{1}{(\log K)^{2}},\] whereas for the integral over \(C_{1}\), we use the Taylor expansion \[\frac{d^{r+1/2}}{d^{2r+1}-1}=\frac{\sqrt{d}}{d-1}+r\cdot\frac{\mathrm{d}}{\mathrm{d}r}\Bigg{(}\frac{d^{r+1/2}}{d^{2r+1}-1}\Bigg{)}\Bigg{|}_{r=0}+O_{d}\big{(}r^{2}\big{)}\qquad(r\to 0)\] and the fact that \(\tau f(\tau)\) is odd to obtain \[\int_{C_{1}}\!\!\frac{d^{\pi i\tau/\log K+1/2}}{d^{2\pi i\tau/\log K+1}-1}f(\tau)\;\mathrm{d}\tau =\int_{C_{1}}\!\!\Bigg{(}\frac{\sqrt{d}}{d-1}+O_{d}\bigg{(}\frac{\tau^{2}}{(\log K)^{2}}\bigg{)}\Bigg{)}f(\tau)\;\mathrm{d}\tau\] \[=\Bigg{(}\frac{\sqrt{d}}{d-1}+O_{d}\bigg{(}\frac{1}{(\log K)^{2-2\varepsilon}}\bigg{)}\Bigg{)}\!\int_{C_{1}}\!\!f(\tau)\;\mathrm{d}\tau\] \[=\Bigg{(}\frac{\sqrt{d}}{d-1}+O_{d}\bigg{(}\frac{1}{(\log K)^{2-2\varepsilon}}\bigg{)}\Bigg{)}\Bigg{(}\!\int_{\mathbb{R}}\!\!f(\tau)\;\mathrm{d}\tau+O\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)}\Bigg{)},\] where we used the rapid decay of \(f\) in the last step. The claim now follows. Finally, we have the following asymptotic estimate. We assume that \(d\neq 2,7\) since otherwise \(S_{H}=0\). Lemma 4.8.: _Suppose that \(d\neq 2,\,7.\) As \(K\to\infty,\) we have_ \[S_{H}=\frac{-2\log 2}{3\log K}\hat{f}(0)+O\bigg{(}\frac{1}{(\log K)^{2}}\bigg{)}.\] Proof.: The method of proof is identical to that of the previous lemma. Once again, we note that \(H^{\prime}_{2}(\pi i\tau/\log K)\) only has poles if \(\mathrm{Im}(\tau)=\log K/(2\pi)>0,\) so that we are justified in moving the contour to the real line.
As before, the integral over the non-compact part \(C_{0}\) of our partition \(C_{0}\cup C_{1}\) of \(\mathbb{R}\) simply contributes to the error term, while the integral over the compact part is \[\int_{C_{1}}\Bigg{(}\frac{-2\log 2}{3}+\frac{\pi i\tau}{\log K}\cdot\frac{\mathrm{d}}{\mathrm{d}r}H^{\prime}_{2}(r)\Bigg{|}_{r=0}+O\bigg{(}\frac{|\tau|^{2}}{(\log K)^{2}}\bigg{)}\Bigg{)}f(\tau)\;\mathrm{d}\tau,\] which follows from the Taylor expansion \[H^{\prime}_{2}(r)=\frac{-2\log 2}{3}+r\cdot\frac{\mathrm{d}}{\mathrm{d}r}H^{\prime}_{2}(r)\Bigg{|}_{r=0}+O\big{(}r^{2}\big{)}.\] We now proceed exactly as in the proof of the previous lemma. By combining the results from Lemma 4.1 to Lemma 4.8 with (3.13), we have completed the proof of Theorem 1.2. ## 5 An Unconditional Asymptotic for the One-Level Density In this section, we use the following formula for logarithmic derivatives of the \(L\)-functions \(L_{k}\) in order to give an unconditional expression for the one-level density \(D(\mathcal{F}(K);f)\). Lemma 5.1.: _For \(k\geqslant 1\) and \(\mathrm{Re}(s)>1,\) we have_ \[\frac{L^{\prime}_{k}(s)}{L_{k}(s)}=-\underset{n\geqslant 1}{\sum}\frac{c_{k}(n)}{n^{s}},\qquad c_{k}(n)=\Lambda(n)\underset{\mathbb{N}(\mathfrak{p}^{m})=n}{\sum}\Big{(}1+\mathbb{1}\big{(}\mathfrak{p}=\langle p\rangle\big{)}\Big{)}\psi_{k}(\mathfrak{p}^{m}),\] _where we understand that the indicator function specifies whether or not \(\mathfrak{p}\) lies over an inert rational prime._ Proof.: By taking the logarithmic derivative of the (absolutely convergent) Euler product that defines \(L_{k}(s)\) in the half-plane \(\mathrm{Re}(s)>1,\) we get \[\frac{L^{\prime}_{k}(s)}{L_{k}(s)}=-\underset{\mathfrak{p}}{\sum}\log\mathbb{N}(\mathfrak{p})\frac{\psi_{k}(\mathfrak{p})/\mathbb{N}(\mathfrak{p})^{s}}{1-\psi_{k}(\mathfrak{p})/\mathbb{N}(\mathfrak{p})^{s}}=-\underset{\mathfrak{p}}{\sum}\log\mathbb{N}(\mathfrak{p})\underset{m\geqslant 1}{\sum}\frac{\psi_{k}(\mathfrak{p}^{m})}{\mathbb{N}(\mathfrak{p}^{m})^{s}},\] where we also used that \(\psi_{k}\) and the norm are completely multiplicative. To finish the computation we note that, if \(\mathfrak{p}\) lies over a ramified or split prime \(p,\) then for any \(m\geqslant 1,\) \[\log\mathbb{N}(\mathfrak{p})=\log p=\Lambda(\mathbb{N}(\mathfrak{p}))=\Lambda(\mathbb{N}(\mathfrak{p}^{m})),\] whereas if \(\mathfrak{p}=\langle p\rangle\) lies over an inert prime, \[\log\mathbb{N}(\mathfrak{p})=2\log p=2\Lambda(\mathbb{N}(\mathfrak{p}))=2\Lambda(\mathbb{N}(\mathfrak{p}^{m}))\] for any \(m\geqslant 1\). In light of this and the previous computation, we can rewrite the logarithmic derivative as \[\frac{L_{k}^{\prime}(s)}{L_{k}(s)}=-{\sum_{\mathfrak{p}}}\Big{(}1+\mathbb{1}\big{(}\mathfrak{p}=\langle p\rangle\big{)}\Big{)}{\sum_{m\geqslant 1}}\frac{\psi_{k}(\mathfrak{p}^{m})}{\mathbb{N}(\mathfrak{p}^{m})^{s}}\Lambda(\mathbb{N}(\mathfrak{p}^{m})).\] We now obtain the claim by grouping together all \(\mathfrak{p}^{m}\) with norm \(n\), for all \(n\geqslant 1\). Towards a computation of the \(1\)-level density, we note that just as in [20, Section 6], we have \[D(\mathcal{F}(K);f)=\frac{1}{2\pi i}{\int_{(c^{\prime})}}\frac{1}{K}\Bigg{(}{\sum_{k=1}^{K}}2\frac{L_{k}^{\prime}(1/2+r)}{L_{k}(1/2+r)}-\frac{X_{k}^{\prime}(1/2+r)}{X_{k}(1/2+r)}\Bigg{)}f\bigg{(}\frac{ir\log K}{\pi}\bigg{)}\;\mathrm{d}r,\] where now \(c^{\prime}>1/2\). Using Lemma 5.1 and arguing as in [13, Thm.
5.12], we now obtain \[D(\mathcal{F}(K);f) =S_{X}-\frac{1}{\pi i}\frac{1}{K}{\int_{(c^{\prime})}}\sum_{k=1} ^{K}{\sum_{n\geqslant 1}}\frac{c_{k}(n)}{n^{1/2+r}}f\bigg{(}\frac{ir\log K}{ \pi}\bigg{)}\;\mathrm{d}r\] \[=S_{X}-\frac{1}{\pi i}\frac{1}{K}{\sum_{k=1}^{K}}{\sum_{n \geqslant 1}}\frac{c_{k}(n)}{\sqrt{n}}{\int_{(c^{\prime})}}e^{-r\log n}f \bigg{(}\frac{ir\log K}{\pi}\bigg{)}\;\mathrm{d}r\] \[=S_{X}-\frac{1}{K\log K}{\sum_{k=1}^{K}}{\sum_{n\geqslant 1}} \frac{c_{k}(n)}{\sqrt{n}}{\int_{\{\mathrm{Im}(r)=c^{\prime}\log K/\pi\}}}e^{2 \pi ir\log n/2\log K}f(r)\;\mathrm{d}r\] \[=S_{X}-\frac{1}{K\log K}{\sum_{k=1}^{K}}{\sum_{n\geqslant 1}} \frac{c_{k}(n)}{\sqrt{n}}\widehat{f}\bigg{(}\frac{\log n}{2\log K}\bigg{)},\] where we replaced \(\{\mathrm{Im}(r)=c^{\prime}\log K/\pi\}\) with \(\mathbb{R}\) due to the rapid decay of \(f\) on horizontal strips. The occurrence of the von Mangoldt function in \(c_{k}(n)\) means that the only indices \(n\) contributing to the sum above are the prime powers. If we replace \(n\) with \(p^{n}\) (to save notation), we can then describe the resulting values of \(c_{k}(p^{n})\) depending on the splitting behavior of \(p\). Thus, with the help of Lemma 2.4, we find that \[c_{k}(p^{n})=\begin{cases}(\psi_{k}(\mathfrak{q}_{1}^{n})+\psi_{k}(\mathfrak{q }_{2}^{n}))\log p&\text{if }\langle p\rangle=\mathfrak{q}_{1}\mathfrak{q}_{2},\, \mathfrak{q}_{1}\neq\mathfrak{q}_{2},\\ 2\log p&\text{if }p\text{ is inert and }n\text{ is even},\\ 0&\text{if }p\text{ is inert and }n\text{ is odd},\\ \log p&\text{if }\langle p\rangle=\mathfrak{q}^{2}.\end{cases}\] Considering these special values of \(c_{k}(p^{n})\), we now define \[S_{\text{inert}}=-\frac{2}{\log K}{\sum_{p\text{ inert}}}\sum_{n\geqslant 1} \frac{\log p}{p^{n}}\widehat{f}\bigg{(}\frac{n\log p}{\log K}\bigg{)},\] \[S_{\rm split}=-\frac{1}{K\log K}\sum_{p\text{ split}}\sum_{n\geqslant 1}\frac{\log p}{p^{n/2}} \widehat{f}\bigg{(}\frac{n\log p}{2\log K}\bigg{)}\sum_{k=1}^{K}(\psi_{k}( \mathfrak{q}_{1}^{n})+\psi_{k}(\mathfrak{q}_{2}^{n})),\] \[S_{\rm ram}=-\frac{1}{\log K}\sum_{n\geqslant 1}\frac{\log d}{d^{n/2}}\widehat{ f}\bigg{(}\frac{n\log d}{2\log K}\bigg{)}.\] Thus, \[D(\mathcal{F}(K);f)=S_{X}+S_{\rm inert}+S_{\rm split}+S_{\rm ram}. \tag{5.1}\] We now want to compare the terms \(S_{\rm inert}\) and \(S_{\rm ram}\) with the terms (3.14)-(3.20) appearing in the expression of the one-level density conditional on the Ratios Conjecture. To facilitate this comparison, we begin by rewriting these terms as follows. Lemma 5.2.: _We have that_ \[S_{\zeta}=-\frac{1}{\log K}\sum_{n\geqslant 1}\frac{\Lambda(n)}{n}\widehat{ f}\bigg{(}\frac{\log n}{\log K}\bigg{)}, \tag{5.2}\] \[S_{A^{\prime}}=-\frac{2}{\log K}\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\sum_{n\geqslant 1}\frac{\log p}{p^{2n}}\widehat{ f}\bigg{(}\frac{2n\log p}{\log K}\bigg{)}, \tag{5.3}\] \[S_{L}=\frac{1}{\log K}\sum_{n\geqslant 1}\frac{\Lambda(n)\chi(n)}{n}\widehat{ f}\bigg{(}\frac{\log n}{\log K}\bigg{)}, \tag{5.4}\] \[S_{d}=-\frac{\log d}{\sqrt{d}\log K}\sum_{n\geqslant 0}\frac{1}{d^{n}} \widehat{f}\bigg{(}\frac{(1/2+n)\log d}{\log K}\bigg{)}, \tag{5.5}\] \[S_{H}=\frac{a}{2}\frac{\log 2}{\log K}\sum_{n\geqslant 0}\frac{1}{4^{n}} \widehat{f}\bigg{(}\frac{(2n+2)\log 2}{\log K}\bigg{)}. \tag{5.6}\] Proof.: The proofs of the equalities (5.2) and (5.3) follow immediately from the corresponding proofs in [20, Lemma 4.2], once we substitute our character \(\chi\) for the character \(\chi_{1}\) in that paper. 
As for the equality (5.4), we note the standard formula \[\frac{L^{\prime}(s,\chi)}{L(s,\chi)}=-\sum_{n\geqslant 1}\frac{\Lambda(n) \chi(n)}{n^{s}},\qquad\text{Re}(s)>1.\] Substituting this infinite sum for the logarithmic derivative appearing in the definition of \(S_{L}\), we obtain the claim by changing variables and moving the contour of integration to the real line, which is justified by the rapid decay of \(f\) together with the absence of poles of the integrand in the region of the complex plane enclosed by these contours. Turning to \(S_{d}\), we note that when \(\tau\) has imaginary part \(-c\log K/\pi\) where \(c>0\), the number \(-2\pi i\tau/\log K-1\) has real part \(-2c-1<0\), and so we can write \[\frac{d^{\pi i\tau/\log K+1/2}}{d^{2\pi i\tau/\log K+1}-1}=d^{-\pi i\tau/\log K -1/2}\sum_{n\geqslant 0}\Bigl{(}d^{-2\pi i\tau/\log K-1}\Bigr{)}^{n}.\] Inserting this into the expression (3.19), we get that \[S_{d} =-\frac{\log d}{\sqrt{d}\log K}{\int_{(C)}}\exp\biggl{(}-\pi i \tau\frac{\log d}{\log K}\biggr{)}{\sum_{n\geqslant 0}}\exp\biggl{(}-n\log d \biggl{(}\frac{2\pi i\tau}{\log K}+1\biggr{)}\biggr{)}f(\tau)\;{\rm d}\tau\] \[=-\frac{\log d}{\sqrt{d}\log K}{\sum_{n\geqslant 0}}d^{-n}{\int_{(C)}}f(\tau)\exp \biggl{(}-2\pi i\tau(1/2+n)\frac{\log d}{\log K}\biggr{)}\;{\rm d}\tau,\] which equals the claimed expression for \(S_{d}\), as we can move the contour \((C)\) to the real line for the usual reasons. Finally, we turn our attention to \(S_{H}\). In the same way as before, we rewrite the function \[H^{\prime}_{2}(r)=\begin{cases}0&\text{if }d=2,7,\\ -2\log 2\bigl{(}2^{2(2r+1)}-1\bigr{)}^{-1}&\text{otherwise},\end{cases}\] as a geometric series. Letting \(r=\pi i\tau/\log K\), we then find that \[H^{\prime}_{2}\biggl{(}\frac{\pi i\tau}{\log K}\biggr{)}=\frac{a\log 2}{2}{ \sum_{n\geqslant 0}}4^{-n}\exp\biggl{(}-2\pi i\tau\frac{(2n+2)\log 2}{\log K }\biggr{)},\] where \(a=-\mathbb{1}(d\neq 2,7)\). The equality (5.6) now follows once we substitute this expression for \(H^{\prime}_{2}\) in (3.20) and shift the contour of integration. \(\blacksquare\) It now follows from (5.2) and (5.4) that \[S_{\zeta}+S_{L}=\frac{1}{\log K}{\sum_{n\geqslant 1}}\frac{\Lambda(n)(\chi(n)-1) }{n}\widehat{f}\biggl{(}\frac{\log n}{\log K}\biggr{)}.\] Since those \(d\) we are interested in satisfy \(d\not\equiv 1\) (mod 4), we know that \(\chi\) is a quadratic Dirichlet character of modulus \(4d=d^{3}\) (in case \(d=2\)) or modulus \(d\) (otherwise). This fact means that for all prime powers \(n\) appearing in the above sum, \(\chi(n)=0\) if and only if \(n\) is a power of \(d\). Moreover, since \(\chi(n)-1=0\) whenever \(n\) is the power of a split prime or an even power of an inert prime, the computation above shows that \[S_{\zeta}+S_{L}=-\frac{\log d}{\log K}{\sum_{n\geqslant 1}}d^{-n}\widehat{f} \biggl{(}\frac{n\log d}{\log K}\biggr{)}-\frac{2}{\log K}{\sum_{n\geqslant 0 }}\sum_{p\text{ inert}}\frac{\log p}{p^{2n+1}}\widehat{f}\biggl{(}\frac{(2n+1) \log p}{\log K}\biggr{)}
\tag{5.7}\] Regarding the first infinite sum, we note with the help of Lemma 5.2 that \[\begin{split}-\frac{\log d}{\log K}{\sum_{n\geqslant 1}}d^{-n} \widehat{f}\biggl{(}\frac{n\log d}{\log K}\biggr{)}&=S_{\text{ ram}}+\frac{\log d}{\log K}{\sum_{n\geqslant 0}}d^{-n-1/2}\widehat{f}\biggl{(} \frac{(2n+1)\log d}{2\log K}\biggr{)}\\ &=S_{\text{ram}}-S_{d}.\end{split} \tag{5.8}\] Similarly, in the case of the second infinite sum, we see that \[-\frac{2}{\log K}{\sum_{n\geqslant 0}}\sum_{p\text{ inert}}\frac{\log p}{p^{2n+1}}\widehat{f}\bigg{(}\frac{(2n+1)\log p}{ \log K}\bigg{)} \tag{5.9}\] \[\qquad=S_{\text{inert}}+\frac{2}{\log K}{\sum_{p\text{ inert}}\sum _{n\geqslant 1}}\frac{\log p}{p^{2n}}\widehat{f}\bigg{(}\frac{2n\log p}{\log K} \bigg{)}\] \[\qquad=S_{\text{inert}}+\frac{2}{\log K}\sum_{\begin{subarray}{c}p\geqslant 3\\ (-d/p)=-1\end{subarray}}\sum_{n\geqslant 1}\frac{\log p}{p^{2n}}\widehat{f}\bigg{(}\frac{2n\log p}{\log K}\bigg{)}\] \[\qquad+\mathbb{1}\,(2\text{ inert})\frac{2\log 2}{\log K}{\sum_{n \geqslant 1}}2^{-2n}\widehat{f}\bigg{(}\frac{2n\log 2}{\log K}\bigg{)}\] \[\qquad=S_{\text{inert}}-S_{A^{\prime}}+\mathbb{1}(2\text{ inert}) \frac{2\log 2}{\log K}{\sum_{n\geqslant 1}}2^{-2n}\widehat{f}\bigg{(}\frac{2n\log 2 }{\log K}\bigg{)},\] again using Lemma 5.2. Since 2 is inert in \(\mathbb{K}\) if and only if \(d\neq 2\), 7, we see that when \(d\neq 2\), 7, the last term above is \[\frac{2\log 2}{\log K}{\sum_{n\geqslant 1}}2^{-2n}\widehat{f} \bigg{(}\frac{2n\log 2}{\log K}\bigg{)} =\frac{4}{2}\frac{\log 2}{\log K}{\sum_{n\geqslant 0}}2^{-2n-2} \widehat{f}\bigg{(}\frac{(2n+2)\log 2}{\log K}\bigg{)}\] \[=\frac{1}{2}\frac{\log 2}{\log K}{\sum_{n\geqslant 0}}2^{-2n} \widehat{f}\bigg{(}\frac{(2n+2)\log 2}{\log K}\bigg{)}\] \[=-S_{H}.\] Since \(S_{H}\) furthermore vanishes if \(d=2,\,7\), combining (5.7), (5.8), and (5.9), we therefore obtain that \[S_{\text{inert}}+S_{\text{ram}}=S_{\zeta}+S_{L}+S_{A^{\prime}}+S_{d}+S_{H}. \tag{5.10}\] It now follows from (5.1) and (3.13) that our unconditional expression for the one-level density \(D(\mathcal{F}(K);f)\) agrees with the expression conditional on the Ratios Conjecture, and hence with the Katz-Sarnak prediction (1.1), if \[S_{\text{split}}\approx S_{J}.\]

## 6 Comparison with the Katz-Sarnak Density Conjecture

The goal of this section is to reconcile our explicit computation with the prediction of the Katz-Sarnak Density Conjecture. As we described earlier, this goal amounts to verifying that the term \(S_{\text{split}}\) coming from the split rational primes is equal to the term \(S_{J}\) predicted by the Ratios Conjecture, at least up to some small error. We now generalize the result [19, Lemma 2.1], which provides a useful relation between the angle \(\theta_{I}\) and the norm \(\mathbb{N}(I)\) of a non-zero ideal \(I\subset\mathcal{O}_{\mathbb{K}}\). Theorem 6.1.: _Let \(\Lambda=g\mathbb{Z}^{2}\subset\mathbb{R}^{2}\) be a unimodular lattice with_ \[g=\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\begin{pmatrix}1&0\\ n&1\end{pmatrix}\in\operatorname{SL}_{2}(\mathbb{R}).\] _Let \(\ell\) be a line through the origin such that the angle between the positive \(x\)-axis and \(\ell\) is \(\theta\in[0,\pi)\)._
If \(\theta=\pi/2\) (in which case we let \(q:=1\)), or if \(-n+a^{2}\tan\theta\) is an algebraic number of degree \(q\geqslant 1\), then there exists \(C=C(\Lambda,\ell)>0\) such that for every \(\boldsymbol{v}\in\Lambda\setminus\ell\),_ \[|\alpha(\boldsymbol{v})|\geqslant\frac{C}{\|\boldsymbol{v}\|^{q}},\] _where \(\alpha(\boldsymbol{v})\) denotes the angle between \(\boldsymbol{v}\) and \(\ell\)._ Proof.: Let us first note that if \(\ell\) is the \(y\)-axis, then the claim follows easily: Indeed, the set of first coordinates of lattice points in \(\Lambda\) is discrete, so for any \(\boldsymbol{v}\in\Lambda\), if the angle \(\alpha(\boldsymbol{v})\) between \(\boldsymbol{v}\) and the \(y\)-axis is non-zero, but small, we have \[2|\alpha(\boldsymbol{v})|\geqslant|\tan\alpha(\boldsymbol{v})|=\frac{|as|}{|( sn+t)/a|}\gg_{a}\frac{1}{|(sn+t)/a|}\geqslant\frac{1}{\sqrt{(sn+t)^{2}/a^{2}+a^{2} s^{2}}}=\frac{1}{\|\boldsymbol{v}\|}.\] We can therefore assume that \(\theta\neq\pi/2\), so that \(\cos\theta\neq 0\). Similarly, if \(\ell\) is the \(x\)-axis, Liouville's Theorem [1, Thm. 1.1] and the assumption that \(-n\) is algebraic of degree \(q\) imply that \(|sn+t|\gg|s|^{1-q}\) uniformly in \((s,t)\in\mathbb{Z}^{2}\setminus\{\boldsymbol{0}\}\), and hence \[2|\alpha(\boldsymbol{v})|\geqslant|\tan\alpha(\boldsymbol{v})|=\frac{|sn+t|} {a^{2}|s|}\gg_{a}\frac{1}{|a^{q}s^{q}|}\geqslant\frac{1}{\sqrt{a^{2q}s^{2q}+( sn+t)^{2q}/a^{2q}}}\geqslant\frac{1}{\|\boldsymbol{v}\|^{q}},\] if \(\alpha(\boldsymbol{v})\) is non-zero and small. (Note that, in particular, this implies that \(s\neq 0\) and \(sn+t\neq 0\).) We can therefore also assume that \(\theta\neq 0\), so that \(\sin\theta\neq 0\). Let us now rotate \(\ell\) and \(\Lambda\) clockwise by the angle \(\theta\), which transforms \(\ell\) into the \(x\)-axis and \(\Lambda\) into the lattice \[\Lambda^{\prime}=\bigg{\{}\begin{pmatrix}x\cdot\cos\theta+y\cdot\sin\theta \\ -x\cdot\sin\theta+y\cdot\cos\theta\end{pmatrix}:x=as,\,y=(sn+t)/a,\,s,t\in \mathbb{Z}\bigg{\}}.\] Since the substance of the claim pertains to the situation where \(\alpha(\boldsymbol{v})\neq 0\) is very small, we assume that \(\boldsymbol{v}\in\Lambda^{\prime}\) is any non-zero lattice point with \(\alpha(\boldsymbol{v})\) non-zero, but small. Under this assumption, we have the estimate \[2|\alpha(\boldsymbol{v})|\geqslant|\tan\alpha(\boldsymbol{v})|=\frac{|-x \cdot\sin\theta+y\cdot\cos\theta|}{|x\cdot\cos\theta+y\cdot\sin\theta|}=\frac{ |y-x\cdot\tan\theta|}{|x+y\cdot\tan\theta|}\gg_{a}\frac{|t-s\cdot\big{(}a^{2} \tan\theta-n\big{)}|}{|x+y\cdot\tan\theta|}. \tag{6.1}\] If \(s=0\), we have \(x=0\) and \(y=t/a\neq 0\), and the right-hand side of (6.1) is bounded from below (independently of \(\boldsymbol{v}\)) by \(|a/\tan\theta|>0\). We can therefore suppose that \(s\neq 0\). In this case, the assumption about \(a^{2}\tan\theta-n\) implies, by Liouville's Theorem, that there exists \(C=C(\Lambda,\ell)>0\) such that \[\big{|}t-s\cdot(a^{2}\tan\theta-n)\big{|}\geqslant\frac{C}{|s|^{q-1}}. \tag{6.2}\] Since we have \(|s|^{q-1}\ll_{a}\big{(}a^{2}s^{2}+(sn+t)^{2}/a^{2}\big{)}^{(q-1)/2}=\|\boldsymbol{v} \|^{q-1}\) and \[|x+y\cdot\tan\theta|\leqslant\sqrt{|x+y\cdot\tan\theta|^{2}+|y-x\cdot\tan\theta |^{2}}=\frac{\|\boldsymbol{v}\|}{|\cos\theta|},\] we obtain the claim from (6.1). \(\blacksquare\) Remark.
In anticipation of the lemma below, we use Theorem 6.1 to define \[Q:=\max_{0\leqslant m\leqslant 2N-1}\Big{\{}q\geqslant 1:\begin{array}{c}-n( \mathcal{O}_{\mathbb{K}})+a(\mathcal{O}_{\mathbb{K}})^{2}\tan(\pi m/(2N))\\ \text{ is algebraic of degree }q\end{array}\Big{\}},\] where \(n(\mathcal{O}_{\mathbb{K}})\) and \(a(\mathcal{O}_{\mathbb{K}})\) denote the parameters appearing in the Iwasawa decomposition of the lattice \(\mathcal{O}_{\mathbb{K}}\) from Lemma 2.1. That is, \(Q\) is the largest of the degrees of all the algebraic numbers \(-n(\mathcal{O}_{\mathbb{K}})+a(\mathcal{O}_{\mathbb{K}})^{2}\tan(\pi m/(2N))\), where \(m=0,\ldots,2N-1\). Note that these numbers are indeed algebraic: This follows from Lemma 2.1 and from the fact that \(\tan(\pi m/2N)\) is algebraic, as \(\tan(\pi m)=0\) can be written as a quotient of two polynomials in \(\tan(\pi m/2N)\) with integer coefficients. In particular, there exists a constant \(0<c_{0}<1/4N\), which only depends on \(\mathcal{O}_{\mathbb{K}}\) and \(N\), such that for \(m=0,\ldots,2N-1\), we have \(|\alpha(\mathbf{v})|\geqslant c_{0}/\|\mathbf{v}\|^{Q}\), where \(\alpha(\mathbf{v})\) denotes the angle between \(\ell_{m}\) and \(\mathbf{v}\in\mathcal{O}_{\mathbb{K}}\setminus\ell_{m}\). We can now repeat the argument of Waxman in [20, Section 6] to prove that, at least when \(\alpha:=\sup\text{supp}\,\widehat{f}<1\), the unconditional asymptotic for the one-level density obtained above is in agreement with the prediction of the Katz-Sarnak Density Conjecture. Observe that when \(\alpha<1\), this is the case precisely if \(S_{\text{split}}\) is very small, cf. Lemma 4.6. Lemma 6.2.: _Suppose that \(\alpha<1\). Then \(S_{\text{split}}\ll_{\widehat{f},\varepsilon}K^{\alpha-1+\varepsilon}\)._ Proof.: Note that the character sum appearing in the definition of \(S_{\text{split}}\) satisfies \[\left|\sum_{k=1}^{K}\psi_{k}(I)\right|=\left|\sum_{k=1}^{K}e^{2iNk\theta_{I}} \right|\leqslant\frac{2}{|e^{2iN\theta_{I}}-1|} \tag{6.3}\] whenever \(\theta_{I}\) is not a multiple of \(\pi/N\). Also, note that \(\theta_{I}\) can't even be a multiple of \(\pi/2N\), for in that case, if \(I=\langle\beta\rangle\), we have \((\beta/\overline{\beta})^{N}=\pm 1\), and hence \[\langle\beta\rangle^{N}=\langle\beta^{N}\rangle=\langle\overline{\beta}^{N} \rangle=\langle\overline{\beta}\rangle^{N},\] which forces \(\beta\equiv\overline{\beta}\) (mod \(\mathcal{O}_{\mathbb{K}}^{\times}\)) due to the unique factorization of ideals in \(\mathcal{O}_{\mathbb{K}}\). It therefore follows that \(\langle\beta\rangle\) can't lie over a split prime, which is a contradiction. Now, if \[\frac{n\pi}{2N}<\theta_{I}<\frac{(n+1)\pi}{2N} \tag{6.4}\] for some \(n=0,\ldots,2N-1\), we have \[\frac{2}{|e^{2iN\theta_{I}}-1|}\ll\frac{1}{(n+1)\pi/(2N)-\theta_{I}}+\frac{1} {\theta_{I}-n\pi/(2N)}.\] Since we can always take \(\theta_{I}\in(0,\pi)\) because \(\pm 1\in\mathcal{O}_{\mathbb{K}}^{\times}\) for all possible values of \(d\), we can define \(n(\theta_{I})\) to be the unique \(n\in\{0,\ldots,2N-1\}\) such that (6.4) is satisfied. In combination with (6.3), we therefore obtain \[\left|\sum_{k=1}^{K}\psi_{k}(I)\right|\ll\frac{1}{(n(\theta_{I})+1)\pi/(2N)- \theta_{I}}+\frac{1}{\theta_{I}-n(\theta_{I})\pi/(2N)}. 
\tag{6.5}\] Now, from the definition of \(S_{\rm split}\) and (6.5) it is clear that \[\begin{split} S_{\rm split}&\ll\frac{1}{K\log K} \sum_{n=0}^{2N-1}\sum_{\begin{subarray}{c}I\subset\mathcal{O}_{\mathbb{K}}\\ n(\theta_{I})=n\end{subarray}}\frac{\Lambda(\mathbb{N}(I))}{\sqrt{\mathbb{N}(I)}}\cdot\left| \widehat{f}\Bigg{(}\frac{\log\sqrt{\mathbb{N}(I)}}{\log K}\Bigg{)}\right|\\ &\times\left(\frac{1}{(n+1)\pi/(2N)-\theta_{I}}+\frac{1}{\theta_ {I}-n\pi/(2N)}\right)\end{split} \tag{6.6}\] since all the powers \(p^{m}\) (\(m\geqslant 1\)) of a split prime will appear as norms of suitable ideals \(I\subset\mathcal{O}_{\mathbb{K}}\) with \(\theta_{I}\neq 0\). Since \(\mathcal{O}_{\mathbb{K}}\) is a two-dimensional lattice in \(\mathbb{C}\), the basic idea is that the right-hand side of (6.6) can be estimated rather sharply by replacing it with an integral over certain parts of the ambient complex plane, where a change to polar coordinates will simplify the integrand greatly. To specify these domains of integration, for each \(I\subset\mathcal{O}_{\mathbb{K}}\) we let \[S_{I}\subset\Big{\{}re^{i\theta}\in\mathbb{C}:n(\theta_{I})\pi/2N+c_{0}<\theta <(n(\theta_{I})+1)\pi/2N-c_{0}\Big{\}}\] be a small annulus sector with distance \(\gg 1\) to the origin and containing the generator \(\sqrt{\mathbb{N}(I)}e^{i\theta_{I}}\) of the ideal \(I\) in the direction given by \(\theta_{I}\). Furthermore, we choose these sectors to have the same area and so that \(S_{I}\cap S_{J}=\varnothing\) if \(I\neq J\). Now, we find that \[\begin{split}&\frac{1}{K\log K}\sum_{n=0}^{2N-1}\sum_{ \begin{subarray}{c}I\subset\mathcal{O}_{\mathbb{K}}\\ n(\theta_{I})=n\end{subarray}}\frac{\Lambda(\mathbb{N}(I))}{\sqrt{\mathbb{N}(I)}}\bigg{|} \widehat{f}\Bigg{(}\frac{\log\sqrt{\mathbb{N}(I)}}{\log K}\Bigg{)}\Bigg{|}\bigg{(} \frac{1}{(n+1)\pi/(2N)-\theta_{I}}+\frac{1}{\theta_{I}-n\pi/(2N)}\bigg{)}\\ &\quad\ll\frac{1}{K\log K}\sum_{n=0}^{2N-1}\sum_{\begin{subarray} {c}I\subset\mathcal{O}_{\mathbb{K}}\\ n(\theta_{I})=n\end{subarray}}\int_{S_{I}}\log r\,\bigg{|}\widehat{f}\bigg{(} \frac{\log r}{\log K}\bigg{)}\bigg{|}\bigg{(}\frac{1}{(n+1)\pi/(2N)-\theta_{I }}+\frac{1}{\theta_{I}-n\pi/(2N)}\bigg{)}\;\mathrm{d}r\,\mathrm{d}\theta\\ &\quad\ll\frac{1}{K\log K}\sum_{n=0}^{2N-1}\int_{r=1}^{\infty} \log r\,\bigg{|}\widehat{f}\bigg{(}\frac{\log r}{\log K}\bigg{)}\bigg{|}\int _{n\pi/2N+c_{0}/r^{Q}}^{(n+1)\pi/2N-c_{0}/r^{Q}}\bigg{(}\frac{1}{(n+1)\pi/(2N)-\theta}+\frac{1}{\theta-n\pi/(2N)}\bigg{)}\;\mathrm{d}\theta\,\mathrm{d}r\\ &\quad\ll\frac{1}{K\log K}\int_{r=1}^{\infty}\log r\,\bigg{|}\widehat{f}\bigg{(}\frac{\log r}{\log K}\bigg{)}\bigg{|}\big{(}1+Q\log r\big{)}\;\mathrm{d}r\ll\frac{1}{K\log K}\int_{1}^{K^{\alpha}}(\log r)^{2}\;\mathrm{d}r\ll K^{\alpha-1}\log K,\end{split}\] where we used that the inner \(\theta\)-integral evaluates to \(2\log\big{(}(\pi/(2N)-c_{0}/r^{Q})/(c_{0}/r^{Q})\big{)}\ll 1+Q\log r\), and that \(\widehat{f}\) is supported in \([-\alpha,\alpha]\), so that only \(r\leqslant K^{\alpha}\) contributes. This is \(\ll_{\widehat{f},\varepsilon}K^{\alpha-1+\varepsilon}\), as claimed. \(\blacksquare\) It would be satisfactory to give a reason why \(S_{\rm split}\) and \(S_{J}\) would have anything to do with each other, so that the approximate equality given by Lemma 6.2 is not just "accidental." It seems difficult to give any such reason, considering the particularly involved arithmetic quality of the term \(S_{J}\), and we have not succeeded in this matter.
However, we can make the following observation, which may or may not be relevant for further progress: The character sum appearing in \(S_{\rm split}\) is expressible in terms of the _Dirichlet kernel_ \[D_{K}(x)=\frac{\sin((K+1/2)x)}{\sin(x/2)}=1+2\sum_{k=1}^{K}\cos(kx).\] More precisely, we have the following result. Proposition 6.3.: _Suppose \(\mathfrak{q}_{1}\) and \(\mathfrak{q}_{2}\) are different prime ideals in \(\mathcal{O}_{\mathbb{K}}\) lying over a rational prime \(p\), and let \(n\geqslant 1.\) Then we have_ \[\sum_{k=1}^{K}\Bigl{(}\psi_{k}(\mathfrak{q}_{1}^{n})+\psi_{k}( \mathfrak{q}_{2}^{n})\Bigr{)}=-1+D_{K}\bigl{(}2Nn\theta_{\mathfrak{q}_{1}} \bigr{)},\] _where \(\theta_{\mathfrak{q}_{1}}\in(0,\pi)\) denotes the argument of a generator of \(\mathfrak{q}_{1}\) in the upper half-plane._ Proof.: Suppose that \(z=a+ib\) is any generator of \(\mathfrak{q}_{1}\). Since \(\mathcal{O}_{\mathbb{K}}^{\times}=\{\pm 1\}\) for all the values of \(d\) under consideration, we can assume that \(b>0\). Moreover, since the generator of the ideal \(\mathfrak{q}_{1}\mathfrak{q}_{2}\) is a rational prime, the conjugate \(\overline{z}=a-ib\) must be a generator of \(\mathfrak{q}_{2}\). Therefore, \[\psi_{k}(\mathfrak{q}_{1}^{n})+\psi_{k}(\mathfrak{q}_{2}^{n}) =\Bigl{(}\frac{z}{\overline{z}}\Bigr{)}^{Nnk}+\left(\frac{ \overline{z}}{z}\right)^{Nnk}\] \[=2\cos\Bigl{(}iNnk\log\left(z/\,\overline{z}\right)\Bigr{)}\] \[=2\cos\biggl{(}iNnk\log\frac{a/b+i}{a/b-i}\biggr{)}\] \[=2\cos\biggl{(}2Nnk\cdot\frac{1}{2i}\log\frac{a/b-i}{a/b+i} \biggr{)}\] \[=2\cos\bigl{(}2Nnk\cdot\arctan(b/a)\bigr{)},\] where we used that cosine is even, together with the identity \[\arctan\frac{1}{x}=-\frac{1}{2i}\log\frac{x-i}{x+i},\] valid for all \(x\neq 0\). Moreover, since the angle \(\theta_{\mathfrak{q}_{1}}\) between \(1\) and \(a+ib\) is given by \[\theta_{\mathfrak{q}_{1}}=\begin{cases}\arctan(b/a)&\text{if }a>0,\\ \arctan(b/a)+\pi&\text{if }a<0,\end{cases}\] the computation above and the periodicity of cosine show that \[\psi_{k}(\mathfrak{q}_{1}^{n})+\psi_{k}(\mathfrak{q}_{2}^{n})=2\cos\big{(}2Nnk \theta_{\mathfrak{q}_{1}}\big{)}.\] It now follows that \[\sum_{k=1}^{K}(\psi_{k}(\mathfrak{q}_{1}^{n})+\psi_{k}(\mathfrak{q}_{2}^{n}))=2 \hskip-2.845276pt\sum_{k=1}^{K}\cos\big{(}2Nnk\theta_{\mathfrak{q}_{1}}\big{)} =-1+D_{K}\big{(}2Nn\theta_{\mathfrak{q}_{1}}\big{)},\] which completes the proof. \(\blacksquare\) Making use of this observation, we can then rewrite \(S_{\rm split}\) as \[S_{\rm split} =\frac{1}{K\log K}\sum_{(-d/p)=1}\ \sum_{n\geqslant 1}\frac{\log p }{p^{n/2}}\widehat{f}\bigg{(}\frac{n\log p}{2\log K}\bigg{)}\] \[\qquad-\frac{1}{\log K}\sum_{(-d/p)=1}\ \sum_{n\geqslant 1}\frac{ \log p}{p^{n/2}}\widehat{f}\bigg{(}\frac{n\log p}{2\log K}\bigg{)}\frac{D_{K} \big{(}2Nn\theta_{p}\big{)}}{K},\] where \(\theta_{p}\in(0,\pi)\) denotes the argument of one of the generators of either of the prime ideals lying over \(p\). Thus, the primary benefit of Proposition 6.3 is that it allows one to express the contribution of the split primes as a sum of two terms, of which the first can be understood via partial summation, while the second involves a well-studied function in normalized form (due to the factor \(1/K\)). At any rate, the presence of the angles \(\theta_{p}\) in the last contribution seems to be quite a difficult obstacle. A better understanding of these angles is likely a crucial step if one wishes to understand the relation \(S_{\rm split}\approx S_{J}\) on a deeper level.
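As a quick numerical sanity check (ours, not part of the original argument), both the Dirichlet kernel identity above and the geometric-sum bound of type (6.3) can be verified directly; a minimal Python sketch, with arbitrarily chosen parameters:

```
import numpy as np

def dirichlet_closed(K, x):
    # Closed form: D_K(x) = sin((K + 1/2) x) / sin(x / 2).
    return np.sin((K + 0.5) * x) / np.sin(x / 2)

def dirichlet_sum(K, x):
    # Cosine-sum form: D_K(x) = 1 + 2 * sum_{k=1}^{K} cos(k x).
    k = np.arange(1, K + 1)
    return 1.0 + 2.0 * np.cos(np.outer(x, k)).sum(axis=1)

x = np.linspace(0.05, np.pi - 0.05, 2000)   # keep away from the pole at x = 0
for K in (5, 50, 500):
    assert np.allclose(dirichlet_closed(K, x), dirichlet_sum(K, x))
    # Bound of type (6.3): |sum_{k=1}^K e^{ikx}| <= 2 / |e^{ix} - 1|.
    geometric = np.abs(np.exp(1j * np.outer(x, np.arange(1, K + 1))).sum(axis=1))
    assert np.all(geometric <= 2.0 / np.abs(np.exp(1j * x) - 1.0) + 1e-9)
print("Dirichlet kernel identity and geometric-sum bound verified numerically.")
```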
2308.00122
DAVIS: High-Quality Audio-Visual Separation with Generative Diffusion Models
We propose DAVIS, a Diffusion model-based Audio-VIsual Separation framework that solves the audio-visual sound source separation task in a generative manner. While existing discriminative methods that perform mask regression have made remarkable progress in this field, they face limitations in capturing the complex data distribution required for high-quality separation of sounds from diverse categories. In contrast, DAVIS leverages a generative diffusion model and a Separation U-Net to synthesize separated magnitudes starting from Gaussian noise, conditioned on both the audio mixture and the visual footage. With its generative objective, DAVIS is better suited to achieving the goal of high-quality sound separation across diverse categories. We compare DAVIS to existing state-of-the-art discriminative audio-visual separation methods on the domain-specific MUSIC dataset and the open-domain AVE dataset, and the results show that DAVIS outperforms other methods in separation quality, demonstrating the advantages of our framework for tackling the audio-visual source separation task.
Chao Huang, Susan Liang, Yapeng Tian, Anurag Kumar, Chenliang Xu
2023-07-31T19:41:49Z
http://arxiv.org/abs/2308.00122v1
# DAVIS: High-Quality Audio-Visual Separation with Generative Diffusion Models

###### Abstract

We propose DAVIS, a Diffusion model-based **A**udio-**VI**sual **S**eparation framework that solves the audio-visual sound source separation task in a generative manner. While existing discriminative methods that perform mask regression have made remarkable progress in this field, they face limitations in capturing the complex data distribution required for high-quality separation of sounds from diverse categories. In contrast, DAVIS leverages a generative diffusion model and a Separation U-Net to synthesize separated magnitudes starting from Gaussian noise, conditioned on both the audio mixture and the visual footage. With its generative objective, DAVIS is better suited to achieving the goal of high-quality sound separation across diverse categories. We compare DAVIS to existing state-of-the-art discriminative audio-visual separation methods on the domain-specific MUSIC dataset and the open-domain AVE dataset, and the results show that DAVIS outperforms other methods in separation quality, demonstrating the advantages of our framework for tackling the audio-visual source separation task.

## 1 Introduction

Visually-guided sound source separation, also referred to as audio-visual separation, is a pivotal task for assessing a machine perception system's ability to integrate multisensory signals. The primary goal is to isolate individual sounds from a complex audio mixture by utilizing visual cues about the objects that are producing the sounds, e.g., separating the "barking" sound from the mixture by querying the "dog" object. To achieve human-like intelligence, an effective separation model should be capable of handling a _diverse_ range of sounds and producing _high-quality_ separations that can deliver a realistic auditory experience. The community has dedicated significant efforts to this task, and existing methods [1; 2; 3; 4; 5] have made extensive attempts to tackle this problem, such as developing more powerful separation frameworks [1; 2; 4], proposing more effective training pipelines [5], and incorporating additional visual cues [3] to enhance the separation performance. For optimization, these approaches usually take mask regression [1] or spectrogram reconstruction [6] as training objectives. While these methods have shown promising separation performance in specific domains, such as musical instrument sounds, they are not yet satisfactory in dealing with open-domain sounds, where background noise and off-screen sounds are prevalent. These sounds produce a complicated mosaic of time and frequency patterns, posing a significant challenge to achieving high-quality separation. Thus, a natural question arises: _is there an effective approach to model these complex audio data distributions and produce high-quality separated sounds?_ We answer the question by introducing a generative framework for audio-visual separation. A new class of generative models called denoising diffusion probabilistic models (DDPMs) [7; 8; 9], also known as diffusion models, has emerged recently and demonstrated remarkable abilities in generating diverse and high-quality images [10] and audio [11]. The impressive capabilities of generative diffusion models in capturing complex data distributions inspire us to explore their potential for enhancing audio-visual separation.
Unlike discriminative modeling, we believe that generative diffusion models can effectively approximate more intricate data distributions, allowing us to handle open-domain time and frequency patterns and leading to superior separation results. To this end, we present DAVIS, a novel framework for audio-visual separation that is built upon a generative diffusion model. Unlike typical discriminative methods that predict a mask representing the separated sound from the input mixture, DAVIS approaches the separation task as a conditional generation process. Specifically, our method incorporates a T-step diffusion and reverse process [7; 10; 8]: during the training stage, Gaussian noise controlled by a variance schedule [12] is added to the unmixed sound at each diffusion step. In the reverse process, our method starts from a standard Gaussian distribution, and an effective Separation U-Net is proposed to estimate the noise added at each diffusion step, iteratively generating the separated magnitude with guidance from the mixture and the visual footage. The Separation U-Net comprises an encoder-decoder structure with enabled skip connections. To capture both local time-frequency patterns and long-range time dependencies, we introduce a CA block consisting of two ResNet blocks for capturing local patterns and a Time-Attention block for learning long-range time correlations. Furthermore, to enhance audio-visual association learning, we devise a Feature Interaction module to facilitate interactions between audio and visual features and inject visual cues into the separation. Experiments on the MUSIC [1] and AVE [13] datasets demonstrate that DAVIS outperforms the state-of-the-art methods in terms of separation quality. Our contributions are summarized as follows:

* We are the first study, to the best of our knowledge, to approach the audio-visual separation task as a conditional generation process and solve it using a diffusion model.
* We design a Separation U-Net, which incorporates CA blocks and a Feature Interaction module to capture the audio-visual association effectively.
* Our framework surpasses previous methods on both specific and open-domain sound datasets, highlighting the benefits of solving audio-visual separation through a generative approach.

## 2 Related Work

**Audio-Visual Sound Source Separation.** In this section, our focus is on modern audio-visual sound source separation approaches, while acknowledging the prolonged research efforts dedicated to sound source separation in signal processing. Recent deep learning-based audio-visual sound source separation methods have been applied to different categories of audio, such as speech signals [14; 6; 15; 16], musical instrument sounds [1; 3; 5; 2; 17; 4], and universal sound sources [18; 19; 20; 21; 22; 23]. These methods typically employ a learning regime that involves mixing two audio streams from different videos to provide supervised training signals. A sound separation network, often implemented as a U-Net, is then used for mask regression [1; 2; 5; 24] conditioned on associated visual representations. In recent years, research in this area has shifted from solving domain-specific sound source separation to addressing the challenge of open-domain sounds [20; 21; 19]. However, such methods require text queries [20] or motion estimation [19]. In this paper, we propose a novel audio-visual separation approach that can handle both specific and open-domain sound sources.
**Diffusion Models.** Diffusion models [7; 25; 26] fall under the category of deep generative models that start with a sample from a random distribution and gradually restore the data sample through a denoising process. Recently, diffusion models have exhibited remarkable performance across various domains, including computer vision [10; 27; 28; 29; 30; 31; 32; 33; 34], natural language processing [35; 36; 37; 38], and audio applications [39; 40; 41; 42; 43]. While diffusion models have been successfully employed for single-modality generation, their potential for audio-visual tasks remains largely unexplored. For instance, only recently has MM-diffusion [44] proposed the simultaneous generation of videos and audio. Furthermore, there has been a growing interest in employing diffusion models for discriminative tasks. Some pioneering works have explored the application of diffusion models to image segmentation [45; 46; 47] and object detection [48]. However, despite significant interest in this direction, there have been no prior successful attempts to apply generative diffusion models to audio-visual scene understanding, which has notably lagged behind the progress in visual perception tasks. To the best of our knowledge, this paper presents the first work that adopts a diffusion model to learn audio-visual associations for audio-visual sound source separation.

## 3 Method

In this section, we introduce DAVIS, our novel diffusion model-based audio-visual separation framework designed for achieving high-quality separation results. We begin by providing a brief recap of the preliminary knowledge of diffusion models in Sec. 3.1. Next, we present our proposed Separation U-Net architecture, which effectively captures the audio-visual association through the generation process, in Sec. 3.3. Finally, we discuss the training and inference designs in Sec. 3.4 and Sec. 3.5, respectively.

### Preliminaries

We introduce the concept of diffusion models, which serves to illustrate the pipeline of our framework. A diffusion model consists of a forward and a reverse process. The forward process is defined as a Markovian chain that gradually adds noise to the data sample \(x_{0}\) according to a variance schedule \(\beta_{1},...,\beta_{T}\): \[q(x_{1:T}|x_{0})=\prod_{t=1}^{T}q(x_{t}|x_{t-1}), \tag{1}\] \[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{t-1},\beta_{t} \mathbf{I}), \tag{2}\] where \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\). Note that the variance schedule is also fixed during the reverse process. As the total number of steps \(T\) goes to infinity, the diffusion process finally leads to pure noise, _i.e._, the distribution \(p(x_{T})\) approaches \(\mathcal{N}(x_{T};\mathbf{0},\mathbf{I})\). The reverse process aims to recover samples from the Gaussian distribution by gradually removing the noise; it is a Markovian chain parameterized by \(\theta\): \[p_{\theta}(x_{0:T})=p_{\theta}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}), \tag{3}\] where each denoising step, which removes the noise \(\epsilon\) added in the forward process, is modeled as \[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\boldsymbol{\mu_{\theta}}(x_{t },t),\boldsymbol{\Sigma_{\theta}}(x_{t},t)). \tag{4}\] Note that we set the variances \(\boldsymbol{\Sigma_{\theta}}(x_{t},t)=\tilde{\beta}_{t}\mathbf{I}\) to untrained constants, while \(\boldsymbol{\mu_{\theta}}(x_{t},t)\) is implemented as a neural network.
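To make the preliminaries concrete, below is a minimal PyTorch sketch (ours, not code from the paper) of the forward process in Eqs. (1)-(2), using the standard closed form \(q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})\mathbf{I})\) obtained by composing the per-step kernels. The linear \(\beta\) schedule is an illustrative assumption, and the commented training step corresponds to the simplified objective given below in Eq. (6).

```
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear schedule (illustrative)
alphas = 1.0 - betas                        # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)   # alpha_bar_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)."""
    if eps is None:
        eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over the batch
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps, eps

# Hypothetical training step for the noise-prediction objective (Eq. 6 below);
# `model`, `x0`, and the conditioning inputs are placeholders:
# t = torch.randint(0, T, (x0.shape[0],))
# x_t, eps = q_sample(x0, t)
# loss = (eps - model(x_t, x_mix, v, t)).pow(2).mean()  # norm choice assumed (MSE)
```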
To adapt the diffusion process into a conditional one, we include the conditional context \(\boldsymbol{c}\) as additional network inputs, which modifies Eq. 4 as follows: \[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\boldsymbol{\mu_{\theta}}(x_{t },t,\boldsymbol{c}),\tilde{\beta}_{t}\mathbf{I}),\quad\text{where }\tilde{\beta}_{t}:=\frac{1-\bar{ \alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}. \tag{5}\] We follow [7] to use a simplified training objective: \[L_{simple}(\theta)=\mathbb{E}_{t,x_{0},\epsilon}[||\epsilon-\epsilon_{ \theta}(\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon, \boldsymbol{c},t)||], \tag{6}\] where \(\epsilon_{\theta}\) represents a function approximator used to predict \(\epsilon\), while \(t\) denotes a value sampled uniformly from 1 to \(T\). The loss function computes a weighted variational bound that highlights different aspects of the reconstruction.

### Task Setup and Method Overview

Given an unlabeled video clip \(V\), we can extract an audio-visual pair \((a,v)\), where \(a\) and \(v\) denote the audio and visual stream, respectively. In real-world scenarios, the audio stream can be a mixture of \(N\) individual sound sources, denoted as \(a=\sum_{i=1}^{N}s_{i}\), where each source \(s_{i}\) can be of various categories. The primary goal of the visually-guided sound source separation task is to utilize visual cues from \(v\) to effectively separate \(a\) into its constituent sources \(s_{i}\), for \(i\in\{1,2,...,N\}\). Since no labels are provided to distinguish the sound sources \(s_{i}\), prior works [1; 5; 24] have commonly used a "mix and separate" strategy, which involves mixing audio streams from two different videos and manually creating the mixture: \(a^{mix}=a^{(1)}+a^{(2)}\). Furthermore, the time series \(a\) is usually transformed into a magnitude spectrogram by the short-time Fourier transform (STFT): \(x=\mathbf{STFT}(a)\in\mathbb{R}^{T\times F}\), allowing for manipulations in the 2D-like time-frequency domain, where \(F\) and \(T\) are the numbers of frequency bins and time frames, respectively. Consequently, the goal of training is to learn a separation network capable of mapping \((x^{mix},v^{(1)})\to x^{(1)}\). For simplicity, we will omit the video index notation in the subsequent sections 1. Footnote 1: In this paper, superscripts denote video indices, while subscripts refer to diffusion timesteps. In contrast to discriminative approaches that perform the mapping through regression, our proposed DAVIS framework is built on a diffusion model with a T-step diffusion and reverse process. The diffusion process is determined by a fixed variance schedule, as described in Eq. (1) and Eq. (2), which gradually adds noise to the magnitude spectrogram \(x_{0}\) and converts it to the latent \(x_{T}\). As depicted in Fig. 1, the reverse process (according to Eq. (3) and Eq. (5)) of DAVIS is specified by our proposed separation network \(\epsilon_{\theta}\). This reverse process iteratively denoises a latent variable \(x_{T}\), sampled from a standard Gaussian distribution, to obtain a separated magnitude conditioned on the magnitude of the input sound mixture \(x^{mix}\) and the visual footage \(v\). Consequently, the objective of the separation network \(\epsilon_{\theta}\) is to predict the noise \(\epsilon\) added at each diffusion timestep during the forward process.

### Separation U-Net

Previous works [2; 4] often utilize a U-Net [49]-like architecture for separation network designs.
This choice is attributed to the U-Net's effectiveness in capturing multi-level feature representations and producing separated magnitudes with the same shape as the inputs. Exploiting the grid-like nature of magnitude spectrograms, existing methods employ convolution-based U-Nets for separation and concatenate audio and visual features directly at the bottleneck to incorporate visual cues. While these approaches achieve good separation performance, we argue that they may be inadequate for real-world sound mixtures for two key reasons. Firstly, similar frequency patterns can occur in temporally distant frames, and distinct frequency patterns can mix within a single time frame. Such occurrences require the network to capture both local patterns and long-range dependencies across the time and frequency dimensions, where pure convolution may fall short. Motivated by this, we propose a novel Separation U-Net (depicted in Fig. 1) that incorporates Convolution-Attention (CA) blocks (elaborated in Fig. 2). These blocks combine sequential convolution and time-attention mechanisms to learn both local and global time-frequency associations. Secondly, learning representative audio-visual associations is crucial for the task. While existing methods directly concatenate visual features and audio embeddings at the bottleneck and feed them to the decoder network for mask regression, such approaches lack the ability to foster further interactions between audio and visual features. To address this limitation, we introduce an audio-visual feature interaction module, which enhances association learning by enabling interactions between the audio and visual modalities.

Figure 1: **Overview of the DAVIS framework. Our objective is to synthesize the separated sound \(x_{0}\) by leveraging an audio mixture \(x^{mix}\) and an image \(v\), while taking into account the diffusion timestep \(t\). Firstly, we sample a latent variable \(x_{T}\) from a standard Gaussian distribution. Next, we encode the image \(v\) and the timestep \(t\) into the embedding space, resulting in feature vectors \(\mathbf{v}\) and \(\mathbf{t}\). These features serve as conditions in the Separation U-Net \(\epsilon_{\theta}\), which performs iterative denoising on \(x_{T}\) to obtain the separated sound \(x_{0}\). Specifically, \(\mathbf{t}\) is passed to all the modules within \(\epsilon_{\theta}\), while \(\mathbf{v}\) is only utilized in the Feature Interaction Module (Sec. 3.3) to enhance audio-visual association learning.**

**Timestep Embedding.** In a diffusion model, the timestep embedding serves to inform the model about the current position of the input within the Markovian chain. As shown in Fig. 1, the diffusion time \(t\) is specified by the Transformer sinusoidal position embedding [50] and further transformed by an MLP, which is then passed to each CA block as a condition.

**Encoder/Decoder Designs.** Our proposed Separation U-Net architecture comprises an encoder and a decoder, with an audio-visual feature interaction module in between. Both the encoder and decoder consist of four CA Blocks, and skip connections are utilized to facilitate information flow. Initially, we concatenate the latent variable \(x_{T}\) with the mixture \(x^{mix}\) along the channel dimension and employ a 1\(\times\)1 convolution to project it to the feature space. As depicted in Fig. 2, each CA block in the encoder consists of two ResNet blocks for extracting local time-frequency patterns and one Time-Attention block for capturing long-range time dependencies. Following this, a downsample layer (or upsample layer for the decoder) with a scale factor of 2 is used. Specifically, we construct the ResNet block using WeightStandardized 2D convolution [51] along with GroupNormalization [52] and SiLU activation [53]. To incorporate the time embedding \(\mathbf{t}\) as a conditioning factor, we employ an MLP to generate \(\mathbf{t}\)-dependent scaling and shifting vectors for feature-wise affine transformation [54] before the activation layer. A Time-Attention block is appended after the ResNet blocks to capture long-range time dependencies. For implementation, we adopt the design proposed by Wang et al. [55], which includes Pre-Layer Normalization and Multi-Head Attention along the time dimension within the residual connection. The downscale and upscale layers are implemented using 2D convolutions with a stride of 2. As a result, we obtain audio feature maps \(\mathbf{f_{a}}\in\mathbb{R}^{C\times\frac{T}{16}\times\frac{F}{16}}\) at the bottleneck, where \(C\) represents the number of channels. Additionally, we include a 1\(\times\)1 convolution to convert the decoder output into a magnitude.

Figure 2: **Illustrations of the CA Block and the Feature Interaction Module. (a) Our CA block operates by taking audio feature maps and a time embedding \(\mathbf{t}\) as inputs. Each sub-block, except the up/down sampling layer, is conditioned on \(\mathbf{t}\). It consists of two groups of convolutions within each ResNet block to capture local time-frequency patterns, while the Time-Attention block captures long-range dependencies along the time dimension. (b) The Feature Interaction Module functions by replicating and concatenating \(\mathbf{v}\) with \(\mathbf{f_{a}}\). Two identical ResNet blocks, as described in (a), are used to process the concatenated features. However, in contrast to (a), the attention block here operates on both the frequency and time dimensions instead of solely aggregating temporal contexts.**

**Audio-Visual Feature Interaction Module.** The key to achieving successful audio-visual separation lies in effectively utilizing visual information to separate visually-indicated sound sources. Therefore, the interaction between the audio and visual modalities at the feature level becomes crucial. Existing approaches typically concatenate audio and visual features at the bottleneck [2; 4] and pass them to the decoder for further fusion. In this paper, we propose a novel audio-visual feature interaction module to enhance this capability. Concretely, given the visual footage \(v\), we first use a pre-trained ResNet-18 [56] visual backbone \(\mathbf{Enc_{v}}\) to extract the global embedding \(\mathbf{v}:=\mathbf{Enc_{v}}(v)\in\mathbb{R}^{C}\). Since the number of channels in the visual embedding \(\mathbf{v}\) matches that of \(\mathbf{f_{a}}\), we can omit the modality notation. Next, we tile \(\mathbf{v}\) to match the shape of \(\mathbf{f_{a}}\), resulting in visual feature maps \(\mathbf{f_{v}}\). Subsequently, the audio and visual feature maps are concatenated and fed into the feature interaction module (FIM): \(\hat{\mathbf{f}}_{\mathbf{a}}:=\mathbf{FIM}([\mathbf{f}_{\mathbf{a}},\mathbf{f}_{\mathbf{v}}])\), where \(\hat{\mathbf{f}}_{\mathbf{a}}\in\mathbb{R}^{C\times\frac{T}{16}\times\frac{F}{16}}\). The details of the **FIM** module are illustrated in Fig. 2(b), encompassing ResNet blocks and a time-frequency attention block that facilitate the establishment of audio-visual associations in both local and global regions.
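The tiling-and-concatenation data flow of the Feature Interaction Module can be sketched as follows: a minimal PyTorch illustration of the description above, in which the ResNet and time-frequency attention sub-blocks are replaced by simple stand-ins, so it shows the data flow rather than the paper's exact implementation.

```
import torch
import torch.nn as nn

class FeatureInteraction(nn.Module):
    """Tile a global visual embedding over the audio feature map, concatenate,
    and fuse. The sub-blocks are simplified stand-ins for the paper's ResNet
    and time-frequency attention blocks."""
    def __init__(self, channels=512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.GroupNorm(8, channels),
            nn.SiLU(),
        )
        # Attention over the flattened (T/16 x F/16) grid, i.e. over both
        # the time and frequency dimensions.
        self.attn = nn.MultiheadAttention(channels, num_heads=8, batch_first=True)

    def forward(self, f_a, v):
        # f_a: (B, C, T/16, F/16) audio features; v: (B, C) global visual embedding.
        B, C, Tq, Fq = f_a.shape
        f_v = v.view(B, C, 1, 1).expand(-1, -1, Tq, Fq)    # tile v to match f_a
        x = self.fuse(torch.cat([f_a, f_v], dim=1))        # concatenate on channels
        tokens = x.flatten(2).transpose(1, 2)              # (B, T*F, C)
        out, _ = self.attn(tokens, tokens, tokens)         # time-frequency attention
        out = (tokens + out).transpose(1, 2).view(B, C, Tq, Fq)  # residual connection
        return out

# Usage sketch: f_a = torch.randn(2, 512, 16, 16); v = torch.randn(2, 512)
# fused = FeatureInteraction()(f_a, v)   # -> (2, 512, 16, 16)
```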
### Training

```
1: Input: A dataset \(D\) that contains audio-visual pairs \(\{(a^{(k)},v^{(k)})\}_{k=1}^{K}\), total diffusion step \(T\)
2: Initialize: randomly initialize Separation U-Net \(\epsilon_{\theta}\) and pre-trained visual encoder \(\mathbf{Enc_{v}}\)
3: repeat
4:   Sample \((a^{(1)},v^{(1)})\) and \((a^{(2)},v^{(2)})\sim D\)
5:   Mix and compute \(x^{mix},x^{(1)}\)
6:   Scale \(x=\log_{e}(1+x)\cdot\sigma\) and clip \(x^{mix},x^{(1)}\) to [0,1]
7:   Encode visual frame \(v^{(1)}\) as \(\mathbf{v^{(1)}}:=\mathbf{Enc_{v}}(v^{(1)})\)
8:   Sample \(\epsilon\sim\mathcal{N}(\textbf{0},\textbf{I})\), and \(t\sim\text{Uniform}(1,...,T)\)
9:   Take gradient step on
10:    \(\nabla_{\theta}||\epsilon-\epsilon_{\theta}(x_{t}^{(1)},x^{mix},\mathbf{v}^{(1)},t)||,\quad x_{t}^{(1)}=\sqrt{\bar{\alpha}_{t}}x^{(1)}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\)
11: until converged
```
**Algorithm 1** Training

Algorithm 1 depicts the complete training procedure of our DAVIS framework. Given the sampled audio-visual pairs from the dataset, we first adopt the "mix and separate" strategy and compute the magnitudes \(x^{(1)},x^{(2)},x^{mix}\) with the STFT. **Data Scaling:** To align with the frequency decomposition of the human auditory system, we apply a logarithmic transformation to the magnitude spectrogram, converting it to a log-frequency scale. Additionally, we ensure consistent scaling of the log-frequency magnitudes by multiplying them by a scale factor \(\sigma\) and clipping the values to fall within the range [0, 1]. The visual frames are encoded into embeddings \(\mathbf{v^{(1)}}\) and \(\mathbf{v^{(2)}}\). Taking video (1) as an example, we sample \(\epsilon\) from a standard Gaussian distribution and \(t\) from the set \(\{1,...,T\}\). Then, we input \(x_{t}^{(1)},x^{mix},\mathbf{v}^{(1)},t\) to the Separation U-Net \(\epsilon_{\theta}\) and optimize the network by taking a gradient step on Eq. (6). In practice, we use both videos (1) and (2) for optimization; therefore, the final loss term is formulated as \(\mathcal{L}=\mathcal{L}_{simple}^{(1)}(\theta)+\mathcal{L}_{simple}^{(2)}(\theta)\).

### Inference

```
1: Input: Audio mixture \(a^{mix}\) and the query visual frame \(v\), total diffusion step \(T\)
2: Sample \(x_{T}\sim\mathcal{N}(\textbf{0},\textbf{I})\)
3: Compute \(x^{mix}:=\mathbf{STFT}(a^{mix})\)
4: Encode visual frame \(\mathbf{v}:=\mathbf{Enc_{v}}(v)\)
5: for \(t=T,...,1\) do
6:   Sample \(z\sim\mathcal{N}(\textbf{0},\textbf{I})\) if \(t>1\), else \(z=0\)
7:   Compute \(x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\big{(}x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(x_{t},x^{mix},\mathbf{v},t)\big{)}+\sqrt{\tilde{\beta}_{t}}z\)
8: end for
9: return \(e^{x_{0}/\sigma}-1\)
```
**Algorithm 2** Inference

As illustrated in Algorithm 2, our inference process starts from a sampled latent variable \(x_{T}\) and takes the mixture \(x^{mix}\) and the visual frame embedding \(\mathbf{v}\) as conditions to produce the separated magnitude \(x_{0}\) through \(T\) iterations. Finally, the output is rescaled to the original value range.

## 4 Experiments

### Experimental Setup

**Datasets.2** Our model demonstrates the ability to handle both specific and open-domain sound separation. To evaluate our approach, we use the MUSIC [1] and AVE [13] datasets, which cover musical instruments and open-domain sounds. The evaluation settings are described in detail below: Footnote 2: All collection, processing, and use of data obtained from datasets was conducted by the University of Rochester.
* **MUSIC**: We evaluate our proposed method on the widely-used MUSIC [1] dataset, which includes 11 musical instrument categories: accordion, acoustic guitar, cello, clarinet, erhu, flute, saxophone, trumpet, tuba, violin, and xylophone. All the videos are clean solos, and the sounding instruments are usually visible. We follow CCoL [5] and use the same train/val/test splits, resulting in a total of 468/26/26 videos across various instrument categories.
* **AVE**: In addition to the MUSIC dataset, we also evaluate our method on the Audio-Visual Event (AVE) dataset [13]. This dataset contains 4143 10-second videos covering 28 diverse sound categories, such as _Church Bell_, _Barking_, and _Frying (Food)_, among others. The AVE dataset presents greater challenges, as the audio in these videos may not span the entire duration and can be noisy, including off-screen sounds (e.g., human speech) and background noise. We conduct training and evaluation on this demanding dataset using the original train/val/test splits, consisting of 3339/402/402 videos, respectively.

**Baselines.** To the best of our knowledge, we are the first to adopt a generative model for the audio-visual source separation task. Thus, we compare DAVIS against the following state-of-the-art discriminative methods: (i) _Sound of Pixels_ (SoP) [1], which learns ratio mask predictions with a 1-frame-based model; (ii) _Co-Separation_ (CoSep) [2], which takes a single visual object as the condition to perform mask regression; (iii) _Cyclic Co-Learn_ (CCoL) [5], which jointly trains the model on the sounding object visual grounding and visually-guided sound source separation tasks. Since the AVE dataset lacks bounding box annotations for detected objects, we use the entire image for CoSep and CCoL. For all the comparative methods, we use the authors' publicly available code. Several other recent works [22; 57] have achieved impressive separation results. However, due to the unavailability of their source code, we do not include them in our comparative analysis.

**Evaluation Metrics.** To quantitatively evaluate the audio-visual sound source separation performance, we use the standard metrics [1; 5; 2], namely: Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR). We adopt the widely-used mir_eval library [58] to report the standard metrics. Note that SDR and SIR evaluate the accuracy of source separation, whereas SAR specifically measures the absence of artifacts [2]. Consequently, SAR can be high even if the separation performance is poor in terms of accurately separating the sources.

**Implementation Details.** In our experimental setup, we down-sample the audio signals to 11 kHz, and the video frame rate is set to 1 fps. For the MUSIC dataset, each video is approximately 6 seconds long, and we randomly select 3 frames per video. As for the AVE dataset, we use the entire 10-second audio as input and select only 1 frame that falls within the audio-visual event boundary [13] for training the model. This selection ensures that the auditory object occurs within the visual frame, facilitating audio-visual association learning. During training, the frames are first resized to 256\(\times\)256 and then randomly cropped to \(224\times 224\).
We set the total time step \(T=1000\) to train our DAVIS model. During inference, all the frames are directly resized to the desired size without cropping. To accelerate the separation process, we use DDIM [9] with a sampling step of 25. The audio waveform is transformed into a spectrogram with a Hann window of size 1022 and a hop length of 256. The obtained magnitude spectrogram is subsequently resampled to \(256\times 256\) to feed into the separation network. We set the number of audio and visual feature channels \(C\) to 512 and empirically choose the scale factor \(\sigma=0.15\). Our model is trained with the Adam optimizer, with a learning rate of \(10^{-4}\). The training is conducted on a single A6000 GPU for 200 epochs with a batch size of 10.

### Comparisons with State-of-the-art

To evaluate the effectiveness of our method, we present separation results by comparing DAVIS with state-of-the-art approaches on the MUSIC and AVE datasets, as depicted in Tab. 1.

| **Methods** | **Output** | MUSIC [1] SDR | MUSIC [1] SIR | MUSIC [1] SAR | AVE [13] SDR | AVE [13] SIR | AVE [13] SAR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SoP [1] | Mask | 3.42 | 4.98 | - | 0.46 | 4.17 | 12.08 |
| CoSep [2] | Mask | 2.04 | 6.21 | - | -1.33 | 2.54 | 5.77 |
| CCoL [5] | Mask | 7.18 | 12.55 | 11.09 | 1.77 | 3.25 | 22.52 |
| DAVIS | Mag. | 9.29 | 14.19 | 15.76 | 1.78 | 6.61 | 7.76 |

Table 1: Comparisons with different discriminative audio-visual separation approaches on the MUSIC and AVE test sets. The "Output" category highlights the distinction between our method (magnitude synthesis) and the others (mask regression). We report the SDR, SIR, and SAR metrics; our results appear in the last row (highlighted in gray in the original). Note that some of the highest SAR numbers (marked in red in the original) actually indicate poor performance (Sec. 4.2).

Indeed, SDR is a reliable metric for measuring source separation accuracy in clean datasets like MUSIC, as it does not consider interference. On the other hand, SIR is useful for evaluating source separation in multi-source scenarios, which suits open-domain datasets like AVE. Our results highlight the advantages of utilizing generative modeling for audio-visual separation. DAVIS consistently outperforms previous approaches across various evaluation categories, achieving up to a 2.1 dB improvement on the SDR scale for the MUSIC dataset and a 3.4 dB improvement on the SIR scale for the AVE dataset, surpassing the performance of the next best approach, CCoL. These results clearly demonstrate the versatility of our method across diverse datasets with varying visual and audio contexts. Among the competing techniques, we observe that CCoL and SoP yield higher SAR results than our method on the AVE dataset. However, we argue that high SAR values do not necessarily imply effectiveness, as they can arise from poor separation. It is worth noting that a comparison between the mixture spectrogram and the ground truth unmixed spectrogram can surprisingly yield high SAR values. In this context, we believe that our method significantly improves separation performance compared to others. In Fig. 3, we visually compare our separation results to the CCoL baseline. Our visualizations demonstrate that DAVIS achieves higher separation quality, as evidenced by the closer resemblance of our separated magnitude spectrograms to the ground truth.
Moreover, the successful handling of diverse time patterns in the provided examples highlights the importance of incorporating time-attention mechanisms in DAVIS.

Figure 3: Visualizations of audio-visual separation results on the MUSIC (top) and AVE (bottom) datasets. Two sounds are mixed (mixture), and reference frames are provided to guide the separation. We show the comparison between the ground truth spectrograms and DAVIS/CCoL's predictions.

### Experimental Analysis

**Ablation Study.** We conduct ablation experiments on the MUSIC validation set to examine the different components of DAVIS. Firstly, we validate the effectiveness of our proposed CA block by designing two baselines: (a) replacing the Time-Attention mechanism within the CA block with a ResNet block (shown in Fig. 2 (a)), which only captures local time-frequency patterns, and (b) employing a Time-Frequency Attention mechanism that calculates attention weights along both the time and frequency dimensions. To ensure a fair comparison, we adopt an efficient attention mechanism [59] to mitigate computational overhead, resulting in a computation cost similar to that of our Time-Attention block. The results presented in Tab. 2 demonstrate the significance of capturing long-range time dependencies, while our Time-Attention mechanism achieves a favorable trade-off between computation and performance compared to Time-Frequency attention. Second, in Tab. 3, we investigate the impact of varying the number of sampling steps. We observe that setting the step value to 25 yields the best results in our setting. Meanwhile, even with step = 10, satisfactory results are obtained, indicating the potential for further acceleration if faster inference speed is prioritized.

| Methods | SDR | SIR | SAR |
| --- | --- | --- | --- |
| ResNet Block | 4.90 | 10.21 | 9.65 |
| Time Attn. | 7.57 | 12.55 | 15.50 |
| Time-Freq Attn. | 7.39 | 14.03 | 12.63 |

Table 2: Ablation on the CA block design.

| Sampling step | SDR | SIR | SAR |
| --- | --- | --- | --- |
| Step=10 | 7.03 | 11.48 | 15.45 |
| Step=15 | 7.29 | 11.95 | 15.56 |
| Step=25 | 7.57 | 12.55 | 15.50 |
| Step=50 | 7.40 | 12.61 | 15.31 |

Table 3: Impact of the number of sampling steps.

**Learned Audio-Visual Association.** The learned associations between audio and visual features play a crucial role in the success of separation. To demonstrate our learned associations, we present an example from the AVE dataset in Fig. 4. In this example, a video clip labeled "Rats" is mixed with another video clip labeled "Motorcycle." However, human perception reveals the presence of an off-screen sound, "Speech," occurring in the "Rats" clip, while only the "rat" object is visible in the reference frame. In this scenario, our method successfully separates the "Rats" sound from the complicated mixture while disregarding the "Speech," thus affirming the accuracy of our learned audio-visual associations and our method's capability to capture complex data distributions.

Figure 4: A visualization example showing that our DAVIS model can capture accurate audio-visual associations to perform visually-guided separation.

## 5 Limitation and Discussion

Our proposed DAVIS framework incorporates the extraction of a global visual embedding as a condition for visually-guided source separation. This technique, which utilizes global visual features, has been widely adopted in audio-visual learning [1; 24]. Unlike methods that rely on pre-trained object detectors for extracting visual features, our framework does not have such a dependency. However, it may encounter limitations when trained on unconstrained video datasets. Intuitively, successful results can be achieved when the video contains a distinct sounding object, such as the solo videos in the MUSIC dataset or videos capturing a sounding object performing a specific event in the AVE dataset. Nonetheless, this training assumption may not hold in more challenging scenarios, where multiple objects are likely producing sounds, rendering the global visual embedding inadequate for accurately describing the content of the sounding objects. To address this issue, one possible approach is to adapt our framework to leverage more fine-grained visual features and jointly learn sounding object localization and visually-guided sound separation. This adaptation would enable the model to utilize localized sounding-object information to enhance the audio-visual association.

## 6 Conclusion and Future Work

In this paper, we propose DAVIS, a diffusion model-based audio-visual separation framework designed to address the problem in a generative manner. Unlike approaches relying on discriminative training objectives for regression, our separation framework is built upon a T-step diffusion model, allowing for iterative synthesis of the separated magnitude spectrogram while conditioning on the visual footage. Leveraging the power of generative modeling, our method effectively handles complex data distributions and achieves high-quality sound separation. Extensive experiments on the MUSIC and AVE datasets validate the efficacy of our framework, demonstrating its effectiveness in separating sounds within specific domains (e.g., musical instrument sounds) as well as its ability to generalize to open-domain sound categories. Our approach initiates the use of generative models for audio-visual scene understanding, paving the way for potential extensions to other multi-modal perception tasks, such as audio-visual object localization. Humans demonstrate the ability to imagine a "dog" upon hearing a "barking" sound, highlighting the potential of cross-modal generation in advancing audio-visual association learning. This implies that localization and separation tasks can be integrated into a single generative framework. In the future, we plan to explore the application of generative models to jointly address audio-visual localization and separation tasks.
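As a usage note on the evaluation protocol of Sec. 4.1 (an illustrative sketch of ours, not code from the paper): the SDR, SIR, and SAR metrics can be computed with the mir_eval library. The waveforms below are random placeholders standing in for signals reconstructed from the separated magnitudes via the inverse STFT.

```
import numpy as np
import mir_eval

# reference_sources / estimated_sources: (n_sources, n_samples) float arrays,
# e.g. waveforms recovered by inverse STFT from the separated magnitudes
# (random signals here only to keep the sketch self-contained).
rng = np.random.default_rng(0)
reference_sources = rng.standard_normal((2, 11000 * 6))   # ~6 s at 11 kHz
estimated_sources = reference_sources + 0.1 * rng.standard_normal((2, 11000 * 6))

sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(
    reference_sources, estimated_sources)
print(f"SDR={sdr.mean():.2f} dB  SIR={sir.mean():.2f} dB  SAR={sar.mean():.2f} dB")
```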
2309.11312
A Competition-based Pricing Strategy in Cloud Markets using Regret Minimization Techniques
Cloud computing, as a fairly new commercial paradigm widely investigated by different researchers, already poses a great range of challenges. Pricing is a major problem in the Cloud computing marketplace, as providers compete to attract more customers without knowing each other's pricing policies. To overcome this lack of knowledge, we model their competition by an incomplete-information game. Considering this issue, this work proposes a pricing policy related to the regret minimization algorithm and applies it to the considered incomplete-information game. Based on the competition-based marketplace of the Cloud, providers update the distribution of their strategies using the experienced regret. The idea of iteratively applying the algorithm to update the probabilities of strategies causes the regret to be minimized faster. The experimental results show a considerable increase in the profits of the providers in comparison with other pricing policies. Besides, the efficiency of a variety of regret minimization techniques in a simulated Cloud marketplace is discussed, which has not been observed in the studied literature. Moreover, the return on investment of providers in the considered organizations is studied, and promising results appear.
S. Ghasemi, M. R. Meybodi, M. Dehghan, A. M. Rahmani
2023-09-20T13:38:43Z
http://arxiv.org/abs/2309.11312v1
# A Competition-based Pricing Strategy in Cloud Markets using Regret Minimization Techniques

###### Abstract

Cloud computing, as a fairly new commercial paradigm widely investigated by different researchers, already poses a great range of challenges. Pricing is a major problem in the Cloud computing marketplace, as providers compete to attract more customers without knowing each other's pricing policies. To overcome this lack of knowledge, we model their competition by an incomplete-information game. Considering this issue, this work proposes a pricing policy related to the regret minimization algorithm and applies it to the considered incomplete-information game. Based on the competition-based marketplace of the Cloud, providers update the distribution of their strategies using the experienced regret. The idea of iteratively applying the algorithm to update the probabilities of strategies causes the regret to be minimized faster. The experimental results show a considerable increase in the profits of the providers in comparison with other pricing policies. Besides, the efficiency of a variety of regret minimization techniques in a simulated Cloud marketplace is discussed, which has not been observed in the studied literature. Moreover, the return on investment of providers in the considered organizations is studied, and promising results appear.

**Keywords:** application, Cloud computing marketplace, game theory, pricing, regret minimization.

## 1 Introduction

Cloud computing, introduced in the early 21st century, has emerged as one of the most attractive technologies in the IT industry [1, 2, 3] since 2007. One of the main goals of this new computing paradigm is the delivery of requested applications through provisioned virtualized resources [1, 4]. Recently, a great number of users have taken advantage of the services offered by Cloud computing providers. These providers are generally categorized into three groups: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [1]. IaaS providers prepare computing and storage resources [5, 2] in the form of virtual machines (VMs); in addition to other providers, industrial and scientific organizations may utilize these services to run their applications [2]. SaaS providers offer applications to users without the need for infrastructure maintenance or development [6]. The existence of different providers offering their own services to potential users makes the Cloud computing environment a competitive marketplace [7]. There are many studies that consider competition-based interactions between IaaS providers [3, 6, 8-16], between SaaS providers and IaaS providers, or between users and IaaS/SaaS providers [2, 4, 5, 9, 17-24], and among SaaS providers [7, 14, 17, 25, 26]. In such investigations, the Cloud is considered a marketplace, including multi-tier economic interactions of SaaS, PaaS and IaaS providers and users, which may strategically compete with each other within some tiers of this marketplace. This study focuses on the interactions of SaaS providers and users in the form of a game, as in [7, 14, 26]. The method proposed in [14] formulates a non-cooperative stochastic game as a Markov Decision Process (MDP) with a Markov Perfect Equilibrium solution. MDPs operate on completely visible environments, while our considered environment is not completely visible. SaaS providers should attract users to increase their profit.
Pricing strategies for the provided applications have a major effect on their profit; the initial cost of applications, resource appropriation costs, user willingness to pay, multi-tenancy of applications and so forth are some of the most prominent factors of pricing strategies, studied in the form of a complete-information game in [7]. SaaS providers provision services based on their changing utilities, users' service-choice behavior, and the like. [26] uses a non-cooperative game to study such a service provisioning problem in multi-Cloud competition, aiming to obtain the largest market share. Besides, there are some other studies, such as [27, 28], which examine the pricing challenges of applications; application pricing in a duopoly is investigated under some constraints in [29], using an incomplete-information game whose players are SaaS providers and whose strategies are pricing policies, regarding the main features of application prices. Users usually tend to use the application that offers satisfying performance at the lowest price. Unlike [7, 14], in environments where information is lacking [17, 29-31], reaching an equilibrium directly is hard, and applying learning techniques facilitates the issue. The behavior of SaaS providers is studied in [19, 23] to optimize the costs of running tasks on virtual resources while they compete with each other to use the best resources stored by IaaS providers, regardless of pricing policies. However, most existing competition-based mechanisms are inefficient and do not consider application pricing strategies with regard to the competition between SaaS providers. In the present research, a competition-based pricing mechanism is introduced which investigates various factors impacting the price of an application, including development and deployment requirements. An efficient solution of an incomplete-information game is studied based on the regret minimization technique, using updating equations to minimize the regret. Applying regret minimization techniques, instead of profit optimization ones, facilitates the solution, as regrets can model the behaviors of players whose opponents perform more efficiently. Our proposed approach is applied to a pricing optimization game in which players are not aware of the strategies of others. The main features that differentiate our work from the studied works can be outlined as follows. First, the proposed game is played between SaaS providers offering their applications to users in an incomplete-information environment. Quantitative modeling of SaaS providers and internal and external validation of the model are some positive points of our study. Second, our paper covers some considerations in the dynamic pricing of applications, whose prices have significant economic influence on the decisions of SaaS providers. Third, as our considered game is an incomplete-information one, a new regret minimization technique is applied to achieve the solution concept known as correlated equilibrium. Finally, we evaluate the performance of different regret minimization techniques, including the external, internal and swap regret minimization techniques, applied to our proposed approach. Moreover, the return on investment is studied for providers using the proposed pricing policies. The rest of this paper is organized as follows. A review of the related works is presented in Section 2. The system model, the problem statement and the related concepts of the market state can be found in Section 3.
Section 4 studies the formulated problem of the providers' pricing-policy optimization, the game-theory-based model of Cloud application pricing, and the solution concepts of the proposed game. Section 5 discusses the experimental results and demonstrates the efficiency of our proposed solution. The paper ends with some concluding remarks in Section 6.

## 2 Related Works

This section provides a comprehensive review of the literature on the economic aspects of the Cloud computing paradigm. There are various sorts of resources, such as processors, clusters of computing devices, storage space, online tools, data, information, and applications, which make the provisioning process of Cloud requests quite complex. An auction is a primary market mechanism for selling commodities. In an auction-based resource allocation, all parties do their best to improve their profits in the competition-based marketplace. In [4], a taxonomy of auction-based resource allocation mechanisms in the Cloud is provided; four major mechanisms are named: 1) one-sided auctions, consisting of three types, namely reverse auctions, first-price sealed-bid auctions and second-price sealed-bid auctions; 2) double-sided auctions, which include continuous-time auctions; 3) combinatorial auctions; and 4) other types of auctions. Auctions are mostly formulated as games. The usage of game theory in auction-based resource provisioning and pricing in the Cloud is widely studied [3, 5, 6, 12, 13, 15, 20, 21, 23, 32, 10]. To maximize the profit of the providers, in [32] a Stackelberg game is assumed between the operator of Cloud-based integrated terrestrial-satellite networks (the connection of both satellites and RRHs to the Cloud for centralized processing) and the users. The evolutionary game of users is solved with replicator dynamics to minimize energy consumption. Since there exist two types of users, when an evolutionarily stable strategy of users is achieved, the selections of both types of users are obtained for any pricing strategy. An optimal pricing strategy, which maximizes the profit of the operator and yields the Stackelberg equilibrium point of the system, is proposed by analyzing the evolution relationship between the profit and the pricing strategy. Thereafter, based on the optimal pricing strategy, an optimal resource allocation scheme is obtained. [10] is another game-theoretic approach to deal with pricing in the resource allocation of Cloud providers. By using auction-based allocation and pricing mechanisms, the satisfaction of both parties grows. The highest bids influence Cloud resources and services in a way that the winners of the auction achieve the demanded quality. The major aim of the proposed winner-bid auction game is to increase the profits of the provider and the user according to different criteria. The game seals users' bids by a multi-criteria valuation-based bid function and sends them to the auctioneer. The determined Nash equilibrium specifies the prices to be paid by users in various cases and proves the truthfulness of the method. To analyze the reasonable allocation of resources, a Bayesian Nash equilibrium allocation algorithm is proposed in [3], which considers the heterogeneous distribution of resources, the rational exchange behaviors of Cloud users, incomplete information and dynamic successive allocation. Cloud users reach Nash equilibrium allocation solutions by gambling stage by stage.
Valerio et al. [5] solve the allocation problem of the Cloud by proposing a two-stage provisioning scheme; the first stage determines the number of required flat and on-demand instances using standard optimization techniques, and then, in the second stage, the competition of providers commences. The competition involves both IaaS providers and SaaS providers. The competition of providers in the second stage is modeled by a Stackelberg game, and the solution is achieved by solving a mathematical program with equilibrium constraints. A variety of research has focused on pricing and resource management in Cloud computing [8, 12, 13, 15, 16, 19, 20, 21, 23, 26]. Effective pricing policies for a Cloud provider are discussed in [8]. There, the offered prices are sent to users in 'take it or leave it' form, to decide whether to rent the Cloud resources or not. The goal of the research is to maximize the social welfare (the aggregate gain of the Cloud provider and Cloud users), which might not bring about the maximization of an individual provider's revenue. For both private Clouds and public Clouds, maximizing social welfare is more relevant than maximizing a provider's revenue. It is to be noted that resource allocation mechanisms may be related to users' urgency and priorities. Zhang et al. propose pricing functions based on a dual-price design in competitive online algorithms related to the classic primal-dual framework [8]; their pricing function, bounded by total demand, is related to the online knapsack problem. In [7], some efficient pricing strategies are introduced for providers, which bring about leverage to retain users by making their switching costs high, and to achieve high profitability by implementing the appropriate strategies. Learning and heuristic algorithms are widely applied to the discussed problem of service provisioning in Cloud computing. [33] introduces a learning-automata-based resource provisioning approach that achieves cost-effective guarantees of demands; it performs based on the services of the comprised applications and considers both cost and service requirements together when deploying applications. Another considerable learning-based approach is introduced in [18], where the application of deep reinforcement learning is studied to obtain an optimal allocation/pricing policy that maximizes the profit of the Cloud provider through trial and error. Du et al. [18] combine long short-term memory units with fully-connected neural networks to tackle online user arrivals, and adjust the output and update methods to address both resource allocation and pricing. Moreover, Alsarhan et al. [9] use a learning-based price-control parameter in the proposed Cloud computing SLA framework to meet quality-of-service (QoS) demands in the market. To find a VM hiring policy adaptable to system changes, including service cost, system capacity, and demand for service, a reinforcement learning approach is applied in the framework. The approach integrates computing-resource adaptation with service admission control based on the reinforcement learning model. Besides, the researchers in [2] analyze the efficiency of the pricing process in a competing Cloud environment. The problem is modeled as a Markov game, and the Minimax-Q and Q-learning algorithms are applied to the pricing policies of Cloud providers. Comparing these two algorithms, they find Minimax-Q more suitable for analyzing the competition and retaining Cloud users, while Q-learning performs better in maximizing profits.
Additionally, Tafsiri et al. [25] study a combinatorial double-auction-based marketplace, formulating the problem as an integer linear programming model to maximize the profit of users and providers and minimize wasted resources. In order to decrease the complexity of the approach, a heuristic is applied in the resource allocation mechanism. Zhang et al. [11] use integer programming and weighted VCG algorithms to solve the static optimal problem of the supposed auction. Their auction-based mathematical model provides a truthful online auction mechanism based on user evaluation and cost by applying a fast heuristic algorithm. A multi-requirement, single-minded scenario is assumed in users' request submission. The proposed online auction mechanism, while maximizing the providers' profit, specifies prices based on real-time supply and demand conditions. Regarding the interests of Cloud providers and users, the users' evaluations are mapped into the payment price algorithm. The scheduling and pricing framework proposed by Babaioff [34] for Cloud resources aims at increasing the efficiency of Cloud resource utilization based on economic principles. Users' requests are scheduled with regard to dynamically calculated prices. Besides, this economic resource allocation framework, named ERA, provides a key internal API to modules with scheduling, pricing and demand-prediction algorithms.

## 3 System Model and Problem Statement

Cloud computing provides a wide range of services to users all over the world; it can be considered a marketplace with different parties, including providers and users (Fig. 1). In Fig. 1, the _Cloud Committee_ has the role of the market manager, consisting of two main modules named _Request Dispatcher_ and _Market Manager_. SaaS providers can register in the market to receive application requests through the API of the _Request Dispatcher_. Let \(Req_{r}\) be a request sent to the service providers. If provider \(i\) can provide request \(Req_{r}\), it computes its price based on its pricing policies and sends the offered price, called a bid (\(A_{i}\)), to the _Market Manager_. After that, the _Cloud Committee_ informs the providers with the notification of the winner (\(Rep_{r}\)) who has the best offer to provide \(Req_{r}\). Finally, based on the received notification of the _Cloud Committee_, the providers update their pricing strategies to have a better bid in the future. Requests are stored in a vector named \(REQ=<Req_{1},Req_{2},\ldots>\); let \(Req_{r}\) be the \(r\)-th request in \(REQ\). The _Request Dispatcher_ sends requests to the available SaaS providers registered in the _Cloud Committee_. It is worth mentioning that each SaaS provider is basically independent and acts stand-alone. Since the providers have different computing platforms for processing the requests, they might select a variety of VMs to run an application; thus, some of the parameters, such as prices, are set by each SaaS provider independently.

**Applications model:** The list of applications that provider \(i\) owns is represented by \(App_{i}=<App_{i1},App_{i2},\ldots,App_{iL}>\). Each of these applications has a benefit for the provider. Thus, the list of per-unit benefits of the applications of provider \(i\) is stored in \(\mathbf{\beta}_{i}=<\beta_{i1},\beta_{i2},\ldots,\beta_{iL}>\), where \(\beta_{ij}\) is the per-unit benefit of \(App_{ij}\) (application \(j\) in SaaS provider \(i\)).
\(App_{ij}\) is represented in the form \(<AppID_{ij},\mu_{ij},Srv_{ij},\theta_{ij},MT_{ij}>\), where \(AppID_{ij}\) denotes the identification of \(App_{ij}\); the application consists of a number of services, represented by \(\mu_{ij}\); and the list of these \(\mu_{ij}\) services is stored in \(Srv_{ij}=<VMM_{j1},VMM_{j2},\ldots,VMM_{j\mu_{ij}}>\). \(\theta_{ij}\) is the initial price of \(App_{ij}\), which is determined by the developer of the application; \(MT_{ij}\) denotes the number of tenants available for the application, so zero is assumed for a non-multi-tenant application. As mentioned previously, each SaaS provider has a set of heterogeneous VMs of different types. These VMs have a per-unit benefit for the provider; the per-unit benefits of the VMs of provider \(i\) are represented by \(\mathbf{\alpha}_{i}=<\alpha_{i1},\alpha_{i2},\ldots,\alpha_{iR}>\) [14].

**User requests model:** The considered requests in our Cloud market are in the form of applications; therefore, the model of applications was described first. SaaS providers host applications on infrastructural resources, which are widely supplied by IaaS providers. Each application includes a set of services that are essential for the application to run. In other words, these services are well-defined components of functional behavior that provide a logical grouping of application functions, such as load balancing, application performance monitoring, application acceleration, auto-scaling, micro-segmentation, service proxy, service discovery and so forth. Thus, a request (\(Req_{r}\)) is given as \(<AppID_{r},\tau_{r},W_{r},Srv_{r}>\), where \(AppID_{r}\) is the identifier of the application; \(\tau_{r}\) is the duration of time for \(Req_{r}\); \(W_{r}\) is the user's willingness to pay for this request; and \(Srv_{r}\) depicts the infrastructural configuration required for the application to run, based on its services. The required infrastructure is represented by a vector of VM models (\(VMM\)) named \(Srv_{r}=<VMM_{1},VMM_{2},\ldots,VMM_{\mu}>\); each \(VMM\) denotes the requirements of a service in the requested application, and \(\mu\) shows the number of services in \(Req_{r}\).

**Virtual machines model:** The VM model is represented by \(VMM=<Type,Core,Memory,Storage,HourCost>\), where \(Type\) denotes the class of the VM as a predefined fixed label, including 'small', 'medium' or 'large'; \(Core\) is the number of processors of the VM; \(Memory\) and \(Storage\) show the size of the RAM and the storage of the VM, respectively; and \(HourCost\) denotes the cost of the VM per hour. For instance, a \(VMM\) in \(Srv_{r}\) can be \(<\)'small', 1, 5MB, 1GB, $0.5\(>\).

**Payment pricing:** As mentioned previously, IaaS providers have to provide the required infrastructures for the applications hosted on SaaS providers. In other words, IaaS providers offer VM instances for executing applications [13], and SaaS providers prepare the required infrastructures in the form of VMs to host the applications. Undoubtedly, users prefer offers that have lower prices with satisfying QoS. As users cannot perceive the absolute value of an application, they select the best one among the offers of SaaS providers by comparison.
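For concreteness, the following is a minimal Python sketch of the request and VM models just described. The field names mirror the paper's notation; the concrete values (taken from the 'A021' example and the VM sizes of Table 2, given later) are only illustrative, and the class layout is an assumption rather than the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VMM:
    """VM model: <Type, Core, Memory, Storage, HourCost>."""
    type: str          # 'small', 'medium' or 'large'
    core: int          # number of virtual processors
    memory_gb: float   # RAM size
    storage_gb: float  # storage size
    hour_cost: float   # rental cost of the VM per hour, in dollars

@dataclass
class Request:
    """User request Req_r = <AppID_r, tau_r, W_r, Srv_r>."""
    app_id: str
    tau: float          # requested duration, in hours
    willingness: float  # W_r, the most the user is willing to pay
    srv: List[VMM]      # one VM model per service of the application

# Illustrative request for the application 'A021' used in the next example;
# the willingness-to-pay value is an assumption.
req = Request(app_id='A021', tau=10.0, willingness=5.0,
              srv=[VMM('small', 1, 2.0, 4.0, 0.026),    # t2.small
                   VMM('large', 2, 7.5, 32.0, 0.140),   # m3.large
                   VMM('medium', 2, 4.0, 4.0, 0.052)])  # t2.medium
```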
Resource appropriation strategies and pricing policies are among the most effective decisions of SaaS providers [35, 36], with vital impacts on their profits. The offered price of a SaaS provider basically depends on both the development and deployment expenses of applications. The development expenses are the license costs of the application; needless to mention, multi-tenant applications have higher prices, as one license can be leased to several users simultaneously. The deployment expenses are the infrastructural requirements of the applications. Based on the aforementioned assumptions, the deployment cost of a request (\(Req_{r}\)) that demands \(App_{ij}\) is computed as

\[c_{ij}=\tau_{r}\times\sum_{k=1}^{\mu_{ij}}VMM_{jk}.HourCost,\qquad c_{ij}\leq W_{r}. \tag{1}\]

The deployment cost depends on two main factors: firstly, the duration of time that \(Req_{r}\) runs (\(\tau_{r}\)), and secondly, the costs of the infrastructures applied to host \(Req_{r}\). The infrastructural cost of \(Req_{r}\) is the summation of the hourly costs of all virtual machines required to run the services of the requested application (\(\sum_{k=1}^{\mu_{ij}}VMM_{jk}.HourCost\)). As a tangible example, consider the following scenario. Suppose a typical SaaS provider has an application \(<\)'A021', 3, \(<\)'t2.small', 'm3.large', 't2.medium'\(>\), $340, 2\(>\). Based on the prices of the VM models depicted in Table 2, the cost of deploying the request related to the application with identification 'A021' for \(\tau_{r}=10\) hours is computed as \(10\,Hour\times\sum_{k=1}^{3}VMM_{k}.HourCost=10\,Hour\times(\$0.026/Hour+\$0.140/Hour+\$0.052/Hour)=\$2.18\). SaaS provider \(i\) sends its offered price to the _Market Manager_ in the form of \(A_{i}\) (as depicted in Fig. 1); the _Cloud Committee_ sends back the offered price of the winner provider to SaaS provider \(i\) as \(Rep_{r}\). For the sake of clarity, the notations adopted in this research are summarized in Table 1.
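Reusing the data-model sketch above, Eq. 1 reduces to a few lines; running it on the 'A021' request reproduces the $2.18 figure of the worked example. The constraint check against the (assumed) willingness-to-pay value follows Eq. 1.

```python
def deployment_cost(req: Request) -> float:
    """Eq. 1: c_ij = tau_r * sum of the hourly costs of the selected VMs."""
    hourly = sum(vmm.hour_cost for vmm in req.srv)
    cost = req.tau * hourly
    assert cost <= req.willingness, "offers above W_r are rejected"
    return cost

print(deployment_cost(req))  # 10 * (0.026 + 0.140 + 0.052) = 2.18
```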
**The economics of price discrimination of applications:** In this research, the following parameters are considered in the pricing policies of the SaaS providers:

* Initial cost,
* Resource appropriation,
* Multi-tenancy,
* User's willingness to pay,
* Performance,
* Structure of payment flow.

Fig. 1: The model of the Cloud marketplace.

Providers determine the price of a service based on the above-mentioned parameters. The _initial price_, introducing the development expenses determined by application developers, is one of the primitive parameters involved in pricing strategies. _Resource appropriation_ denotes the deployment expenses of a service; it is the second basic parameter that providers use in their pricing strategies. _Multi-tenancy_ matters because a single instance of a multi-tenant application shares development and deployment expenses while serving multiple users simultaneously; a multi-tenant application can serve a predefined number of users, and \(MT_{ij}\) represents that number of users. _Willingness to pay_ is a price determined by users while requesting, \(W_{r}\); they expect to pay less than this value for the request [27]. If users demand a certain level of _performance_, then providers can determine an added price for the guarantee and a penalty for its violation (known as an SLA). Finally, the _structure of payment flow_ can affect the price as well. Payment flow can be single or recurring; providers assign lower prices for single payments in comparison with recurring ones, where the payment is structured in multiple steps. For more details about dynamic pricing parameters, readers are referred to [18, 20, 21].

### Game Theory

Game theory is a powerful tool for facing scenarios that include prominent interactions of rational entities. These interactions cannot be handled by classical optimization theories [5], as the behavior of a player affects not only itself but also the other players. According to Shoham [30], a game includes:

* \(N=\{1,2,\ldots,n\}\), a finite set of players;
* \(S=S_{1}\times\ldots\times S_{n}\), where \(S_{i}\) is a finite set of available strategies of player \(i\);
* \(u=(u_{1},\ldots,u_{n})\), where \(u_{i}:S\rightarrow\mathbb{R}\) is the payoff (utility) function of player \(i\).

A correlated equilibrium is a probability distribution \(\psi\) on \(S\) such that, for every player \(i\) and every pair of strategies \(j,k\in S_{i}\),

\[\sum_{\mathbf{s}\in S:\,s_{i}=j}\psi(\mathbf{s})\big{[}u_{i}\big{(}k,\mathbf{s}_{-i}\big{)}-u_{i}(\mathbf{s})\big{]}\leq 0. \tag{2}\]

**Proposition 1** If a game is run iteratively many times and each player performs according to a certain regret minimization strategy, then the empirical distribution of the game converges to the set of correlated equilibria.

**Proof outline** The proof is based on a recursive formula for the distance of the vector of regrets to the negative orthant.
Many iterations of the game do not mean a simple one-period recursion from \(t\) to \(t+1\); instead, a multi-period recursion is used, where a large "block" of periods, from \(t\) to \(t+v\), is combined together. Both \(t\) and \(v\) are carefully chosen and go to infinity, but \(v\) is relatively small in comparison with \(t\). Consider \(\Pi_{t}\) as the probabilities of transitioning from \(t\) to \(t+1\):

\[\Pi_{t}(j,k)=\begin{cases}\frac{1}{\mu}R_{t}(j,k),&\text{if }k\neq j,\\ 1-\sum_{k^{\prime}\neq j}\frac{1}{\mu}R_{t}(j,k^{\prime}),&\text{if }k=j.\end{cases}\]

Note that \(\mu\) guarantees that the probabilities are greater than zero for all possible values of \(j\) and \(t\). Let \(m\) be the number of strategies of player \(i\) and \(M\) the upper bound on the payoffs of player \(i\). Let \(L=\{(j,k)\in S^{i}\times S^{i}:j\neq k\}\); therefore, \(\mathbb{R}^{L}\) is the Euclidean space with \(m(m-1)\) dimensions. Then define the following quantities:

\[A_{t}(j,k)=\mathbf{1}_{\{s_{t}^{i}=j\}}\big{[}u^{i}\big{(}k,s_{t}^{-i}\big{)}-u^{i}(s_{t})\big{]},\]
\[D_{t}(j,k)=\frac{1}{t}\sum_{\tau=1}^{t}A_{\tau}(j,k),\]
\[R_{t}(j,k)=D_{t}^{+}(j,k)\equiv[D_{t}(j,k)]^{+},\]

where \(\mathbf{1}_{G}\) is the indicator of the event \(G\). Finally, let \(\rho_{t}=[dist(D_{t},\mathbb{R}_{-}^{L})]^{2}\). Since the closest point to \(D_{t}\) in \(\mathbb{R}_{-}^{L}\) is \(D_{t}^{-}\) (here \([x]^{-}\) means \(\min\{x,0\}\)), we have \(\rho_{t}=\|D_{t}-D_{t}^{-}\|^{2}=\|D_{t}^{+}\|^{2}=\sum_{j\neq k}[D_{t}^{+}(j,k)]^{2}\). The proof of Proposition 1 has 11 steps, as follows.

**Step 1:**
\[\text{(i)}\qquad E[(t+v)^{2}\rho_{t+v}|h_{t}]\leq t^{2}\rho_{t}+2t\sum_{w=1}^{v}R_{t}\cdot E[A_{t+w}|h_{t}]+O(v^{2});\]
this is the basic recursion equation.
\[\text{(ii)}\qquad(t+v)^{2}\rho_{t+v}-t^{2}\rho_{t}=O(tv+v^{2}).\]

Define
\[\alpha_{t,w}\big{(}j,s^{-i}\big{)}=\sum_{k\in S^{i}}\Pi_{t}(k,j)P\big{[}s_{t+w}=\big{(}k,s^{-i}\big{)}|h_{t}\big{]}-P\big{[}s_{t+w}=\big{(}j,s^{-i}\big{)}|h_{t}\big{]}.\]

**Step 2:**
\[R_{t}\cdot E[A_{t+w}|h_{t}]=\mu\sum_{s^{-i}\in S^{-i}}\sum_{j\in S^{i}}\alpha_{t,w}\big{(}j,s^{-i}\big{)}u^{i}\big{(}j,s^{-i}\big{)}.\]
The coefficients \(\alpha\) depend on the moves of the other players. Indeed, given \(h_{t}\), the choices \(s_{t+w}^{i}\) and \(s_{t+w}^{-i}\) are not independent when \(w>1\) (since the transition probabilities change with time). Therefore, we replace the process \((s_{t+w})_{0\leq w\leq v}\) by another process \((\hat{s}_{t+w})_{0\leq w\leq v}\) with a stationary transition matrix (that of period \(t\)).

**Step 3:**
\[R_{t+v}(j,k)-R_{t}(j,k)=O\left(\frac{v}{t}\right).\]
Given \(h_{t}\), define the auxiliary stochastic process \((\hat{s}_{t+w})_{w=0,1,\ldots}\) with values in \(S\) as follows: the initial value is \(\hat{s}_{t}=s_{t}\), and the transition probabilities are \(P\big{[}\hat{s}_{t+w}=s|\hat{s}_{t},\ldots,\hat{s}_{t+w-1}\big{]}=\prod_{i^{\prime}\in N}\Pi_{t}^{i^{\prime}}\big{(}\hat{s}_{t+w-1}^{i^{\prime}},s^{i^{\prime}}\big{)}\).
The \(\hat{s}\)-process is thus stationary: it uses the transition probabilities of period \(t\) at each period \(t+w\).

**Step 4:** Define
\[\hat{\alpha}_{t,w}\big{(}j,s^{-i}\big{)}=\sum_{k\in S^{i}}\Pi_{t}(k,j)P\big{[}\hat{s}_{t+w}=\big{(}k,s^{-i}\big{)}|h_{t}\big{]}-P\big{[}\hat{s}_{t+w}=\big{(}j,s^{-i}\big{)}|h_{t}\big{]}.\]
For \(w\) small relative to \(t\), the change in probabilities is small (see Steps 3 and 4).

**Step 5:**
\[\alpha_{t,w}\big{(}j,s^{-i}\big{)}-\hat{\alpha}_{t,w}\big{(}j,s^{-i}\big{)}=O\left(\frac{w^{2}}{t}\right).\]
The total difference is estimated.

**Step 6:**
\[\hat{\alpha}_{t,w}\big{(}j,s^{-i}\big{)}=P\big{[}\hat{s}_{t+w}^{-i}=s^{-i}|h_{t}\big{]}\big{[}\Pi_{t}^{w+1}-\Pi_{t}^{w}\big{]}\big{(}s_{t}^{i},j\big{)},\]
where \(\Pi_{t}^{w}\equiv(\Pi_{t})^{w}\) is the \(w\)-th power of the matrix \(\Pi_{t}\), and \(\big{[}\Pi_{t}^{w+1}-\Pi_{t}^{w}\big{]}\big{(}s_{t}^{i},j\big{)}\) denotes the \(\big{(}s_{t}^{i},j\big{)}\) element of the matrix \(\Pi_{t}^{w+1}-\Pi_{t}^{w}\). The moves of the other players (which, in the \(\hat{s}\)-process, are independent of the moves of player \(i\)) are factored out from the coefficients \(\hat{\alpha}\). At this point we get the difference between the transition probabilities after \(w\) periods and after \(w+1\) periods.

**Step 7:**
\[\hat{\alpha}_{t,w}\big{(}j,s^{-i}\big{)}=O\big{(}w^{-1/2}\big{)}.\]
The difference depicted in Step 6 is shown to be small, since \(w\) is large and the transition matrix has all its diagonal elements strictly positive.

**Step 8:**
\[E\big{[}(t+v)^{2}\rho_{t+v}|h_{t}\big{]}\leq t^{2}\rho_{t}+O\big{(}v^{3}+tv^{1/2}\big{)}.\]
For each \(n=1,2,\ldots\), let \(t_{n}=\lfloor n^{5/3}\rfloor\) be the largest integer not exceeding \(n^{5/3}\). Substituting in (i) of Step 1 yields the final recursive formula.

**Step 9:**
\[E\big{[}t_{n+1}^{2}\rho_{t_{n+1}}|h_{t_{n}}\big{]}\leq t_{n}^{2}\rho_{t_{n}}+O(n^{2}).\]

**Step 10:**
\[\lim_{n\rightarrow\infty}\rho_{t_{n}}=0\quad a.s.\]

**Step 11:**
\[\lim_{t\rightarrow\infty}R_{t}(j,k)=0\quad a.s.\]
Considering a carefully chosen subsequence of periods \((t_{n})_{n=1,2,\ldots}\) ends the proof. It is worth mentioning that the detailed proof of the described steps can be found in [41]. Based on Proposition 1, the empirical distributions converge to the set of correlated equilibria if and only if the average of the regrets converges to zero. The regret of player \(i\) for strategy profile \(\mathbf{s}\) is computed using Eq. 3, as follows:

\[\Big{[}\max_{\hat{s}_{i}\in S_{i}}u_{i}\big{(}\hat{s}_{i},\mathbf{s}_{-i}\big{)}\Big{]}-u_{i}(\mathbf{s}). \tag{3}\]

The regret is the profit that player \(i\) has lost by playing \(s_{i}\) instead of the recommended strategy \(\hat{s}_{i}\); in other words, the difference in the profit of player \(i\) for playing \(\hat{s}_{i}\) instead of \(s_{i}\) is its regret. To clarify this equation, consider a _Cloud Committee_ with 5 SaaS providers; let player 1's played strategy be the price $100, let the recommended strategy with the greatest profit be \(\hat{s}_{1}=\$93\), and let the other players' strategies be $107, $96, $110, and $103. The regret of player 1 is then \(u_{1}(\$93,\$107,\$96,\$110,\$103)-u_{1}(\$100,\$107,\$96,\$110,\$103)\).
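The bookkeeping of Eq. 3 is a one-line deviation test, shown in the Python sketch below for the five-provider example. The payoff function here is a toy assumption (the lowest bidder wins and earns its bid minus a fixed cost of $80) chosen only to make the example computable; the paper's actual profit function appears later as Eq. 9.

```python
def regret(i, profile, candidate, payoff):
    """Eq. 3: profit player i lost by playing profile[i] instead of `candidate`."""
    deviated = list(profile)
    deviated[i] = candidate
    return payoff(i, deviated) - payoff(i, profile)

# Toy payoff: the lowest bidder wins and earns its bid minus a fixed cost.
def payoff(i, bids, cost=80.0):
    return bids[i] - cost if bids[i] == min(bids) else 0.0

bids = [100.0, 107.0, 96.0, 110.0, 103.0]
# At $100 player 1 loses to the $96 bid; deviating to $93 would have won.
print(regret(0, bids, 93.0, payoff))  # 13.0
```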
## 4 Optimization Problem Formulation and Solution

In this section, firstly, the formulation of SaaS providers optimizing their profit is presented. Then, the optimization problem is studied in the form of an incomplete-information game, and the equilibrium properties are discussed with the aid of regret minimization techniques. Finally, the economic impact of the considered game is discussed.

_4.1 Providers' Profits Optimization Formulation_

As previously mentioned, SaaS providers compete with each other for users' requests, named \(Req_{r}\); the requests are received from the _Request Dispatcher_. In such a conflicting situation, the providers face an optimization issue of profit maximization while keeping users satisfied as well. Provider \(i\) computes the costs of providing the request, and it offers a price, \(A_{i}\), to the _Market Manager_ (Fig. 1). The _Market Manager_ evaluates the received offers of all registered providers and notifies the winner to all providers, named \(Rep_{r}\). Finally, an agreement is formed between the winner provider and the user who sent \(Req_{r}\). SaaS providers compete with each other to attract as many users as possible; this is achieved by offering a desirable price. The offered price of a provider depends on the previously mentioned criteria, which dynamically determine the price. The profit of a SaaS provider is the surplus remaining after total costs are deducted from total revenue. The total costs associated with a SaaS provider (\(C_{i}\)) include the potential costs of software licenses, providing infrastructures, service disruptions, data security concerns, etc. [19, 23]; the total revenue that a SaaS provider obtains is determined on the basis of the pricing policies of provider \(i\), depicted by \(P_{i}\). The profit of provider \(i\) (\(u_{i}\)) is formulated as

\[u_{i}=P_{i}-C_{i}, \tag{4}\]

where \(P_{i}\) and \(C_{i}\) are the revenue of SaaS provider \(i\) and the associated cost of software licenses, infrastructures and so forth, respectively. Providers have to consider some constraints while determining the prices; in particular, the costs must be covered by the revenue (the budget constraint of Section 4.4) and the deployment cost must respect the user's willingness to pay (Eq. 1). The considered constraints of SaaS provider \(i\) are formulated in the form of an optimization problem as follows:

\[\begin{array}{ll}\max\;u_{i}=\max(P_{i}-C_{i})\\ s.t.\quad C_{i}\leq P_{i},\quad c_{ij}\leq W_{r}.\end{array} \tag{5}\]

_4.2 The Game-Theory-Based Model of Cloud Application Pricing_

\(u_{i}(\mathbf{x})\) not only depends on the strategy of provider \(i\) but also relates to all the strategies of the other providers; needless to mention, the strategy profile is \(\mathbf{x}=(x_{1},\ldots,x_{i},\ldots,x_{n})\).
Thus, the profit function of SaaS provider \(i\) is

\[u_{i}=\begin{cases}S_{i}-C_{i},&\arg\min_{k}s_{k}=i,\\ 0,&\text{else},\end{cases} \tag{6}\]

where \(S_{i}\) denotes the pricing strategy of provider \(i\) for \(Req_{r}\), written as \(P_{i}\) in Eq. 4; as previously mentioned, \(Req_{r}\) demands \(App_{ij}\). The profits of all players except the winner of the game are zero; the winner is the player whose offered price is the least. \(S_{i}\) is determined by the following equation:

\[S_{i}=\sqrt{\omega_{i}}\big{(}1+\gamma\sqrt{\omega_{i}}\big{)}\big{(}\theta_{ij}+c_{ij}\big{)}, \tag{7}\]

where \(\omega_{i}\) is a parameter specified by provider \(i\), and \(\gamma\) is a constant positive value, less than 1, determined by the _Cloud Committee_. \(\theta_{ij}\) and \(c_{ij}\) are the price of the requested application \(App_{ij}\) and the price of the VMs supplied for the deployment of \(App_{ij}\), respectively. Players choose their strategies by choosing different values of \(\omega_{i}\) so as to optimize their profits. The explanation of how provider \(i\) opts for \(\omega_{i}\) is provided in Section 5.1 (Experimental setup). \(C_{i}\) in Eq. 6 denotes the cost of serving the request \(Req_{r}\) demanding \(App_{ij}\):

\[C_{i}=\alpha_{ij}c_{ij}+\beta_{ij}\theta_{ij}, \tag{8}\]

where \(\alpha_{ij}\in\mathbf{\alpha}_{i}\) is the per-unit benefit of the virtual resources that provider \(i\) has applied for deploying application \(App_{ij}\), and \(\beta_{ij}\in\mathbf{\beta}_{i}\) is the per-unit benefit of application \(App_{ij}\) hosted by provider \(i\). Offering each application and each virtual machine has a benefit for the SaaS provider, stored in the \(\mathbf{\alpha}_{i}\) and \(\mathbf{\beta}_{i}\) sets, in the order mentioned. Eventually, by substituting Eq. 7 and Eq. 8 into Eq. 6, \(u_{i}\) is formally expanded as

\[u_{i}=\begin{cases}\sqrt{\omega_{i}}\big{(}1+\gamma\sqrt{\omega_{i}}\big{)}\big{(}\theta_{ij}+c_{ij}\big{)}-\big{(}\alpha_{ij}c_{ij}+\beta_{ij}\theta_{ij}\big{)},&\arg\min_{k}s_{k}=i,\\ 0,&\text{else}.\end{cases} \tag{9}\]
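A minimal Python sketch of the bid of Eq. 7 and the winner-takes-all profit of Eq. 9 follows. The numeric values of \(\omega\), \(\gamma\), \(\theta_{ij}\), \(c_{ij}\) and the benefit coefficients are illustrative assumptions; only the formulas come from the paper.

```python
import math

def bid(omega: float, gamma: float, theta: float, c: float) -> float:
    """Eq. 7: S_i = sqrt(omega) * (1 + gamma * sqrt(omega)) * (theta + c)."""
    r = math.sqrt(omega)
    return r * (1 + gamma * r) * (theta + c)

def profit(i: int, bids: list, alpha: float, beta: float,
           theta: float, c: float) -> float:
    """Eq. 9: the lowest bidder earns S_i - C_i; everyone else earns 0."""
    if bids[i] != min(bids):
        return 0.0
    return bids[i] - (alpha * c + beta * theta)

# Three candidate strategies (omega values) priced for theta=$150, c=$2.18.
offers = [bid(w, gamma=0.95, theta=150.0, c=2.18) for w in (0.5, 0.6, 0.7)]
print(offers)                                             # increasing in omega
print(profit(0, offers, alpha=0.1, beta=0.05, theta=150.0, c=2.18))
```

Note how a smaller \(\omega\) yields a lower bid (more likely to win) but also a smaller margin once the benefit-weighted cost of Eq. 8 is deducted, which is exactly the trade-off the strategy distribution must learn.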
Algorithm 1 presents the considered game between SaaS providers. The algorithm is run in a distributed manner by each SaaS provider in the Cloud market separately, as the providers are mutually independent; the provider gets the request of a user as its input (\(Req_{r}\)), and the output is sent to the _Market Manager_ in the form of _BidList_. Then, in a loop that ends when the equilibrium is achieved, the following steps are run. Firstly, the _VMsSelection_ procedure gets the request (\(Req_{r}\)) and returns a list of suitable virtual machines, named _SelectedVMList_, that matches the requirements of \(Req_{r}\). In the next step, this provider must offer a price for providing the requirements of \(Req_{r}\), shown in _BidList_; the provider runs the procedure called _Bid_ according to \(Req_{r}\) and the list of the selected virtual machines (_SelectedVMList_). The provider computes its profit using the _Profit_ procedure based on the offered bid (_ProfitList_). After that, in line 4, the _BidList_ is sent to the _Market Manager_, as depicted in Fig. 1 by \(A_{i}\). Then, in line 5, the _Market Manager_ finds the _Winner_ provider, whose bid is the best based on the price and its accommodation of the requirements, by running the _MarketMgr_ procedure; in the case that a provider is the _Winner_, it should allocate the required resources to \(Req_{r}\) (line 6). Whenever the equilibrium circumstances occur, Algorithm 1 terminates in line 9. During this loop, each provider tries to offer a bid which makes it the _Winner_. We now present our theoretical result on the time complexity of Algorithm 1 in Lemma 1.

**Lemma 1** The time complexity of Algorithm 1 is O(\(|S|\)) at round \(t\) for provider \(i\), where \(|S|\) is the number of pricing strategies of provider \(i\). More specifically, if the underlying game satisfies the equilibrium condition, then its time complexity in processing each request is O(\(|S|^{2}n\)) on average, where \(n\) is the number of providers.

**Proof** Based on Proposition 1, this algorithm converges to the set of correlated equilibria. \(|S|\) is the number of strategies player \(i\) has; each of the regrets is bounded by an expression that depends on the maximum regret that provider \(i\) may experience for not playing a recommended strategy. In order to minimize the average regret of players, a recommended strategy is chosen from the \(|S|\) strategies. Although the average regret of players changes in every step, it is bounded from above, so the algorithm must terminate. Since the least difference between any two regrets is bounded, the result of the lemma follows directly. Note that Lemma 1 also states that having knowledge about other providers' strategies makes the algorithm better informed and decreases its run-time. As the game is run among \(n\) players, the time complexity of the algorithm is O(\(|S|^{2}n\)), an upper bound on the complexity of Algorithm 1. \(\Box\)
```
Algorithm 1  RPG: pricing optimization Game algorithm
The algorithm is run by each SaaS provider joined in the Cloud Committee,
as CurrentPrv, in a distributed manner.
Input:  Req_r (the user's request)
Output: BidList (the offered price, sent to the Market Manager)
1: repeat
2:   SelectedVMList <- VMsSelection(Req_r)
3:   BidList <- Bid(Req_r, SelectedVMList); ProfitList <- Profit(BidList)
4:   send BidList to the Market Manager (A_i)
5:   Winner <- MarketMgr(BidList)            // notified back as Rep_r
6:   if CurrentPrv = Winner then allocate SelectedVMList to Req_r
7:   compute the experienced regret of the played strategy (Eq. 10)
8:   update the probabilities of the pricing strategies (Eq. 11)
9: until the equilibrium is reached
```

_4.3 The Solution Concepts of the Proposed Game_

Based on the regret experienced for not having played the recommended strategy (Eq. 10), each provider updates the probability distribution over its strategies as

\[p_{i}^{t+1}(x)=\begin{cases}p_{i}^{t}(x)\Big{(}1-\frac{R_{t}(s_{i}^{\prime},s_{i}^{t})}{R_{max}(s_{i}^{t})}\Big{)},&\forall\;x=s_{i}^{t},\\ \frac{R_{t}(s_{i}^{\prime},s_{i}^{t})}{R_{max}(s_{i}^{t})\times(|S|-1)}+p_{i}^{t}(x)\Big{(}1-\frac{R_{t}(s_{i}^{\prime},s_{i}^{t})}{R_{max}(s_{i}^{t})}\Big{)},&\text{else},\end{cases} \tag{11}\]

where \(p_{i}^{t}(x)\) represents the probability of strategy \(x\) at time \(t\); \(R_{max}(s_{i})\) denotes the maximum regret that provider \(i\) may experience for not playing \(s_{i}\); and \(|S|\) is the number of possible strategies that provider \(i\) has. If the set of strategies of the providers is infinite, then the term \(\frac{R_{t}(s_{i}^{\prime},s_{i}^{t})}{R_{max}(s_{i}^{t})\times(|S|-1)}\) will be zero, without loss of generality. The larger the experienced regret for strategy \(s_{i}\), the smaller the probability assigned to \(s_{i}\), and vice versa. The probabilities are initially assigned the same values, and during the run of the game they are updated so as to converge to an optimal point. The equation in Eq. 11 is derived from the update equations of learning automata actions in the case where the reinforcement signal of the environment indicates an undesirable reply [33]. In Eq. 11, the penalty parameter of learning automata is replaced with a variable value based on the regret (\(\frac{R_{t}(s_{i}^{\prime},s_{i}^{t})}{R_{max}(s_{i}^{t})}\)). All the probabilities generated by this equation are between 0 and 1 inclusive. As previously mentioned, a regret minimization algorithm must compare the current choice in each round with a strategy that was not chosen, in order to find the regret; a recommender introduces the not-chosen strategy. The profit of the strategy introduced by the recommender, known as \(s_{i}^{\prime}\), is compared with the gained profit of playing the current strategy, named \(s_{i}\). Regarding the way the recommender performs, three types of regret minimization algorithms exist, including the external, internal and swap regret minimization techniques; a sketch of the update rule is given below, and the three recommender types are detailed afterwards.
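The following is a direct Python transcription of the update rule of Eq. 11, assuming \(R_{max}>0\) and a finite strategy set; the example numbers are illustrative only. The played strategy loses probability mass in proportion to its normalized regret, and the released mass is spread uniformly over the remaining strategies, so the result always stays a probability distribution.

```python
def update_probs(p, played, regrets, r_max):
    """Eq. 11: shrink the played strategy by its normalized regret and
    spread the released mass uniformly over the other strategies.
    Assumes r_max > 0 and regrets[played] <= r_max."""
    k = len(p)
    shrink = regrets[played] / r_max            # in [0, 1]
    q = [0.0] * k
    for x in range(k):
        if x == played:
            q[x] = p[x] * (1 - shrink)
        else:
            q[x] = p[x] * (1 - shrink) + shrink / (k - 1)
    return q

p = update_probs([0.25] * 4, played=2, regrets=[0, 0, 13.0, 0], r_max=20.0)
print(p)          # approx [0.304, 0.304, 0.088, 0.304]
print(sum(p))     # 1.0: the update preserves the probability mass
```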
In external regret, the coordinator recommends a predefined strategy to all players, without paying attention to the current strategy. The coordinator in internal regret uses a mapping between strategies; i.e., for each strategy a predefined strategy is recommended. Thus, a particular strategy is recommended for the provider's current strategy, and the provider finds the benefit of choosing that particular strategy in comparison with the current one. Finally, the swap regret minimization coordinator randomly recommends a strategy instead of the current strategy.

_4.4 The Economic Impact of the Considered Game_

In this section, the economic impact of our pricing game is discussed. According to the reports of Eurostat [43], Cloud computing services were used by one out of every five enterprises in 2013; furthermore, by 2015, the revenue of SaaS was $27.94 billion, compared with $13.46 billion in 2011. Such increases in revenue demonstrate the importance of the economic impact of the pricing policies of Cloud providers. Based on a ProfitBricks report, just a 1% improvement in pricing strategies boosts investments by 11.1% on average [44]. In light of the facts outlined in the preceding paragraphs (Section 4.1), provider \(i\) has a budget constraint \(C_{i}\leq S_{i}\), which must be noted while pricing the services. It is to be noted that billing of the services of registered users occurs periodically (usually monthly [45]). Based on the performed billing, the investment or capital of providers can be computed using Eq. 12. The investment of provider \(i\) is computed as

\[V_{i}^{t}=V_{i}^{t-1}+\sum u_{i}=V_{i}^{t-1}+\sum S_{i}-\sum C_{i}=V_{i}^{t-1}+\sum\sqrt{\omega_{i}}\big{(}1+\gamma\sqrt{\omega_{i}}\big{)}\big{(}\theta_{ij}+c_{ij}\big{)}-\sum\big{(}\alpha_{ij}c_{ij}+\beta_{ij}\theta_{ij}\big{)}. \tag{12}\]

\(V_{i}^{t}\) represents the investment of provider \(i\) at time \(t\), which changes based on the currently achieved profit (\(u_{i}\)). According to the equation of profit shown in Eq. 6, \(u_{i}\) is substituted by \(S_{i}-C_{i}\); finally, based on the equations of \(S_{i}\) and \(C_{i}\) presented in Eq. 7 and Eq. 8, respectively, Eq. 12 is expanded. As mentioned before, the pricing mechanism presented in this article computes rough values for undercharging and overcharging a user; for instance, the offered prices of providers do not include costs for marketing staff or sales management. Let \(q\) be the number of received requests for applications, \(L_{i}\) the number of applications provided by provider \(i\), and \(n\) the number of providers registered in the _Cloud Committee_. The provider's profit is a positive value if it wins the game; otherwise it is zero. So, for provider \(i\), the number of successes in \(q\) independent yes/no trials, each of which yields success with probability \(\frac{1}{n}\), is given by a binomial distribution; i.e., on average, the provider wins \(\frac{1}{n}\times q\) of the games. Due to this probability, the overall provider's profit presented in Eq. 9 is rewritten as \(\frac{q}{n}\times E\big{(}\sqrt{\omega_{i}}\big{(}1+\gamma\sqrt{\omega_{i}}\big{)}\big{(}\theta_{ij}+c_{ij}\big{)}\big{)}-\sum\big{(}\alpha_{ij}c_{ij}+\beta_{ij}\theta_{ij}\big{)}\), where \(E(\cdot)\) refers to the expected value. The average of \(\sum\alpha_{ij}c_{ij}+\beta_{ij}\theta_{ij}\) is \(L_{i}\times E\big{(}\alpha_{i(1..L_{i})}c_{i(1..L_{i})}+\beta_{i(1..L_{i})}\theta_{i(1..L_{i})}\big{)}\).
Accordingly, Eq. 12 is rewritten as

\[\overline{\Delta V}=\frac{q}{n}\times E(\theta+c)\times E\left(\sqrt{\omega}\big{(}1+\gamma\sqrt{\omega}\big{)}\right)-L_{i}\times E(\alpha c+\beta\theta).\]

A positive \(\overline{\Delta V}\) denotes that the provider has a satisfying return on investment. To have a positive \(\overline{\Delta V}\), the inequality \(\frac{q}{n}\times E(\theta+c)\times E\left(\sqrt{\omega}\big{(}1+\gamma\sqrt{\omega}\big{)}\right)>L_{i}\times E(\beta\theta+\alpha c)\) must be satisfied, which leads to

\[q\gg\frac{L_{i}\times n}{E\big{(}\sqrt{\omega}\big{(}1+\gamma\sqrt{\omega}\big{)}\big{)}}. \tag{13}\]

This shows that the profit of provider \(i\) improves when \(q\) reaches a value greater than \(\frac{L_{i}\times n}{E\big{(}\sqrt{\omega}\big{(}1+\gamma\sqrt{\omega}\big{)}\big{)}}\). Besides, the larger the value of the parameter \(\omega\), the faster the provider's return on investment; any early loss may be compensated over time.
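The expected capital change \(\overline{\Delta V}\) is easy to estimate numerically; a minimal sketch follows, with the expectations replaced by sample means over the provider's applications and strategy set. All the input values here are placeholders a provider would fill in from its own catalog.

```python
from statistics import mean
import math

def expected_delta_v(q, n, thetas, costs, omegas, gamma, alphas, betas, L_i):
    """Sample-mean estimate of Delta-V bar: expected revenue from the ~q/n
    requests won, minus the benefit-weighted outlay for the L_i applications."""
    price_factor = mean(math.sqrt(w) * (1 + gamma * math.sqrt(w))
                        for w in omegas)
    revenue = (q / n) * mean(t + c for t, c in zip(thetas, costs)) * price_factor
    outlay = L_i * mean(a * c + b * t
                        for a, b, t, c in zip(alphas, betas, thetas, costs))
    return revenue - outlay

# A positive result indicates a satisfying return on investment (Eq. 13 regime).
dv = expected_delta_v(q=500, n=5, thetas=[150.0, 340.0], costs=[2.18, 3.4],
                      omegas=[0.5, 0.6, 0.7], gamma=0.95,
                      alphas=[0.1, 0.1], betas=[0.05, 0.05], L_i=2)
print(dv > 0)
```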
## 5 Performance Evaluation

In this section, some experiments are designed to analyze the proposed model of the Cloud computing marketplace and to evaluate its functionality. In our considered scenario, the winner of the competition is chosen by the _Committee_ instead of by user choice, without loss of generality. Firstly, the parameter settings and the simulation configuration are studied; then, the evaluation experiments of the approach are presented.

### 5.1 Experimental setup

The experimented Cloud computing marketplace is modeled by randomly generating the number of SaaS providers, requests, VMs of each provider and so forth; a number from 5 to 100 simulates the number of SaaS providers, with 1 to 100 requests entering per second. Although there is a growing number of SaaS providers, a bounded number is reported [1]. Initially, SaaS providers own a random number of VMs, from 100 to 1000; the types of the considered VMs are based on those defined by Amazon EC2. The parameters of the VMs, previously introduced as the VM model (VMM), are listed in Table 2. The processes in SaaS providers have two stages. The first stage finds the most proper combination of VMs to host each service of the requested application; in this stage, a VM with a low cost per hour which can host the service is chosen. In the second stage, the provider runs the pricing strategy to offer a price for providing the request. To implement the model, a variety of changes were made to some CloudSim classes, and some new ones were added to the simulator. Due to the different characteristics of virtual machines, various numbers of VM instances with different configurations are considered as Cloud instances, as shown in Table 2. The pricing-game scenario simulation is implemented in Matlab 2007 on an Intel Core 2 Duo CPU with 4GB RAM. The number of applications in each provider is from 10 to 100. The applications considered in this research are ERP (Enterprise Resource Planning) applications, provided by different SaaS providers. One of these ERP applications is CRM, which has three main instances: Essential, Basic, and Professional. Some instances of the potential costs of Microsoft CRM applications are presented in Table 3. The costs of providers' applications vary based on the commercial fees [46]. Providers are billed monthly per user for online provisioning; for on-premise provisioning, the licensing price is determined based on the instances.

Table 2: VMs defined by Amazon EC2 as an IaaS provider (in December 2015) [45].

| Size | vCPU | Memory (GB) | Storage (GB) | Cost per Hour |
| --- | --- | --- | --- | --- |
| t2.small | 1 | 2 | 1x 4 SSD | $0.026/Hour |
| t2.medium | 2 | 4 | 1x 4 SSD | $0.052/Hour |
| m3.medium | 1 | 3.75 | 1x 4 SSD | $0.070/Hour |
| c3.large | 2 | 3.75 | 2x 16 SSD | $0.105/Hour |
| m3.large | 2 | 7.5 | 1x 32 SSD | $0.140/Hour |
| R3.large | 2 | 15 | 1x 32 SSD | $0.175/Hour |

Table 3: Considered applications offered by SaaS providers with their costs [36].

| Application's License | On-premise | Online (per user per month) |
| --- | --- | --- |
| CRM Server 2013 | $4922 | $150 |
| CRM Professional User CAL | $983 | $65 |
| CRM Professional Device CAL | $787 | $65 |
| CRM Basic User CAL | $342 | $30 |
| CRM Basic Device CAL | $236 | $30 |
| CRM Essential CAL | $79 | $15 |

The initial prices of applications are determined by the SaaS providers who offer ERP, such as Actionstep, iCIMS, Plex Systems and Host Analytics Inc.; the assumed values of the simulation parameters are derived from [19, 23], regarding the commercial fees applied by IaaS providers [47] and by SaaS providers [46]. In our experiments, as \(\gamma\) corresponds to a 0.05 interest rate, it is set to 0.95 [14]. The probability distribution of \(\omega_{i}\) is initialized as a uniform distribution; \(\omega_{i}\) is drawn from a finite set based on the level of the service. Providers choose their strategies by choosing different values of \(\omega_{i}\) so as to optimize their profits. After determining the price of an application (Eq. 7), provider \(i\) computes its profit (Eq. 9); then, it computes the regret of not choosing the recommended strategy instead of its current one (Eq. 10). Eventually, the probabilities of the strategies are updated (Eq. 11) so as to experience less regret in the future. The game continues until some strategies, known as equilibrium points, are found; thus, the simulation runs until reaching these points.

### 5.2 Equilibrium Efficiency

The experiments are executed on a Cloud marketplace with five SaaS providers; the profits of the providers are examined while receiving 100 different requests. As the game proceeds, the profits of the providers increase as well, as depicted in Fig. 2. The figure compares the equilibrium efficiency of different regret minimization techniques applied to the pricing strategies of the providers in the marketplace. Without loss of generality, the numbers of SaaS providers and requests are assumed to be very low; to show the robustness of our results, experiments with much higher numbers of SaaS providers and requests are provided at the end of this section. In this research, the external regret coordinator recommends to a SaaS provider a strategy which makes the offered price as small as possible.
The coordinator in internal regret uses a mapping between each pair of strategies; it recommends a particular strategy instead of the provider's current strategy, where the internally recommended strategy is chosen based on a permutation of the strategies that provider \(i\) has. Finally, the coordinator in the swap regret minimization technique randomly recommends a strategy instead of the current strategy of the provider; i.e., different pairs of strategies will be recommended over time.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
**Different Pricing Techniques** & **External R.M.** & **Internal R.M.** & **Swap R.M.** & **Non-competition-based Pricing** & **RM with learning function in [30]** & **Pricing policies in [9]** & **Pricing in [19]** & **Random Pricing** \\
\hline
Profit in Iteration 25 & 73.98016 & 68.4607 & 67.52008 & 51.8364 & 38.97911 & 129.8065 & 68.1981 & 32.7065 \\
\hline
Profit in Iteration 50 & 135.31609 & 111.55481 & 86.11657 & 47.2032 & 69.43673 & 129.8065 & 89.736 & 36.8616 \\
\hline
Profit in Iteration 75 & 178.345291 & 153.86802 & 126.3146 & 48.6688 & 98.0287 & 129.8065 & 112.9271 & 42.3701 \\
\hline
Profit in Iteration 100 & 228.82859 & 204.5176 & 192.1307 & 52.47 & 143.6948 & 129.8065 & 149.9769 & 47.1365 \\
\hline
Average Profit & 154.117533 & 134.60028 & 118.02049 & 50.0446 & 87.5348 & 129.8065 & 105.2095 & 39.7687 \\
\hline
\end{tabular}
\end{table} TABLE IV: Changes of the profits of providers using different techniques

Fig. 2: Profits of SaaS providers using **(a)** external regret minimization, **(b)** internal regret minimization, **(c)** swap regret minimization.

Fig. 3: Profits of providers using non-competition-based pricing.

Regarding Fig. 2, the following results can be concluded. Firstly, the profits of providers under swap regret are less than the others, and the profits under external regret are the greatest in almost all iterations. Secondly, under swap regret the upward changes in the profits of providers are smaller than in the two other types of regret. The figures are summarized in Table 4. Additionally, Table 4 includes the comparison of the proposed pricing approach with two studied pricing policies [14, 26]. The profits of the providers at selected iterations, together with their averages, are listed; the table entries confirm the points mentioned above.
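The three recommendation rules compared above can likewise be summarized in code — again a hedged sketch: the exact pairwise mapping used by the internal coordinator is not specified in this excerpt, so a cyclic shift stands in for it, and `offered_price` remains a placeholder.

```python
import random

def external_recommend(provider, current, offered_price):
    # External regret: recommend the strategy with the smallest offered price.
    return min(provider.strategies, key=lambda s: offered_price(provider, s))

def internal_recommend(provider, current):
    # Internal regret: a fixed pairwise mapping between strategies; a cyclic
    # shift is assumed here as the permutation (the paper's choice may differ).
    i = provider.strategies.index(current)
    return provider.strategies[(i + 1) % len(provider.strategies)]

def swap_recommend(provider, current):
    # Swap regret: recommend a randomly chosen alternative to the current
    # strategy, so that different recommended pairs arise over time.
    alternatives = [s for s in provider.strategies if s != current]
    return random.choice(alternatives)
```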
Table 4 also depicts a comparison of the proposed technique for updating the probabilities of players' strategies (Eq. 11) with the one in [41]; it can be conveniently observed that our proposed technique yields better profits compared with the standard regret minimization updating techniques (external, internal, and swap). Basically, the technique of [41] updates the probabilities of strategies in each phase, while in Eq. 11 the update of the probabilities is performed based on the experienced regret. Comparing the results of our approach with the stochastic game proposed in [14] demonstrates the effectiveness of our approach; in [14], SaaS providers in the considered oligopoly have different pricing policies based on the resource types and the benefits that each type has. In our approach, this concept is generalized by \(\mathbf{a}_{i}\), the per-unit benefit of virtual resources in provider \(i\), which makes the approach more applicable. Besides, the price policies in [14] quickly converge to the equilibrium after 10 iterations, which causes the same values across the rows of Table 4. Since in [26] the pricing problem is modeled as a generalized Nash equilibrium problem (GNEP), providers always consider the users and their cost constraints when choosing a pricing strategy. As depicted in Table 4, our proposed pricing approach achieves better profits for the providers in comparison with [26]. In the non-competition-based pricing approach, pricing strategies of providers are independent of the other providers' strategies; in other words, each provider tries to improve its profit without considering other providers' prices. The competition-based pricing approach of this research is compared with a non-competition-based one (depicted in Table 4); from Fig. 2 and Fig. 3, it can be observed that the profit growth of providers in our approach outperforms the non-competition-based pricing policy. The profits of providers using a random pricing strategy are added to Table 4 as a benchmark to evaluate the proposed approach; as depicted in Table 4, our approach clearly dominates this benchmark. Additionally, the comparison of the profits of 30 providers in a Cloud marketplace is depicted in Fig. 4, where the average profit of providers per request is shown. The results of this evaluation show the superiority of the proposed approach over the two other approaches. It can be observed from Fig. 4 that the providers' profits have overall upward trends across iterations. In [41], a standard updating approach is applied to modify the probabilities of the strategies without considering their previous values; that is, the previous behaviors of providers are disregarded. The results of Fig. 4 are also presented in detail in Table 4. In the next experiment, the evolution of the regrets of providers while achieving correlated equilibrium is compared across the three regret minimization techniques. Fig. 5 depicts the results; as expected, the graph shows that the regret decreases as the game proceeds. Unlike the other regret techniques, the regrets of the external regret technique are initially high, but they gradually decrease. Besides, external regret minimization reaches the equilibrium before both other techniques.

Fig. 4: Comparison of the regret minimization based pricing strategy with non-competition-based pricing and the proposed approach in [42].

Finally, the probability convergence of the strategy set of providers is studied.
It is expected that the evolutions of the probabilities of strategies reach a fixed value when convergence occurs. Since SaaS providers have a great number of strategies, as discussed in Section 5.1, the evolution of only a few strategies is depicted in Fig. 6; here, 10 strategies are assumed. Each line in the figures shows the evolution of the probability of an individual strategy of the provider. It can be observed from Fig. 6 that although the convergence of probabilities is obtained for all regret techniques, the external regret technique converges faster than the two others. Besides, it can be concluded that although regret minimization improves providers' decision making, the way the probabilities are updated has a considerable impact as well.

Figure 6: The probabilities of 10 strategies of a provider in the game: **(a)** external regret minimization, **(b)** internal regret minimization, **(c)** swap regret minimization.

### 5.3 Validating the simulation model

Cost containment is one of the most prominent concerns of SaaS providers; thus, the return on investment of SaaS providers is discussed in this section. The variations of the investments have been investigated for four providers. Cloud providers' investments are reported to range from less than $10M to more than $1B [47]; in this experiment, part of this amount is considered as the investment in the applications, i.e., $12,000–$17,000. Although Fig. 7 shows a decline in the performance of the investment in the early rounds, the return on investment is observed in subsequent periods of the experiments. Regarding the inequality of Eq. 13, it is expected that the profit of provider \(i\) increases after at most \(10\times L_{i}\times n\) iterations. As \(n=4\) and \(E(o_{i})=0.1\), providers recover their investments before the 80\({}^{\text{th}}\) iteration. Although the users' preferences have not been considered here, users will surely obtain some benefit from the providers' optimized offers. The model presented in this article adopts the assumptions and methodology of the Cloud models of [19, 23, 46, 47]; consequently, it suffers from the same limitations. Adopting more realistic assumptions would increase the worth of our proposed pricing model. Indeed, if we considered the users' preferences with their limited knowledge of the environment, and the IaaS providers' preferences with their limited infrastructure capabilities, our pricing mechanism could be even more realistic.

#### 5.3.1 Scalability assessment

The proposed approach can scale to a realistic size of the Cloud marketplace within an acceptable time limit. On a MacBook Pro with a Core 2 Duo processor running at 2.4GHz with 4GB RAM, the experiments are performed with 15 SaaS providers, corresponding to the largest number of providers in the current markets, while the number of requests increases exponentially to \(10^{4}\). Although the size of the state space of the market grows exponentially, the complexity of the proposed approach does not increase. Besides, to validate the realism of the proposed approach on a Cloud marketplace, the experiments are run on a larger scale with a higher number of SaaS providers and requests; the experiments include 15 SaaS providers with requests from \(10^{2}\) to \(10^{4}\). The number of applications in each provider is assumed to range from 100 to 500. The other parameters of the experiment are the same as in the previous experiments. Both small- and large-scale experiments exhibit the same trends, i.e.,
the least profits belong to swap regret and the best to external regret. The results are summarized in Table 5. No figure is reported in this section, as depicting 15 vectors belonging to SaaS providers would be confusing. The approximate number of iterations required for convergence is computed. As the non-competition-based pricing approach neglects other players' behaviors and does not optimize pricing strategies, it does not converge to any point. The other methods converge to the equilibrium point after processing less than 10% of the requests.

Figure 7: Return on investment of SaaS providers in the form of profit evolution.

## 6 Conclusion

The Cloud computing environment has recently been considered as a marketplace for providing different services. There are numerous SaaS providers in this market, which might compete or cooperate with each other. In this research, the competition of these entities is studied with the aim of attracting as many users as possible. Providers compete to set the prices of their Cloud applications simultaneously. The competition is modeled as an incomplete-information game, equipped with the proposed regret minimization techniques to overcome the information shortage. The players of the game are the SaaS providers; the strategies are their pricing policies, and the profits are the estimated utility that providers may obtain by providing the requests. The efficiencies of the external, internal, and swap regret minimization techniques are compared in the game; each of these techniques follows an individual process to recommend a strategy to the players, who then compute their regret for not having chosen the recommended strategy. In each iteration, the players update the distribution of their strategies, which enables them to learn from their past behaviors. In the experiments, the performance of the different regret minimization techniques has been evaluated, and the correlated equilibrium of the game is investigated as well. The approach is compared with several other competition-based pricing approaches, and the results show the better efficiency of our proposed approach. Finally, since cost saving is one reason that Cloud computing technology is adopted as an alternative to existing IT solutions, the consequences of cost challenges in the proposed approach are investigated as well. The experiments indicate that the proposed approach leads to promising improvements in SaaS providers' return on investment and in the ranking of decisions on their pricing policies. There are some future research directions that can be further studied; for instance, considering the variety of Cloud services in different time slots and the cooperation of providers may yield more efficient pricing strategies. There are also some limitations to our research; for example, due to the lack of an open dataset related to our work, the current method cannot be verified on real data.
2309.08643
NISF: Neural Implicit Segmentation Functions
Segmentation of anatomical shapes from medical images has taken an important role in the automation of clinical measurements. While typical deep-learning segmentation approaches are performed on discrete voxels, the underlying objects being analysed exist in a real-valued continuous space. Approaches that rely on convolutional neural networks (CNNs) are limited to grid-like inputs and not easily applicable to sparse or partial measurements. We propose a novel family of image segmentation models that tackle many of CNNs' shortcomings: Neural Implicit Segmentation Functions (NISF). Our framework takes inspiration from the field of neural implicit functions where a network learns a mapping from a real-valued coordinate-space to a shape representation. NISFs have the ability to segment anatomical shapes in high-dimensional continuous spaces. Training is not limited to voxelized grids, and covers applications with sparse and partial data. Interpolation between observations is learnt naturally in the training procedure and requires no post-processing. Furthermore, NISFs allow the leveraging of learnt shape priors to make predictions for regions outside of the original image plane. We go on to show the framework achieves Dice scores of 0.87 $\pm$ 0.045 on a (3D+t) short-axis cardiac segmentation task using the UK Biobank dataset. We also provide a qualitative analysis of our framework's ability to perform segmentation and image interpolation on unseen regions of an image volume at arbitrary resolutions.
Nil Stolt-Ansó, Julian McGinnis, Jiazhen Pan, Kerstin Hammernik, Daniel Rueckert
2023-09-15T00:11:36Z
http://arxiv.org/abs/2309.08643v1
# NISF: Neural Implicit Segmentation Functions

###### Abstract

Segmentation of anatomical shapes from medical images has taken an important role in the automation of clinical measurements. While typical deep-learning segmentation approaches are performed on discrete voxels, the underlying objects being analysed exist in a real-valued continuous space. Approaches that rely on convolutional neural networks (CNNs) are limited to grid-like inputs and not easily applicable to sparse or partial measurements. We propose a novel family of image segmentation models that tackle many of CNNs' shortcomings: Neural Implicit Segmentation Functions (NISF). Our framework takes inspiration from the field of neural implicit functions, where a network learns a mapping from a real-valued coordinate space to a shape representation. NISFs have the ability to segment anatomical shapes in high-dimensional continuous spaces. Training is not limited to voxelized grids, and covers applications with sparse and partial data. Interpolation between observations is learnt naturally in the training procedure and requires no post-processing. Furthermore, NISFs allow the leveraging of learnt shape priors to make predictions for regions outside of the original image plane. We go on to show the framework achieves Dice scores of \(0.87\pm 0.045\) on a (3D+t) short-axis cardiac segmentation task using the UK Biobank dataset. We also provide a qualitative analysis of our framework's ability to perform segmentation and image interpolation on unseen regions of an image volume at arbitrary resolutions.

## 1 Introduction

Image segmentation is a core task in domains where the area, volume or surface of an object is of interest. The principle of segmentation involves assigning a class to every presented point in the input space. Typically, the input is presented in the form of images: aligned pixel (or voxel) grids, with the intention to obtain a class label for each. In this context, the application of deep learning to the medical imaging domain has shown great promise in recent years. With the advent of the U-Net [20], Convolutional Neural Networks (CNNs) have been successfully applied to a multitude of imaging domains and achieved (or even surpassed) human performance [11]. The convolution operation makes CNNs an obvious choice for dealing with inputs in the form of 2D pixel- or 3D voxel-grids. Despite their efficacy, CNNs suffer from a range of limitations that lead to incompatibilities for some imaging domains. CNNs are restricted to data in the form of grids, and cannot easily handle sparse or partial inputs. Moreover, due to the CNN's segmentation output also being confined to a grid, obtaining smooth object surfaces requires post-processing heuristics. Predicting high-resolution segmentations also has implications for the memory and compute requirements in high-dimensional domains. Finally, the learning of long-distance spatial correlations requires deep stacks of layers, which may prove too taxing in low-resource domains. We introduce a novel approach to image segmentation that circumvents these shortcomings: Neural Implicit Segmentation Functions (NISF). Inspired by ongoing research in the field of neural implicit functions (NIF), a neural network is taught to learn a mapping from a coordinate space to any arbitrary real-valued space, such as segmentation, distance function, or image intensity.
While CNNs employ the image's pixel or voxel intensities as an input, NISF's input is a real-valued vector \(c\in\mathbb{R}^{N}\) for a single N-dimensional coordinate, alongside a subject-specific latent representation vector \(h\in\mathbb{R}^{d}\). Given \(c\) and \(h\), the network is taught to predict image intensity and segmentation value pairs. The space \(\mathcal{H}\) over all possible latent vectors \(h\) serves as a learnable prior over all possible subject representations. In this paper, we describe an auto-decoder process by which a previously unseen subject's pairs of coordinate-image intensity values \((c,i)\) may be used to approximate that subject's latent representation \(h\). Given a latent code, the intensity and segmentation predictions at any arbitrary coordinates in the volume may be sampled. We evaluate the proposed framework's segmentation scores and investigate its generalization properties on the UK Biobank cardiac magnetic resonance imaging (MRI) short-axis dataset. We make the source code publicly available.

Figure 1: Short-axis volumes have low resolution along the ventricle's long axis. Given a short-axis image volume, a NISF can produce arbitrary-resolution segmentations along the long axis.

## 2 Related Work

**Cardiac MRI.** Cardiac magnetic resonance imaging (MRI) is often the preferred imaging modality for the assessment of function and structure of the cardiovascular system. This is in equal parts due to its non-invasive nature, and due to its high spatial and temporal resolution capabilities. The short-axis (SAX) view is a (3D+t)-dimensional volume made up of stacked cross-sectional (2D+t) acquisitions which lie orthogonal to the ventricle's long axis (see Figure 1). Spatial resolution is highest in-plane (typically \(<\)3mm\({}^{2}\)), with a much lower inter-slice resolution (\(\sim\)10mm), and a temporal resolution of \(\leq\)45ms [15]. On the other hand, long-axis (LAX) views are (2D+t) acquisitions orthogonal to the SAX plane and provide high resolution along the ventricle's long axis.

**Image segmentation.** The capabilities of the CNN have caused it to become the predominant choice for image segmentation tasks [8, 20]. However, a pitfall of these models is their poor generalization to certain input transformations. One such transformation is scaling. This drawback limits the use of CNNs on domains with large variations in pixel spacings. Past works have attempted to mitigate this issue by accounting for dataset characteristics [11], building resilience through augmentations [29], or using multi-scale feature extractors [5]. Additionally, segmentation performed by fully convolutional models is restricted to predictions on pixel (or voxel) grids. This requires post-processing heuristics to extract smooth object surfaces. Works such as [19, 12] try to mitigate this issue through point-wise decoders that operate on interpolated convolutional features. Alternatives to binarized segmentation have recently been proposed, such as soft segmentations [7] and distance field predictions [6, 24]. Smoothness can also be improved by predicting at higher resolutions. This is however limited by the exponential increase of memory that comes with high-dimensional data. Partitioning of the input can make memory requirements manageable [3, 9], but doing so precludes the learning of long-distance spatial correlations.

**Neural implicit functions.** In recent years, NIFs have achieved notable milestones in the field of shape representations [17, 18].
NIFs have multiple advantages over classical voxelized approaches that make them remarkably interesting for applications in the medical imaging domain [10, 28]. First, NIFs can sample shapes at any point in space at arbitrary resolutions. This makes them particularly fit for working with sparse, partial, or non-uniform data. Implicit functions thus remove the need for traditional interpolation, as high-resolution shapes are learnt implicitly by the network [1]. This is especially relevant to the medical imaging community, where scans may have complex sampling strategies, have missing or unusable regions, or have highly anisotropic voxel sizes. These properties may further vary across scanners and acquisition protocols, making generalization across datasets a challenge. Additionally, the ability to process each point independently allows implicit functions to have flexible optimization strategies, making entire volumes optimizable holistically.

**Image priors.** The typical application of a NIF involves the training of a multi-layer perceptron (MLP) on a _single_ scene. Although generalization still occurs in generating novel views of the target scene, the introduction of prior knowledge and conditioning of the MLP is subject to ongoing research [1, 14, 16, 18, 22, 23]. Approaches such as [1, 18] opt for auto-decoder architectures where the network is modulated by a latent code at the input level. At inference time, the latent code of the target scene is optimized by backpropagation. Works such as [16] choose to instead modulate the network at its activation functions. Other frameworks obtain the latent code in a single-shot fashion through the use of an encoder network [14, 16, 23, 22]. This latent code is then used by a hyper-network [14, 16, 23] or a meta-learning approach [22] to generate the weights of a decoder network.

## 3 Methods

**Shared Prior.** In order to generalize to unseen subjects, we attempt to build a shared prior \(\mathcal{H}\) over all subjects. This is done by conditioning the classifier with a latent vector \(h\in\mathbb{R}^{d}\) at the input level. Each individual subject \(j\) in a population \(X\) can be thought of as having a distinct \(h_{j}\) that serves as a latent code of their unique features. Following [1, 18], we initialize a matrix \(H\in\mathbb{R}^{X\times d}\), where each row is a latent vector \(h_{j}\) corresponding to a single subject \(j\) in the dataset. The latent vector \(h_{j}\) of a subject is fed to the MLP alongside a point's coordinate and can be optimized through back-propagation. This allows \(\mathcal{H}\) to be optimized to capture useful inter-patient features.

**Model Architecture.** The architecture is composed of a segmentation function \(f_{\theta}\) and a reconstruction function \(f_{\phi}\). At each continuous-valued coordinate \(c\in\mathbb{R}^{N}\), function \(f_{\theta}\) models the shape's segmentation probability \(s_{c}\) for all \(M\) classes, and function \(f_{\phi}\) models the image intensity \(i_{c}\).
The functions are conditioned by a latent vector \(h\) at the input level as follows: \[f_{\theta}:\big{(}c\in\mathbb{R}^{N}\big{)}\times\big{(}h\in\mathbb{R}^{d}\big{)}\to s_{c}\in[0,1]^{M},\quad\sum_{i=1}^{M}s_{c}^{i}=1 \tag{1}\] \[f_{\phi}:\big{(}c\in\mathbb{R}^{N}\big{)}\times\big{(}h\in\mathbb{R}^{d}\big{)}\to i_{c}\in[0,1] \tag{2}\] In order to improve local agreement between the segmentation and reconstruction functions, we jointly model \(f_{\theta}\) and \(f_{\phi}\) by a single multi-layer perceptron (MLP) with two output heads (Figure 2). We employ Gabor wavelet activation functions [21], which are known to be more expressive than Fourier features combined with ReLU [26] or sinusoidal activation functions [23].

**Prior Training.** Following the setup described in [1], we randomly initialize the matrix \(H\) consisting of a trainable latent vector \(h_{j}\sim\mathcal{N}\left(0,10^{-2}\right)\) for each subject in the training set. On each training sample, the parameters of the MLP are jointly optimized with the subject's \(h_{j}\). We select a training batch by uniformly sampling a time frame \(t\) and using all points within that 3D volume. Each voxel in the sample is processed in parallel along the batch dimension. Coordinates are normalized to the range \([0,1]\) based on the voxel's relative position. The difference in image reconstruction from the ground-truth voxel intensities is supervised using binary cross-entropy (BCE). This is motivated by our data's voxel intensity distribution being heavily skewed towards the extremes. The segmentation loss is a sum of a BCE loss component and a Dice loss component. We found that adding a weighting factor of \(\alpha=10\) to the image reconstruction loss component yielded inference-time improvements on both image reconstruction and segmentation metrics. Additionally, L2 regularization with weight \(\lambda\) is applied to the latent vector \(h_{j}\) and the MLP's parameters. The full loss is summarized as follows: \[\begin{split}\mathcal{L}_{train}(\theta,\phi,h_{j})&=\mathcal{L}_{BCE}\Big{(}f_{\theta}(c,h_{j}),s_{c}\Big{)}+\mathcal{L}_{Dice}\Big{(}f_{\theta}(c,h_{j}),s_{c}\Big{)}\\&\quad+\alpha\,\mathcal{L}_{BCE}\Big{(}f_{\phi}(c,h_{j}),i_{c}\Big{)}+\lambda\Big{(}\|h_{j}\|_{2}^{2}+\|\theta\|_{2}^{2}+\|\phi\|_{2}^{2}\Big{)}.\end{split} \tag{3}\]

**Inference.** At inference time, the MLP parameters \(\theta\) and \(\phi\) are frozen, and the latent code \(h\) of a previously unseen subject is optimized by backpropagation using the image reconstruction term of the loss alone. Due to the inference loss being composed exclusively of the image reconstruction term, \(h\) is expected to eventually overfit to \(f_{\phi}\). Special care should be taken to find a step-number hyperparameter that stops the optimization of \(h\) at the optimal segmentation performance. In our experiments, we chose this parameter based on the Dice score of the best validation run.

## 4 Experiments and Results

**Data overview.** The dataset consists of a random subset of 1150 subjects from the UK Biobank's short-axis cardiac MRI acquisitions [25]. An overview of the UK Biobank cohort's baseline statistics can be found on their showcase website [27]. The dataset split included 1000 subjects for the prior training, 50 for validation, and 100 for testing. The (3D+t) short-axis volumes are anisotropic in nature and have a wide range of shapes and pixel spacings along the spatial dimensions. No form of preprocessing was performed on the images except for an intensity normalization to the range \([0,1]\), as performed in similar literature [2]. The high dimensionality of (3D+t) volumes makes manual annotation prohibitively time-consuming. Due to this, we make use of synthetic segmentations as ground-truth shapes, created using a trained state-of-the-art segmentation CNN provided by [2].
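Before turning to the results, the conditioned two-headed MLP and the auto-decoder inference loop of Section 3 can be sketched in PyTorch. This is a minimal, assumption-laden illustration: plain ReLU layers stand in for the Gabor-wavelet activations of [21], and all shapes and hyperparameters are invented; it is not the released implementation.

```python
import torch
import torch.nn as nn

class NISF(nn.Module):
    """Coordinate MLP conditioned on a subject latent code h.

    Maps (c, h) -> (segmentation probabilities over M classes, image intensity).
    """
    def __init__(self, coord_dim=4, latent_dim=128, hidden=128, n_classes=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.seg_head = nn.Linear(hidden, n_classes)   # f_theta
        self.img_head = nn.Linear(hidden, 1)           # f_phi

    def forward(self, c, h):
        # c: (N, coord_dim) points; h: (1, latent_dim), broadcast to all points.
        x = self.trunk(torch.cat([c, h.expand(c.shape[0], -1)], dim=-1))
        return self.seg_head(x).softmax(-1), self.img_head(x).sigmoid().squeeze(-1)

def infer_latent(model, coords, intensities, latent_dim=128, steps=672, lr=1e-4):
    """Auto-decoder inference: freeze the MLP and optimize a fresh latent code
    on the image reconstruction loss only; `steps` acts as early stopping."""
    h = (0.1 * torch.randn(1, latent_dim)).requires_grad_()  # h ~ N(0, 1e-2)
    opt = torch.optim.Adam([h], lr=lr)
    bce = nn.BCELoss()
    for p in model.parameters():
        p.requires_grad_(False)
    for _ in range(steps):
        opt.zero_grad()
        _, pred_i = model(coords, h)
        loss = bce(pred_i, intensities)   # reconstruction term only
        loss.backward()
        opt.step()
    return h.detach()
```

At training time, the same forward pass would additionally feed the segmentation head into the BCE and Dice terms of \(\mathcal{L}_{train}\), with \(h_{j}\) drawn from the trainable matrix \(H\).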
The object of interest in each scan is composed of three distinct, mutually exclusive sub-regions: the left ventricle (LV) blood pool, the LV myocardium, and the right ventricle (RV) blood pool (see Figure 1).

**Implementation details.** The architecture consists of 8 residual layers, each with 128 hidden units. The subject latent codes had 128 learnable parameters. The model was implemented using PyTorch and trained on an NVIDIA A40 GPU for 1000 epochs, lasting approximately 9 days. Inference optimization lasted 3-7 minutes per subject depending on volume dimensions. Losses are minimized using the Adam optimizer [13] with a learning rate of \(10^{-4}\) during prior training and \(10^{-4}\) during inference.

**Results.** As the latent code is optimized during inference, segmentation metrics follow an overfitting pattern (see Figure 3). This is an expected consequence of the inference process optimizing solely on the image reconstruction loss. Early stopping should be employed to obtain the best-performing latent code state. The benefits of training a prior over the population are investigated by tracking inference-time Dice scores obtained from spaced-out validation runs. Training of the prior is shown to significantly improve performance of segmentation and image reconstruction at inference time, as seen in Figure 4.

\begin{table} \begin{tabular}{|l||l|l|l|l|} \hline Class & Class average & LV blood pool & LV myocardium & RV blood pool \\ \hline Dice score & \(0.87\pm 0.045\) & \(0.90\pm 0.037\) & \(0.82\pm 0.075\) & \(0.88\pm 0.063\) \\ \hline \end{tabular} \end{table} Table 1: Class Dice scores for the 100-subject test dataset.

Validation results showed the average optimal number of latent code optimization steps at inference to be 672. Thus, the test set per-class Dice scores (Table 1) were obtained after 672 optimization steps on \(h\) for each test subject. Further investigation is performed on the generalization capabilities of the subject prior by producing segmentations for held-out sections of the image volume. First, the subject's latent code is optimized using the inference process. Then, the model's output is sampled at the held-out region's coordinates.

Figure 4: Inference-time segmentation and image reconstruction at various stages of the prior's training process. a) The prior has not been trained: inference can roughly reconstruct the image outline, but segmentation fails. b) Early on, reconstructed images are blurry; segmentation is poor, but in the correct region. c) Eventually, images are reconstructed in great detail and segmentations are accurate. d) Ground truth.

Figure 3: Segmentation Dice trend during a subject's inference. Early stopping is important to prevent overfitting on the reconstruction task. a) A non-optimized latent code creates blurry images with 'generic' morphology. b) As the latent code is optimized, subject morphology begins to be accurately reconstructed; segmentation performance reaches an optimum. c) Reconstruction continues to improve, but segmentation deteriorates.

Right ventricle segmentation in basal slices is notoriously challenging to manually annotate due to the delineation of the atrial and ventricular cavities combined with the sparsity of the resolution along the long axis [4]. Nonetheless, as seen in Figure 5, our approach is capable of capturing smooth and plausible morphology of these regions despite not having access to the image information. We go on to show NISF's ability to generate high-resolution segmentations for out-of-plane views.
We optimize on a short-axis volume at inference and subsequently sample coordinates corresponding to long-axis views. Despite _never_ presenting a ground-truth long-axis image, the model reconstructs an interpolated view and provides an accurate segmentation along its plane (Figure 6).

Figure 5: Interpolation predictions for a held-out basal slice. Top row: predicted segmentation overlaid on the predicted image. Bottom row: ground-truth segmentation overlaid on the original image. The middle column is never shown to the network during inference. Black slices do not exist in the original image volume. The model appears to understand how the ventricles come into view as we descend along the slice dimension.

Figure 6: Segmentation of a held-out long-axis 4-chamber plane from SAX image data. a) Ground-truth long-axis 4-chamber view (not presented to the network). b) Nearest-neighbour interpolation of the 4-chamber view from the SAX volume. c) Predicted 4-chamber image plane. d) Predicted 4-chamber view segmentation.

## 5 Conclusion

We present a novel family of image segmentation models that can model shapes at arbitrary resolutions. The approach is able to leverage priors to make predictions for regions not present in the original image data. Working directly on the coordinate space has the benefit of accepting high-dimensional sparse data, as well as not being affected by variations in image shapes and resolutions. We implement a simple version of this framework and evaluate it on a short-axis cardiac MRI segmentation task using the UK Biobank. Reported Dice scores on 100 unseen subjects average \(0.87\pm 0.045\). We also perform a qualitative analysis of the framework's ability to predict held-out sections of image volumes.

## 6 Acknowledgements

This work is funded by the Munich Center for Machine Learning and the European Research Council (ERC) project Deep4MI (884622). This research has been conducted using the UK Biobank Resource under Application Number 87802.
2308.00136
Magnetic dipole operator from chiral effective field theory for many-body expansion methods
Many-body approaches for atomic nuclei generally rely on a basis expansion of the nuclear states, interactions, and current operators. In this work, we derive the representation of the magnetic dipole operator in plane-wave and harmonic-oscillator basis states, as needed for Faddeev calculations of few-body systems or many-body calculations within, e.g., the no-core shell model, the in-medium renormalization group, coupled-cluster theory, or the nuclear shell model. We focus in particular on the next-to-leading-order two-body contributions derived from chiral effective field theory. We provide detailed benchmarks and also comparisons with quantum Monte Carlo results for three-body systems. The derived operator matrix elements represent the basic input for studying magnetic properties of atomic nuclei based on chiral effective field theory.
R. Seutin, O. J. Hernandez, T. Miyagi, S. Bacca, K. Hebeler, S. König, A. Schwenk
2023-07-31T20:05:24Z
http://arxiv.org/abs/2308.00136v2
# Magnetic dipole operator from chiral effective field theory for many-body expansion methods

###### Abstract

Many-body approaches for atomic nuclei generally rely on a basis expansion of the nuclear states, interactions, and current operators. In this work, we derive the representation of the magnetic dipole operator in plane-wave and harmonic-oscillator basis states, as needed for Faddeev calculations of few-body systems or many-body calculations within, e.g., the no-core shell model, the in-medium renormalization group, coupled-cluster theory, or the nuclear shell model. We focus in particular on the next-to-leading-order two-body contributions derived from chiral effective field theory. We provide detailed benchmarks and also comparisons with quantum Monte Carlo results for three-body systems. The derived operator matrix elements represent the basic input for studying magnetic properties of atomic nuclei based on chiral effective field theory.

## I Introduction

Calculating the electromagnetic structure of nuclei is a powerful tool to explore and test nuclear theory. The weak electromagnetic coupling compared to the strong interaction allows for a perturbative treatment of these processes, so that the nuclear structure content can be separated with great control. The electromagnetic interaction between the nucleus and external photons can in general be described by a current-current interaction. While quantum electrodynamics (QED) describes the current of the external probe, nuclear theory deals with the nuclear current. To first approximation, the interaction between the photon and an atomic nucleus can be expressed in terms of the sum of photon interactions with all the individual nucleons. This approximation is equivalent to retaining only one-body contributions in the nuclear current, while all possible higher-body operators are neglected. Even though these leading terms provide the dominant contributions, higher-order contributions, especially from two-body operators, are crucial for precise predictions of electromagnetic observables. The modern approach to quantitatively understanding low-energy nuclear physics in terms of _ab initio_ calculations is based on effective field theory (EFT), most notably chiral EFT. It provides a systematic expansion of the strong interaction between nucleons as well as electroweak interactions with a direct connection to the fundamental theory of quantum chromodynamics (QCD) and its symmetries [1; 2; 3]. A power-counting scheme orders the expansion terms according to decreasing importance in powers of \((Q/\Lambda_{\mathrm{b}})^{\nu}\), with \(Q\) the typical momentum scale governing processes in the nucleus, which is of the order of the pion mass \(m_{\pi}\), and \(\Lambda_{\mathrm{b}}\) the breakdown scale \(\Lambda_{\mathrm{b}}=500-600\,\mathrm{MeV}\). Leading order (LO) terms, i.e., \(\nu=0\) for nuclear forces and \(\nu=-2\) for electromagnetic currents, include the dominant one-body contributions mentioned earlier, while next-to-leading order (NLO) and next-to-next-to-leading order (N\({}^{2}\)LO) terms, etc., add contributions of decreasing importance. The systematic expansion provides a way to improve calculations and to determine uncertainties arising from neglected higher orders [4; 5]. Furthermore, EFT provides a consistent derivation of nuclear forces and currents. To date, there have been several efforts to derive electromagnetic nuclear currents within the framework of chiral EFT. In Refs.
[6; 7; 8] time-ordered perturbation theory was used to obtain current operator expressions up to next-to-next-to-next-to-leading order (N\({}^{3}\)LO) in the chiral expansion, while Refs. [9; 10; 11] used the method of unitary transformation. Both methods agree on the current operators at the order we employ in this work. However, at higher orders disagreements occur; for a detailed discussion see Ref. [11]. Calculating the electromagnetic structure of nuclei involves evaluating the electromagnetic nuclear current operator \(J_{\mu}=(\rho,\mathbf{j})\), with charge operator \(\rho\) and three-vector current operator \(\mathbf{j}\), between initial and final states of the nuclear system \(\ket{i}\) and \(\ket{f}\). The Fourier transform of the current operator contains information about the charge and magnetization densities inside the nucleus. Because the nuclear states have a definite angular momentum, it is useful to decompose the nuclear current into its multipole components. For example, the current operator \(\mathbf{j}\) can be expressed in terms of magnetic multipole operators, the long-wavelength limits of which correspond to the magnetic moment operators, which are dominated by the magnetic dipole contribution. With the magnetic dipole operator, one can calculate ground-state properties like the nuclear magnetic moment, defined by \[\mu\equiv\bra{\xi J\,M{=}J}\,\mu_{z}\,\ket{\xi J\,M{=}J}\,, \tag{1}\] where \(J\) and \(M\) are the nuclear spin and its projection, respectively, and \(\xi\) represents all other quantum numbers relevant to describe the state. In addition, one can calculate magnetic
2309.03716
Sharp semiclassical spectral asymptotics for local magnetic Schrödinger operators on $\mathbb{R}^d$ without full regularity
We consider operators acting in $L^2(\mathbb{R}^d)$ with $d\geq3$ that locally behave as a magnetic Schr\"odinger operator. For the magnetic Schr\"odinger operators we suppose the magnetic potentials are smooth and the electric potential is five times differentiable and the fifth derivatives are H\"older continuous. Under these assumptions, we establish sharp spectral asymptotics for localised counting functions and Riesz means.
Søren Mikkelsen
2023-09-07T13:48:23Z
http://arxiv.org/abs/2309.03716v2
Sharp semiclassical spectral asymptotics for local magnetic Schrödinger operators on \(\mathbb{R}^{d}\) without full regularity

###### Abstract

We consider operators acting in \(L^{2}(\mathbb{R}^{d})\) with \(d\geq 3\) that locally behave as a magnetic Schrödinger operator. For the magnetic Schrödinger operators we suppose the magnetic potentials are smooth and the electric potential is two times differentiable and the second derivatives are Hölder continuous. Under these assumptions we establish sharp spectral asymptotics for localised counting functions and Riesz means.

## 1 Introduction

We will here consider sharp semiclassical spectral asymptotics for operators \(\mathcal{H}_{\hbar,\mu}\) that are locally given by a magnetic Schrödinger operator acting in \(L^{2}(\mathbb{R}^{d})\) for \(d\geq 3\). What we precisely mean by "locally given by" will be clarified below. That is, we consider operators that locally are of the form \[H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V, \tag{1.1}\] where \(\hbar\in(0,1]\) is the semiclassical parameter, \(\mu\geq 0\) is the intensity of the magnetic field, \(a\) is the magnetic vector potential and \(V\) is the electric potential. Our exact assumptions on the potentials and the intensity \(\mu\) will be stated below. We will here be interested in the asymptotics, as \(\hbar\) goes to zero, of the traces \[\mathrm{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})], \tag{1.2}\] for \(\gamma\in[0,1]\), where \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{d})\). The function \(g_{\gamma}\) is given by \[g_{\gamma}(t)=\begin{cases}\mathbf{1}_{(-\infty,0]}(t)&\gamma=0\\ (t)_{-}^{\gamma}&\gamma\in(0,1],\end{cases} \tag{1.3}\] where we have used the notation \((x)_{-}=\max(0,-x)\) and \(\mathbf{1}_{(-\infty,0]}\) is the characteristic function of the set \((-\infty,0]\). To ensure that the leading order term in the asymptotics is independent of the magnetic field we will assume that \(\hbar\mu\leq C\), where \(C\) is some positive constant. Understanding these localised traces is a crucial step in understanding the global quantity \[\mathrm{Tr}[g_{\gamma}(H_{\hbar,\mu})]. \tag{1.4}\] Especially the case \(\gamma=1\) has physical motivation, both with and without a magnetic vector potential; for details see e.g. [9, 18, 19, 20, 21, 29]. The case \(\gamma=0\) is also of interest. Recently, in [10], sharp estimates for the trace norm of commutators between spectral projections and position and momentum operators were obtained using asymptotics for (1.2) with \(\gamma=0\). This type of bound first appeared as an assumption in [3], where the mean-field evolution of fermionic systems was studied. The assumption has also appeared in [1, 2, 4, 6, 17, 23]. The asymptotics used in [10] were obtained in [27]. Before we state our main result we will specify our assumptions on the operator \(\mathcal{H}_{\hbar,\mu}\) and what we mean by "locally given by a magnetic Schrödinger operator". That we only locally assume \(\mathcal{H}_{\hbar,\mu}\) acts as a magnetic Schrödinger operator is due to the presence of the cut-off function. This type of assumption first appeared in [27], to the knowledge of the author. Our exact assumptions are given below.

**Assumption 1.1**.: Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\), where \(\hbar>0\) and \(\mu\geq 0\). Moreover, let \(\gamma\in[0,1]\). Suppose that

* \(\mathcal{H}_{\hbar,\mu}\) is self-adjoint and lower semibounded.
* Suppose there exists an open set \(\Omega\subset\mathbb{R}^{d}\) and real-valued functions \(V\in C^{2,\kappa}_{0}(\mathbb{R}^{d})\) with \(\kappa>\gamma\), and \(a_{j}\in C^{\infty}_{0}(\mathbb{R}^{d})\) for \(j\in\{1,\dots,d\}\), such that \(C^{\infty}_{0}(\Omega)\subset\mathcal{D}(\mathcal{H}_{\hbar,\mu})\) and \[\mathcal{H}_{\hbar,\mu}\varphi=H_{\hbar,\mu}\varphi\quad\text{for all }\varphi\in C^{\infty}_{0}(\Omega),\] where \(H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V\).

In the assumption we have used the notation \(C^{2,\kappa}_{0}(\mathbb{R}^{d})\). This is the space of compactly supported functions that are two times differentiable and whose second derivatives are uniformly Hölder continuous with parameter \(\kappa\). That is, for \(f\in C^{2,\kappa}_{0}(\mathbb{R}^{d})\) there exists a constant \(C>0\) such that for all \(x,y\in\mathbb{R}^{d}\) it holds that \[|\partial_{x}^{\alpha}f(x)-\partial_{x}^{\alpha}f(y)|\leq C|x-y|^{\kappa}\quad\text{for all }\alpha\in\mathbb{N}^{d}_{0}\text{ with }|\alpha|=2. \tag{1.5}\] Note that here and in the following we use the convention that \(\mathbb{N}\) does not contain \(0\), and we will use the notation \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). Moreover, for the cases where \(\kappa>1\) we use the convention that \[C^{2,\kappa}_{0}(\mathbb{R}^{d})\coloneqq C^{2+\lfloor\kappa\rfloor,\kappa-\lfloor\kappa\rfloor}_{0}(\mathbb{R}^{d}), \tag{1.6}\] where \(C^{k,\kappa}_{0}(\mathbb{R}^{d})\) is the space of compactly supported functions that are \(k\) times differentiable and whose \(k\)-th derivatives are uniformly Hölder continuous with parameter \(\kappa\). The assumptions we make on the operator \(\mathcal{H}_{\hbar,\mu}\) are very similar to the assumptions made in [27]. The difference is that we do not require \(V\) to be smooth, but instead assume it has two derivatives and that the second derivatives are uniformly Hölder continuous. With this assumption in place we can state our main result.

**Theorem 1.2**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) and let \(\gamma\in[0,1]\). If \(\gamma=0\) we assume \(d\geq 3\), and if \(\gamma\in(0,1]\) we assume \(d\geq 4\). Suppose that \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the set \(\Omega\) and the functions \(V\) and \(a_{j}\) for \(j\in\{1,\dots,d\}\). Then for any \(\varphi\in C^{\infty}_{0}(\Omega)\) it holds that_ \[\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp\Big{|}\leq C\langle\mu\rangle^{1+\gamma}\hbar^{1+\gamma-d}\] _for all \(\hbar\in(0,\hbar_{0}]\) and \(\mu\leq C\hbar^{-1}\), where \(\hbar_{0}\) is sufficiently small and \(\langle\mu\rangle=(1+\mu^{2})^{\frac{1}{2}}\). The constant \(C\) depends on the dimension \(d\), on \(\gamma\), on \(\|\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\|\partial_{x}^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\), on \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}^{d}_{0}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\dots,d\}\), and on \(\|\partial_{x}^{\alpha}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}^{d}_{0}\) such that \(|\alpha|\leq 2\)._

_Remark 1.3_.: We remark that the error term is independent of \(\left\|a_{j}\right\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(j\in\{1,\ldots,d\}\). This is also the case for the results in [27].
As remarked in [27], this is not surprising, as the magnitude of \(a_{j}\) can easily be changed by a gauge transform. The assumptions on the dimension are needed to ensure convergence of certain integrals. As mentioned above, asymptotics in the case where \(V\in C_{0}^{\infty}(\mathbb{R}^{d})\) were obtained in [27]. In [28] sharp asymptotics were also obtained; there the potential was allowed to be singular at the origin but otherwise smooth. In [14] non-smooth potentials are also considered in the presence of a magnetic field. These results are also given in [15, Vol IV]. In some cases the results presented in [14] and [15, Vol IV] require less smoothness than here. However, to the knowledge of the author, the results presented here do not appear in either [14] or [15, Vol IV].

In Section 2 we specify the notation we use and describe the operators we will be working with. Moreover, we recall some definitions and results that we will need later. At the end of the section we describe how we approximate the non-smooth potential by a smooth potential. In Section 3 we recall some results and definitions on rough \(\hbar\)-pseudo-differential operators. We also prove some specific results for rough Schrödinger operators. In Section 4 we establish a number of estimates for operators satisfying Assumption 1.1. The ideas and techniques used here are inspired by the ideas and techniques used in [27]. Some of the results will also be taken directly from [27]. These auxiliary results are needed to prove a version of the main theorem under an additional non-critical condition. This version is proven in Section 5. Finally, in Section 6 we give the proof of the main theorem in two steps: first in the case where \(\mu\leq\mu_{0}<1\), and then in the general case.

### Acknowledgement

The author is grateful to the Leverhulme Trust for their support via Research Project Grant 2020-037.

## 2 Preliminaries

We start by specifying some notation. For an open set \(\Omega\subset\mathbb{R}^{d}\) we will in the following by \(\mathcal{B}^{\infty}(\Omega)\) denote the space \[\mathcal{B}^{\infty}(\Omega)\coloneqq\big{\{}\psi\in C^{\infty}(\Omega)\,\big{|}\,\|\partial^{\alpha}\psi\|_{L^{\infty}(\Omega)}<\infty\,\forall\alpha\in\mathbb{N}_{0}^{d}\big{\}}. \tag{2.1}\] We will for an operator \(A\) acting in a Hilbert space \(\mathscr{H}\) denote the operator norm by \(\|A\|_{\mathrm{op}}\) and the trace norm by \(\|A\|_{1}\). Next we describe the operators we will be working with. If we have \(a_{j}\in L^{2}_{loc}(\mathbb{R}^{d})\) for all \(j\in\{1,\ldots,d\}\), then we can consider the form \[\mathfrak{h}_{0}[f,g]=\sum_{j=1}^{d}\int_{\mathbb{R}^{d}}(-i\hbar\partial_{x_{j}}-\mu a_{j}(x))f(x)\overline{(-i\hbar\partial_{x_{j}}-\mu a_{j}(x))g(x)}\,dx\quad f,g\in\mathcal{D}[\mathfrak{h}_{0}] \tag{2.2}\] for \(\mu\geq 0\) and \(\hbar>0\), where \(\mathcal{D}[\mathfrak{h}_{0}]\) is the domain of the form. Note that \(C_{0}^{\infty}(\mathbb{R}^{d})\subset\mathcal{D}[\mathfrak{h}_{0}]\). Moreover, this form is closable and lower semibounded (by zero); see [25] for details. Hence there exists a positive self-adjoint operator associated to the form (the Friedrichs extension). For details see e.g. [24] or [25]. We will by \(\mathcal{Q}_{j}\) denote the square root of this operator.
When we also have a potential \(V\in L^{\infty}(\mathbb{R}^{d})\), we can define the operator \(H_{\hbar,\mu}\) as the Friedrichs extension of the quadratic form \[\mathfrak{h}[f,g]=\int_{\mathbb{R}^{d}}\sum_{j=1}^{d}(-i\hbar\partial_{x_{j}}-\mu a_{j}(x))f(x)\overline{(-i\hbar\partial_{x_{j}}-\mu a_{j}(x))g(x)}+V(x)f(x)\overline{g(x)}\,dx\quad f,g\in\mathcal{D}[\mathfrak{h}] \tag{2.3}\] for \(\mu\geq 0\) and \(\hbar>0\), where \(\mathcal{D}[\mathfrak{h}]\) is the domain of the form. This construction gives us that \(H_{\hbar,\mu}\) is self-adjoint and lower semibounded. Again, for details see e.g. [24] or [25]. When working with the Fourier transform we will use the following semiclassical version for \(\hbar>0\): \[\mathcal{F}_{\hbar}[\varphi](p)\coloneqq\int_{\mathbb{R}^{d}}e^{-i\hbar^{-1}\langle x,p\rangle}\varphi(x)\,dx,\] with inverse given by \[\mathcal{F}_{\hbar}^{-1}[\psi](x)\coloneqq\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{d}}e^{i\hbar^{-1}\langle x,p\rangle}\psi(p)\,dp,\] where \(\varphi\) and \(\psi\) are elements of \(\mathcal{S}(\mathbb{R}^{d})\). Here \(\mathcal{S}(\mathbb{R}^{d})\) denotes the Schwartz space. Some of our results will be seen to hold for a larger class of functions containing \(g_{\gamma}\). These classes were first defined in [26, 27], and we recall the definition here.

**Definition 2.1**.: A function \(g\in C^{\infty}(\mathbb{R}\setminus\{0\})\) is said to belong to the class \(C^{\infty,\gamma}(\mathbb{R})\), \(\gamma\in[0,1]\), if \(g\in C(\mathbb{R})\) for \(\gamma>0\), and for some constants \(C>0\) and \(r>0\) it holds that \[g(t) =0,\qquad\text{for all }t\geq C\] \[|\partial_{t}^{m}g(t)| \leq C_{m}|t|^{r},\qquad\text{for all }m\in\mathbb{N}_{0}\text{ and }t\leq-C\] \[|\partial_{t}^{m}g(t)| \leq\begin{cases}C_{m}&\text{if }\gamma=0,1\\ C_{m}|t|^{\gamma-m}&\text{if }\gamma\in(0,1)\end{cases},\qquad\text{for all }m\in\mathbb{N}\text{ and }t\in[-C,C]\setminus\{0\}.\] A function \(g\) is said to belong to \(C_{0}^{\infty,\gamma}(\mathbb{R})\) if \(g\in C^{\infty,\gamma}(\mathbb{R})\) and \(g\) has compact support. We will in our analysis need different ways of expressing functions of self-adjoint operators. One of these is the Helffer–Sjöstrand formula. Before we state it we recall the definition of an almost analytic extension.

**Definition 2.2** (Almost analytic extension).: For \(f\in C_{0}^{\infty}(\mathbb{R})\) we call a function \(\tilde{f}\in C_{0}^{\infty}(\mathbb{C})\) an almost analytic extension if it has the properties \[|\bar{\partial}\tilde{f}(z)| \leq C_{n}|\operatorname{Im}(z)|^{n},\qquad\text{for all }n\in\mathbb{N}_{0}\] \[\tilde{f}(t) =f(t)\qquad\text{for all }t\in\mathbb{R},\] where \(\bar{\partial}=\frac{1}{2}(\partial_{x}+i\partial_{y})\). For how to construct the almost analytic extension of a given \(f\in C_{0}^{\infty}(\mathbb{R})\) see e.g. [8, 31]. The following theorem is a simplified version of a theorem in [7].

**Theorem 2.3** (The Helffer–Sjöstrand formula).: _Let \(H\) be a self-adjoint operator acting on a Hilbert space \(\mathscr{H}\) and \(f\) a function from \(C_{0}^{\infty}(\mathbb{R})\). Then the bounded operator \(f(H)\) is given by the equation_ \[f(H)=-\frac{1}{\pi}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z)(z-H)^{-1}\,L(dz),\] _where \(L(dz)=dxdy\) is the Lebesgue measure on \(\mathbb{C}\) and \(\tilde{f}\) is an almost analytic extension of \(f\)._

### Approximation of the potential

In our analysis we will need to approximate the potential by a smooth potential.
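For intuition, the effect of such a smoothing can be illustrated numerically before stating the precise result. The sketch below is a one-dimensional illustration under stated assumptions: it mollifies \(V(x)=|x|^{2+\kappa}\), which is \(C^{2,\kappa}\), with a higher-order kernel whose first and second moments vanish, so that a Taylor expansion gives \(\sup_{x}|V-V_{\varepsilon}|=O(\varepsilon^{2+\kappa})\). It is not the construction used in [5] or [15].

```python
import numpy as np

kappa = 0.5  # Hoelder exponent; V below is C^{2,kappa}, worst regularity at 0

def V(x):
    return np.abs(x) ** (2 + kappa)

def phi(y):
    # Higher-order mollifier: (3 - y^2)/2 times a standard Gaussian.
    # It integrates to 1 and its first and second moments vanish, which is
    # what improves the smoothing error from O(eps^2) to O(eps^{2+kappa}).
    return 0.5 * (3.0 - y**2) * np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi)

def V_eps(x, eps, n=8001, cut=8.0):
    # V_eps(x) = \int V(x - eps*y) phi(y) dy, by a simple Riemann sum.
    y = np.linspace(-cut, cut, n)
    dy = y[1] - y[0]
    return (V(x[:, None] - eps * y[None, :]) * phi(y)).sum(axis=1) * dy

x = np.linspace(-1.0, 1.0, 801)
for eps in [0.2, 0.1, 0.05, 0.025]:
    err = np.max(np.abs(V(x) - V_eps(x, eps)))
    # The ratio in the last column should stay roughly constant,
    # matching the predicted eps^{2+kappa} rate.
    print(f"eps={eps:6.3f}  sup-error={err:.2e}  ratio={err / eps**(2 + kappa):.3f}")
```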
How we choose this approximation is the content of the next lemma.

**Lemma 2.4**.: _Let \(V\in C_{0}^{k,\kappa}(\mathbb{R}^{d})\) be real-valued, where \(k\in\mathbb{N}_{0}\) and \(\kappa\in[0,1]\). Then for all \(\varepsilon>0\) there exists a rough potential \(V_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{d})\) such that_ \[\begin{split}\big{|}\partial_{x}^{\alpha}V(x)-\partial_{x}^{\alpha}V_{\varepsilon}(x)\big{|}&\leq C_{\alpha}\varepsilon^{k+\kappa-|\alpha|}\quad\text{for all }\alpha\in\mathbb{N}_{0}^{d}\text{ such that }|\alpha|\leq k\\ \big{|}\partial_{x}^{\alpha}V_{\varepsilon}(x)\big{|}&\leq C_{\alpha}\varepsilon^{k+\kappa-|\alpha|}\quad\text{for all }\alpha\in\mathbb{N}_{0}^{d}\text{ such that }|\alpha|>k,\end{split} \tag{2.4}\] _where the constants \(C_{\alpha}\) are independent of \(\varepsilon\) but depend on \(\|\partial^{\beta}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for \(\beta\in\mathbb{N}_{0}^{d}\) with \(|\beta|\leq\min(|\alpha|,k)\). Moreover, if for some open set \(\Omega\) and a constant \(c>0\) it holds that_ \[|V(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega,\] _then there exists a constant \(\tilde{c}\) such that for all \(\varepsilon\) sufficiently small it holds that_ \[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq\tilde{c}\qquad\text{for all }x\in\Omega.\]

Proof.: A proof of the estimates in (2.4) can be found in either [5, Proposition 1.1] or [15, Proposition 4.A.2]. The second part of the lemma is a direct consequence of the estimates in (2.4). To see this, note that \[|V_{\varepsilon}(x)-V(x)|\leq C_{0}\varepsilon^{k+\kappa}\implies|V_{\varepsilon}(x)|\geq|V(x)|-C_{0}\varepsilon^{k+\kappa}. \tag{2.5}\] Hence for \(C_{0}\varepsilon^{k+\kappa}<\frac{c}{2}\) we obtain the desired estimate. This concludes the proof.

In the following we will call the potentials depending on the parameter \(\varepsilon\) rough potentials.

_Remark 2.5_.: Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) and assume it satisfies Assumption 1.1 with some open set \(\Omega\), numbers \(\hbar>0\), \(\mu\geq 0\) and \(\gamma\in[0,1]\). Whenever we have such an operator, we have by assumption the associated magnetic Schrödinger operator \(H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V\), where \(V\in C_{0}^{2,\kappa}(\mathbb{R}^{d})\). Applying Lemma 2.4 to \(V\), we can also associate the approximating rough Schrödinger operator \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) to \(\mathcal{H}_{\hbar,\mu}\). In what follows, when an operator \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1, we will simply speak of the associated rough Schrödinger operator \(H_{\hbar,\mu,\varepsilon}\). This will always be the operator obtained by replacing \(V\) with \(V_{\varepsilon}\) from Lemma 2.4.

One thing to observe is that, when proving sharp spectral asymptotics without full regularity, one often compares quadratic forms; see e.g. [5, 12, 13, 14, 22, 30]. This is due to the observation that if one has an operator \(A(\hbar)\) and two approximating or framing operators \(A^{\pm}(\hbar)\) such that \[A^{-}(\hbar)\leq A(\hbar)\leq A^{+}(\hbar)\] in the sense of quadratic forms, then by the min-max theorem one obtains the relation \[\operatorname{Tr}[\mathbf{1}_{(-\infty,0]}(A^{+}(\hbar))]\leq\operatorname{Tr}[\mathbf{1}_{(-\infty,0]}(A(\hbar))]\leq\operatorname{Tr}[\mathbf{1}_{(-\infty,0]}(A^{-}(\hbar))]. \tag{2.6}\]
\tag{2.6}\] The aim is then to choose the approximating operators such that sharp asymptotics can be obtained for these, and then use (2.6) to deduce the asymptotics for the original operator \(A(\hbar)\). In the situation we are currently considering we also have a localisation. This implies that we cannot obtain a relation like (2.6) from the min-max theorem. What we will do instead is to estimate the difference directly and prove that the trace for our original problem is sufficiently close to the corresponding trace with the approximation inserted.

## 3 Rough \(\hbar\)-pseudo-differential operators

Our proof is based on the theory of \(\hbar\)-pseudo-differential operators (\(\hbar\)-\(\Psi\)DO's). To be precise, we will need a rough version of the general theory. We recall here properties and results concerning rough \(\hbar\)-\(\Psi\)DO's. A more complete discussion of these operators can be found in [22]. A version of rough \(\hbar\)-\(\Psi\)DO theory can also be found in [15], where it first appears in Vol. 1, Section 2.3.

### Definitions and basic properties

By a rough pseudo-differential operator \(A_{\varepsilon}(\hbar)=\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\) of regularity \(\tau\) we mean the operator \[\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\psi(x)=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}e^{i\hbar^{-1}\langle x-y,p\rangle}a_{\varepsilon}(\tfrac{x+y}{2},p)\psi(y)\,dy\,dp\quad\text{for }\psi\in\mathcal{S}(\mathbb{R}^{d}), \tag{3.1}\] where \(a_{\varepsilon}(x,p)\) is a rough symbol of regularity \(\tau\in\mathbb{Z}\), which satisfies for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\) that \[|\partial_{x}^{\alpha}\partial_{p}^{\beta}a_{\varepsilon}(x,p)|\leq C_{\alpha\beta}\varepsilon^{\min(0,\tau-|\alpha|)}m(x,p)\quad\text{for all }(x,p)\in\mathbb{R}^{d}\times\mathbb{R}^{d}, \tag{3.2}\] where \(C_{\alpha\beta}\) is independent of \(\varepsilon\) and \(m\) is a tempered weight function. A tempered weight function is in some parts of the literature called an order function. The integral in (3.1) should be understood as an oscillatory integral. For \(\varepsilon>0\), \(\tau\in\mathbb{Z}\) and a tempered weight function \(m\) we will use the notation \(\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})\) for the set of all \(a_{\varepsilon}(x,p)\in C^{\infty}(\mathbb{R}^{2d})\) which satisfy (3.2) for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\).

As we are interested in traces of our operators, it will be important for us to know when an operator is bounded and when it is trace class. This is the content of the following two theorems.

**Theorem 3.1**.: _Let \(a_{\varepsilon}\in\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})\), where we assume \(m\in L^{\infty}(\mathbb{R}^{2d})\) and \(\tau\geq 0\). Suppose \(\hbar\in(0,\hbar_{0}]\) and that there exists a \(\delta\) in \((0,1)\) such that \(\varepsilon\geq\hbar^{1-\delta}\)._
_Then there exists a constant \(C_{d}\) and an integer \(k_{d}\), both depending only on the dimension, such that_ \[\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\psi\|_{L^{2}(\mathbb{R}^{d})}\leq C_{d}\max_{|\alpha|,|\beta|\leq k_{d}}\sup_{(x,p)\in\mathbb{R}^{2d}}\varepsilon^{|\alpha|}\left|\partial_{x}^{\alpha}\partial_{p}^{\beta}a_{\varepsilon}(x,p)\right|\|\psi\|_{L^{2}(\mathbb{R}^{d})}\quad\text{for all }\psi\in\mathcal{S}(\mathbb{R}^{d}).\] _In particular, \(\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\) can be extended to a bounded operator on \(L^{2}(\mathbb{R}^{d})\)._

**Theorem 3.2**.: _There exists a constant \(C(d)\), depending only on the dimension, such that_ \[\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\|_{\mathrm{Tr}}\leq\frac{C(d)}{\hbar^{d}}\sum_{|\alpha|+|\beta|\leq 2d+2}\varepsilon^{|\alpha|}\hbar^{\delta|\beta|}\int_{\mathbb{R}^{2d}}|\partial_{x}^{\alpha}\partial_{p}^{\beta}a_{\varepsilon}(x,p)|\,dxdp\] _for every rough symbol \(a_{\varepsilon}\in\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})\) with \(\tau\geq 0\), \(\hbar\in(0,\hbar_{0}]\) and \(\varepsilon\geq\hbar^{1-\delta}\) for some \(\delta\in(0,1)\)._

Both of these theorems can be found in [22], where they are Theorem 3.25 and Theorem 3.26, respectively. We will also need to calculate the trace of a rough \(\hbar\)-\(\Psi\)DO. This is the content of the next theorem.

**Theorem 3.3**.: _Let \(a_{\varepsilon}\) be in \(\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})\) with \(\tau\geq 0\) and suppose \(\partial_{x}^{\alpha}\partial_{p}^{\beta}a_{\varepsilon}(x,p)\) is an element of \(L^{1}(\mathbb{R}^{2d})\) for all \(|\alpha|+|\beta|\leq 2d+2\). Then \(\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\) is trace class and_ \[\operatorname{Tr}(\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon}))=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}a_{\varepsilon}(x,p)\,dxdp.\]

This theorem is Theorem 3.27 from [22]. We will also need to compose operators. The following theorem is a simplified version of Theorem 3.24 from [22] on the composition of rough \(\hbar\)-\(\Psi\)DO's.

**Theorem 3.4**.: _Let \(a_{\varepsilon}\) be in \(\Gamma_{\varepsilon}^{m_{1},\tau_{a}}(\mathbb{R}^{2d})\) and \(b_{\varepsilon}\) be in \(\Gamma_{\varepsilon}^{m_{2},\tau_{b}}(\mathbb{R}^{2d})\) with \(\tau_{a},\tau_{b}\geq 0\) and \(m_{1},m_{2}\in L^{\infty}(\mathbb{R}^{2d})\). Suppose \(\hbar\in(0,\hbar_{0}]\) and \(\varepsilon\geq\hbar^{1-\delta}\) for a \(\delta\in(0,1)\), and let \(\tau=\min(\tau_{a},\tau_{b})\)._
_Then there exists a sequence of rough symbols \(\{c_{\varepsilon,j}\}_{j\in\mathbb{N}_{0}}\) such that \(c_{\varepsilon,j}\in\Gamma_{\varepsilon}^{m_{1}m_{2},\tau-j}(\mathbb{R}^{2d})\) for all \(j\in\mathbb{N}_{0}\), and for every \(N\in\mathbb{N}\) there exists \(N_{\delta}\geq N\) such that_ \[\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(b_{\varepsilon})=\sum_{j=0}^{N_{\delta}}\hbar^{j}\operatorname{Op}_{\hbar}^{\mathrm{w}}(c_{\varepsilon,j})+\hbar^{N_{\delta}+1}\mathcal{R}_{\varepsilon}(N_{\delta};\hbar),\] _where \(\mathcal{R}_{\varepsilon}(N_{\delta};\hbar)\) is a rough \(\hbar\)-\(\Psi\)DO which satisfies the bound_ \[\hbar^{N_{\delta}+1}\|\mathcal{R}_{\varepsilon}(N_{\delta};\hbar)\|_{\operatorname{op}}\leq C_{N}\hbar^{N}, \tag{3.3}\] _where \(C_{N}\) is independent of \(\varepsilon\), but depends on the numbers \(N\), \(\|m_{1}\|_{L^{\infty}(\mathbb{R}^{2d})}\), \(\|m_{2}\|_{L^{\infty}(\mathbb{R}^{2d})}\) and the constants \(C_{\alpha\beta}\) from (3.2) for both \(a_{\varepsilon}\) and \(b_{\varepsilon}\). The rough symbols \(c_{\varepsilon,j}\) are explicitly given by_ \[c_{\varepsilon,j}(x,p)=(-i)^{j}\sum_{|\alpha|+|\beta|=j}\frac{1}{\alpha!\beta!}\Big{(}\frac{1}{2}\Big{)}^{|\alpha|}\Big{(}-\frac{1}{2}\Big{)}^{|\beta|}(\partial_{p}^{\alpha}\partial_{x}^{\beta}a_{\varepsilon})(x,p)(\partial_{p}^{\beta}\partial_{x}^{\alpha}b_{\varepsilon})(x,p).\]

_Remark 3.5_.: Assume we are in the setting of Theorem 3.4. If we assume in addition that at least one of the tempered weight functions \(m_{1}\) or \(m_{2}\) is in \(L^{\infty}(\mathbb{R}^{2d})\cap L^{1}(\mathbb{R}^{2d})\), then the error term is bounded not just in operator norm but also in trace norm. That is, \[\hbar^{N_{\delta}+1}\|\mathcal{R}_{\varepsilon}(N_{\delta};\hbar)\|_{1}\leq C_{N}\hbar^{N-d}. \tag{3.4}\]

The following lemma is an easy consequence of the result on composition of \(\hbar\)-\(\Psi\)DO's. It can also be found in [27], where it is Lemma 2.2.

**Lemma 3.6**.: _Let \(\theta_{1},\theta_{2}\in\mathcal{B}^{\infty}(\mathbb{R}^{2d})\) and suppose that there exists a constant \(c>0\) such that_ \[\operatorname{dist}(\operatorname{supp}(\theta_{1}),\operatorname{supp}(\theta_{2}))\geq c. \tag{3.5}\] _Then for all \(N\in\mathbb{N}\) it holds that_ \[\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{2})\|_{\operatorname{op}}\leq C_{N}\hbar^{N}.\] _If we further assume \(\theta_{1}\in C_{0}^{\infty}(\mathbb{R}^{2d})\), it holds for all \(N\in\mathbb{N}\) that_ \[\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{2})\|_{1}\leq C_{N}\hbar^{N}.\] _In both cases the constant \(C_{N}\) depends on the numbers \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta_{1}\|_{L^{\infty}(\mathbb{R}^{2d})}\) and \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta_{2}\|_{L^{\infty}(\mathbb{R}^{2d})}\) for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\). In the second case the constant \(C_{N}\) will also depend on \(c\) from (3.5)._

### Properties of rough Schrödinger operators

We will in the following consider rough Schrödinger operators that satisfy the following assumption.

**Assumption 3.7**.: Let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be a rough Schrödinger operator acting in \(L^{2}(\mathbb{R}^{d})\). Suppose that the \(a_{j}\) are real valued and \(a_{j}\in C_{0}^{\infty}(\mathbb{R}^{d})\) for all \(j\in\{1,\ldots,d\}\).
Moreover, suppose that \(V_{\varepsilon}\) is a rough potential of regularity \(\tau\geq 0\) such that

* \(V_{\varepsilon}\) is real, smooth and \(\min_{x\in\mathbb{R}^{d}}V_{\varepsilon}(x)>-\infty\).
* There exists a \(\zeta>0\) such that for all \(\alpha\in\mathbb{N}_{0}^{d}\) there exists a constant \(C_{\alpha}\) such that \[|\partial_{x}^{\alpha}V_{\varepsilon}(x)|\leq C_{\alpha}\varepsilon^{\min(0,\tau-|\alpha|)}(V_{\varepsilon}(x)+\zeta)\qquad\text{for all }x\in\mathbb{R}^{d}.\]
* There exist two constants \(C,M>0\) such that \[|V_{\varepsilon}(x)|\leq C(V_{\varepsilon}(y)+\zeta)(1+|x-y|)^{M}\qquad\text{for all }x,y\in\mathbb{R}^{d}.\]

_Remark 3.8_.: When a rough Schrödinger operator \(H_{\hbar,\mu,\varepsilon}\) satisfies Assumption 3.7, it can be shown that as a \(\hbar\)-\(\Psi\)DO it is essentially self-adjoint. For details see e.g. [22, Section 4]. We will in these cases denote the closure by \(H_{\hbar,\mu,\varepsilon}\) as well. In the case where \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) with \(a_{j}\in C_{0}^{\infty}(\mathbb{R}^{d})\) for all \(j\in\{1,\dots,d\}\) and \(V_{\varepsilon}\) has compact support, the operator \(H_{\hbar,\mu,\varepsilon}\) satisfies Assumption 3.7.

The following theorem is a simplified version of a more general theorem that can be found in [22].

**Theorem 3.9**.: _Let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be a rough Schrödinger operator of regularity \(\tau\geq 1\) acting in \(L^{2}(\mathbb{R}^{d})\) with \(\hbar\) in \((0,\hbar_{0}]\) and \(\mu\in[0,\mu_{0}]\). Suppose that \(H_{\hbar,\mu,\varepsilon}\) satisfies Assumption 3.7 and that there exists a \(\delta\) in \((0,1)\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Then for any function \(f\in C_{0}^{\infty}(\mathbb{R})\) and every \(N\in\mathbb{N}\) there exists an \(N_{\delta}\in\mathbb{N}\) such that_ \[f(H_{\hbar,\mu,\varepsilon})=\sum_{j=0}^{N_{\delta}}\hbar^{j}\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon,j}^{f})+\hbar^{N_{\delta}+1}\mathcal{R}_{\varepsilon}(N_{\delta},f;\hbar),\] _where_ \[\hbar^{N_{\delta}+1}\|\mathcal{R}_{\varepsilon}(N_{\delta},f;\hbar)\|_{\rm op}\leq C_{N}\hbar^{N}, \tag{3.6}\] _and_ \[a_{\varepsilon,0}^{f}(x,p) =f((p-\mu a(x))^{2}+V_{\varepsilon}(x)), \tag{3.7}\] \[a_{\varepsilon,1}^{f}(x,p) =0,\] \[a_{\varepsilon,j}^{f}(x,p) =\sum_{k=1}^{2j-1}\frac{(-1)^{k}}{k!}d_{\varepsilon,j,k}(x,p)f^{(k)}((p-\mu a(x))^{2}+V_{\varepsilon}(x))\qquad\text{for }j\geq 2,\] _where \(d_{\varepsilon,j,k}\) are universal polynomials in \(\partial_{p}^{\alpha}\partial_{x}^{\beta}[(p-\mu a(x))^{2}+V_{\varepsilon}(x)]\) for \(|\alpha|+|\beta|\leq j\). In particular, \(a_{\varepsilon,j}^{f}(x,p)\) is a rough symbol of regularity \(\tau-j\) for all \(j\in\mathbb{N}_{0}\)._

_Remark 3.10_.: In order to prove the following theorem one needs to understand the Schrödinger propagator associated to \(H_{\hbar,\mu,\varepsilon}\), that is, the operator \(e^{i\hbar^{-1}tH_{\hbar,\mu,\varepsilon}}\). Under the assumptions of the following theorem we can find an operator with an explicit kernel that locally approximates \(e^{i\hbar^{-1}tH_{\hbar,\mu,\varepsilon}}\) in a suitable sense. This local construction is only valid for times of order \(\hbar^{1-\frac{\delta}{2}}\). But if we locally have a non-critical condition, the approximation can be extended to a small time interval \([-T_{0},T_{0}]\). For further details see [22]. In the following we will reference this remark and the number \(T_{0}\).
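Before moving on, let us record a small consistency check on the expansions above; this computation is ours and is purely illustrative. Evaluating the formula for \(c_{\varepsilon,j}\) in Theorem 3.4 at \(j=1\), the only contributing multi-indices are \(\alpha=e_{k},\beta=0\) and \(\alpha=0,\beta=e_{k}\), so that \[c_{\varepsilon,1}=(-i)\sum_{k=1}^{d}\Big{(}\tfrac{1}{2}\partial_{p_{k}}a_{\varepsilon}\,\partial_{x_{k}}b_{\varepsilon}-\tfrac{1}{2}\partial_{x_{k}}a_{\varepsilon}\,\partial_{p_{k}}b_{\varepsilon}\Big{)}=\frac{1}{2i}\sum_{k=1}^{d}\big{(}\partial_{p_{k}}a_{\varepsilon}\,\partial_{x_{k}}b_{\varepsilon}-\partial_{x_{k}}a_{\varepsilon}\,\partial_{p_{k}}b_{\varepsilon}\big{)},\] which is \(\frac{1}{2i}\) times a Poisson bracket of the two symbols. By antisymmetry this vanishes when \(b_{\varepsilon}=a_{\varepsilon}\), which is consistent with the vanishing of the subprincipal symbol \(a_{\varepsilon,1}^{f}=0\) in Theorem 3.9 and reflects the symmetry of the Weyl quantisation.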
**Theorem 3.11**.: _Let \(H_{\hbar,\mu,\varepsilon}\) be a rough Schrödinger operator of regularity \(\tau\geq 2\) acting in \(L^{2}(\mathbb{R}^{d})\) with \(\hbar\) in \((0,\hbar_{0}]\) and \(\mu\in[0,\mu_{0}]\) which satisfies Assumption 3.7. Suppose there exists a \(\delta\) in \((0,1)\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Assume that \(\theta\in C_{0}^{\infty}(\mathbb{R}^{2d})\) and that there exist two constants \(\eta,c>0\) such that_ \[|\nu-V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }(x,p)\in\operatorname{supp}(\theta)\text{ and }\nu\in(-2\eta,2\eta).\] _Let \(\chi\) be in \(C^{\infty}_{0}((-T_{0},T_{0}))\) with \(\chi=1\) in a neighbourhood of \(0\), where \(T_{0}\) is the number from Remark 3.10. Then for every \(f\) in \(C^{\infty}_{0}((-\eta,\eta))\) we have_ \[\Big{|}\operatorname{Tr}\big{[}\operatorname{Op}_{\hbar}^{\operatorname{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\mathcal{F}_{\hbar}^{-1}[\chi](H_{\hbar,\mu,\varepsilon}-s)\big{]}-\frac{1}{(2\pi\hbar)^{d}}f(s)\int_{\{a_{\varepsilon,0}=s\}}\frac{\theta}{|\nabla a_{\varepsilon,0}|}\,dS_{s}\Big{|}\leq C\hbar^{2-d},\] _where \(a_{\varepsilon,0}(x,p)=(p-\mu a(x))^{2}+V_{\varepsilon}(x)\) and \(S_{s}\) is the Euclidean surface measure on the surface \(\{a_{\varepsilon,0}(x,p)=s\}\). The error term is uniform with respect to \(s\in(-\eta,\eta)\), but the constant \(C\) depends on the dimension \(d\), the numbers \(\mu_{0}\), \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta\|_{L^{\infty}(\mathbb{R}^{2d})}\) for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\dots,d\}\), \(\|V_{\varepsilon}\|_{L^{\infty}(\operatorname{supp}(\theta))}\) and the numbers \(C_{\alpha}\) from Assumption 3.7._

This theorem is a special case of [22, Theorem 6.1]. One thing to observe is that in the formulation of Theorem 6.1 the assumption on the principal symbol \(a_{\varepsilon,0}\) is \[|\nabla_{p}a_{\varepsilon,0}(x,p)|\geq c\quad\text{for all }(x,p)\in a_{\varepsilon,0}^{-1}([-2\eta,2\eta]).\] This is technically the same assumption as the one we make in Theorem 3.11, up to a square root. To see this, note that here we have \(a_{\varepsilon,0}(x,p)=(p-\mu a(x))^{2}+V_{\varepsilon}(x)\). Hence \[|\nabla_{p}a_{\varepsilon,0}(x,p)|^{2}=4(p-\mu a(x))^{2}=4(\nu-V_{\varepsilon}(x)) \tag{3.8}\] for all \((x,p)\in\mathbb{R}^{2d}\) such that \(a_{\varepsilon,0}(x,p)=\nu\). From (3.8) we see that the two assumptions are indeed equivalent. Furthermore, if we had only assumed the operator to be of regularity \(1\), we could obtain an error slightly better than \(\hbar^{1-d}\), but not \(\hbar^{2-d}\).

Before we continue we will need the following remark, which sets some notation, and the following proposition, which is a type of Tauberian result.

_Remark 3.12_.: Let \(T\in(0,T_{0}]\), where \(T_{0}\) is the number from Remark 3.10, and let \(\hat{\chi}\in C^{\infty}_{0}((-T,T))\) be a real valued function such that \(\hat{\chi}(s)=\hat{\chi}(-s)\) and \(\hat{\chi}(s)=1\) for all \(s\in(-\frac{T}{2},\frac{T}{2})\). Define \[\chi_{1}(t)=\frac{1}{2\pi}\int_{\mathbb{R}}\hat{\chi}(s)e^{ist}\,ds.\] We assume that \(\chi_{1}(t)\geq 0\) for all \(t\in\mathbb{R}\) and that there exist \(T_{1}\in(0,T)\) and \(c>0\) such that \(\chi_{1}(t)\geq c\) for all \(t\in[-T_{1},T_{1}]\). These assumptions can be guaranteed by (possibly) replacing \(\hat{\chi}\) by \(\hat{\chi}*\hat{\chi}\).
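To see why this replacement works, note the following computation (ours, spelled out for convenience): since the Fourier transform turns convolution into multiplication, \[\frac{1}{2\pi}\int_{\mathbb{R}}(\hat{\chi}*\hat{\chi})(s)e^{ist}\,ds=\frac{1}{2\pi}\Big{(}\int_{\mathbb{R}}\hat{\chi}(s)e^{ist}\,ds\Big{)}^{2}=2\pi\chi_{1}(t)^{2}\geq 0.\] Moreover, \(2\pi\chi_{1}^{2}\) is continuous, and provided \(\int_{\mathbb{R}}\hat{\chi}(s)\,ds>0\) (which one can arrange) it is strictly positive at \(t=0\), hence bounded below by some \(c>0\) on an interval \([-T_{1},T_{1}]\). Since \(\operatorname{supp}(\hat{\chi}*\hat{\chi})\subset(-2T,2T)\), one should start from a function supported in \((-\frac{T}{2},\frac{T}{2})\); the remaining normalisations needed to restore the hypothesis \(\hat{\chi}=1\) near \(0\) are routine and we do not track them here.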
We will denote by \(\chi_{\hbar}(t)\) the function \[\chi_{\hbar}(t)=\tfrac{1}{\hbar}\chi_{1}(\tfrac{t}{\hbar})=\mathcal{F}_{\hbar}^{-1}[\hat{\chi}](t).\] Moreover, for any function \(g\in L^{1}_{loc}(\mathbb{R})\) we will use the notation \[g^{(\hbar)}(t)=g*\chi_{\hbar}(t)=\int_{\mathbb{R}}g(s)\chi_{\hbar}(t-s)\,ds.\]

**Proposition 3.13**.: _Let \(A\) be a self-adjoint operator acting in a Hilbert space \(\mathscr{H}\) and let \(g\in C^{\infty,\gamma}_{0}(\mathbb{R})\). Let \(\chi_{1}\) be defined as in Remark 3.12. Suppose that for a Hilbert-Schmidt operator \(B\)_ \[\sup_{t\in\mathcal{D}(\delta)}\|B^{*}\chi_{\hbar}(A-t)B\|_{1}\leq Z(\hbar), \tag{3.9}\] _where \(\mathcal{D}(\delta)=\{t\in\mathbb{R}\,|\,\operatorname{dist}(\operatorname{supp}(g),t)\leq\delta\}\), \(Z(\hbar)\) is some positive function and \(\delta\) is a strictly positive number. Then it holds that_ \[\|B^{*}(g(A)-g^{(\hbar)}(A))B\|_{1}\leq C\hbar^{1+\gamma}Z(\hbar)+C^{\prime}_{N}\hbar^{N}\|B^{*}B\|_{1}\quad\text{for all }N\in\mathbb{N}, \tag{3.10}\] _where the constants \(C\) and \(C^{\prime}_{N}\) depend on the number \(\delta\) and the functions \(g\) and \(\chi_{1}\) only._

The proposition is taken from [27], where it is Proposition 2.6. It first appeared in [26] for \(\gamma\in(0,1]\). In order to apply this proposition we will establish a case where we have a bound of the type (3.9) from Proposition 3.13.

**Lemma 3.14**.: _Let \(H_{\hbar,\mu,\varepsilon}\) be a rough Schrödinger operator of regularity \(\tau\geq 2\) acting in \(L^{2}(\mathbb{R}^{d})\) with \(\hbar\) in \((0,\hbar_{0}]\) and \(\mu\in[0,\mu_{0}]\) which satisfies Assumption 3.7. Suppose there exists a \(\delta\) in \((0,1)\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Assume that \(\theta\in C_{0}^{\infty}(\mathbb{R}^{2d})\) and that there exist two constants \(\eta,c>0\) such that_ \[|\nu-V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }(x,p)\in\operatorname{supp}(\theta)\text{ and }\nu\in(-2\eta,2\eta).\] _Let \(\chi_{\hbar}\) be the function from Remark 3.12._
_Then for every \(f\) in \(C_{0}^{\infty}((-\eta,\eta))\) we have_ \[\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\|_{1}\leq C\hbar^{-d},\] _where the constant depends on the dimension, the numbers \(\mu_{0}\), \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta\|_{L^{\infty}(\mathbb{R}^{2d})}\) for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\dots,d\}\), \(\|V_{\varepsilon}\|_{L^{\infty}(\operatorname{supp}(\theta))}\), the numbers \(C_{\alpha}\) from Assumption 3.7 and \(\|\partial^{\alpha}f\|_{L^{\infty}(\mathbb{R})}\) for all \(\alpha\in\mathbb{N}_{0}\)._

Proof.: Since we assume that \(\chi_{\hbar}(t)\geq 0\) for all \(t\in\mathbb{R}\), the composition of the operators is a positive operator, and hence we have that \[\begin{split}&\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\|_{1}\\ &\qquad=\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)]\\ &\qquad=\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f^{2}(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)],\end{split} \tag{3.11}\] where we in the last equality have used the cyclicality of the trace. From applying Theorem 3.4 and Remark 3.5 we obtain that \[\begin{split}&|\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f^{2}(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)]|\\ &\qquad\leq|\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta^{2})f^{2}(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)]|+C\hbar^{-d},\end{split} \tag{3.12}\] where the constant \(C\) depends on the numbers \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta\|_{L^{\infty}(\mathbb{R}^{2d})}\) and \(\|f\|_{L^{\infty}(\mathbb{R})}\). Applying Theorem 3.11 we get that \[|\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta^{2})f^{2}(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)]|\leq C\hbar^{-d}, \tag{3.13}\] where the constant \(C\) depends on the quantities listed in Theorem 3.11. Finally, by combining (3.11), (3.12) and (3.13) we obtain the desired estimate and this concludes the proof.

**Theorem 3.15**.: _Let \(H_{\hbar,\mu,\varepsilon}\) be a rough Schrödinger operator of regularity \(\tau\geq 2\) acting in \(L^{2}(\mathbb{R}^{d})\) with \(\hbar\) in \((0,\hbar_{0}]\) and \(\mu\in[0,\mu_{0}]\) which satisfies Assumption 3.7. Suppose there exists a \(\delta\) in \((0,1)\) such that \(\varepsilon\geq\hbar^{1-\delta}\)._
_Moreover, suppose there exist an open set \(\Omega\subset\mathbb{R}^{d}\) and some \(c>0\) such that_ \[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\] _Then for \(\gamma\in[0,1]\), any \(g\in C^{\infty,\gamma}(\mathbb{R})\) and any \(\theta\in C^{\infty}_{0}(\Omega\times\mathbb{R}^{d})\) it holds that_ \[\Big{|}\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)g(H_{\hbar,\mu,\varepsilon})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g((p-\mu a(x))^{2}+V_{\varepsilon}(x))\theta(x,p)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d},\] _where the constant \(C\) depends on the dimension, \(\mu_{0}\), the numbers \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta\|_{L^{\infty}(\mathbb{R}^{2d})}\) for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\dots,d\}\), \(\|V_{\varepsilon}\|_{L^{\infty}(\Omega)}\) and the numbers \(C_{\alpha}\) from Assumption 3.7._

Proof.: By continuity there exists an \(\eta>0\) such that \[|\nu-V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq\frac{c}{2}\qquad\text{for all }x\in\Omega\text{ and }\nu\in(-2\eta,2\eta).\] Let \(f_{1},f_{2}\in C_{0}^{\infty}(\mathbb{R})\) be such that \(\operatorname{supp}(f_{2})\subset(-\eta,\eta)\) and \[g(H_{\hbar,\mu,\varepsilon})=f_{1}(H_{\hbar,\mu,\varepsilon})+f_{2}^{2}(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon}). \tag{3.14}\] We can ensure this since \(H_{\hbar,\mu,\varepsilon}\) is lower semibounded. With these functions we have that \[\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)g(H_{\hbar,\mu,\varepsilon})]=\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{1}(H_{\hbar,\mu,\varepsilon})]+\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}^{2}(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon})]. \tag{3.15}\] We will consider each term separately, and start with the first term on the right-hand side of (3.15). Here we get by applying Theorem 3.9 that \[\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{1}(H_{\hbar,\mu,\varepsilon})]=\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon,0}^{f_{1}})]+\mathcal{O}(\hbar^{2-d}), \tag{3.16}\] where the implicit constant depends on the numbers \(\|\partial_{x}^{\alpha}\partial_{p}^{\beta}\theta\|_{L^{\infty}(\mathbb{R}^{2d})}\) for all \(\alpha,\beta\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}^{d}\) and \(j\in\{1,\ldots,d\}\) and \(\|\partial_{x}^{\alpha}V_{\varepsilon}\|_{L^{\infty}(\Omega)}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) such that \(|\alpha|\leq 2\). Moreover, we have used the notation \(a_{\varepsilon,0}^{f_{1}}(x,p)=f_{1}((p-\mu a(x))^{2}+V_{\varepsilon}(x))\). From applying Theorem 3.4 and Theorem 3.3 we get that \[\begin{split}\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon,0}^{f_{1}})]&=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}f_{1}((p-\mu a(x))^{2}+V_{\varepsilon}(x))\theta(x,p)\,dxdp\\ &\quad-\frac{i\hbar}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}c_{\varepsilon,1}(x,p)\,dxdp+\mathcal{O}(\hbar^{2-d}),\end{split} \tag{3.17}\] where \(c_{\varepsilon,1}\) is the subprincipal symbol obtained from composing the two operators. Since the left-hand side of (3.17) is real and \(c_{\varepsilon,1}\) is real, the second term on the right-hand side is purely imaginary and therefore has to be of lower order.
Hence we have that \[\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\operatorname{Op}_{\hbar}^{\mathrm{w}}(a_{\varepsilon,0}^{f_{1}})]=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}f_{1}((p-\mu a(x))^{2}+V_{\varepsilon}(x))\theta(x,p)\,dxdp+\mathcal{O}(\hbar^{2-d}). \tag{3.18}\] Now we turn to the second term on the right-hand side of (3.15). When considering this term we may, due to the support properties of \(f_{2}\), assume that \(\operatorname{supp}(g)\subset(-\frac{3}{2}\eta,0]\), that is, \(g\in C_{0}^{\infty,\gamma}(\mathbb{R})\). Let \(g^{(\hbar)}\) be the smoothed version of \(g\) as described in Remark 3.12. We then have \[\begin{split}\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}^{2}(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon})]=&\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}^{2}(H_{\hbar,\mu,\varepsilon})g^{(\hbar)}(H_{\hbar,\mu,\varepsilon})]\\ &+\operatorname{Tr}\big{[}\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}(H_{\hbar,\mu,\varepsilon})[g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon})]f_{2}(H_{\hbar,\mu,\varepsilon})\big{]}.\end{split} \tag{3.19}\] Let \(\theta_{1}\in C_{0}^{\infty}(\Omega\times\mathbb{R}^{d})\) be such that \(\theta\theta_{1}=\theta\). Then from applying Lemma 3.6 twice we get for all \(N\in\mathbb{N}\) that \[\begin{split}\big{|}\operatorname{Tr}&\big{[}\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}(H_{\hbar,\mu,\varepsilon})[g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon})]f_{2}(H_{\hbar,\mu,\varepsilon})\big{]}\big{|}\\ &\quad\leq\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\|_{\operatorname{op}}\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})f_{2}(H_{\hbar,\mu,\varepsilon})[g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon})]f_{2}(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})\big{\|}_{1}+C_{N}\hbar^{N}.\end{split} \tag{3.20}\] From Lemma 3.14 we have that assumption (3.9) from Proposition 3.13 is satisfied with \(B=f_{2}(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})\). Hence Proposition 3.13 gives us that \[\big{\|}\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})f_{2}(H_{\hbar,\mu,\varepsilon})[g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon})]f_{2}(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta_{1})\big{\|}_{1}\leq C\hbar^{1+\gamma-d}. \tag{3.21}\] Using the definition of \(g^{(\hbar)}\) and applying Theorem 3.11 we have that \[\begin{split}&\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}^{2}(H_{\hbar,\mu,\varepsilon})g^{(\hbar)}(H_{\hbar,\mu,\varepsilon})]\\ &=\,\int_{\mathbb{R}}g(s)\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}^{2}(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)]\,ds\\ &=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}}g(s)f_{2}^{2}(s)\int_{\{a_{\varepsilon,0}=s\}}\frac{\theta}{|\nabla a_{\varepsilon,0}|}\,dS_{s}\,ds+\mathcal{O}(\hbar^{2-d})\\ &=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}(f_{2}^{2}g)((p-\mu a(x))^{2}+V_{\varepsilon}(x))\theta(x,p)\,dxdp+\mathcal{O}(\hbar^{2-d}).
\end{split} \tag{3.22}\] From combining (3.19), (3.20), (3.21) and (3.22) we obtain that \[\begin{split}&\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f_{2}^{2}(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon})]\\ &=\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}(f_{2}^{2}g)((p-\mu a(x))^{2}+V_{\varepsilon}(x))\theta(x,p)\,dxdp+\mathcal{O}(\hbar^{1+\gamma-d}).\end{split} \tag{3.23}\] Recalling the identity in (3.14) and combining (3.15), (3.18) and (3.23) we obtain that \[\Big{|}\operatorname{Tr}[\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)g(H_{\hbar,\mu,\varepsilon})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g((p-\mu a(x))^{2}+V_{\varepsilon}(x))\theta(x,p)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d},\] where the constant depends on the numbers stated in the theorem. This concludes the proof.

## 4 Auxiliary estimates

We will in this section establish bounds on trace norms of the form \(\|\varphi f(\mathcal{H}_{\hbar,\mu})\|_{1}\), where \(f\in C_{0}^{\infty}(\mathbb{R})\), \(\varphi\in C_{0}^{\infty}(\Omega)\) and \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with some set \(\Omega\) and the numbers \(\hbar>0\) and \(\mu\geq 0\). The results in this section are based on ideas originating in [27]. The main estimate, from which the other estimates are deduced, is contained in the following lemma. The lemma is taken from [27], where it is Lemma 3.6.

**Lemma 4.1**.: _Let \(H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V\) be a Schrödinger operator acting in \(L^{2}(\mathbb{R}^{d})\) and assume that \(V\in L^{\infty}(\mathbb{R}^{d})\) and \(a_{j}\in L^{2}_{loc}(\mathbb{R}^{d})\) for \(j\in\{1,\ldots,d\}\). Moreover, suppose that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Let \(\varphi_{1}\in C_{0}^{\infty}(\mathbb{R}^{d})\) and \(\varphi_{2}\in\mathcal{B}^{\infty}(\mathbb{R}^{d})\) be such that_ \[\operatorname{dist}\big{\{}\operatorname{supp}(\varphi_{1}),\operatorname{supp}(\varphi_{2})\big{\}}\geq c>0, \tag{4.1}\] _and let \(r,m\in\{0,1\}\). Then for any \(N>\frac{d}{2}\) it holds that_ \[\|\varphi_{1}Q_{l}^{r}(H_{\hbar,\mu}-z)^{-1}(Q_{q}^{*})^{m}\varphi_{2}\|_{1}\leq C_{N}\frac{\langle z\rangle^{\frac{m+r}{2}}}{d(z)}\frac{\langle z\rangle^{\frac{d}{2}}}{\hbar^{d}}\frac{\langle z\rangle^{N}\hbar^{2N}}{d(z)^{2N}},\] _where \(Q_{l}=-i\hbar\partial_{x_{l}}-\mu a_{l}\). The constant \(C_{N}\) depends only on the numbers \(N\), \(\|\partial^{\alpha}\varphi_{1}\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\|\partial^{\alpha}\varphi_{2}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), and the constant \(c\)._

The next lemma is also from [27], where it is Lemma 3.9.

**Lemma 4.2**.: _Let \(H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V\) be a Schrödinger operator acting in \(L^{2}(\mathbb{R}^{d})\) and assume that \(V\in L^{\infty}(\mathbb{R}^{d})\) and \(a_{j}\in L^{2}_{loc}(\mathbb{R}^{d})\) for \(j\in\{1,\ldots,d\}\). Moreover, suppose that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Let \(f\in C_{0}^{\infty}(\mathbb{R})\) and \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{d})\). Then_ \[\|\varphi f(H_{\hbar,\mu})\|_{1}\leq C\hbar^{-d}.\] _If \(\varphi_{1}\in C_{0}^{\infty}(\mathbb{R}^{d})\) and \(\varphi_{2}\in\mathcal{B}^{\infty}(\mathbb{R}^{d})\) are such that_ \[\operatorname{dist}\big{\{}\operatorname{supp}(\varphi_{1}),\operatorname{supp}(\varphi_{2})\big{\}}\geq c>0.
\tag{4.2}\] _Then for any \(N\geq 0\) it holds that_ \[\|\varphi_{1}f(H_{\hbar,\mu})\varphi_{2}\|_{1}\leq C_{N}\hbar^{N}.\] _The constant \(C_{N}\) depends only on the numbers \(N\), \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi_{1}\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\|\partial^{\alpha}\varphi_{2}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), and the constant \(c\)._

A result almost identical to the next lemma can be found in [27], where it is Theorem 3.12. The difference between the two results is that our constant will not depend directly on the number \(\lambda_{0}\) (in the notation of [27]). This is due to us using the Helffer-Sjöstrand formula instead of the representation formula used in [27] for \(f(A)\), where \(f\in C_{0}^{\infty}(\mathbb{R})\) and \(A\) is some self-adjoint lower semibounded operator.

**Lemma 4.3**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) which satisfies Assumption 1.1 with the open set \(\Omega\) and the local operator \(H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V\). Assume that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Then for \(f\in C_{0}^{\infty}(\mathbb{R})\) and \(\varphi\in C_{0}^{\infty}(\Omega)\) we have for any \(N\in\mathbb{N}_{0}\) that_ \[\|\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})]\|_{1}\leq C_{N}\hbar^{N},\] _and_ \[\|\varphi f(\mathcal{H}_{\hbar,\mu})\|_{1}\leq C\hbar^{-d}.\] _The constant \(C_{N}\) depends only on the numbers \(N\), \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) and the constant \(c\)._

Proof.: Using the Helffer-Sjöstrand formula (Theorem 2.3) we obtain that \[\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})]=-\frac{1}{\pi}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z)\varphi[(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}]\,L(dz), \tag{4.3}\] where \(\tilde{f}\) is an almost analytic extension of \(f\). Since we assume that \(\varphi\in C_{0}^{\infty}(\Omega)\), there exists a positive constant \(c\) such that \[\operatorname{dist}\big{(}\operatorname{supp}(\varphi),\partial\Omega\big{)}\geq 4c.\] Let \(\varphi_{1}\in C_{0}^{\infty}(\mathbb{R}^{d})\) be such that \(\varphi_{1}(x)\in[0,1]\) for all \(x\in\mathbb{R}^{d}\). Moreover, we choose \(\varphi_{1}\) such that \(\varphi_{1}(x)=1\) on the set \(\{x\in\mathbb{R}^{d}\,|\,\operatorname{dist}(\operatorname{supp}(\varphi),x)\leq c\}\) and \[\operatorname{supp}(\varphi_{1})\subset\{x\in\mathbb{R}^{d}\,|\,\operatorname{dist}(\operatorname{supp}(\varphi),x)\leq 3c\}.\] With this function we have that \[\begin{split}&\varphi[(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}]\\ &\qquad=\varphi[\varphi_{1}(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}\varphi_{1}]-\varphi(z-H_{\hbar,\mu})^{-1}(1-\varphi_{1}).\end{split} \tag{4.4}\] For the second term on the right-hand side of (4.4) we have by Lemma 4.1 for all \(N>\frac{d}{2}\) that \[\|\varphi(z-H_{\hbar,\mu})^{-1}(1-\varphi_{1})\|_{1}\leq C_{N}\frac{\langle z\rangle^{N+\frac{d}{2}}\hbar^{2N-d}}{d(z)^{2N+1}}, \tag{4.5}\] where \(C_{N}\) depends only on the number \(N\), the functions \(\varphi\), \(\varphi_{1}\) and the constant \(c\).
For the first term on the right-hand side of (4.4) we have by the resolvent formalism that \[\begin{split}\varphi_{1}(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}\varphi_{1}&=\sum_{j=1}^{d}(z-H_{\hbar,\mu})^{-1}[Q_{j}^{*}Q_{j},\varphi_{1}](z-\mathcal{H}_{\hbar,\mu})^{-1}\\ &=\sum_{j=1}^{d}(z-H_{\hbar,\mu})^{-1}\big{(}-i\hbar Q_{j}\partial_{x_{j}}\varphi_{1}-\hbar^{2}\partial_{x_{j}}^{2}\varphi_{1}\big{)}(z-\mathcal{H}_{\hbar,\mu})^{-1},\end{split} \tag{4.6}\] where \(\partial_{x_{j}}\varphi_{1}\) and \(\partial_{x_{j}}^{2}\varphi_{1}\) denote the first and second derivatives of \(\varphi_{1}\) with respect to \(x_{j}\), respectively. Notice that due to our choice of \(\varphi_{1}\) we have that \[\operatorname{dist}\big{(}\operatorname{supp}(\partial_{x_{j}}\varphi_{1}),\operatorname{supp}(\varphi)\big{)}\geq c\quad\text{and}\quad\operatorname{dist}\big{(}\operatorname{supp}(\partial_{x_{j}}^{2}\varphi_{1}),\operatorname{supp}(\varphi)\big{)}\geq c.\] Using (4.6) we have by Lemma 4.1 for all \(N>\frac{d}{2}\) that \[\begin{split}\|\varphi[\varphi_{1}(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}\varphi_{1}]\|_{1}&\leq\big{\|}(z-\mathcal{H}_{\hbar,\mu})^{-1}\big{\|}_{\operatorname{op}}\sum_{j=1}^{d}\Big{(}\hbar\big{\|}\varphi(z-H_{\hbar,\mu})^{-1}Q_{j}\partial_{x_{j}}\varphi_{1}\big{\|}_{1}+\hbar^{2}\big{\|}\varphi(z-H_{\hbar,\mu})^{-1}\partial_{x_{j}}^{2}\varphi_{1}\big{\|}_{1}\Big{)}\\ &\leq C_{N}\frac{\langle z\rangle^{N+\frac{d+1}{2}}\hbar^{2N-d}}{d(z)^{2N+1}}\frac{\hbar+\hbar^{2}}{|\operatorname{Im}(z)|},\end{split} \tag{4.7}\] where \(C_{N}\) depends only on the dimension, the number \(N\), the functions \(\varphi\), \(\varphi_{1}\) and the constant \(c\). From combining (4.4), (4.5) and (4.7) we obtain that \[\big{\|}\varphi[(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}]\big{\|}_{1}\leq C_{N}\frac{\langle z\rangle^{N+\frac{d+1}{2}}\hbar^{2N-d}}{|\operatorname{Im}(z)|^{2N+2}}. \tag{4.8}\] Combining (4.3), (4.8) and using properties of the integral we get for all \(N>\frac{d}{2}\) that \[\begin{split}\big{\|}\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})]\big{\|}_{1}&\leq\frac{1}{\pi}\int_{\mathbb{C}}\big{|}\bar{\partial}\tilde{f}(z)\big{|}\big{\|}\varphi[(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}]\big{\|}_{1}\,L(dz)\\ &\leq C_{N}\frac{\hbar^{2N-d}}{\pi}\int_{\mathbb{C}}\big{|}\bar{\partial}\tilde{f}(z)\big{|}\frac{\langle z\rangle^{N+\frac{d+1}{2}}}{|\operatorname{Im}(z)|^{2N+2}}\,L(dz)\leq\tilde{C}_{N}\hbar^{2N-d},\end{split} \tag{4.9}\] where the constant \(\tilde{C}_{N}\) depends on the dimension, the number \(N\), the functions \(\varphi\), \(\varphi_{1}\), \(f\) and the constant \(c\). We have in the last inequality used the properties of the almost analytic extension \(\tilde{f}\). The estimate in (4.9) concludes the proof.

**Lemma 4.4**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\). Suppose \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the open set \(\Omega\) and let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be the associated rough Schrödinger operator. Assume that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Let \(f\in C_{0}^{\infty}(\mathbb{R})\) and \(\varphi\in C_{0}^{\infty}(\Omega)\). Then it holds that_ \[\|\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})]\|_{1}\leq C\varepsilon^{2+\kappa}\hbar^{-d}.
\tag{4.10}\] _The constant \(C\) depends only on the dimension, the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\ldots,d\}\), and \(\|\partial_{x}^{\alpha}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) such that \(|\alpha|\leq 2\)._

Proof.: Let \(H_{\hbar,\mu}\) be the magnetic Schrödinger operator associated to \(\mathcal{H}_{\hbar,\mu}\). We then have that \[\|\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})]\|_{1}\leq\|\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})]\|_{1}+\|\varphi[f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})]\|_{1}. \tag{4.11}\] By Lemma 4.3 it follows for all \(N\in\mathbb{N}\) that \[\|\varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})]\|_{1}\leq C_{N}\hbar^{N}. \tag{4.12}\] To estimate the second term on the right-hand side of (4.11), let \(f_{1}\in C_{0}^{\infty}(\mathbb{R})\) be such that \(f_{1}(t)f(t)=f(t)\) for all \(t\in\mathbb{R}\). Moreover, let \(\varphi_{1}\in C_{0}^{\infty}(\Omega)\) be such that \(\varphi_{1}(x)=1\) for all \(x\in\operatorname{supp}(\varphi)\). We then have for each \(N\in\mathbb{N}_{0}\) that \[\begin{split}&\|\varphi[f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})]\|_{1}\\ &\leq\|\varphi[f_{1}(H_{\hbar,\mu})-f_{1}(H_{\hbar,\mu,\varepsilon})]\varphi_{1}f(H_{\hbar,\mu})\|_{1}+\|\varphi f_{1}(H_{\hbar,\mu,\varepsilon})[f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})]\|_{1}+C_{N}\hbar^{N}\\ &\leq C\hbar^{-d}\big{[}\|f_{1}(H_{\hbar,\mu})-f_{1}(H_{\hbar,\mu,\varepsilon})\|_{\operatorname{op}}+\|f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})\|_{\operatorname{op}}\big{]}+C_{N}\hbar^{N},\end{split} \tag{4.13}\] where we have used Lemma 4.2 three times. We can do this since \(V,V_{\varepsilon}\in L^{\infty}(\mathbb{R}^{d})\) and the functions \(\varphi\) and \(1-\varphi_{1}\) have disjoint supports. Applying Theorem 2.3 and the resolvent formalism we get that \[\begin{split}\|f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})\|_{\operatorname{op}}&\leq\frac{1}{\pi}\int_{\mathbb{C}}|\bar{\partial}\tilde{f}(z)|\|(z-H_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu,\varepsilon})^{-1}\|_{\operatorname{op}}\,L(dz)\\ &\leq\frac{1}{\pi}\int_{\mathbb{C}}\frac{|\bar{\partial}\tilde{f}(z)|}{|\operatorname{Im}(z)|^{2}}\|V-V_{\varepsilon}\|_{\operatorname{op}}\,L(dz)\\ &\leq C\varepsilon^{2+\kappa},\end{split} \tag{4.14}\] where we in the last inequality have used that \(\tilde{f}\) is an almost analytic extension with compact support, together with Lemma 2.4. Analogously we obtain that \[\|f_{1}(H_{\hbar,\mu})-f_{1}(H_{\hbar,\mu,\varepsilon})\|_{\operatorname{op}}\leq C\varepsilon^{2+\kappa}. \tag{4.15}\] Combining the estimates in (4.11), (4.12), (4.13), (4.14) and (4.15) we obtain the estimate in (4.10). This concludes the proof.

Before we proceed we will need a technical lemma, which gives us a version of the estimate (3.9) from Proposition 3.13.

**Lemma 4.5**.: _Let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be a rough Schrödinger operator acting in \(L^{2}(\mathbb{R}^{d})\) of regularity \(\tau\geq 2\) with \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), \(\hbar_{0}\) sufficiently small. Assume that \(a_{j}\in C_{0}^{\infty}(\mathbb{R}^{d})\) for all \(j\in\{1,\ldots,d\}\) and \(V_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{d})\)._
_Suppose there exist an open set \(\Omega\subset\operatorname{supp}(V_{\varepsilon})\) and a constant \(c>0\) such that_ \[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\] _Let \(\chi_{\hbar}(t)\) be the function from Remark 3.12, \(f\in C_{0}^{\infty}(\mathbb{R})\) and \(\varphi\in C_{0}^{\infty}(\Omega)\). Then it holds for \(s\in\mathbb{R}\) that_ \[\|\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}\leq C\hbar^{-d}.\] _The constant \(C\) depends only on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\ldots,d\}\), \(\|V_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{d})}\) and the numbers \(C_{\alpha}\) from Assumption 3.7._

Proof.: Under the assumptions of the lemma we have that \(a\) and \(V_{\varepsilon}\) satisfy Assumption 3.7. Hence, if we can find \(\theta\in C_{0}^{\infty}(\Omega\times\mathbb{R}^{d})\) such that for all \(N\in\mathbb{N}\) and some constants \(C_{N}\) we have that \[\begin{split}&\big{\|}\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\\ &\quad-\varphi\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\varphi\big{\|}_{1}\leq C_{N}\hbar^{N},\end{split} \tag{4.16}\] then the result will follow from Lemma 3.14, since this lemma gives us that \[\|\varphi\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\varphi\|_{1}\leq C\hbar^{-d}.\] In order to find such a \(\theta\), we observe that since \(V_{\varepsilon}\) and the \(a_{j}\) are bounded for all \(j\in\{1,\ldots,d\}\), there exists a \(K>1\) such that \[a_{\varepsilon,0}^{f}(x,p)=0\qquad\text{if }|p|\geq K-1,\] where we have used that \(f\) is compactly supported and the notation \(a_{\varepsilon,0}^{f}(x,p)=f((p-\mu a(x))^{2}+V_{\varepsilon}(x))\). Hence we will choose \(\theta\in C_{0}^{\infty}(\Omega\times B(0,K+1))\) such that \[(\operatorname{supp}(\varphi)\times\mathbb{R}^{d})\cap\operatorname{supp}(1-\theta)\cap\operatorname{supp}(a_{\varepsilon,0}^{f})=\emptyset.\] Hence, from applying Lemma 3.6 and Theorem 3.9 we obtain that \[\|\varphi(1-\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta))f(H_{\hbar,\mu,\varepsilon})\|_{\operatorname{op}}\leq C_{N}\hbar^{N}. \tag{4.17}\] By Theorem 3.2 and Lemma 4.2 we have that \(\|\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\|_{1}\leq C\hbar^{-d}\) and \(\|\varphi f(H_{\hbar,\mu,\varepsilon})\|_{1}\leq C\hbar^{-d}\), respectively.
Hence we get that \[\begin{split}&\big{\|}\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi-\varphi\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)\varphi\big{\|}_{1}\\ &\quad\leq\big{\|}\varphi(1-\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta))f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\big{\|}_{1}\\ &\quad\quad+\big{\|}\varphi\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})(1-\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta))\varphi\big{\|}_{1}\\ &\quad\leq C\hbar^{-d}\|\varphi(1-\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta))f(H_{\hbar,\mu,\varepsilon})\|_{\operatorname{op}}\leq C_{N}\hbar^{N},\end{split} \tag{4.18}\] where we have used (4.17). This establishes (4.16) and concludes the proof.

In the same manner as in the previous lemma we will prove an asymptotic formula for the case of a compactly supported potential.

**Lemma 4.6**.: _Let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be a rough Schrödinger operator acting in \(L^{2}(\mathbb{R}^{d})\) of regularity \(\tau\geq 2\) with \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), \(\hbar_{0}\) sufficiently small. Assume that \(a_{j}\in C_{0}^{\infty}(\mathbb{R}^{d})\) for all \(j\in\{1,\ldots,d\}\) and \(V_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{d})\). Suppose there exist an open set \(\Omega\subset\operatorname{supp}(V_{\varepsilon})\) and a constant \(c>0\) such that_ \[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\] _Then for \(g\in C^{\infty,\gamma}(\mathbb{R})\) with \(\gamma\in[0,1]\) and any \(\varphi\in C_{0}^{\infty}(\Omega)\) it holds that_ \[\Big{|}\operatorname{Tr}[\varphi g(H_{\hbar,\mu,\varepsilon})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g(p^{2}+V_{\varepsilon}(x))\varphi(x)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d}.\] _The constant \(C\) depends only on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\ldots,d\}\), \(\|V_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{d})}\) and the numbers \(C_{\alpha}\) from Assumption 3.7._

Proof.: As in the proof of Lemma 4.5 we let \(\theta\in C_{0}^{\infty}(\Omega\times B(0,K+1))\) be such that \[(\operatorname{supp}(\varphi)\times\mathbb{R}^{d})\cap\operatorname{supp}(1-\theta)\cap\operatorname{supp}(a^{f}_{\varepsilon,0})=\emptyset,\] where \(a^{f}_{\varepsilon,0}(x,p)=f((p-\mu a(x))^{2}+V_{\varepsilon}(x))\) and \(f\in C_{0}^{\infty}(\mathbb{R})\) is chosen such that \(f(t)g(t)=g(t)\) for all \(t\in\mathbb{R}\) (we may assume \(g\) is compactly supported since \(H_{\hbar,\mu,\varepsilon}\) is lower semibounded). Then, as in the proof of Lemma 4.5, we get for all \(N\in\mathbb{N}\) that \[\operatorname{Tr}[\varphi g(H_{\hbar,\mu,\varepsilon})]=\operatorname{Tr}[\varphi\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)g(H_{\hbar,\mu,\varepsilon})]+\mathcal{O}(\hbar^{N}). \tag{4.19}\] This choice of \(\theta\) ensures that the assumptions of Theorem 3.15 are satisfied.
Hence we get that \[\Big{|}\operatorname{Tr}[\varphi\operatorname{Op}_{\hbar}^{\mathrm{w}}(\theta)g(H_{\hbar,\mu,\varepsilon})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g((p-\mu a(x))^{2}+V_{\varepsilon}(x))\varphi(x)\theta(x,p)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d}. \tag{4.20}\] From the support properties of \(\theta\) and the change of variables \(p\mapsto p+\mu a(x)\) we have that \[\int_{\mathbb{R}^{2d}}g((p-\mu a(x))^{2}+V_{\varepsilon}(x))\varphi(x)\theta(x,p)\,dxdp=\int_{\mathbb{R}^{2d}}g(p^{2}+V_{\varepsilon}(x))\varphi(x)\,dxdp. \tag{4.21}\] From combining (4.19), (4.20) and (4.21) we obtain the desired estimate. This concludes the proof.

**Lemma 4.7**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\). Suppose \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the open set \(\Omega\) and let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be the associated rough Schrödinger operator. Assume that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Moreover, let \(\chi_{\hbar}(t)\) be the function from Remark 3.12, \(f\in C_{0}^{\infty}(\mathbb{R})\) and \(\varphi\in C_{0}^{\infty}(\Omega)\). Then it holds for \(s\in\mathbb{R}\) that_ \[\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}\leq C\varepsilon^{2+\kappa}\hbar^{-d-1}. \tag{4.22}\] _Moreover, suppose there exists some \(c>0\) such that_ \[|V(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\] _Then it holds that_ \[\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi\|_{1}\leq C\hbar^{-d}. \tag{4.23}\] _The constant \(C\) depends only on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\ldots,d\}\) and \(\|\partial_{x}^{\alpha}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) such that \(|\alpha|\leq 2\)._

Proof.: Let \(H_{\hbar,\mu}\) be the magnetic Schrödinger operator associated to \(\mathcal{H}_{\hbar,\mu}\). We then have \[\begin{split}&\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}\\ &\leq\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(H_{\hbar,\mu})\varphi\|_{1}\\ &\qquad+\|\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(H_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}.\end{split} \tag{4.24}\] We start by estimating the first term on the right-hand side of (4.24).
By applying Lemma 4.3 we get that \[\begin{split}&\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(H_{\hbar,\mu})\varphi\|_{1}\\ &\quad\leq C\hbar^{-d}\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)\|_{\operatorname{op}}+C\hbar^{-1}\|f(\mathcal{H}_{\hbar,\mu})\varphi-f(H_{\hbar,\mu})\varphi\|_{1}\\ &\quad\leq C\hbar^{-d}\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)\|_{\operatorname{op}}+C\hbar^{N},\end{split} \tag{4.25}\] where we in the first inequality have added and subtracted the term \(\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi\), used the triangle inequality, Lemma 4.3 and that \(\sup_{t\in\mathbb{R}}\chi_{\hbar}(t)\leq C\hbar^{-1}\). In the second inequality we have used Lemma 4.3 again. We observe that we can write the function \(\chi_{\hbar}(z-s)\) as \[\chi_{\hbar}(z-s)=\mathcal{F}_{\hbar}^{-1}[\hat{\chi}](z-s). \tag{4.26}\] From this expression we observe that \(\chi_{\hbar}(z-s)\) extends to an entire function of \(z\), since \(\hat{\chi}\in C_{0}^{\infty}(\mathbb{R})\). Hence, using the Helffer-Sjöstrand formula (Theorem 2.3) we get that \[\begin{split}&\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)\\ &\qquad=-\frac{1}{\pi}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z)\chi_{\hbar}(z-s)\varphi[(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}]\,L(dz),\end{split} \tag{4.27}\] where \(\tilde{f}\) is an almost analytic extension of \(f\). From the proof of Lemma 4.3 we have the estimate \[\big{\|}\varphi[(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}]\big{\|}_{\operatorname{op}}\leq C_{N}\frac{\langle z\rangle^{N+\frac{d+1}{2}}\hbar^{2N-d}}{|\operatorname{Im}(z)|^{2N+2}}, \tag{4.28}\] since the trace norm dominates the operator norm. Combining (4.27), (4.28) and using the properties of \(\tilde{f}\) and \(\chi_{\hbar}\) we obtain that \[\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)\|_{\operatorname{op}}\leq C_{N}\hbar^{N}, \tag{4.29}\] where \(C_{N}\) depends on the dimension, the number \(N\) and the functions \(f\) and \(\varphi\). Combining the estimates in (4.25) and (4.29) we obtain that \[\|\varphi f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(H_{\hbar,\mu})\varphi\|_{1}\leq C_{N}\hbar^{N}. \tag{4.30}\] We now turn to the second term on the right-hand side of (4.24). Here we do the same type of estimates as in (4.25): \[\begin{split}&\|\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(H_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}\\ &\leq C\hbar^{-d}\|\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)\|_{\operatorname{op}}\\ &\qquad+C\hbar^{-1}\|f(H_{\hbar,\mu})\varphi-f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}\\ &\leq C\hbar^{-d}\|\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)\|_{\operatorname{op}}+C\varepsilon^{2+\kappa}\hbar^{-d-1},\end{split} \tag{4.31}\] where the last inequality follows from the proof of Lemma 4.4.
As above, we again use the Helffer-Sjöstrand formula and the resolvent formalism and obtain that \[\begin{split}&\|\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)\|_{\operatorname{op}}\\ &\qquad=\frac{1}{\pi}\big{\|}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z)\chi_{\hbar}(z-s)\varphi[(z-H_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu,\varepsilon})^{-1}]\,L(dz)\big{\|}_{\operatorname{op}}\\ &\qquad\leq\frac{1}{\pi}\int_{\mathbb{C}}\frac{|\bar{\partial}\tilde{f}(z)|}{|\operatorname{Im}(z)|^{2}}|\chi_{\hbar}(z-s)|\|V-V_{\varepsilon}\|_{\operatorname{op}}\,L(dz)\leq C\hbar^{-1}\varepsilon^{2+\kappa}.\end{split} \tag{4.32}\] Combining the estimates in (4.31) and (4.32) we get that \[\|\varphi f(H_{\hbar,\mu})\chi_{\hbar}(H_{\hbar,\mu}-s)f(H_{\hbar,\mu})\varphi-\varphi f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi\|_{1}\leq C\varepsilon^{2+\kappa}\hbar^{-d-1}. \tag{4.33}\] Finally, by combining the estimates in (4.24), (4.30) and (4.33) we obtain the estimate stated in (4.22). By combining the estimate in (4.22) with Lemma 4.5 we obtain the estimate (4.23). This concludes the proof.

**Lemma 4.8**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\). Suppose \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the open set \(\Omega\) and let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be the associated rough Schrödinger operator. Assume that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Moreover, suppose there exists some \(c>0\) such that_ \[|V(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega,\] _and let \(\varphi\in C_{0}^{\infty}(\Omega)\). Then for \(g\in C^{\infty,\gamma}(\mathbb{R})\) with \(\gamma\in[0,1]\) it holds that_ \[\Big{|}\operatorname{Tr}[\varphi g(\mathcal{H}_{\hbar,\mu})]-\operatorname{Tr}[\varphi g(H_{\hbar,\mu,\varepsilon})]\Big{|}\leq C\hbar^{1+\gamma-d}+C^{\prime}\varepsilon^{2+\kappa}\hbar^{-d-1}. \tag{4.34}\] _The constants \(C\) and \(C^{\prime}\) depend on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\ldots,d\}\) and \(\|\partial_{x}^{\alpha}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) such that \(|\alpha|\leq 2\)._

Proof.: Since both operators are lower semibounded we may assume that \(g\) is compactly supported. Let \(f\in C_{0}^{\infty}(\mathbb{R})\) be such that \(f(t)g(t)=g(t)\) for all \(t\in\mathbb{R}\), and let \(\varphi_{1}\in C_{0}^{\infty}(\Omega)\) be such that \(\varphi(x)\varphi_{1}(x)=\varphi(x)\) for all \(x\in\mathbb{R}^{d}\). Moreover, let \(\chi_{\hbar}(t)\) be the function from Remark 3.12 and set \(g^{(\hbar)}(t)=g*\chi_{\hbar}(t)\).
With this notation set up we have that \[\begin{split}&\Big{|}\operatorname{Tr}[\varphi g(\mathcal{H}_{\hbar,\mu})]-\operatorname{Tr}[\varphi g(H_{\hbar,\mu,\varepsilon})]\Big{|}\\ &\quad\leq\|\varphi\varphi_{1}f(\mathcal{H}_{\hbar,\mu})(g(\mathcal{H}_{\hbar,\mu})-g^{(\hbar)}(\mathcal{H}_{\hbar,\mu}))f(\mathcal{H}_{\hbar,\mu})\varphi_{1}\|_{1}\\ &\qquad\quad+\|\varphi\varphi_{1}f(H_{\hbar,\mu,\varepsilon})(g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon}))f(H_{\hbar,\mu,\varepsilon})\varphi_{1}\|_{1}+\|\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\int_{\mathbb{R}}|g(s)|\,ds\\ &\qquad\quad\times\sup_{s\in\mathbb{R}}\|\varphi_{1}f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi_{1}-\varphi_{1}f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi_{1}\|_{1}. \tag{4.35}\end{split}\] Lemma 4.5 and Lemma 4.7 give us that the assumptions of Proposition 3.13 are fulfilled with \(B\) equal to \(f(\mathcal{H}_{\hbar,\mu})\varphi_{1}\) and \(f(H_{\hbar,\mu,\varepsilon})\varphi_{1}\), respectively. Hence we have that \[\|\varphi\varphi_{1}f(\mathcal{H}_{\hbar,\mu})(g(\mathcal{H}_{\hbar,\mu})-g^{(\hbar)}(\mathcal{H}_{\hbar,\mu}))f(\mathcal{H}_{\hbar,\mu})\varphi_{1}\|_{1}\leq C\hbar^{1+\gamma-d} \tag{4.36}\] and \[\|\varphi\varphi_{1}f(H_{\hbar,\mu,\varepsilon})(g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon}))f(H_{\hbar,\mu,\varepsilon})\varphi_{1}\|_{1}\leq C\hbar^{1+\gamma-d}. \tag{4.37}\] From applying Lemma 4.7 we get that \[\begin{split}&\sup_{s\in\mathbb{R}}\|\varphi_{1}f(\mathcal{H}_{\hbar,\mu})\chi_{\hbar}(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu})\varphi_{1}-\varphi_{1}f(H_{\hbar,\mu,\varepsilon})\chi_{\hbar}(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})\varphi_{1}\|_{1}\\ &\qquad\leq C\varepsilon^{2+\kappa}\hbar^{-d-1}. \tag{4.38}\end{split}\] Finally, from combining the estimates in (4.35), (4.36), (4.37) and (4.38) we obtain the desired estimate and this concludes the proof.

## 5 Local model problem

Before we state and prove our local model problem we will state a result on comparison of phase-space integrals that we will need later.

**Lemma 5.1**.: _Suppose \(\Omega\subset\mathbb{R}^{d}\) is an open set and let \(\varphi\in C_{0}^{\infty}(\Omega)\). Moreover, let \(\varepsilon>0\), \(\hbar\in(0,\hbar_{0}]\) and \(V,V_{\varepsilon}\in L^{1}_{loc}(\mathbb{R}^{d})\cap C(\Omega)\). Suppose that_ \[\|V-V_{\varepsilon}\|_{L^{\infty}(\Omega)}\leq c\varepsilon^{k+\kappa}. \tag{5.1}\] _Then for \(\gamma\in[0,1]\) and \(\varepsilon\) sufficiently small it holds that_ \[\Big{|}\int_{\mathbb{R}^{2d}}[g_{\gamma}(p^{2}+V_{\varepsilon}(x))-g_{\gamma}(p^{2}+V(x))]\varphi(x)\,dxdp\Big{|}\leq C\varepsilon^{k+\kappa}, \tag{5.2}\] _where the constant \(C\) depends on the dimension and the numbers \(\gamma\) and \(c\) in (5.1)._

Proof.: Firstly, we observe that due to (5.1) we have that \[\sup_{x\in\Omega}\big{|}V(x)_{-}-V_{\varepsilon}(x)_{-}\big{|}\leq c\varepsilon^{k+\kappa}. \tag{5.3}\] To compare the phase-space integrals we start by evaluating the integral in \(p\).
This yields \[\begin{split}\int_{\mathbb{R}^{2d}}&[g_{\gamma}(p^ {2}+V_{\varepsilon}(x))-g_{\gamma}(p^{2}+V(x))]\varphi(x)\,dxdp\\ &=L^{\mathrm{cl}}_{\gamma,d}\int_{\mathbb{R}^{d}}\big{[}V_{ \varepsilon}(x)_{-}^{\frac{d}{2}+\gamma}-V(x)_{-}^{\frac{d}{2}+\gamma}\big{]} \varphi(x)\,dx,\end{split} \tag{5.4}\] where the constant \(L^{\mathrm{cl}}_{\gamma,d}\) is given by \[L^{\mathrm{cl}}_{\gamma,d}=\frac{\Gamma(\gamma+1)}{(4\pi)^{\frac{d}{2}}\Gamma (\gamma+\frac{d}{2}+1)},\] where \(\Gamma\) is the standard gamma function. Since both \(V_{\varepsilon}\) and \(V\) are bounded from below, the functions \(V_{\varepsilon}(x)_{-}\) and \(V(x)_{-}\) are bounded, and since \(d\geq 3\) we can use that the map \(r\mapsto r^{\frac{d}{2}+\gamma}\) is uniformly Lipschitz continuous when restricted to a compact domain. This gives us that \[\Big{|}\int_{\mathbb{R}^{d}}\big{[}V_{\varepsilon}(x)_{-}^{\frac{d}{2}+\gamma}-V(x)_ {-}^{\frac{d}{2}+\gamma}\big{]}\varphi(x)\,dx\Big{|}\leq C_{\gamma}\int_{\mathbb{R}^ {d}}\big{|}V_{\varepsilon}(x)_{-}-V(x)_{-}\big{|}|\varphi(x)|\,dx\leq\tilde{C}_ {\gamma}\varepsilon^{k+\mu}, \tag{5.5}\] where we have used (5.3) and that \(\operatorname{supp}(\varphi)\subset\Omega\). From combining (5.4) and (5.5) we obtain the desired estimate and this concludes the proof. With this established we can now state our model problem. **Theorem 5.2**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) and let \(\gamma\in[0,1]\). Suppose \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the open set \(\Omega\) and let \(H_{\hbar,\mu}=(-i\hbar\nabla-\mu a)^{2}+V\) be the associated Schrödinger operator. Assume that \(\mu\leq\mu_{0}<1\) and \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Moreover, suppose there exists some \(c>0\) such that_ \[|V(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\] _Then for any \(\varphi\in C_{0}^{\infty}(\Omega)\) it holds that_ \[\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1}{(2 \pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp \Big{|}\leq C\hbar^{1+\gamma-d},\] _where the constant \(C\) depends on the dimension, the numbers \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) and \(j\in\{1,\ldots,d\}\), and \(\|\partial_{x}^{\alpha}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) such that \(|\alpha|\leq 2\)._ Proof of Theorem 5.2.: Let \(H_{\hbar,\mu,\varepsilon}=(-i\hbar\nabla-\mu a)^{2}+V_{\varepsilon}\) be the rough Schrödinger operator associated to \(\mathcal{H}_{\hbar,\mu}\). In the construction of \(V_{\varepsilon}\) we have chosen \(\varepsilon=\hbar^{1-\delta}\), where \[\delta=\frac{\kappa-\gamma}{2+\kappa}. \tag{5.6}\] Note that since we assume \(\kappa>\gamma\), we have that \(1>\delta>0\). With this choice of \(\varepsilon\) and \(\delta\) we have that \[\varepsilon^{2+\kappa}=\hbar^{(1-\delta)(2+\kappa)}=\hbar^{2+\gamma}. 
\tag{5.7}\] Moreover, since we have assumed a non-critical condition for our original problem, we get that there exists a constant \(\tilde{c}\) such that for all \(\varepsilon\) sufficiently small it holds that \[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq\tilde{c}\qquad\text{for all }x\in\Omega.\] With this in place we have that \[\begin{split}&\Big{|}\operatorname{Tr}[\varphi g_{\gamma}( \mathcal{H}_{\hbar,\mu})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{ \gamma}(p^{2}+V(x))\varphi(x)\,dxdp\Big{|}\\ &\quad\leq\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H} _{\hbar,\mu})]-\operatorname{Tr}[\varphi g_{\gamma}(H_{\hbar,\mu,\varepsilon })]\Big{|}\\ &\quad\quad+\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(H_{\hbar,\mu,\varepsilon})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}( p^{2}+V_{\varepsilon}(x))\varphi(x)\,dxdp\Big{|}\\ &\quad\quad+\Big{|}\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2 d}}(g_{\gamma}(p^{2}+V_{\varepsilon}(x))-g_{\gamma}(p^{2}+V(x)))\varphi(x)\,dxdp \Big{|}.\end{split} \tag{5.8}\] We have by Lemma 4.8 that \[\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]- \operatorname{Tr}[\varphi g_{\gamma}(H_{\hbar,\mu,\varepsilon})]\Big{|}\leq C \hbar^{1+\gamma-d}+C\varepsilon^{2+\kappa}\hbar^{-d-1}=\tilde{C}\hbar^{1+ \gamma-d}, \tag{5.9}\] where in the last equality we have used (5.7). From Lemma 4.6 we get that \[\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(H_{\hbar,\mu,\varepsilon})]-\frac {1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V_{\varepsilon}(x)) \varphi(x)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d}. \tag{5.10}\] To estimate the last contribution in (5.8) we first notice that by construction of \(V_{\varepsilon}\) we have that \[\sup_{x\in\Omega}\big{|}V(x)_{-}-V_{\varepsilon}(x)_{-}\big{|}\leq C \varepsilon^{2+\kappa}=C\hbar^{2+\gamma}.\] Hence it follows from Lemma 5.1 that \[\Big{|}\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}(g_{\gamma}(p^{2}+V_{ \varepsilon}(x))-g_{\gamma}(p^{2}+V(x)))\varphi(x)\,dxdp\Big{|}\leq C\hbar^{2 +\gamma-d}. \tag{5.11}\] Finally, by combining (5.8), (5.9), (5.10) and (5.11) we obtain the desired estimate and this concludes the proof. ## 6 Proof of Theorem 1.2 This section is devoted to the proof of Theorem 1.2. The proof is based on the multi-scale techniques of [27] (see also [12, 16]). Before we start the proof we recall the following lemma from [27], where it is Lemma 5.4. **Lemma 6.1**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be an open set and let \(l\) be a function in \(C^{1}(\bar{\Omega})\) such that \(l>0\) on \(\bar{\Omega}\) and assume that there exists \(\rho\) in \((0,1)\) such that_ \[|\nabla_{x}l(x)|\leq\rho, \tag{6.1}\] _for all \(x\) in \(\Omega\)._ _Then_ * _There exists a sequence_ \(\{x_{k}\}_{k=0}^{\infty}\) _in_ \(\Omega\) _such that the open balls_ \(B(x_{k},l(x_{k}))\) _form a covering of_ \(\Omega\)_. Furthermore, there exists a constant_ \(N_{\rho}\)_, depending only on the constant_ \(\rho\)_, such that the intersection of more than_ \(N_{\rho}\) _balls is empty._ * _One can choose a sequence_ \(\{\varphi_{k}\}_{k=0}^{\infty}\) _such that_ \(\varphi_{k}\in C_{0}^{\infty}(B(x_{k},l(x_{k})))\) _for all_ \(k\) _in_ \(\mathbb{N}\)_. Moreover, for all multi-indices_ \(\alpha\) _and all_ \(k\) _in_ \(\mathbb{N}\)__ \[|\partial_{x}^{\alpha}\varphi_{k}(x)|\leq C_{\alpha}l(x_{k})^{-|\alpha|},\] _and_ \[\sum_{k=1}^{\infty}\varphi_{k}(x)=1,\] _for all_ \(x\) _in_ \(\Omega\)_._ The proof of the Lemma is analogous to the proof of [11, Theorem 1.4.10]. 
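As a preview of how Lemma 6.1 will be invoked, we record a minimal sketch of why the multi-scale function chosen in the next proof is admissible; here we assume, as the norms quoted in the theorems indicate, that \(\|\nabla V\|_{L^{\infty}(\mathbb{R}^{d})}<\infty\). For \(l(x)=A^{-1}\sqrt{|V(x)|^{2}+\hbar^{\frac{4}{3}}}\) we have \[\nabla l(x)=A^{-1}\frac{V(x)\nabla V(x)}{\sqrt{|V(x)|^{2}+\hbar^{\frac{4}{3}}}},\qquad\text{so}\qquad|\nabla l(x)|\leq A^{-1}\|\nabla V\|_{L^{\infty}(\mathbb{R}^{d})},\] since \(|V(x)|\leq\sqrt{|V(x)|^{2}+\hbar^{\frac{4}{3}}}\). Hence any \(A>8\|\nabla V\|_{L^{\infty}(\mathbb{R}^{d})}\) yields \(|\nabla l(x)|\leq\rho<\tfrac{1}{8}\), uniformly in \(\hbar\), which is exactly the condition (6.1) required in Lemma 6.1.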
Before we give a proof of Theorem 1.2 we will prove the following theorem, where we have an additional assumption on the magnetic field compared to Theorem 1.2. **Theorem 6.2**.: _Let \(\mathcal{H}_{\hbar,\mu}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) and let \(\gamma\in[0,1]\). If \(\gamma=0\) we assume \(d\geq 3\) and if \(\gamma\in(0,1]\) we assume \(d\geq 4\). Suppose that \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the set \(\Omega\) and the functions \(V\) and \(a_{j}\) for \(j\in\{1,\ldots,d\}\). Then for any \(\varphi\in C_{0}^{\infty}(\Omega)\) it holds that_ \[\Big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1}{( 2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp \Big{|}\leq C\hbar^{1+\gamma-d}\] _for all \(\hbar\in(0,\hbar_{0}]\) and \(\mu\leq\mu_{0}<1\), with \(\hbar_{0}\) sufficiently small. The constant \(C\) depends on the dimension, the numbers \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\|\partial^{\alpha}a_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) and \(j\in\{1,\ldots,d\}\), and \(\|\partial_{x}^{\alpha}V\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) such that \(|\alpha|\leq 2\)._ Proof.: Since \(\varphi\in C_{0}^{\infty}(\Omega)\) there exists a number \(\epsilon>0\) such that \[\operatorname{dist}(\operatorname{supp}(\varphi),\Omega^{c})>\epsilon.\] We need this number to ensure we stay in the region where \(\mathcal{H}\) behaves as a magnetic Schrödinger operator. We let \[l(x)=A^{-1}\sqrt{|V(x)|^{2}+\hbar^{\frac{4}{3}}}\quad\text{and}\quad f(x)= \sqrt{l(x)},\] where we choose \(A>0\) sufficiently large such that \[l(x)\leq\min\big{(}\tfrac{\epsilon}{11},1\big{)}\quad\text{and}\quad|\nabla l (x)|\leq\rho<\frac{1}{8} \tag{6.2}\] for all \(x\in\overline{\operatorname{supp}(\varphi)}\). Note that \(A\) can be chosen independently of \(\hbar\), uniformly for \(\hbar\in(0,\hbar_{0}]\). Moreover, we have that \[|V(x)|\leq Al(x). \tag{6.3}\] We can apply Lemma 6.1 with the set \(\operatorname{supp}(\varphi)\) and the function \(l(x)\), since the presence of \(\hbar\) in the definition of \(l\) ensures that \(l>0\). Hence there exists a sequence \(\{x_{k}\}_{k=1}^{\infty}\) in \(\operatorname{supp}(\varphi)\) such that \(\operatorname{supp}(\varphi)\subset\cup_{k\in\mathbb{N}}B(x_{k},l(x_{k}))\) and there exists a constant \(N_{\frac{1}{8}}\) such that at most \(N_{\frac{1}{8}}\) of the sets \(B(x_{k},l(x_{k}))\) can have a non-empty intersection. 
Moreover, there exists a sequence \(\{\varphi_{k}\}_{k=1}^{\infty}\) such that \(\varphi_{k}\in C_{0}^{\infty}(B(x_{k},l(x_{k})))\), \[\big{|}\partial_{x}^{\alpha}\varphi_{k}(x)\big{|}\leq C_{\alpha}l(x_{k})^{-| \alpha|}\qquad\text{for all }\alpha\in\mathbb{N}_{0}^{d}, \tag{6.4}\] and \[\sum_{k=1}^{\infty}\varphi_{k}(x)=1\qquad\text{for all }x\in\operatorname{supp}( \varphi).\] We have that \(\cup_{k\in\mathbb{N}}B(x_{k},l(x_{k}))\) is an open covering of \(\operatorname{supp}(\varphi)\) and since this set is compact there exists a finite subset \(\mathcal{I}^{\prime}\subset\mathbb{N}\) such that \[\operatorname{supp}(\varphi)\subset\bigcup_{k\in\mathcal{I}^{\prime}}B(x_{k},l(x_{k})).\] In order to ensure that we have a finite partition of unity over the set \(\operatorname{supp}(\varphi)\) we define the set \[\mathcal{I}=\bigcup_{j\in\mathcal{I}^{\prime}}\Big{\{}k\in\mathbb{N}\,\big{|} \,B(x_{k},l(x_{k}))\cap B(x_{j},l(x_{j}))\neq\emptyset\Big{\}}.\] We have that \(\mathcal{I}\) is still finite since at most \(N_{\frac{1}{8}}\) balls can have non-empty intersection. Moreover, we have that \[\sum_{k\in\mathcal{I}}\varphi_{k}(x)=1\qquad\text{for all }x\in\operatorname{supp}( \varphi).\] From this, and by linearity of the trace, we get the identity \[\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]= \sum_{k\in\mathcal{I}}\operatorname{Tr}[\varphi_{k}\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]. \tag{6.5}\] In what follows we will use the following notation \[l_{k}=l(x_{k}),\quad f_{k}=f(x_{k}),\quad h_{k}=\frac{\hbar}{l_{k}f_{k}}\quad \text{and}\quad\mu_{k}=\frac{\mu l_{k}}{f_{k}}.\] We have that \(h_{k}\) is uniformly bounded from above since \[l(x)f(x)=A^{-\frac{3}{2}}(|V(x)|^{2}+\hbar^{\frac{4}{3}})^{\frac {3}{4}}\geq A^{-\frac{3}{2}}\hbar,\] for all \(x\in\mathbb{R}^{d}\). Moreover, due to our choice of \(f\) and \(l\), we have that \(\mu_{k}\) is bounded from above by \(\mu_{0}\), since for all \(x\in\mathbb{R}^{d}\) we have that \[\frac{l(x)}{f(x)}=\sqrt{l(x)}\leq 1. \tag{6.6}\] We define the two unitary operators \(U_{l}\) and \(T_{z}\) by \[U_{l}u(x)=l^{\frac{d}{2}}u(lx)\quad\text{and}\quad T_{z}u(x)=u(x+z)\qquad \text{for }u\in L^{2}(\mathbb{R}^{d}).\] Moreover, we set \[\tilde{\mathcal{H}}_{h_{k},\mu_{k}}=f_{k}^{-2}(T_{x_{k}}U_{l_{k}})\mathcal{H} _{\hbar,\mu}(T_{x_{k}}U_{l_{k}})^{*}.\] Since \(\mathcal{H}_{\hbar,\mu}\) satisfies Assumption 1.1 with the open set \(\Omega\) and the functions \(V\) and \(a_{j}\) for all \(j\in\{1,\ldots,d\}\), the rescaled operator \(\tilde{\mathcal{H}}_{h_{k},\mu_{k}}\) satisfies Assumption 1.1 with the open set \(B(0,10)\) and the functions \(\tilde{V}\) and \(\tilde{a}_{j}\) for all \(j\in\{1,\ldots,d\}\), where \[\tilde{V}(x)=f_{k}^{-2}V(l_{k}x+x_{k})\quad\text{and}\quad\tilde{a}_{j}(x)=l_ {k}^{-1}a_{j}(l_{k}x+x_{k})\quad\text{for all }j\in\{1,\ldots,d\}. \tag{6.7}\] We will need to establish that this rescaled operator satisfies the assumptions of Theorem 5.2 with the parameters \(h_{k}\) and \(\mu_{k}\) and the set \(B(0,8)\). Since \(h_{k}\) is bounded from above and \(\mu_{k}\leq\mu_{0}\), as established above, what remains is to verify the non-critical condition. To establish this we first observe that by (6.2) we have \[(1-8\rho)l_{k}\leq l(x)\leq(1+8\rho)l_{k}\qquad\text{for all }x\in B(x_{k},8l_{k}). 
\tag{6.8}\] Using (6.8) we have for \(x\) in \(B(0,8)\) that \[\left|\tilde{V}(x)\right|+h_{k}^{\frac{2}{3}} =f_{k}^{-2}\left|V(l_{k}x+x_{k})\right|+(\tfrac{\hbar}{l_{k}f_{k}})^{\frac{2}{3}}=l_{k}^{-1}(|V(l_{k}x+x_{k})|+\hbar^{\frac{2}{3}})\] \[\geq l_{k}^{-1}Al(l_{k}x+x_{k})\geq(1-8\rho)A.\] Hence we have the non-critical condition for all \(x\in B(0,8)\). What remains is to verify that the norms of the functions \(\widetilde{\varphi_{k}\varphi}=(T_{x_{k}}U_{l_{k}})\varphi_{k}\varphi(T_{x_{ k}}U_{l_{k}})^{*}\), \(\tilde{V}\) and \(\tilde{a}_{j}\) for all \(j\in\{1,\ldots,d\}\) are independent of \(\hbar\) and \(k\). Due to (6.3) and the fact that \(l\) is slowly varying (6.8), we have that \[\|\tilde{V}\|_{L^{\infty}(B(0,8))}=\sup_{x\in B(0,8)}\left|f_{k}^{-2}V(l_{k}x +x_{k})\right|\leq(1+8\rho)A.\] For \(\alpha\in\mathbb{N}_{0}^{d}\) with \(1\leq|\alpha|\leq 2\) we have that \[\|\partial_{x}^{\alpha}\tilde{V}(x)\|_{L^{\infty}(B(0,8))}=f_{k}^{-2}l_{k}^{| \alpha|}\sup_{x\in B(0,8)}\left|(\partial_{x}^{\alpha}V)(l_{k}x+x_{k})\right| \leq\|\partial_{x}^{\alpha}V(x)\|_{L^{\infty}(\mathbb{R}^{d})}.\] For \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) we have that \[\|\partial_{x}^{\alpha}\tilde{a}_{j}(x)\|_{L^{\infty}(B(0,8))}=l_{k}^{|\alpha |-1}\sup_{x\in B(0,8)}\left|(\partial_{x}^{\alpha}a_{j})(l_{k}x+x_{k})\right| \leq\|\partial_{x}^{\alpha}a_{j}(x)\|_{L^{\infty}(\mathbb{R}^{d})},\] for all \(j\in\{1,\ldots,d\}\). Both bounds are independent of \(k\) and \(\hbar\). The last numbers we check are \(\|\partial_{x}^{\alpha}\widetilde{\varphi_{k}\varphi}\|_{L^{\infty}(\mathbb{ R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\). Here we have, by the construction of \(\varphi_{k}\) (see (6.4)), for all \(\alpha\in\mathbb{N}_{0}^{d}\) that \[\|\partial_{x}^{\alpha}\widetilde{\varphi_{k}\varphi}\|_{L^{ \infty}(\mathbb{R}^{d})} =\sup_{x\in\mathbb{R}^{d}}\left|l_{k}^{|\alpha|}\sum_{\beta\leq \alpha}\binom{\alpha}{\beta}(\partial_{x}^{\beta}\varphi_{k})(l_{k}x+x_{k})( \partial_{x}^{\alpha-\beta}\varphi)(l_{k}x+x_{k})\right|\] \[\leq C_{\alpha}\sup_{x\in\mathbb{R}^{d}}\sum_{\beta\leq\alpha} \binom{\alpha}{\beta}l_{k}^{|\alpha-\beta|}\left|(\partial_{x}^{\alpha-\beta} \varphi)(l_{k}x+x_{k})\right|\leq\widetilde{C}_{\alpha}.\] With this we have established that all the numbers on which the constant from Theorem 5.2 depends are independent of \(\hbar\) and \(k\). From applying Theorem 5.2 we get that \[\begin{split}&\big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{ \hbar,\mu})]-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x)) \varphi(x)\,dxdp\big{|}\\ &\leq\,\sum_{k\in\mathcal{I}}\big{|}\operatorname{Tr}[\varphi_{k} \varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1}{(2\pi\hbar)^{d}}\int_{ \mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi_{k}\varphi(x)\,dxdp\big{|}\\ &\leq\,\sum_{k\in\mathcal{I}}f_{k}^{2\gamma}\big{|}\operatorname{Tr }[g_{\gamma}(\tilde{\mathcal{H}}_{h_{k},\mu_{k}})\widetilde{\varphi_{k}\varphi}]- \frac{1}{(2\pi h_{k})^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+\tilde{V}(x ))\widetilde{\varphi_{k}\varphi}(x)\,dxdp\big{|}\\ &\leq C\sum_{k\in\mathcal{I}}h_{k}^{1+\gamma-d}f_{k}^{2\gamma}. 
\end{split} \tag{6.9}\] When we consider the sum over the error terms we have \[\begin{split}\sum_{k\in\mathcal{I}}Ch_{k}^{1+\gamma-d}f_{k}^{2\gamma }&=\sum_{k\in\mathcal{I}}\tilde{C}\hbar^{1+\gamma-d}\int_{B(x_{k},l _{k})}l_{k}^{-d}f_{k}^{2\gamma}(l_{k}f_{k})^{d-1-\gamma}\,dx\\ &=\sum_{k\in\mathcal{I}}\tilde{C}\hbar^{1+\gamma-d}\int_{B(x_{k}, l_{k})}l_{k}^{\gamma-d}l_{k}^{\frac{3d-3-3\gamma}{2}}\,dx\\ &\leq\sum_{k\in\mathcal{I}}\hat{C}\hbar^{1+\gamma-d}\int_{B(x_{k},l_{k})}l(x)^{\frac{d-3-\gamma}{2}}\,dx\leq C\hbar^{1+\gamma-d},\end{split} \tag{6.10}\] where we have used the definition of \(f_{k}\) and that \(l\) is slowly varying. In the last inequality we have used that \(\operatorname{supp}(\varphi)\) is compact, which ensures that the constant is finite, and our assumption on the dimension, which ensures that the exponent \(\frac{d-3-\gamma}{2}\) is non-negative. Combining the estimates in (6.9) and (6.10) we obtain the desired estimate. This concludes the proof. We are now ready to give a proof of our main theorem. Most of the work has already been done in establishing Theorem 6.2. Compared to Theorem 6.2, what remains in establishing Theorem 1.2 is to allow \(\mu\leq C\hbar^{-1}\) for some positive constant \(C\) instead of requiring \(\mu\) to be bounded by \(1\). The argument is identical to the one used in [27]; we include it here for the sake of completeness. Proof of Theorem 1.2.: Since the theorem has already been established for \(\mu\leq\mu_{0}<1\), we can without loss of generality assume that \(\mu\geq\mu_{0}\), where \(\mu_{0}<1\). We will use the same scaling technique as in the proof of Theorem 6.2. Again we have an \(\epsilon>0\) such that \[\operatorname{dist}(\operatorname{supp}(\varphi),\Omega^{c})>\epsilon,\] since \(\varphi\in C_{0}^{\infty}(\Omega)\). This time, however, we let \[l(x)=\min\big{(}1,\tfrac{\epsilon}{11}\big{)}\frac{\mu_{0}}{\mu}\quad\text{ and}\quad f(x)=1.\] We can again use Lemma 6.1 with \(l\) from above to construct the partition of unity for \(\operatorname{supp}(\varphi)\). After this we do the rescaling as above with unitary conjugations. For this case we get \[h_{k}=\frac{\hbar}{l_{k}f_{k}}=\big{(}\min\big{(}1,\tfrac{\epsilon}{11}\big{)}\mu_{0}\big{)}^{-1}\hbar\mu\leq C\hbar\mu\leq C^{\prime}\quad\text{and}\quad\mu_{k}= \frac{\mu l_{k}}{f_{k}}=\mu\min\big{(}1,\tfrac{\epsilon}{11}\big{)}\frac{\mu_ {0}}{\mu}\leq\mu_{0}.\] Moreover, we can analogously to the above verify that all norm bounds are independent of \(k\), \(\mu\) and \(\hbar\). After rescaling we thus have operators satisfying the assumptions of Theorem 6.2, and applying this theorem we get, analogously to the calculation in (6.9), that \[\big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1 }{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp \big{|}\leq C\sum_{k\in\mathcal{I}}h_{k}^{1+\gamma-d}. \tag{6.11}\] Since all the balls have the same radius \(l_{k}=\min\big{(}1,\tfrac{\epsilon}{11}\big{)}\frac{\mu_{0}}{\mu}\), the number of elements in \(\mathcal{I}\) is of order \(\mu^{d}\), and hence by our choice of the functions \(l\) and \(f\) we have \[\sum_{k\in\mathcal{I}}h_{k}^{1+\gamma-d}\leq C\mu^{1+\gamma}\hbar^{1+\gamma-d}, \tag{6.12}\] where \(C\) depends on \(\mu_{0}\), \(\epsilon\) and \(\operatorname{supp}(\varphi)\). Combining the estimates in (6.11) and (6.12) we obtain that \[\big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1 }{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp \big{|}\leq C\mu^{1+\gamma}\hbar^{1+\gamma-d}. 
\tag{6.13}\] Combining this with the result of Theorem 6.2 we get, for all \(\mu\leq C\hbar^{-1}\), that \[\big{|}\operatorname{Tr}[\varphi g_{\gamma}(\mathcal{H}_{\hbar,\mu})]-\frac{1}{(2 \pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp\big{|} \leq C\langle\mu\rangle^{1+\gamma}\hbar^{1+\gamma-d}, \tag{6.14}\] where \(\langle\mu\rangle=(1+|\mu|^{2})^{\frac{1}{2}}\). This concludes the proof.
2309.13505
Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation
Vision-Language Pre-training has demonstrated its remarkable zero-shot recognition ability and potential to learn generalizable visual representations from language supervision. Taking a step ahead, language-supervised semantic segmentation enables spatial localization of textual inputs by learning pixel grouping solely from image-text pairs. Nevertheless, the state-of-the-art suffers from clear semantic gaps between visual and textual modality: plenty of visual concepts appeared in images are missing in their paired captions. Such semantic misalignment circulates in pre-training, leading to inferior zero-shot performance in dense predictions due to insufficient visual concepts captured in textual representations. To close such semantic gap, we propose Concept Curation (CoCu), a pipeline that leverages CLIP to compensate for the missing semantics. For each image-text pair, we establish a concept archive that maintains potential visually-matched concepts with our proposed vision-driven expansion and text-to-vision-guided ranking. Relevant concepts can thus be identified via cluster-guided sampling and fed into pre-training, thereby bridging the gap between visual and textual semantics. Extensive experiments over a broad suite of 8 segmentation benchmarks show that CoCu achieves superb zero-shot transfer performance and greatly boosts language-supervised segmentation baseline by a large margin, suggesting the value of bridging semantic gap in pre-training data.
Yun Xing, Jian Kang, Aoran Xiao, Jiahao Nie, Ling Shao, Shijian Lu
2023-09-24T00:05:39Z
http://arxiv.org/abs/2309.13505v4
# Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation ###### Abstract Vision-Language Pre-training has demonstrated its remarkable zero-shot recognition ability and potential to learn generalizable visual representations from language supervision. Taking a step ahead, language-supervised semantic segmentation enables spatial localization of textual inputs by learning pixel grouping solely from image-text pairs. Nevertheless, the state-of-the-art suffers from clear _semantic gaps_ between the visual and textual modalities: plenty of visual concepts appearing in images are missing in their paired captions. Such semantic misalignment circulates in pre-training, leading to inferior zero-shot performance in dense predictions due to insufficient visual concepts captured in textual representations. To close such a _semantic gap_, we propose Concept Curation (CoCu), a pipeline that leverages CLIP to compensate for the missing semantics. For each image-text pair, we establish a _concept archive_ that maintains potential visually-matched concepts with our proposed _vision-driven expansion_ and _text-to-vision-guided ranking_. Relevant concepts can thus be identified via _cluster-guided sampling_ and fed into pre-training, thereby bridging the gap between visual and textual semantics. Extensive experiments over a broad suite of 8 segmentation benchmarks show that CoCu achieves superb zero-shot transfer performance and greatly boosts the language-supervised segmentation baseline by a large margin, suggesting the value of bridging the _semantic gap_ in pre-training data. Code is available at [https://github.com/xing0047/rewrite](https://github.com/xing0047/rewrite). ## 1 Introduction Vision-Language Pre-training [34; 21; 1; 10], which aims to learn visual representations directly from natural language supervision, has endowed existing recognition systems with superior generality and open-vocabulary understanding capability. As a representative, CLIP [34] performs contrastive language-image pre-training on 400M web-crawled image-text pairs, whereby the learnt models may effortlessly transfer to a wide spectrum of classification tasks in a zero-shot manner. Motivated by the breakthrough, recent studies [43; 35; 44] extend the supervision paradigm to semantic segmentation, enabling spatial localization of textual queries in images and pixel grouping with supervision solely from image-text pairs. Distinct from conventional semantic segmentation, the language-supervised paradigm obviates the need for costly manual pixel-level annotation and enables million-level pre-training scale with much less effort. Despite the progress [43; 35] in language-supervised semantic segmentation, the pre-training stage still suffers heavily from a clear _semantic gap_ between the visual and textual modalities. In image-text pairs used for pre-training, it is ubiquitous that visual concepts appearing in images are missing in the corresponding textual captions. This happens largely because captions merely describe salient concepts that are worthy of mention [16; 25], while naturally forgoing full semantic coverage of images (Fig. 1 (a)). Under the presence of a clear cross-modal _semantic gap_ in image-text pairs, the pre-training stage of language-supervised segmentation is found to converge more slowly, leading to inferior zero-shot performance on downstream tasks (more details are elaborated in Section 4.3). This work explores how to bridge _semantic gaps_ in language-supervised semantic segmentation. 
For each image in the pre-training data, the goal is to recover the missing visual concepts in its paired caption for more comprehensive image-text modeling. With the rich vision-language correlations in off-the-shelf foundation models such as CLIP [34], a straightforward solution is to retrieve the missing concepts from the text captions of the pre-training data. However, such retrieved captions suffer from the _semantic bias_ illustrated in Fig. 1 (b) (e.g., "person" recovered but "grass" and "sky" still missing). The root cause lies with the original text captions in foundation model pre-training, which only capture salient concepts appearing in the paired images. Hence, the retrieved captions and concepts still suffer from a clear cross-modal _semantic gap_. We propose Concept Curation (CoCu), a novel pipeline that side-steps the negative effect of _semantic bias_2 while exploiting vision-language foundation models for semantic segmentation. CoCu consists of three sequential stages: 1) _vision-driven expansion_ that constructs a _concept archive_ via cross-image retrieval; 2) _text-to-vision-guided ranking_ that scores the retrieved concepts according to their assigned relevancies; and 3) _cluster-guided sampling_ that exploits semantic diversity beyond the relevancy scores for concept ranking (Fig. 1 (c)). We perform pre-training from scratch on the segmentation backbone of [43] and evaluate zero-shot transfer over 8 widely adopted segmentation benchmarks. The experiments show that the proposed CoCu improves the baseline as well as the state-of-the-art consistently by large margins, indicating the necessity of closing the _semantic gap_ and the effectiveness of our designs in concept archiving and concept ranking. Footnote 2: to clarify, we refer to _semantic gap_ as a problem in web-crawled image-text pairs and _semantic bias_ as an issue in pre-trained vision-language models. In summary, the contributions of this work are three-fold. _First_, we identify the issue of the _semantic gap_ in language-supervised semantic segmentation, and demonstrate the effectiveness of mining more relevant visual concepts for segmentation model pre-training. _Second_, we design Concept Curation (CoCu), a novel pipeline that constructs _concept archives_ to expand visual concepts from pre-training data, mitigating the _semantic gap_ between textual and visual modalities effectively. _Third_, extensive experiments show that the proposed pipeline achieves superior zero-shot transfer performance and outperforms the state-of-the-art across 8 segmentation benchmarks consistently by large margins. Figure 1: Cross-modal _semantic gap_ is prevalent in web-crawled image-text pairs. As in (a), the caption text often captures certain salient visual concepts only in the paired image but misses many others (i.e., ‘_person_’, ‘_grass_’, and ‘_sky_’) that are also useful in image-text modeling. Leveraging CLIP [34], more useful visual concepts could be captured via image-to-text retrieval, but the retrieved captions usually suffer from the _semantic bias_ as in (b) (i.e., ‘_person_’ recovered but ‘_grass_’ and ‘_sky_’ still missing). Our proposed Concept Curation (CoCu) bridges the cross-modal _semantic gap_ effectively by _vision-driven expansion_, _text-to-vision-guided ranking_ and _cluster-guided sampling_ while avoiding the negative effect of _semantic bias_, as illustrated in (c). Best viewed in color. 
## 2 Related Works **Semantic Segmentation.** Partitioning an image into semantic regions, also known as semantic segmentation, has been widely studied due to its myriad real-world applications such as video surveillance and autonomous driving. It has been explored along different directions, e.g., by designing different network architectures [29; 9; 11; 42], constructing benchmarks of diverse scenes and categories [12; 49; 5; 17], etc. However, collecting per-category dense annotations for supervised training is notoriously labor-intensive, which greatly impedes the upscaling of the semantic vocabulary. Different from conventional semantic segmentation, segmentation from language supervision [43; 28; 7; 32; 35; 44] relieves the burden of mask annotation by leveraging image-text pairs available on the Internet. Beyond that, it can handle arbitrary new semantics thanks to the language supervision paradigm, making it feasible to learn generalizable segmentation models. **Vision-Language Pre-training.** Recently, Vision-Language Pre-training has become a predominant trend by learning visual representations from natural language supervision [34; 21; 24; 47; 1; 31; 10; 2]. By matching billion-scale image-text pairs via contrastive learning, the learnt representations can be seamlessly transferred to various downstream classification tasks in a zero-shot manner. As a representative, CLIP [34] can match the performance of supervised baselines on ImageNet [13], while obtaining competitive performance over a plethora of downstream tasks without accessing any target data. The same learning paradigm has recently been explored for the task of semantic segmentation by hierarchical grouping [43], supervision mining [35; 44; 7; 32], etc. Nevertheless, state-of-the-art language-supervised segmentation is held back by the cross-modal _semantic gap_ between textual and visual pre-training data. Instead of relaxing the strict one-to-one correspondence in vanilla contrastive learning [35], we mitigate the _semantic gap_ by automated curation of relevant visual concepts through concept expanding and concept ranking. **Open-Vocabulary Semantic Segmentation.** Open-Vocabulary Semantic Segmentation has been studied extensively and most existing work can be broadly grouped into three categories. **Mixed Supervision:** the first category follows a zero-shot manner [4; 23] which aims to segment new classes by learning from densely annotated seen classes in the pre-training data. Recently, several studies [18; 22; 46; 19; 26; 51; 45; 48] introduce language supervision to enhance the generality of the learnt zero-shot models. These approaches require no data annotation for new classes, but still rely on dense annotations from seen classes during the pre-training stage. **No Supervision:** the second category follows a training-free approach [50; 38; 37] which explores the segmentation potential of frozen vision-language models (VLMs) to predict segmentation masks. However, most VLMs are trained with image-level supervision, which restricts their capability for pixel- and region-level predictions in semantic segmentation. **Language Supervision:** the third category follows a pure-language-supervision paradigm [43; 28; 32; 35; 7; 44] which aims to learn pixel grouping solely from image-text pairs. Our work follows the third approach. 
Different from existing studies, we identify the _semantic gap_ in pre-training image and text data and design concept curation to mitigate this gap, with clearly improved semantic segmentation performance; more details are described in the ensuing subsections. ## 3 Methodology With clear _semantic gaps_ between visual and textual concepts in pre-training data, as illustrated in Fig. 1 (a), one naive solution is to employ a given image as a query to retrieve related captions and derive the missing textual concepts, as described in Sec. 3.2. However, this naive solution suffers from a clear _semantic bias_, as most VLM-retrieved captions contain salient concepts only, as illustrated in Fig. 1 (b). We thus further design _vision-driven expansion_, _text-to-vision-guided ranking_ and _cluster-guided sampling_ for better mitigation of the _semantic gap_, as presented in Fig. 1 (c) and Sec. 3.3. ### 3.1 Revisiting GroupViT **Segmentation Architecture.** We use GroupViT [43] as the segmentation backbone for pre-training. Assume a batch of image-text pairs \(\{(x^{I},x^{T})\}_{i=1}^{B}\), where \(x^{I}\) and \(x^{T}\) denote an image and its paired caption, respectively. For the vision flow, a grouping-aware transformer \(\mathcal{F}_{s}^{I}\) encodes image \(x^{I}\) as \(G\) segment tokens \(z_{\textit{seg}}^{I}=\{z_{\textit{seg}_{g}}^{I},g=1,...,G\}\in\mathbb{R}^{G \times d}\), where each segment token \(z_{\textit{seg}_{g}}^{I}\in\mathbb{R}^{d}\) encodes an arbitrarily-shaped region in image \(x^{I}\). **Image-Caption Contrastive Loss.** To perform pre-training, the segment tokens \(z_{\textit{seg}}^{I}\) are merged via average pooling, producing a global representation \(z^{I}\in\mathbb{R}^{d}\) that captures all the visual concepts appearing in image \(x^{I}\). Meanwhile, the paired caption \(x^{T}\) is encoded to \(z^{T}\in\mathbb{R}^{d}\) by a text encoder \(\mathcal{F}^{T}\). The visual embedding \(z^{I}\) and textual embedding \(z^{T}\) are mapped to the same space by separate linear projectors. The segmentor is then learnt from language supervision via the standard contrastive objective InfoNCE [33], which is defined as: \[\mathcal{L}_{I\to T} =-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp(z_{i}^{I}\cdot z_{i}^{T}/ \tau)}{\sum_{j=1}^{B}\exp(z_{i}^{I}\cdot z_{j}^{T}/\tau)} \tag{1}\] \[\mathcal{L}_{T\to I} =-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp(z_{i}^{T}\cdot z_{i}^{I}/ \tau)}{\sum_{j=1}^{B}\exp(z_{i}^{T}\cdot z_{j}^{I}/\tau)} \tag{2}\] where \(\tau\) is a learnable parameter initialized to 0.07 [34] and \(z^{I}\cdot z^{T}\) computes the cross-modal cosine similarity. **Multi-Label Loss.** Beyond learning segmentation from the raw caption \(x^{T}\), GroupViT [43] further introduces \(L\) extra text labels \(\{x^{T_{l}},l=1,...,L\}\) by prompting the extracted concepts \(\{c_{l},l=1,...,L\}\) with handcrafted templates [34] (e.g., "a photo of a {concept}"). The \(L\) text labels are fed to the same text encoder \(\mathcal{F}^{T}\) to obtain textual representations \(\{z^{T_{l}},l=1,...,L\}\). 
The language supervision by the multi-label loss is thus defined as: \[\mathcal{L}_{I\rightarrow\{T_{l}\}_{l=1}^{L}}=-\frac{1}{B}\sum_{i=1}^{B}\log \frac{\sum_{l=1}^{L}\exp(z_{i}^{I}\cdot z_{i}^{T_{l}}/\tau)}{\sum_{l=1}^{L} \sum_{j=1}^{B}\exp(z_{i}^{I}\cdot z_{j}^{T_{l}}/\tau)} \tag{3}\] \[\mathcal{L}_{\{T_{l}\}_{l=1}^{L}\to I}=-\frac{1}{LB}\sum_{l=1}^{L} \sum_{i=1}^{B}\log\frac{\exp(z_{i}^{T_{l}}\cdot z_{i}^{I}/\tau)}{\sum_{j=1}^{B} \exp(z_{i}^{T_{l}}\cdot z_{j}^{I}/\tau)} \tag{4}\] The overall training objective of learning segmentation from language supervision in [43] is defined as: \[\mathcal{L}=\mathcal{L}_{I\leftrightarrow T}+\mathcal{L}_{\{T_{l}\}_{l=1}^{L} \leftrightarrow I} \tag{5}\] **Discussion.** We use the exact same training objective as in GroupViT [43] to learn segmentation from language supervision. For each pre-training image \(x^{I}\), the _multi-label loss_ enhances the contrastive learning [43] with \(L\) extra positive pairs and \(L(B-1)\) extra negative pairs (\(B\) denotes the batch size used in pre-training). However, we highlight that the simple concept prompting does not expand the textual concepts much. The cross-modal _semantic gap_ still exists and circulates in pre-training, which holds back the training convergence and degrades the zero-shot transfer. ### 3.2 Naive Solution **Caption Curation.** The web-crawled image-text pairs are often noisy, with imprecise and even irrelevant text descriptions [41]. In addition, many visual concepts (especially those inconspicuous in the background) in images are often missing in the corresponding text descriptions. Both factors lead to clear _semantic gaps_ between web-crawled images and texts. With the super-rich image-text correlations in pre-trained VLMs such as CLIP [34], a straightforward solution, which we term _caption curation_, is to use \(x^{I}\) as a query to retrieve \(L\) extra captions \(\{x^{T_{l}},l=1,...,L\}\) from the pre-training data. The _semantic gaps_ between the visual and textual modalities could thus be mitigated by identifying relevant concepts from the retrieved captions. **Semantic Bias.** Though _caption curation_ expands visual concepts effectively, the retrieved captions often suffer from a clear _semantic bias_: VLMs tend to retrieve salient concepts but miss many inconspicuous ones that are also useful in image description. Consequently, the visual concepts \(C^{I}=\{c_{l},\;l=1,...,M^{I}\}\) appearing in an image \(x^{I}\) usually clearly outnumber the textual concepts \(C^{T}=\{c_{l},\;l=1,...,M^{T}\}\) extracted from \(\{x^{T},x^{T_{1}},...,x^{T_{L}}\}\) (i.e., \(M^{I}>M^{T}\)). The root cause of the _semantic bias_ lies with the loose correlation between the visual and textual pre-training data of VLMs, where most captions just capture partial visual concepts appearing in the paired images [25]. The _semantic bias_ thus impedes the convergence and effectiveness of language-supervised training, since no language supervision is available for the non-described image regions. ### 3.3 Concept Curation To bridge semantic gaps in image-text pairs, we propose **Concept Curation (CoCu)** to rewrite caption semantics with the help of a pre-trained vision-language model. Consequently, **CoCu** finds more concept candidates that are aligned to images and compensates for the missing semantics in captions. In pre-training, a multi-modal segmentor matches images and visually-enriched captions by the contrastive objectives mentioned in Sec. 3.1, encoding better vision-language alignment in its representations. Details of CoCu are described below. 
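For concreteness, the objectives (1)-(5) can be sketched in PyTorch-style code as follows. This is a minimal sketch under our own naming conventions; the tensor names and both helper functions are ours, not the released GroupViT code:

```python
import torch
import torch.nn.functional as F

def info_nce(z_img, z_txt, tau):
    # Image-caption contrastive loss, cf. Eqs. (1)-(2).
    # z_img, z_txt: [B, d] L2-normalised global embeddings;
    # tau: temperature (a learnable scalar initialised to 0.07 in [34]).
    logits = z_img @ z_txt.t() / tau
    labels = torch.arange(z_img.size(0), device=z_img.device)
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)

def multi_label_loss(z_img, z_lbl, tau):
    # Multi-label loss, cf. Eqs. (3)-(4).
    # z_img: [B, d]; z_lbl: [L, B, d], embeddings of the L prompted text labels.
    L, B, _ = z_lbl.shape
    idx = torch.arange(B, device=z_img.device)
    s = torch.einsum('lbd,cd->lbc', z_lbl, z_img) / tau  # s[l,i,j] = z_i^{T_l} . z_j^I / tau
    # Eq. (3): each image against its own L labels vs. all L*B labels in the batch.
    num = torch.logsumexp(s[:, idx, idx], dim=0)                     # [B]
    den = torch.logsumexp(s.permute(2, 0, 1).reshape(B, -1), dim=1)  # [B]
    loss_i2t = -(num - den).mean()
    # Eq. (4): each of the L*B labels against the B images.
    loss_t2i = F.cross_entropy(s.reshape(L * B, B), idx.repeat(L))
    return loss_i2t + loss_t2i

# Eq. (5): loss = info_nce(z_img, z_txt, tau) + multi_label_loss(z_img, z_lbl, tau)
```

As the discussion above notes, the multi-label term simply contributes \(L\) extra positive pairs and \(L(B-1)\) extra negative pairs per image.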
**Vision-driven Expansion.** For an image-text pair \((x^{I},\;x^{T})\), the goal of the vision-driven expansion is to build an archive of textual concepts \(C^{T}=\{c_{m},\;m=1,...,M\}\) that are potentially matched with \(x^{I}\), as illustrated in Fig. 2. Instead of acquiring \(C^{T}\) via direct text retrieval as in _caption curation_, we resort to cross-image retrieval to achieve the expansion. Concretely, \(N\) image-text pairs \(P=\{(x^{I}_{i},\;x^{T}_{i}),\;i=1,...,N\}\) are automatically selected from the pre-training data, where \(\{x^{I}_{i},\;i=1,...,N\}\) are the \(N\) captioned images whose visual features match best with that of \(x^{I}\) (all encoded by CLIP). \(C^{T}\) can thus be derived by extracting textual concepts from the captions \(\{x_{i}^{T},\;i=1,...,N\}\) of the \(N\) best matched images. Compared with _caption curation_, which retrieves \(L\) descriptions \(\{x_{i}^{T},\;i=1,...,L\}\), _vision-driven expansion_ exploits all the visual information in images instead of biased caption texts (which mostly describe salient visual concepts only), which helps restore more relevant textual concepts. In addition, it builds an extra image set \(\{x_{i}^{I},\;i=1,...,N\}\) that plays a pivotal role in the upcoming stages. Figure 2: Illustration of _vision-driven expansion_ (above) and _text-to-vision-guided ranking_ (below) in CoCu. To compensate for missing semantics, _vision-driven expansion_ establishes an archive of potentially matched concepts through image-to-image retrieval, while _text-to-vision-guided ranking_ scores retrieved concepts based on assigned relevancy. The textual concepts can later be identified in pre-training by sampling. In the figure, images with a blue border \(\square\) are retrieved via expanded concepts (marked as blue) using their paired captions, while images with a red border \(\square\) represent images for curation (as anchor). Best viewed in color. **Text-to-Vision-Guided Ranking.** For each textual concept \(c_{m}\) in the concept archive \(C^{T}\), we assign a score \(s_{c_{m}}\) to represent its relevancy to image \(x^{I}\). A naive solution for \(s_{c_{m}}\) is to compute the cosine similarity between the visual representation of \(x^{I}\) and the textual representation of \(t_{m}\) encoded by CLIP [34], which is simply defined as: \[s_{c_{m}}^{a}=f(x^{I},\;t_{m}) \tag{6}\] where \(t_{m}\) is derived from the visual concept \(c_{m}\) via prompt engineering [34; 43]. Note that direct text retrieval could easily get biased towards salient visual concepts here, imposing low relevancy scores on other text concepts in the _concept archive_. Beyond \(f(x^{I},\;t_{m})\), we therefore also design a non-biased metric to capture the relevancy between image \(x^{I}\) and concept \(c_{m}\). Specifically, for the \(N\) retrieved image-text pairs \(P\), we first extract a subset \(P_{G}\) (blue box/text in Fig. 2 (below)) such that the caption of each image-text pair contains the textual concept \(c_{m}\). The non-biased metric is thus defined with \(x^{I}\) (red box) and the image-text pairs \(P_{G}=\{(x_{i}^{I},x_{i}^{T}),\;i=1,...,N^{\prime}\}\) as follows: \[s_{c_{m}}^{b}=\frac{(1+N^{\prime})f(t_{m},x^{I})}{f(t_{m},x^{I})+\sum_{i=1}^{N ^{\prime}}f(t_{m},x_{i}^{I})} \tag{7}\] This term functions as follows: 1) it lowers the relevancy between image \(x^{I}\) and irrelevant concepts (e.g., 'horse' in Fig. 2); 2) it enhances the relevancy between \(x^{I}\) and inconspicuous concepts (e.g., 'grass'), as sketched in the code below. 
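Concretely, given precomputed CLIP embeddings, the two scores and their combination (defined in (8) just below) can be computed as in the following sketch; the variable names are our own assumptions, not the released implementation:

```python
import numpy as np

def cos(u, v):
    # f(., .): cosine similarity between CLIP embeddings.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def relevancy(emb_img, emb_prompt, emb_pg_imgs):
    """Relevancy of one concept c_m to image x^I, cf. Eqs. (6)-(8).
    emb_img:     CLIP embedding of x^I
    emb_prompt:  CLIP embedding of t_m, e.g. 'a photo of {c_m}'
    emb_pg_imgs: embeddings of the N' images in P_G, i.e. the retrieved
                 images whose captions contain c_m."""
    s_a = cos(emb_img, emb_prompt)                     # Eq. (6)
    sims = [cos(emb_prompt, e) for e in emb_pg_imgs]
    s_b = (1 + len(sims)) * s_a / (s_a + sum(sims))    # Eq. (7)
    return s_a + s_b                                   # Eq. (8)
```

Intuitively, \(s_{c_{m}}^{b}\approx 1\) when \(x^{I}\) responds to \(t_{m}\) about as strongly as the images actually captioned with \(c_{m}\), and it is small when \(x^{I}\) responds much more weakly.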
Instead of computing the relevancies of \(x^{I}\) to all textual concepts in a single run, we consider one \(c_{m}\) at a time and measure its relevancy by comparing \(f(x^{I},\;t_{m})\) with \(\{f(x_{i}^{I},\;t_{m}),\;i=1,...,N^{\prime}\}\). The idea behind this is simple: 1) we compare the responses of the images \(\{x^{I},x_{1}^{I},...,x_{N^{\prime}}^{I}\}\) to the same textual concept \(c_{m}\); 2) high relevancy is given if the response of \(x^{I}\) to the textual concept \(c_{m}\) is comparably high, and vice versa. Take the visual concept 'grass' in Fig. 2 (below) as an example. The prompt \(t_{m}\) ('a photo of grass') causes \(x^{I}\) to rank considerably higher than images captioned with \(c_{m}\) ('grass'). In this case, we should be fairly confident to pass the concept \(c_{m}\) to \(x^{I}\). In conclusion, the relevancy score is simply defined as: \[s_{c_{m}}=s_{c_{m}}^{a}\;+\;s_{c_{m}}^{b} \tag{8}\] We perform ranking according to the computed relevancies \(\{s_{c_{m}},\;m=1,...,M\}\), which determine each concept's chance of being identified by the later sampling. **Cluster-guided Sampling.** The pre-training can thus be empowered by including the expanded and ranked textual concepts, which are selected by sampling \(L\) textual concepts according to their computed relevancies, as in [43]. However, selection by relevancy alone often falls short of semantic diversity in the selected text concepts. Instead of directly selecting \(L\) concepts from the ranked archive \(C^{T}\), we partition \(C^{T}\) into \(L\) semantic clusters based on their textual representations and sample one textual concept from each semantic cluster. The _cluster-guided sampling_ has two clear benefits: 1) it includes more diverse semantics in each single training step; 2) it keeps good consistency with the expression of visual concepts, with more details discussed in Sec. 4.4 and the appendix. ## 4 Experiments ### 4.1 Experimental Setup **Training Details.** We follow the prior study [43] and conduct pre-training on three publicly available image-text datasets: CC3M (C3) [36], CC12M (C12) [8], and YFCC14M (Y14) [39]. For a fair comparison, we use the same GroupViT [43] as the visual encoder, which is built upon the ViT-S backbone [14; 40] and learnt from scratch. We set the global batch size for contrastive learning to 1,024 and use 4 Tesla V100 GPUs to carry out pre-training for all experiments. Consistent with [43], we set the initial learning rate to 0.0016. The pre-training undergoes 30 epochs, with a linear warmup for the first 2 epochs and a cosine schedule for the remaining epochs. \(L\) is set to 3. In our ablations and discussions, we report the performance of models pre-trained on CC3M. **Implementation of Curation.** The curation pipeline utilizes clip-retrieval [3], a utility that enables efficient computation of CLIP embeddings and fast indexing for retrieval. We employ the CLIP ViT-B/16 [34] model for image/text inference and concept curation. For efficient semantic searching, we build indexing systems using autofaiss 3. It is worth mentioning that alternative systems can also be used for implementing the curation process. Footnote 3: [https://github.com/criteo/autofaiss.git](https://github.com/criteo/autofaiss.git) **Evaluation.** We benchmark the zero-shot transfer performance of CoCu on the validation splits of eight different datasets that cover a myriad of scenes and category sets, including Pascal VOC [15], Pascal Context [30], COCO [27], ImageNet-S-50, ImageNet-S-300 [17], COCO Stuff [5], Cityscapes [12], and ADE20K [49]. 
For the first five datasets, we follow [43] and evaluate foreground classes by thresholding the similarity between visual and textual embeddings. For the other datasets, we evaluate both foreground and background classes. More details are given in the appendix. ### 4.2 Comparison with the state-of-the-art We first benchmark CoCu against state-of-the-art zero-shot methods [50; 43] and evaluate its effectiveness. Specifically, we follow prior work [43] and pre-train CoCu over the combination of the C3, C12, and Y14 datasets. Tab. 1 reports zero-shot segmentation results. Besides GroupViT as the baseline method, we also compare against the advanced MaskCLIP [50], which directly leverages the frozen CLIP model for segmentation prediction without pre-training. In addition, for a comprehensive comparison, we list the performance of other advanced methods, including 1) a fully-supervised method [40] that provides oracle performance, and 2) self-supervised methods [20; 6] that pre-train models with unlabeled data and fine-tune them on segmentation datasets. Detailed implementations of the compared methods can be found in the appendix. As shown in Tab. 1, MaskCLIP achieves limited segmentation performance, primarily because CLIP is trained with image-level supervision and thus falls short in precise pixel-level predictions. GroupViT achieves better performance than MaskCLIP, but is still limited by insufficient supervision from the language side in pre-training. In contrast, our CoCu achieves the best segmentation performance over all eight benchmarks, surpassing GroupViT by large margins on average. This indicates the necessity of bridging _semantic gaps_ in language-supervised semantic segmentation and the effectiveness of our design. We further evaluate the robustness of CoCu with different pre-training data. Specifically, we pre-train GroupViT and CoCu on CC3M and on CC12M separately. We also sub-sample half of the image-text pairs from CC12M (denoted as C12\({}^{*}\)) for pre-training. Tab. 2 shows the experimental results. We observe consistent and significant performance gains on all eight benchmarks. The improvement from bridging the _semantic gap_ is thus robust and not affected by the pre-training size. 
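As a reference for the zero-shot transfer protocol used in these comparisons, the following is a minimal sketch of the similarity-thresholding mask prediction described above; it is our own simplification, and the accessor signatures and the threshold value are assumptions, not the released evaluation code:

```python
import numpy as np

def zero_shot_segment(seg_embs, seg_masks, class_names, encode_text, thr=0.5):
    """Assign a class (or background) to each predicted segment.
    seg_embs:    [G, d] embeddings of the G segment tokens for one image
    seg_masks:   [G, H, W] binary region masks associated with the tokens
    encode_text: callable mapping a prompt to a [d] CLIP-space embedding
    thr:         foreground threshold (hypothetical value)."""
    txt = np.stack([encode_text(f"a photo of a {c}") for c in class_names])
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    seg = seg_embs / np.linalg.norm(seg_embs, axis=1, keepdims=True)
    sim = seg @ txt.T                                      # [G, num_classes]
    pred = np.zeros(seg_masks.shape[1:], dtype=np.int64)   # 0 = background
    for g in range(seg.shape[0]):
        best = int(sim[g].argmax())
        if sim[g, best] >= thr:                            # foreground thresholding
            pred[seg_masks[g] > 0] = best + 1              # class ids start at 1
    return pred
```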
\begin{table} \begin{tabular}{l l l l l l l l l l l l l l} \hline \hline **Method** & **Pretrain Data** & **Supervision** & **LC** & **BS** & **Backbone** & **PVOC** & **PCON** & **COCO** & **IN50** & **IN300** & **CITY** & **ADE** & **STUF** & **AVG** \\ \hline DeiT [40] & IN-1K & full & - & ViT-S & 53.0 & 35.9 & - & - & - & - & - & - & - \\ \hline MoCo [20] & IN-1K & self & - & - & 34.3 & 21.3 & - & - & - & - & - & - & - \\ DINO [6] & IN-1K & self & - & - & 39.1 & 20.4 & - & - & - & - & - & - & - \\ MoCo [20] & C12Y14 & self & - & - & - & 36.1 & 23.0 & - & - & - & - & - & - & - \\ DINO [6] & C12Y14 & self & - & - & 37.6 & 22.8 & - & - & - & - & - & - & - \\ \hline MaskCLIP [50] & - & N/A & ✓ & - & ResNet-50 & 41.5 & 18.5 & 10.5 & 13.8 & 7.9 & 18.8 & 8.3 & 10.2 & 15.0 \\ MaskCLIP [50] & - & N/A & ✓ & ✓ & ViT-B/16 & 49.5 & 21.7 & 13.6 & 25.9 & 11.7 & 19.8 & 9.5 & 12.5 & 20.5 \\ \hline GroupViT [43] & C3,C12,Y14 & text & 4,096 & ViT-S & **52.4** & 23.3 & **24.3** & 44.3 & 23.5 & 15.8 & 10.4 & 13.0 & 25.7 \\ \hline GroupViT [43] & C3,C12,Y14 & text & 1,024 & ViT-S & 43.8 & 19.3 & 19.6 & 37.8 & 17.2 & 17.2 & 10.4 & 13.6 & 22.4 \\ CoCu (ours) & C3,C12,Y14 & text & 1,024 & ViT-S & 49.7 & 22.8 & 22.0 & 46.7 & 24.7 & 21.9 & 12.0 & 14.9 & 26.8 \\ \hline GroupViT [43] & C3,C12,Y14 & text & ✓ & 1,024 & ViT-S & 45.4 & 19.9 & 20.3 & 39.2 & 17.7 & 17.6 & 10.6 & 13.9 & 23.1 \\ CoCu (ours) & C3,C12,Y14 & text & ✓ & 1,024 & ViT-S & 51.4 & **23.6** & 22.7 & **48.8** & **25.5** & **22.1** & **12.3** & **15.2** & **27.7** \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance of different zero-shot methods for semantic segmentation. Abbreviations of benchmarks, from left to right: Pascal VOC [15], Pascal Context [30], Microsoft COCO [5], ImageNet-S [17], Cityscapes [12], and ADE20K [49]. BS denotes pre-training batch size, while LC represents local consistency [2] in mask prediction. \(\dagger\) denotes our re-implementation. CoCu consistently achieves the best performance across all benchmarks.** ### 4.3 CoCu helps convergence **Loss Curve.** In Figure 3 (a), we compare the pre-training loss curves of GroupViT and our proposed method CoCu. We can see that CoCu exhibits a notably faster convergence rate, primarily attributed to the inclusion of curated semantic concepts for each image, resulting in more effective contrastive learning. Additionally, CoCu achieves a lower minimum loss by extracting significantly richer language concepts from the image data. This enriches the training process by incorporating more identified image regions and ultimately learning representations that better align with the training data. **Qualitative Comparison.** We also present qualitative results that demonstrate the effectiveness of CoCu. In Figure 3 (b), we show binary segmentation results of GroupViT (first row) and CoCu (second row) on an example image with the caption "a red fox drinking water." Our focus is on the concept of "grass," which is missing in the caption. We compare the visual responses of models trained using these two methods at different checkpoints. Both methods improve progressively during training. However, GroupViT fails to correctly localize the region of "grass" due to the lack of direct supervision from the language side. In contrast, CoCu bridges the semantic gap by accurately capturing and localizing "grass," encoding it in its representations during pre-training. Consequently, it achieves significantly better segmentation results in the zero-shot context. 
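The per-concept responses visualized in this and the next figure can be produced along the following lines. This is a hypothetical sketch: `group_tokens`, `group_masks`, and the normalization choice are our own assumptions about the segmentor's outputs, not the released API:

```python
import numpy as np

def activation_map(group_tokens, group_masks, text_emb):
    """Dense response of one image to one text query.
    group_tokens: [G, d] segment-token embeddings for the image
    group_masks:  [G, H, W] soft assignment of pixels to segment tokens
    text_emb:     [d] embedding of the prompted concept."""
    g = group_tokens / np.linalg.norm(group_tokens, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    scores = g @ t                                    # [G] per-token similarity
    # Spread each token's similarity over its pixels, then rescale for display.
    heat = np.einsum('g,ghw->hw', scores, group_masks)
    heat = (heat - heat.min()) / (np.ptp(heat) + 1e-8)
    return heat                                       # high = red, low = blue
```

Thresholding such a map yields the binary segmentations shown in Figure 3 (b).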
Figure 4 displays the activation maps of GroupViT and CoCu for different concepts as text inputs that do not appear in the corresponding captions. These maps further demonstrate the superiority of CoCu in language-supervised learning. In all presented images, GroupViT incorrectly activates regions for the given text inputs (e.g., activating the "sky" region for the text input "person" in the first image). In contrast, CoCu enables the segmentor to have the highest activations on the visually relevant regions indicated by the text. This suggests that segmentors derived from our method have a better capability to discriminate various visual concepts. More convergence results can be found in the appendix. \begin{table} \begin{tabular}{l|l l l l l l l l l l} \hline \hline **Method** & **Pretrain** & **PVOC** & **PCON** & **COCO** & **IN50** & **IN300** & **CITY** & **ADE** & **STUF** & **AVG** \\ \hline GroupViT [43] & C3 & 15.5 & 10.4 & 6.5 & 10.2 & 2.9 & 8.1 & 4.4 & 7.7 & 8.2 \\ CoCu & C3 & 30.6 (\(\pm\)13.75) & 13.3 (\(\pm\)3.57) & 10.3 (\(\pm\)4.73) & 13.3 (\(\pm\)4.6) & 82.2 (\(\pm\)0.15) & 6.1 (\(\pm\)1.75) & 85.5 (\(\pm\)0.98) & 13.1 (\(\pm\)1.95) \\ \hline GroupViT [43] & C12\({}^{*}\) & - & 13.3 & 13.3 & 12.9 & 27.9 & 12.4 & 10.7 & 5.6 & 8.6 & 8.6 & 13.5 \\ CoCu & C12\({}^{*}\) & 34.1 (\(\pm\)1.46) & 16.4 (\(\pm\)3.15) & 17.0 (\(\pm\)4.15) & 33.6 (\(\pm\)4.15) & 17.3 (\(\pm\)4.95) & 11.8 (\(\pm\)1.15) & 8.1 (\(\pm\)2.95) & 9.5 (\(\pm\)0.95) & 18.4 (\(\pm\)2.95) \\ \hline GroupViT [43] & C3,C12\({}^{*}\) & 36.5 & 15.9 & 16.2 & 33.5 & 14.0 & 12.4 & 7.0 & 10.5 & 18.2 \\ CoCu & C3,C12\({}^{*}\) & 38.1 (\(\pm\)1.66) & 19.2 (\(\pm\)3.37) & 30.1 (\(\pm\)3.95) & 58.8 (\(\pm\)2.93) & 18.9 (\(\pm\)4.96) & 14.7 (\(\pm\)2.95) & 9.6 (\(\pm\)2.95) & 11.1 (\(\pm\)1.95) & 21.0 (\(\pm\)2.95) \\ \hline GroupViT [43] & C12 & 37.5 & 18.0 & 18.3 & 35.7 & 16.8 & 13.5 & 9.1 & 13.1 & 20.2 \\ CoCu & C12 & 40.9 (\(\pm\)3.47) & 21.2 (\(\pm\)3.25) & 20.3 (\(\pm\)2.95) & 40.0 (\(\pm\)4.57) & 19.4 (\(\pm\)2.85) & 15.0 (\(\pm\)1.55) & 11.1 (\(\pm\)2.05) & 13.6 (\(\pm\)0.55) & 22.7 (\(\pm\)2.55) \\ \hline \hline \end{tabular} \end{table} Table 2: **Zero-shot semantic segmentation performance with different pre-training data.** CoCu consistently outperforms the baseline method GroupViT across all benchmarks, demonstrating its effectiveness in bridging _semantic gaps_ and achieving significant improvements. Figure 3: **CoCu enhances training convergence.** (a) The training loss curves of GroupViT and CoCu demonstrate that CoCu significantly accelerates pre-training convergence. (b) CoCu achieves superior binary segmentation results (second row) compared to GroupViT (first row) for the concept of “grass,” which is missing in the caption, using an example image captioned as “a red fox drinking water.” Best viewed in color. ### 4.4 Analysis **Ablation Study.** We further assess the effectiveness of each module in CoCu, which includes _vision-driven expansion_, _text-to-vision-guided ranking_ and _cluster-guided sampling_. Specifically, we pre-train five models with combinations of these modules or their alternative strategies, namely: 1) the baseline model of GroupViT, which is pre-trained without concept curation; 2) Model #1, which utilizes language-driven expansion, naive ranking, and naive sampling (Caption Curation in Sec. 3.2); 3) Model #2, which replaces language-driven expansion with vision-driven expansion on top of Model #1; 
4) Model #3, which incorporates text-to-vision-guided ranking on top of Model #2; and 5) the full CoCu Model #4, which combines vision-driven expansion, text-to-vision-guided ranking, and cluster-guided sampling in pre-training. We report the average segmentation performance of these models across the eight datasets used previously (as shown in Table 1 and Table 2). Detailed illustrations of the implementations are provided in the appendix. As Tab. 3 shows, the simplest strategy of _language-driven expansion_ in Model #1 improves the average mIoU by \(1.7\%\), which comes from the stronger vision-language correlation in pre-training data enhanced by direct text retrieval. Next, replacing direct text retrieval with _vision-driven expansion_ in Model #2 brings an additional performance boost, highlighting its significance in capturing unbiased semantics. Furthermore, incorporating _text-to-vision-guided ranking_ in Model #3 brings another noticeable performance gain, underscoring the importance of measuring concept-to-image relevancy. Finally, we upgrade the sampling strategy from the naive one that relies solely on relevancy to _cluster-guided sampling_, and build Model #4 with the full CoCu, which provides more diverse semantic information in each pre-training step, ultimately leading to the best zero-shot transfer performance for semantic segmentation. **Zero-Shot Classification.** In addition to its application in zero-shot segmentation, CoCu can also be used to improve zero-shot classification. Following the previous study [43], we evaluate CoCu and compare it with GroupViT on the ImageNet-1K dataset [13]. As shown in Table 4, CoCu exhibits significant performance gains over GroupViT, demonstrating its superiority in bridging semantic gaps across tasks and achieving improved zero-shot classification results. \begin{table} \begin{tabular}{l|l|l|l} \hline \multirow{2}{*}{Method} & Pre-training & \multicolumn{2}{c}{Zero-shot} \\ \cline{3-4} & data & Acc@1(\%) & Acc@5(\%) \\ \hline GroupViT & C12 & 34.9 & 63.3 \\ CoCu (ours) & C12 & **38.4** (4.5 \(\uparrow\)) & **68.6** (5.3 \(\uparrow\)) \\ \hline GroupViT & C3,C12,Y14 & 36.8 & 66.8 \\ CoCu (ours) & C3,C12,Y14 & **43.0** (6.2 \(\uparrow\)) & **73.7** (6.9 \(\uparrow\)) \\ \hline \end{tabular} \end{table} Table 4: **Zero-shot classification on ImageNet-1K**. Acc@1 and Acc@5 denote top-1 and top-5 accuracy, respectively. Figure 4: **Visualization of activation heatmaps.** GroupViT fails to activate on the corresponding visual regions for concepts not represented in captions, while CoCu exhibits significantly better localization. High activation is shown as red, and low activation is displayed as blue. Best viewed in color. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Expansion} & Ranking & Sampling & Average \\ \cline{2-5} & _lang-driven_ & _vision-driven_ & _naive text-to-vision-guided_ & _naive cluster-guided_ & \\ \hline Baseline [43] & & & & & \(8.2\) \\ \#1 & ✓ & ✓ & ✓ & ✓ & \(9.9\) (\(1.7\)\(\uparrow\)) \\ \#2 & & ✓ & ✓ & ✓ & \(10.3\) (\(2.1\)\(\uparrow\)) \\ \#3 & & ✓ & ✓ & ✓ & \(12.4\) (\(4.2\)\(\uparrow\)) \\ \#4 (_Full_ CoCu) & & ✓ & ✓ & ✓ & \(13.1\) (\(4.9\)\(\uparrow\)) \\ \hline \end{tabular} \end{table} Table 3: **Ablation study of CoCu.** We conduct an ablation study on each designed module. Zero-shot transfer performance on semantic segmentation is reported, averaged across eight evaluation datasets. 
## Conclusion

In this paper, we identify the issue of the _semantic gap_ in language-supervised semantic segmentation and explore how to bridge _semantic gaps_ effectively. To achieve this, we design Concept Curation (CoCu), a novel pipeline that resolves the issue through three consecutive stages: _vision-driven expansion_, _text-to-vision-guided ranking_, and _cluster-guided sampling_. Extensive experiments demonstrate the superiority of our method in boosting language-supervised semantic segmentation across a bundle of pre-training sets and evaluation benchmarks. Looking ahead, we hope to extend the idea of concept curation to other computer vision tasks, including object detection and instance segmentation.

## Acknowledgments and Disclosure of Funding

This project is funded by the Ministry of Education Singapore, under the Tier-2 project scheme (project number MOE-T2EP20220-0003).
2309.15612
Illuminating Router Vendor Diversity Within Providers and Along Network Paths
The Internet architecture has facilitated a multi-party, distributed, and heterogeneous physical infrastructure where routers from different vendors connect and inter-operate via IP. Such vendor heterogeneity can have important security and policy implications. For example, a security vulnerability may be specific to a particular vendor and implementation, and thus will have a disproportionate impact on particular networks and paths if exploited. From a policy perspective, governments are now explicitly banning particular vendors, or have threatened to do so. Despite these critical issues, the composition of router vendors across the Internet remains largely opaque. Remotely identifying router vendors is challenging due to their strict security posture, indistinguishability due to code sharing across vendors, and noise due to vendor mergers. We make progress in overcoming these challenges by developing LFP, a tool that improves the coverage, accuracy, and efficiency of router fingerprinting as compared to the current state-of-the-art. We leverage LFP to characterize the degree of router vendor homogeneity within networks and the regional distribution of vendors. We then take a path-centric view and apply LFP to better understand the potential for correlated failures and fate-sharing. Finally, we perform a case study on inter- and intra-United States data paths to explore the feasibility to make vendor-based routing policy decisions, i.e., whether it is possible to avoid a particular vendor given the current infrastructure.
Taha Albakour, Oliver Gasser, Robert Beverly, Georgios Smaragdakis
2023-09-27T12:18:59Z
http://arxiv.org/abs/2309.15612v1
# Illuminating Router Vendor Diversity Within Providers and Along Network Paths

###### Abstract.

The Internet architecture has facilitated a multi-party, distributed, and heterogeneous physical infrastructure where routers from different vendors connect and inter-operate via IP. Such vendor heterogeneity can have important security and policy implications. For example, a security vulnerability may be specific to a particular vendor and implementation, and thus will have a disproportionate impact on particular networks and paths if exploited. From a policy perspective, governments are now explicitly banning particular vendors--or have threatened to do so. Despite these critical issues, the composition of router vendors across the Internet remains largely opaque. Remotely identifying router vendors is challenging due to their strict security posture, indistinguishability due to code sharing across vendors, and noise due to vendor mergers. We make progress in overcoming these challenges by developing LFP, a tool that improves the coverage, accuracy, and efficiency of router fingerprinting as compared to the current state-of-the-art. We leverage LFP to characterize the degree of router vendor homogeneity within networks and the regional distribution of vendors. We then take a path-centric view and apply LFP to better understand the potential for correlated failures and fate-sharing. Finally, we perform a case study on inter- and intra-United States data paths to explore the feasibility to make vendor-based routing policy decisions, i.e., whether it is possible to avoid a particular vendor given the current infrastructure.

Device Fingerprinting, Network Security, Network Measurement
In summary, our contributions include:

* A lightweight fingerprinting technique, LFP, that identifies the vendor of active router IPs--more than double the coverage as compared to the current state-of-the-art (§7).
* An accuracy evaluation of LFP compared to current tools and techniques showing that it is at least as good as Nmap while sending orders of magnitude fewer packets and improving coverage (§7.3).
* Inference of router vendors in more than 6,700 networks, including around 1,800 networks for which no vendor information was available in previous studies (§7.5).
* End-to-end data path-based router vendor analysis and case studies that provide valuable insights for current security and policy-based routing decisions (§6).
* LFP is publicly available, along with the derived signatures and classification results from this study, to enable reproducibility and future work (Birshick et al., 2018).

## 2. Related Work

Prior work developed passive and active techniques that leverage open ports, identifiers, and implementation-specific differences to fingerprint devices at various granularities. Most of these techniques were developed for generic hosts, while only a few attempt to fingerprint routers.
**Nmap:** Nmap (Nmap, 2018) is an open-source network scanning and reconnaissance tool. It performs remote OS fingerprinting by running up to sixteen tests that send ICMP, UDP, and TCP packets with different field values, flags, and options. By examining the responses, Nmap finds the best-matching operating system in a database of fingerprints. The latest Nmap version (7.93) contains more than six thousand fingerprints; of these, approximately 160 and 20 correspond to Cisco and Juniper routers, respectively. Two drawbacks of Nmap are its reliance on open ports and the large number of probe packets needed to perform fingerprinting. We compare our approach against Nmap in §7.3.1.

**Hershel:** Hershel (Hershel, 1998) is a low-overhead framework that models the problem of single-packet OS fingerprinting and develops novel approaches for tackling delay jitter, packet loss, and user modification of SYN-ACK features. Based on this theory, a classification method is developed that increases the accuracy of single-packet fingerprinting. Censys (Censys, 2018) includes Hershel signatures in recent raw scanning data. We compare LFP to Hershel in §7.3.2.

**Banner Grabbing:** A popular technique for remote operating system and vendor fingerprinting is "banner grabbing," whereby publicly available services leak identifying information. For instance, the Cisco SSH server implementation returns identifying information in its response string. Internet-wide scanning and banner grabbing are performed regularly (Censys, 2018; Dwork et al., 2018; Dwork et al., 2018; Dwork et al., 2018). In a recent paper (Dwork et al., 2018), the authors utilize banners augmented with active measurements to perform large-scale network equipment vendor classification. Similar techniques can also be applied in passive measurements for automatic traffic classification (Hershel, 1998). Unfortunately, banner analysis requires an open remote service that returns this discriminating information, and routers are frequently tightly secured and unresponsive to banner queries. Moreover, banner datasets are typically proprietary or commercial, with some offering free academic licenses (Censys, 2018).

**TCP Stack Fingerprinting:** Many TCP stack variables, e.g., Window Size and Maximum Segment Size, are implementation-specific (Birshick et al., 2018; Dwork et al., 2018; Dwork et al., 2018). These variables can differ between operating systems and versions. Consequently, TCP features can form a unique signature that can be used for fingerprinting. For instance, the initial TCP SYN-ACK packet provides valuable information about a target's TCP stack characteristics, such as the initial Time to Live (TTL) value, sequence number, and window size. Combined with the re-transmission timeout behavior of SYN-ACK packets, this was shown to serve as a fingerprinting technique for 25 different operating systems (Dwork et al., 2018); another work extended this to cover more than 90 OSes (Dwork et al., 2018).

**Sundials:** Sundials (Nandish, 2018) uses ICMP timestamps for fingerprinting purposes. Even though NTP has replaced ICMP timestamps, approximately 15% of the 14.5M IP addresses in that study responded to ICMP timestamp requests. Sundials uses the variety of response behaviors as a new fingerprinting technique. However, given filtering and the relative lack of ICMP timestamp support among routers, this method has limited coverage for our fingerprinting purposes.
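To illustrate the class of TCP-stack techniques above, the sketch below sends a single SYN and reads stack-dependent fields from the SYN-ACK with scapy. This is a hedged illustration, not a reimplementation of any cited tool: the target, port, and feature selection are placeholders, and active probing requires authorization and root privileges.

```python
from scapy.all import IP, TCP, sr1  # requires root privileges to send raw packets

def syn_ack_features(target, port=80):
    """Send one TCP SYN and collect implementation-dependent SYN-ACK fields."""
    syn = IP(dst=target) / TCP(dport=port, flags="S", seq=1000)
    reply = sr1(syn, timeout=2, verbose=False)
    if reply is None or not reply.haslayer(TCP):
        return None
    return {
        "ttl": reply[IP].ttl,                           # decremented initial TTL
        "window": reply[TCP].window,                    # advertised window size
        "options": [o[0] for o in reply[TCP].options],  # e.g., MSS, SAckOK, WScale
    }

# Usage (hypothetical target): syn_ack_features("192.0.2.1")
```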
**IPID-based Fingerprinting:** The IP identification (IPID) field is a mandatory IPv4 header field used for fragmentation and reassembly. Thus, it is frequently possible to elicit an IPID value from a router via a simple ICMP echo. RFC 4413 (Dwork et al., 2018) classifies IPID behavior into three classes: _(1) Sequential jump:_ an incremental IPID counter that is used for all packet streams; _(2) Random:_ a pseudo-random number generator is used for the IPID value; or _(3) Sequential:_ an incremental IPID counter on a per-stream basis. The IPID may also have a static value, e.g., zero. While the limited size (16 bits) of the IPID counter can be problematic, Internet researchers have utilized the IPID field for a broad range of applications. Bellovin (Billovin, 2018) uses IPIDs to count NATed hosts, alias resolution tools such as MIDAR (Dwork et al., 2018) and Ally (Billovin, 2018) use monotonic IPID counters to infer aliases, and Chen et al. (Chen et al., 2018) use IPIDs to characterize end-systems. In this work, we utilize the differences in IPID value generation between router vendors and across protocols for fingerprinting purposes.

**TTL-based Fingerprinting:** Vanaubel et al. propose a router fingerprinting technique based solely on TTL responses (Vanaubel et al., 2018). They send TCP, UDP, and ICMP probes toward the target, and show that the tuple of inferred initial TTL (iTTL) values from the responses can coarsely differentiate between some well-known vendors, including Cisco and Juniper. Unfortunately, the range of possible iTTL values is small, which can lead to a large number of incorrect inferences; e.g., we find that Huawei has the same iTTL signature as Cisco. Nevertheless, we use the iTTL values as part of a larger feature set.

**SNMPv3-based Fingerprinting:** Most recently, research has shown that the adoption of the SNMPv3 protocol offers an opportunity for remote fingerprinting of network infrastructure (Birshick et al., 2018), including routers. In addition to gathering detailed information about network devices, such as vendor, uptime, and the number of restarts, the reply also contains a strong, persistent identifier that allows for lightweight alias resolution and dual-stack association. We leverage this SNMPv3 technique to build ground truth, and use it as a baseline for our proposed LFP method.

## 3. Methodology

This section presents our methodology to scalably classify routers in the wild at vendor granularity. We first give an overview, then describe our datasets, measurement probes, and the features we use for classification. Subsequently, we generate signatures based on these features, and we describe how we handle classification edge cases due to ambiguity or lack of data. Finally, we elaborate on the limitations of our methodology. We refer the reader to Section 5 for the ethical principles guiding our measurements. For a pipeline of our methodology, we refer the reader to Figure 1.

### Overview

Our methodology builds a model based in part on high-confidence router vendor labels and then uses that model to extend coverage and improve accuracy. Such ground-truth data can be collected using private information about the deployment of routers in a network or via information leakage using protocol scanning (Krizhevsky et al., 2017). In our method, we utilize SNMPv3 as described in (Bahdan et al., 2017), which is able to accurately label around 20% of routers in the wild by sending a single unsolicited and unauthenticated request. The reply to this request contains detailed information, including a router "Engine ID" that easily and reliably identifies the router vendor.
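As a rough illustration of this labeling step: per RFC 3411, the first four octets of an SNMPv3 Engine ID carry the vendor's IANA Private Enterprise Number (with the high bit set in the common format), so a vendor lookup can be a few lines. The sketch below is illustrative, and the enterprise-number table is abbreviated.

```python
# IANA Private Enterprise Numbers for common router vendors (abbreviated).
PEN_TO_VENDOR = {
    9: "Cisco", 2636: "Juniper", 2011: "Huawei",
    14988: "MikroTik", 25506: "H3C", 6527: "Alcatel/Nokia", 193: "Ericsson",
}

def vendor_from_engine_id(engine_id: bytes) -> str:
    """Map an SNMPv3 Engine ID to a vendor via its enterprise number (RFC 3411)."""
    if len(engine_id) < 5:
        return "unknown"
    pen = int.from_bytes(engine_id[:4], "big") & 0x7FFFFFFF  # clear the format bit
    return PEN_TO_VENDOR.get(pen, "unknown")

# Example: an Engine ID starting 0x80000009... maps to Cisco (PEN 9).
assert vendor_from_engine_id(bytes.fromhex("800000090300aabbccddeeff")) == "Cisco"
```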
We leverage the SNMPv3 technique and scan a set of router addresses to create labeled data and build a classification model for router fingerprinting. For the probes themselves, we use three types of packets over ICMP, TCP, and UDP. ICMP has been used before to fingerprint routers, as many routers respond to ICMP packets. For TCP and UDP, we expect that routers typically do not expose such services to the public Internet; however, the responses to packets targeting a closed port can still provide useful information toward fingerprinting the router vendor. In addition to the SNMPv3 request, we send three single-packet probes over each of the three primary transport protocols, namely ICMP, TCP, and UDP, for a total of nine probes per IP address (Figure 1). We explain the rationale for using these three protocols in Section 3.4. The feature values of the responses to our transport protocol measurements, listed in Table 1, are used to build a signature database for _Lightweight Fingerprinting (LFP)_. For IPs that are responsive to SNMPv3 requests, we extract the vendor information and use it as a label (Figure 1). Note that our methodology does not depend on SNMPv3 to label routers; in principle, any reliable source of router labels can be used as input to our classification method.

### Datasets

To select target router IPs, we leverage two complementary public router datasets: the RIPE Atlas traceroutes dataset (Zhu et al., 2017) and the ITDK dataset (Krizhevsky et al., 2017). We list the router datasets with dates, address counts, and AS coverage in Table 2.

**RIPE Atlas Traceroutes.** We extract intermediate IP hops from RIPE Atlas traceroute measurements to obtain router IPv4 addresses. We explicitly ignore the last responsive hop if it is the same as the targeted host IP, to ensure that we only include router IPs. We utilize five snapshots of traceroute data over a ten-month period from January to November 2022. We extract between 446k and 496k router IPs from each snapshot. Further, each snapshot covers between 18.3k and 20.2k ASes. In order to increase coverage, we utilize all five snapshots to gather signatures and evaluate their stability over time. Moreover, we find that RIPE Atlas traceroutes are relatively stable across the ten-month period, with a pairwise router IP overlap of about 88% between consecutive collections. Therefore, we utilize the most recent RIPE Atlas snapshot, i.e., RIPE-5, for our IP-level analysis.
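For illustration, the hop-extraction rule described above could look as follows. This sketch assumes the public RIPE Atlas JSON traceroute format (top-level `dst_addr` and per-hop `result` arrays with `from` fields) and simplifies the filtering.

```python
import ipaddress

def router_hops(traceroute: dict) -> set:
    """Extract intermediate, publicly routable IPv4 hops from one parsed
    RIPE Atlas traceroute result."""
    dst = traceroute.get("dst_addr")
    hops = set()
    for hop in traceroute.get("result", []):
        for reply in hop.get("result", []):
            ip = reply.get("from")
            if not ip or ip == dst:          # skip non-replies and the target itself
                continue
            addr = ipaddress.ip_address(ip)
            if addr.version == 4 and addr.is_global:   # drop private/reserved space
                hops.add(ip)
    return hops
```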
**ITDK Router-Level Topologies Dataset.** In addition to IP-level traceroute data, we also use the router topology from CAIDA's February 2022 ITDK (Krizhevsky et al., 2017). This complementary dataset contains router alias sets (excluding singletons) inferred via MIDAR (Krizhevsky et al., 2017) and iffinder measurements. This dataset covers fewer IP addresses and about half the number of ASes compared to RIPE Atlas. This is expected, as addresses in this dataset must respond over at least one protocol (ICMP, UDP, or TCP), which is required for alias resolution. This is also evident in our active measurements, where we note a higher responsiveness for the ITDK data compared to RIPE Atlas, as shown in Figure 4. The complementary nature of this dataset is underscored by a relatively low overlap of at most 26% of router IPs present in any of the RIPE Atlas traceroute datasets. We use the ITDK data for gathering signatures and for router-level analysis. The union of all RIPE Atlas traceroute and ITDK MIDAR datasets covers more than 970k router IP addresses in about 25k ASes. We note that our methodology is not limited to these selected datasets; other datasets containing candidate router IP addresses could be used as well. Next, we run active measurements toward targets in each of these datasets to gather features and build signatures for router fingerprinting.

Figure 1. Data Collection Pipeline.

### Active Scanning Packets

To collect router fingerprints, we send 10 packets in total per target IP: 3 for each transport protocol and a single SNMPv3 request. We aim to reduce the impact of our scan on the target by using simple probes and avoiding any malformed packets. For ICMP, we send three echo requests; for each of these requests we expect an echo reply. For TCP and UDP, we target port 33533, with the assumption that no services are active on this port. For TCP, we send two ACK packets and one SYN packet with a non-zero acknowledgment number. We expect that all three TCP packets, both the ACKs and the SYN, will elicit a TCP RST response. For UDP, we send three packets, each with 12 bytes of all-zero payload. For each packet, we expect to receive an ICMP port unreachable response.

### Feature Set

We limit our methodology to features that can be extracted mainly from the IP layer. In total, we extract 15 features from our 9 probe packets (see Table 1). We consider four groups of features:

#### 3.4.1. IPID

We send a trio of consecutive packets and collect the IPID values from all responses. We then construct IPID sequences for each protocol. Previous work (Zhou et al., 2017; Wang et al., 2018) showed that IPID sequences exhibit distinct patterns, e.g., they can be monotonically increasing or random. These patterns can not only be used to perform IP-alias resolution, as shown in previous work, but can also facilitate the identification of a router's vendor. One test for device fingerprinting is checking whether ICMP request and response IPID values match (Beng et al., 2017; Wang et al., 2018). IPID sequences can differ among protocols, but some implementations use the same sequence across all protocols. As we show in Table 1, the ICMP IPID echo feature indicates whether echo request and response IPID values are the same (true) or different (false). The IPID counter for any of the three protocols (ICMP, TCP, and UDP) can be characterized as incremental (which can also include wrap-around from the largest 16-bit value back to zero), random, static (always the same non-zero value), zero (always responds with an IPID of zero), or duplicate (where exactly two responses have the same IPID value).

#### 3.4.2. iTTL

Previous work (Wang et al., 2018) showed that initial TTL (iTTL) values may differ between protocols and even message types. We collect the iTTL values for each response that we receive. Typically, the iTTL value depends on the vendor's operating system or network card. Indeed, in Table 1, we show the different values, four in total, that we have collected across all our experiments (see Section 4 for details).
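A condensed sketch of how these two feature groups could be derived from the three responses per protocol is shown below. The classification rules are simplified, and the max-step cutoff anticipates the threshold derived in the IPID Threshold subsection further down.

```python
RANDOM_THRESHOLD = 1300  # max-step cutoff between sequential and random (see §3.6)

def classify_ipid(ipids: list) -> str:
    """Classify a trio of IPID values as zero/static/duplicate/incremental/random."""
    if all(v == 0 for v in ipids):
        return "zero"
    if len(set(ipids)) == 1:
        return "static"
    if len(set(ipids)) == 2:
        return "duplicate"              # exactly two responses share a value
    steps = [(b - a) % 2**16 for a, b in zip(ipids, ipids[1:])]  # allow wrap-around
    return "incremental" if max(steps) <= RANDOM_THRESHOLD else "random"

def infer_ittl(observed_ttl: int) -> int:
    """Round an observed TTL up to the nearest common initial value."""
    return next(t for t in (32, 64, 128, 255) if observed_ttl <= t)

assert classify_ipid([100, 105, 111]) == "incremental"
assert infer_ittl(243) == 255          # e.g., an initial TTL of 255 minus 12 hops
```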
#### 3.4.3. Response Size

To further diversify our features, we collect the response size for all protocols. We notice that the ICMP and TCP response sizes often do not provide any information gain. However, the size of the ICMP port unreachable response to a UDP request packet can differ between router vendors. This depends on whether the request packet is fully or partially quoted (and if so, how much of the original packet is quoted) in the ICMP response packet (Beng et al., 2017; Wang et al., 2018). As we show in Table 1, this characteristic value is variable and differs by router vendor and implementation, which allows us to make use of the response size for router fingerprinting.

#### 3.4.4. Additional Features

RFC 793 (Wang et al., 2018) states that if a port is closed, any incoming segment except a reset triggers a reset response. If the incoming segment has an ACK field, the reset takes its sequence number from the ACK field; otherwise, it uses a sequence number of zero. We notice that only a few vendors are compliant with the RFC in this regard. For the full set of features and their possible values, we refer to Table 1. We note that most of these features are only available for IPv4; thus, in this paper we focus only on the classification of IPv4 router interfaces.

Table 1. List of features used, with possible values.

| Feature | Possible Values |
| --- | --- |
| ICMP IPID echo | true, false |
| ICMP IPID counter | incremental, random, static, zero, duplicate |
| TCP IPID counter | incremental, random, static, zero, duplicate |
| UDP IPID counter | incremental, random, static, zero, duplicate |
| TCP-UDP IPID shared counter | true, false |
| TCP-ICMP IPID shared counter | true, false |
| UDP-ICMP IPID shared counter | true, false |
| UDP iTTL | 32, 64, 128, 255 |
| ICMP iTTL | 32, 64, 128, 255 |
| TCP iTTL | 32, 64, 128, 255 |
| ICMP echo response size | variable |
| TCP response size | variable |
| UDP response size | variable |
| TCP SYN sequence number | zero, non-zero |

Table 2. Overview of router address datasets: number of unique IP addresses and Autonomous Systems. We utilize all data sources for signature gathering; we use RIPE-5 for path- and IP-level analysis and ITDK for router-level analysis.

| Data Source | Date | # IPv4 addrs. | # ASes |
| --- | --- | --- | --- |
| RIPE-1 | 2022-01-24 | 494,867 | 20,178 |
| RIPE-2 | 2022-02-24 | 484,930 | 19,989 |
| RIPE-3 | 2022-06-09 | 496,167 | 20,085 |
| RIPE-4 | 2022-07-04 | 446,629 | 18,304 |
| RIPE-5 | 2022-11-07 | 476,577 | 18,837 |
| ITDK | 2022-02 | 343,312 | 9,922 |
| Union | | 971,343 | 24,909 |

### Signatures

We assemble all responses for each IP address into a feature vector based on Table 1. We use the instances of a particular feature vector that are associated with a vendor obtained from the SNMPv3 probes to create a mapping from feature vector to vendor. We then apply a basic filter based on an occurrence threshold, as described in §4.3. At this point, the feature vector is used as a _signature_ for the vendor (Figure 1).

**Unique Signatures.** If a signature is mapped only to a single vendor, then we call it a _unique signature_. In this case we have high confidence in the accuracy of the signature.

**Non-Unique Signatures.** When a signature is associated with _multiple_ vendors, we characterize it as a _non-unique signature_. This may happen, e.g., due to a change of the default router configuration by network operators, or simply a shared TCP/IP stack implementation between multiple vendors. As we will show in Section 4, typically there is one vendor that dominates even for non-unique signatures, or the non-unique signatures map to a family of routers that are based on the same OS or network stack. However, for the purposes of this study, we take a conservative approach and only consider _unique_ signatures in our analysis.
**Partial Signatures.** There are also cases where a router IP responds only to a subset of the three protocols (ICMP, TCP, and UDP). In this case, we characterize the signature as a _partial signature_. Even a partial signature may prove useful to identify the vendor of a router: if the partial signature maps to a single vendor, we call it a _partial unique signature_; if it is associated with multiple vendors, we call it a _partial non-unique signature_. As we elaborate in Section 4, it is common for a single vendor to have multiple signatures. This is to be expected, as vendors often offer multiple products and versions of the same product, or it can be an artifact of acquisitions. Once the list of signatures for a given vendor has been compiled, we can match the signatures from our active measurements to infer the vendor of unlabeled routers (Figure 1). With this technique we can substantially increase the coverage of routers that we can fingerprint in the wild (Figure 1).

### IPID Threshold

In order to determine whether an IPID counter is incremental or random, we investigate the returned IPID values per IP address and across all three protocols. Consequently, we sample the IPID values only for fully responsive addresses. We calculate the step values for each consecutive packet pair and aggregate them by applying a maximum function.¹ In Figure 2 we show the distribution of the maximum IPID step per IP in the responses to all three protocols. In order to distinguish random from sequential increases, we check for a knee in the distributions of Figure 2. We empirically take a conservative threshold value of 1,300 to distinguish between sequential and random IPID counters. Note that a sequential increase can be larger than one, as concurrent traffic from that router also increases a sequential IPID counter.

Footnote 1: We obtain similar results when applying an average function instead of a maximum function. Since the maximum function is more conservative, we use it in our methodology.

We evaluate the empirical threshold by estimating the probability of misclassifying a random IPID counter as sequential. Recall that we send 9 packets in total and calculate 8 IPID steps by determining the difference between consecutive IPID values. Given our threshold, the probability of a random IPID counter generating a step less than or equal to the threshold is \(1301/2^{16}\approx 0.019\). For our classifier to misclassify a random counter as sequential, all eight IPID steps need to be less than or equal to the threshold, which has an extremely low probability of \(0.019^{8}\) when considering all protocols, or \(0.019^{2}\) when considering each protocol separately.
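This bound is easy to check numerically; a quick sketch:

```python
p_single = 1301 / 2**16      # P(one random IPID step falls at or below 1,300)
print(f"{p_single:.4f}")     # ~0.0199
print(f"{p_single**2:.2e}")  # per protocol (two steps): ~3.9e-04
print(f"{p_single**8:.2e}")  # across all protocols (eight steps): ~2.4e-14
```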
We further explore the empirical threshold in Figure 3, where we plot the distribution of the IPID differences between consecutive responses for fully responsive IPs in the RIPE-5 dataset. Around 20% of the IPID differences are close to zero, and close to 90% of all IPID difference values fall below the 1,300 threshold, shown as the dashed vertical line. We use this threshold to differentiate incremental values from random ones, which are dispersed across the full range of possible IPID difference values. Note that an effectively random IPID might by chance fall within this 1,300 threshold. Since LFP takes the conservative approach of using the maximum IPID difference between consecutive probes, this random effect is very unlikely to occur twice in a row, which strongly minimizes the number of false positives. Finally, we also characterize the sharing of IPID sequences across pairs of protocols, or across all protocols, as true or false, depending on whether such sharing takes place.

Figure 2. Maximum IPID step distribution per IP address. The vertical line shows the chosen threshold between sequential and random IPID increases.

Figure 3. Distribution of IPID difference values for consecutive responses.

### Limitations

LFP improves the state of the art in remote router vendor fingerprinting. However, we acknowledge that several limitations remain:

* Our classification builds on highly accurate vendor data obtained via SNMPv3 probing; however, SNMPv3 coverage is not universal and imparts bias. While the SNMPv3 technique obtains correct labels for approximately 20% of the routers and 30% of the router IPs we probed, we do not generate signatures for vendors that do not implement or do not respond to SNMPv3 requests. This results in a bias toward SNMPv3-enabled routers and can negatively affect the uniqueness of the generated signatures.
* As we elaborate in Section 4, a non-negligible fraction of routers do not respond to _any_ remote probe. This fraction differs across the router datasets, but for these routers our technique cannot provide any insights.
* New signatures may be created as new router models or vendors are introduced to the market. Although in Section 4 we show that, over a period of ten months and for different router datasets, the signatures we discover remain stable, over longer periods of time, e.g., years, new measurements may be required to keep the LFP signatures up-to-date.
* We restrict our analysis to core routers. A primary challenge in fingerprinting edge routers is the greater diversity of Customer Premises Equipment (CPE) and residential equipment, along with substantial amounts of IP churn. Although we believe that our technique can be used to fingerprint edge network equipment, we defer such an investigation to future work.
* We may misclassify random IPID response sequences as sequential. To minimize the potential for erroneous inferences, we take the maximum IPID step difference among all pairwise IPID values (see Section 3.6).
* We focus our study on the IPv4 Internet. Many of the features that LFP relies on (see Table 1) are not available in IPv6 or do not provide the same discriminatory opportunities for fingerprinting. For instance, the IPv6 header does not include an IPID field unless fragmentation is induced (Kumar et al., 2017), rendering all IPID-related features inapplicable. Furthermore, all IPv6 implementations use the recommended initial TTL value of 64 (Kumar et al., 2017). The remaining features do not provide enough information gain to produce an accurate vendor signature.
* We limit the scope of our work and focus only on the technical aspects of remote router vendor fingerprinting that can be used to inform routing decisions.
We recognize that better insight into the vendors within ASes and along end-to-end paths is especially interesting given the current climate where, e.g., some countries are imposing restrictions on the use of equipment from particular vendors. In this paper, we discuss this issue in a purely factual, impartial, and non-political manner. Since we are not legal or political science scholars, we do not discuss, opine, or speculate on non-technical matters, e.g., the legal, financial, and social impact of our work.

## 4. Active Experiments

We now apply our LFP methodology in active experiments to fingerprint routers in the wild. We run six measurements, one for each data source (five RIPE Atlas traceroute snapshots and ITDK's MIDAR dataset, cf. Table 3). We find the five RIPE Atlas based measurements to be relatively consistent. Between 82k and 100k IPs are responsive to SNMPv3. Of those, around 50k respond to all LFP probes; this is the labeled dataset from which we extract vendor information. Another 58k-77k respond only to LFP probing; this is the dataset that we can fingerprint with the LFP technique without the IPs responding to SNMPv3. The ITDK dataset provides more SNMPv3-responsive IPs, with a similar number of LFP responses compared to the RIPE Atlas traceroutes.

### Responsiveness

Next, we analyze how responsive the target datasets are to LFP probes. This determines the upper limit of our coverage with LFP. Figure 4 shows the distribution of the number of responsive protocols (TCP, UDP, ICMP) per IP, comparing the ITDK and RIPE-5 datasets. Since we rely on responsiveness to create signatures, the higher the number of responsive protocols, the higher the entropy in the signature. Generally, we find that ITDK provides more responsive protocols compared to RIPE. About 50% of ITDK IPs are responsive on all three protocols, compared to only 35% for RIPE. It is very encouraging, however, that we get responses for at least one protocol for 90.7% and 72.3% of ITDK and RIPE IPs, respectively. Another factor influencing the uniqueness of our signatures is the number of responses _per protocol_. Figures 5 and 6 show the responsiveness per protocol for RIPE-5 and ITDK, respectively. In both datasets we see that ICMP is more likely to elicit responses than TCP or UDP. Moreover, we see that an IP responds either to all three probe packets of a protocol or to none, i.e., the line from zero to three in the plot is almost horizontal. Finally, we find that IP addresses from the ITDK dataset are generally more likely to be responsive than those from the RIPE dataset: 84.4% vs. 65.7% are responsive to all three ICMP probes for ITDK and RIPE, respectively; for TCP and UDP the difference is 63.6% in ITDK compared to 39.5% in RIPE.

### Signatures

After collecting all responses from our measurements, we extract features (cf. Section 3.4) and create signatures based on our labeled SNMPv3 data (cf. Section 3.5). As can be seen in Table 3, each dataset individually contributes 34-62 unique signatures and 7-13 non-unique signatures. Unique signatures give us high confidence when applying our LFP technique, as all labeled instances map to the same router vendor. Note that if the same unique signature were found with different vendors in different datasets, we would count it as a non-unique signature; we find this case to be relatively rare, with only 2 occurrences across our five datasets.
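A minimal sketch of this signature bookkeeping is shown below, with feature vectors represented as plain tuples; the 20-sample occurrence filter discussed next is included, and all names are illustrative.

```python
from collections import Counter, defaultdict

MIN_SAMPLES = 20  # minimum number of labeled routers per signature (see below)

def build_signatures(labeled):
    """labeled: iterable of (feature_vector_tuple, vendor) pairs from SNMPv3 data.
    Returns (unique, non_unique): signature -> vendor, and signature -> vendor set."""
    vendors = defaultdict(Counter)
    for features, vendor in labeled:
        vendors[features][vendor] += 1
    unique, non_unique = {}, {}
    for sig, counts in vendors.items():
        if sum(counts.values()) < MIN_SAMPLES:
            continue                      # drop rare signatures
        if len(counts) == 1:
            unique[sig] = next(iter(counts))
        else:
            non_unique[sig] = set(counts)
    return unique, non_unique
```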
In our fingerprinting analysis, we exclude any non-unique signatures and use the union of all five datasets, creating a total of 89 unique signatures. We set a threshold of a minimum of 20 router samples per signature: setting the threshold lower would increase the number of covered routers by only 1%, but would disproportionately increase the number of signatures; the threshold thus excludes a large number of rare signatures that have only a few occurrences among the hundreds of thousands of IPs in our dataset. In total, for this study, we identify 89 unique signatures and 23 non-unique signatures. We provide the full list of signatures in (Brandt et al., 2017).

In addition to signatures where we receive responses from all protocols, we also leverage partial signatures. Table 4 shows the partial fingerprints for different combinations of partial protocol responsiveness. We find that if we see responses from two protocols (i.e., TCP & UDP, ICMP & UDP, or ICMP & TCP), the majority of partial signatures are still unique and can therefore be leveraged by the LFP technique. Regarding single-protocol signatures, the results are mixed: about half of the signatures for just TCP, UDP, or ICMP are unique. In general, utilizing unique partial signatures expands coverage by ≈15% while maintaining accuracy.

Table 4. Partial signatures for different responsive protocol combinations.

| Protocols | Total | Unique | Non-unique |
| --- | --- | --- | --- |
| TCP & UDP | 61 | 43 | 18 |
| ICMP & UDP | 60 | 42 | 18 |
| ICMP & TCP | 51 | 36 | 15 |
| UDP | 20 | 12 | 8 |
| ICMP | 19 | 9 | 10 |
| TCP | 17 | 10 | 7 |

### Mapping Signatures to Vendors

In Table 5 we show the vendor distribution based on the labeled dataset (i.e., SNMPv3-responsive addresses). To our positive surprise, more than 82% of the IPs map to a vendor with a unique signature. In total, our dataset covers 16 different vendors. We find Cisco to be the dominant router vendor in our labeled dataset, accounting for 51% of labeled router IPs with unique signatures, followed by Juniper and Huawei with 10% each. For the major router vendors, the majority of IPs can be mapped to unique signatures, which increases our confidence in applying our technique to non-labeled data. Indeed, this is the case for 100% of Juniper, Alcatel/Nokia, and Ericsson router IPs, 98% of Cisco router IPs, and 86% of Huawei router IPs.

Figure 4. Responsive protocols per IP for the RIPE-5 and ITDK datasets.

Figure 5. Responsiveness per protocol for the RIPE-5 dataset.

Figure 6. Responsiveness per protocol for the ITDK dataset.

Table 3. Measurement overview: responsive IPs (IPs), IPs responsive to SNMPv3 (SNMPv3), to SNMPv3 and LFP (SNMPv3 ∩ LFP), only to LFP (LFP \ SNMPv3), number of unique signatures (Unique sigs), and non-unique signatures (Non-unique sigs).

| Measurement | IPs | SNMPv3 | SNMPv3 ∩ LFP | LFP \ SNMPv3 | Unique sigs | Non-unique sigs |
| --- | --- | --- | --- | --- | --- | --- |
| RIPE-1 | 359,263 | 99,560 | 55,116 | 58,266 | 62 | 9 |
| RIPE-2 | 355,709 | 95,600 | 54,933 | 59,400 | 46 | 8 |
| RIPE-3 | 363,464 | 94,699 | 53,196 | 58,843 | 47 | 10 |
| RIPE-4 | 323,141 | 82,047 | 48,360 | 72,969 | 49 | 11 |
| RIPE-5 | 327,534 | 90,540 | 47,700 | 77,298 | 51 | 13 |
| ITDK | 311,607 | 113,089 | 58,492 | 53,952 | 34 | 7 |
| Union | 736,260 | 218,129 | 132,524 | 169,143 | 89 | 23 |
Two notable exceptions are MikroTik and H3C; for these two vendors, we may only attribute a lower bound of routers. Note that both of these vendors utilize UNIX-based solutions. We use the union of signatures in the following sections for router fingerprinting, network homogeneity, and end-to-end path analyses.

## 5. Ethical Considerations

During the design and application of our methodology, we took care to minimize any potential harm to the operation of routers and networks. First, the load of our measurements is very low. More specifically, we send ten packets, i.e., one SNMPv3 request and nine probes, three for each of ICMP, TCP, and UDP. We do not send any malformed packets, to avoid any unexpected behavior. Moreover, we coordinated with our local network administrators to ensure that our scanning efforts do not harm the local or upstream network. We follow current best practices (Kipf and Welling, 2017; Kipf and Welling, 2017; Kipf and Welling, 2017) for active measurements and ensure that our probing IP address has a meaningful DNS PTR record. Additionally, we publish information about our measurements and opt-out possibilities on a website hosted on our scanning servers. During our active experiments, we did not receive any complaints or opt-out requests. Our work uncovers potentially sensitive security, robustness, and business information about network providers, e.g., the router vendor; we will respond to any request by operators regarding their networks.

## 6. Router Vendors on a Path

In this section, we apply LFP to study the diversity of vendors along data-plane forwarding paths. Such insights are helpful as they could inform routing policy decisions that take the equipment on a path into account. For example, if policy or law restricts a specific vendor, e.g., (Kipf and Welling, 2017), a different path without this vendor might be selected. For this analysis, we use the most recent RIPE dataset, namely RIPE-5 (see Table 3), consisting of 7.3M traceroutes. Figure 8 shows the ECDF of the number of hops per path in the RIPE-5 dataset. In this traceroute dataset, approximately 7.1M of the paths (95%) have at least three IP hops. For our analysis, we consider only routable IPv4 addresses and exclude any addresses that are private or reserved. Moreover, more than 95% of the paths have a length of at most 15 hops. Figure 9 shows the fraction of router IPs that we can map to a router vendor. We notice that for traceroutes with at least three hops, LFP can identify at least two of the hops in 62% of the cases; the vendor of at least one hop can be identified in 82% of the cases. This is a substantial improvement over the baseline of the SNMPv3 remote router vendor fingerprinting technique alone, shown in Figure 10, where at least one vendor can be identified for only 35% of the traceroutes.

### Identifying Router Vendors on a Path

First, we investigate the diversity of router vendors per path as fingerprinted with LFP. Figure 11 shows the number of unique vendors identified on paths where we can identify at least one hop; we identify around 650 unique sets of vendors.
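A sketch of this per-path aggregation is shown below, assuming hop lists from the traceroute data and an IP-to-vendor mapping produced by LFP; the names are illustrative.

```python
from collections import Counter

def path_vendor_sets(paths, ip_to_vendor):
    """paths: iterable of per-path hop-IP lists; ip_to_vendor: LFP vendor labels.
    Count how often each combination of identified vendors appears on a path."""
    combos = Counter()
    for hops in paths:
        vendors = frozenset(ip_to_vendor[ip] for ip in hops if ip in ip_to_vendor)
        if vendors:                      # at least one hop could be identified
            combos[vendors] += 1
    return combos

# Example: combos[frozenset({"Cisco"})] counts single-vendor (Cisco-only) paths.
```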
However, for around 50% of the paths, LFP identifies only a single vendor. For around 40% of the paths, LFP identifies two vendors, and only 7% of the paths have three distinct vendors. Four or more different router vendors are identified on fewer than 2% of the paths. Next, we analyze the most popular combinations of router vendors on paths (without respect to their order along the path). Figure 12 shows that the top nine sets of vendors cover more than 95% of the RIPE-5 paths. The top three vendor combinations involve only Cisco and Juniper, making up almost 60% of the paths. Traceroute paths with all other combinations account for fewer than 3% each.

Table 5. Number of signatures in the ground-truth dataset per router vendor (numbers of IPs in parentheses).

| Vendor | Labeled | Unique sigs | Non-unique sigs |
| --- | --- | --- | --- |
| Cisco | 83,918 | 25 (82,020) | 1 (1,898) |
| MikroTik | 28,989 | 26 (9,489) | 4 (19,500) |
| Huawei | 19,869 | 8 (17,034) | 4 (2,835) |
| Juniper | 17,665 | 15 (17,665) | 0 (0) |
| H3C | 2,469 | 5 (358) | 5 (2,111) |
| Alcatel/Nokia | 1,111 | 2 (1,111) | 0 (0) |
| Ericsson | 200 | 1 (200) | 0 (0) |
| Other | 9,676 | 4 (497) | 18 (9,179) |

Figure 8. Path length distribution in the RIPE-5 traceroute dataset.

Figure 9. Identifiable routers on a path (RIPE-5).

Figure 10. LFP vs. SNMPv3: identifiable routers distribution on a path.

### Case Study: US-related Paths

As a case study, we consider router vendor diversity specifically for the United States. There are ongoing discussions about whether traffic that originates from the US, or has a destination in the US, should be carried by "untrusted vendors" (Vaswani et al., 2017). Moreover, if a vulnerability for a specific router vendor is discovered (Vaswani et al., 2017; Vaswani et al., 2017; Vaswani et al., 2017), paths with these vendors might, in theory, be avoided until a patch is developed and applied. With knowledge about the vendors on a forwarding path, possible alternative paths from a source to a destination may receive preferential treatment in routing decisions by network operators. This could be facilitated with source routing techniques (Vaswani et al., 2017) or enforced by the upstream provider (Vaswani et al., 2017).

#### 6.2.1. Intra-US Paths

First, we investigate the case where both the source and the destination of a traceroute are within the US. To geolocate the endpoints, we rely on IP address registry information. While other (more fine-grained and more accurate) geolocation techniques exist, we are primarily interested in policies and regulations that are frequently governed by the home country of the service provider, which is best reflected in the address registry. We exclude anycast IPs (Bordes et al., 2017) from our analysis, as they may be announced from different locations. The RIPE-5 dataset contains 395,775 traceroutes with at least three hops where both the source and the destination IP geolocate to the US. For around 60% of them, we can identify two or more router IPs and assign them to vendors using LFP. Furthermore, we find that for more than half of the intra-US traces we can identify at least a third of the router vendors on the path. Moreover, as shown in Figure 11, in around 70% of the intra-US paths all identified IPs belong to a single vendor. The majority of the remaining 30% of intra-US paths have routers belonging to two distinct vendors, and the number of cases with three or more vendors is negligible. Indeed, our results suggest a high degree of consolidation of router vendors.
Regarding the most popular sets of router vendors for intra-US paths (cf. Figure 13), we see a similar picture compared to the overall dataset. Combinations of Cisco and Juniper dominate, even more so than in the overall dataset, making up more than two thirds of all intra-US paths combined. This shows that intra-US paths have low vendor diversity, consisting mostly of Cisco, Juniper, or a combination of both. Such homogeneity may be indicative of potential critical infrastructure weaknesses, e.g., where all devices are affected by the same vulnerability.

#### 6.2.2. Inter-US Paths

We also investigate the case where only one of the source and the destination is in the US. The RIPE-5 dataset contains 3M traceroutes of at least three hops where only the source or only the destination IP geolocates to the US. For around 58% of these, we can identify the vendors of two or more router IPs using LFP. For more than half of the inter-US traces, we can identify the vendor of at least a third of the router IPs on the path, a similar distribution as for intra-US and other paths. Moreover, as shown in Figure 11, in around 60% of inter-US paths all identified IPs belong to a single vendor; almost all of the remaining paths contain two mappable router vendors. These observations are similar to the intra-US study and show a high degree of vendor consolidation. Cisco and Juniper are again the most prominent vendor combinations (see Figure 14), with a distribution similar to intra-US and overall paths. However, the results suggest that inter-US paths exhibit more heterogeneity than intra-US paths.

Figure 11. Router vendor diversity on a path.

Figure 12. Top router vendor combinations for paths in the RIPE-5 dataset.

Figure 13. Top router vendor combinations for intra-US paths in the RIPE-5 dataset.

Figure 14. Top router vendor combinations on inter-US paths (source or destination in the US).

### Case Study: Informed Routing

Knowing the vendors across a path can inform routing policy. For example, a sender may want to avoid sending traffic through ASes dominated by hardware from vendors they do not trust; the routing policy could then choose an alternative path, if available. Our methodology can inform the possible alternatives and may serve as a step toward enforcing such policies. As a case study, we find vendor-homogeneous ASes in the RIPE-5 dataset: ASes with at least 1k router IPs where LFP finds that at least 85% of the IPs belong to a single vendor. Next, we use the CAIDA AS relationship dataset (Levy et al., 2017) to find AS paths where these vendor-homogeneous ASes serve as transit ASes. Then, we consider the destination ASes where the homogeneous transit AS appears on the path. For these destinations, we investigate whether there exists an alternative path to the _same destination_ but with a transit AS using a _different_ vendor. Note that while our analysis utilizes the CAIDA AS relationships to identify policy-compliant transit ASes, such inferences may be limited by the available data and the visibility of _all_ AS paths toward a given AS. We acknowledge that there may exist paths that cannot be observed from publicly available data, and that an alternative path may not be compliant in the traditional economic or valley-free routing sense.
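A sketch of this selection logic under the stated thresholds is given below; the data structures (per-AS vendor labels and candidate AS paths) are hypothetical simplifications of the CAIDA-derived inputs.

```python
from collections import Counter

def homogeneous_ases(as_routers, min_ips=1000, share=0.85):
    """as_routers: ASN -> list of per-IP vendor labels inferred by LFP.
    Return ASN -> dominant vendor for vendor-homogeneous ASes."""
    result = {}
    for asn, labels in as_routers.items():
        if len(labels) < min_ips:
            continue
        vendor, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= share:
            result[asn] = vendor
    return result

def alternatives_avoiding(paths_to_dst, avoid_asns):
    """Among candidate AS paths to one destination, keep those whose transit
    hops avoid the given ASes (first and last hop are source/destination)."""
    return [p for p in paths_to_dst if not set(p[1:-1]) & avoid_asns]
```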
As a demonstration of the insights possible from this analysis, we examine two networks: AS9808 and AS3786. AS9808 is a large transit provider where LFP infers Huawei to be the dominant router vendor. We identify 25,134 AS paths where AS9808 serves as a transit provider. For 68 destination ASes, no alternative path that does not transit AS9808 is visible.² On the other hand, for 167 destination ASes, an alternative path via ASes that operate non-Huawei routers is available.

Footnote 2: Note that not all AS paths are visible in BGP (Levy et al., 2017; Levy et al., 2017; Levy et al., 2017; Levy et al., 2017); thus, our analysis is limited to the visible paths only.

As a second example, for a different router vendor, LFP shows that Juniper is the dominant router vendor in AS3786. We identify 1.3M AS paths and 436 unique destinations where AS3786 appears as a transit provider. For 53 destinations, there is no alternative path visible to us. Naturally, our inferences depend on our visibility into the AS; however, this result suggests that our methodology can similarly be applied to any destination when the set of paths is available.

## 7. Router Fingerprinting

With the signatures collected in our active experiments, we now apply our fingerprinting technique to the router datasets. We leverage 89 unique signatures and 78 partial unique signatures from the union dataset (cf. Tables 3 and 4). Recall that both full and partial unique signatures provide exact matches between a signature and a vendor.

### IP to Vendor Mapping

We use our combined full and partial unique signatures on the latest RIPE dataset, i.e., RIPE-5, and the ITDK dataset to map IP addresses to vendors. For RIPE-5, our analysis shows that our method fingerprints 56.7% of router IPs when we use unique signatures. For reference, the SNMPv3 technique fingerprints only 26% of the router IPs, and LFP alone fingerprints 49%. Figures 15 and 16 show the fingerprinting results based on responsive IPs from the RIPE-5 and ITDK datasets, respectively. We report the router IPs identified only by LFP, only by SNMPv3, and by both methods. We find that our LFP technique roughly doubles the number of fingerprintable IP addresses for both datasets. Moreover, we see that the number of fingerprintable IPs increases quite drastically for certain vendors: Juniper sees an increase of 650% and 259.3%, and Huawei of 249.8% and 136.4%, for RIPE-5 and ITDK, respectively. Generally, we see a more balanced router vendor distribution, with the share of the most dominant vendor, Cisco, decreasing from ≈65% with SNMPv3 only to ≈50% with SNMPv3 + LFP. We provide an analysis of the non-unique signature precision and recall in Appendix B.

### Router to Vendor Mapping

Next, we make use of the ITDK dataset, which contains not only IP address information but also alias sets. We apply our signatures to all non-singleton router alias sets. First, we check whether all IPs within fingerprinted alias sets report the same vendor. We find this to be the case for ≈99% of all alias sets, with 498 router IPs producing conflicting vendor inferences (0.65%). Second, we plot the router vendor distribution counted by alias set in Figure 17. The router distribution is similar to the IP-based distribution (cf. Figure 16), with Cisco being the dominant vendor, followed by Huawei and Juniper. Again, we can map about 96.4% more routers with the combined SNMPv3 + LFP technique compared to SNMPv3 only.

### Comparison with other Tools

To evaluate the accuracy of vendor fingerprinting by LFP and the associated bandwidth requirements, we conduct a comparison with Nmap (Nmap, 2018) and Hershel (Hershel, 2018).
For this, we acquire a set of addresses from Censys, which are known to reveal vendor information through service banners. Censys also provides Hershel fingerprints and OS identification where available. For each of the top six vendors found via LFP, we randomly select 500 IP addresses and perform tests using both LFP and Nmap. Additionally, we compare our findings with Hershel fingerprints, wherever possible.

Table 6. Two sample unique signatures: the top row for Juniper and the bottom row for Cisco (the full feature vectors are omitted here). By changing the default value of Juniper for the ICMP iTTL from 64 to 255, the classifier misidentifies Juniper routers as Cisco.

#### 7.3.1. Comparison with Nmap

Table 7 shows the coverage and accuracy results for LFP and Nmap for six different vendors. Coverage refers to the percentage of responsive IPs for each vendor, while accuracy refers to the percentage of correct fingerprints for the responsive IPs. Although both tools have similar accuracy, LFP achieves substantially higher coverage. After evaluating the coverage and accuracy of LFP and Nmap, we proceed to analyze their respective bandwidth requirements. Specifically, LFP sends a consistent 9 packets (3 ICMP, 3 TCP, and 3 UDP) to each targeted IP address; a minimal probing sketch is shown after this comparison. In contrast, Nmap sends substantially more packets when attempting to fingerprint a device. Figure 19 shows the distribution of packets sent and received by Nmap using the default OS fingerprinting options. We find that Nmap sends more than 1,000 packets to more than 80% of all targeted IPs. Moreover, our analysis shows that Nmap sends an average of 1,538 packets per IP and receives 1,065 packets. However, it should be noted that in certain cases Nmap may send an extremely high number of packets to a single IP address, exceeding 10 _thousand_ packets. This behavior is largely influenced by the services operating on the target system. In contrast, we observe that the LFP technique has considerably lower bandwidth requirements compared to Nmap, making it a more lightweight option overall.

#### 7.3.2. Comparison with Hershel

We also compare LFP with Hershel fingerprints. By design, Hershel requires a single packet to obtain a fingerprint, which is even less bandwidth-intensive than the 9 packets sent by LFP, and much less intrusive than the multitude of packets sent by Nmap. Our analysis of the test sample shows that Hershel has an overall coverage of approximately 50%. Furthermore, we find that Hershel is only able to identify the target vendor with less than 1% accuracy for our top 3 vendors. This suggests that Hershel--while it may perform well for servers--is not a suitable tool for router fingerprinting. Additionally, we observe that Hershel often identifies Linux-based systems (such as Mikrotik) simply as Linux machines. This is due to the limited number of signatures for router vendors in the Hershel fingerprinting database. In summary, LFP achieves a balance between coverage and accuracy while also having a low network footprint.
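To make the packet budget concrete, the sketch below assumes Scapy and raw-socket privileges; the target address and probe ports are illustrative, and the real LFP feature set is richer than what is collected here:

```python
# A minimal probing sketch, assuming Scapy and root privileges for raw sockets.
# It sends 3 ICMP echo, 3 TCP SYN, and 3 UDP probes per target and records the
# reply TTL and IPID, two of the features a signature is built from.
from scapy.all import IP, ICMP, TCP, UDP, sr1

def probe(target: str, timeout: float = 2.0) -> dict:
    probes = {
        "icmp": IP(dst=target) / ICMP(),                   # echo request
        "tcp": IP(dst=target) / TCP(dport=80, flags="S"),  # SYN probe
        "udp": IP(dst=target) / UDP(dport=33434),          # traceroute-style port
    }
    features = {}
    for proto, pkt in probes.items():
        replies = []
        for _ in range(3):  # three probes per protocol, nine packets in total
            resp = sr1(pkt, timeout=timeout, verbose=False)
            if resp is not None:
                replies.append((resp.ttl, resp.id))
        features[proto] = replies
    return features

print(probe("192.0.2.1"))  # TEST-NET-1 placeholder; use a measurement target
```

The initial TTL can then be recovered by rounding the observed reply TTL up to the nearest common default (32, 64, 128, 255), and the IPID values across the three replies indicate the counter behavior.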
### Family-level Fingerprinting

After discovering that many vendors have not a single signature but multiple ones, we investigate whether these signatures can be linked to different router models or families. To test this hypothesis, we collect a sample of 400 Cisco IPs running SNMPv2c and query for the sysDescr OID (Deng et al., 2018). This provides a small ground-truth sample with fingerprinting information beyond the vendor. Next, we run LFP against these targets and collect their signatures. The results show that the collected signatures belong to the top 13 most common Cisco signatures, which cover over 96% of the labeled Cisco data. Additionally, we identify signatures unique to three different IOS families (3 XR, 3 NX, and 7 IOS signatures), which are not shared with the other versions. However, due to the limited ground-truth dataset, it is not possible to evaluate the accuracy of these results in detail, and we leave this task for future work. Overall, the sample data supports the assumption that different signatures can be linked to specific router models or families, which can lead to more fine-grained router fingerprinting.

Table 7. Comparing coverage and accuracy of LFP and Nmap for Censys-labeled data.

| Vendor | Coverage (LFP) | Coverage (Nmap) | Accuracy (LFP) | Accuracy (Nmap) |
| --- | --- | --- | --- | --- |
| Cisco | 40% | 10% | 95% | 84% |
| Juniper | 81% | 31% | 99% | 98% |
| Huawei | 49% | 20% | 55% | 50% |
| Ericsson | 93% | 6% | 80% | 0% |
| Mikrotik | 83% | 15% | 10% | 5% |
| Alcatel | 38% | 11% | 48% | 16% |

Figure 19. Routers to vendors: SNMPv3 vs. LFP for the ITDK dataset.

Figure 20. Sent and received packets from Nmap.

### New Insights on Router Deployment

Using the collected router fingerprints, we next conduct a comprehensive analysis of the global router vendor distribution by comparing our findings with a similar study (Beng et al., 2017). Specifically, we utilize LFP to identify the vendor of routers and examine the global distribution of these vendors. Our analysis provides a detailed overview of the global router vendor landscape. For our analysis, we focus on the ITDK dataset (see Table 3). Recall that this dataset has information about all the interfaces (IPs) associated with the same router via alias resolution. LFP can identify unique-signature routers in 6,743 ASes, compared to 4,929 ASes with the SNMPv3 method. Thus, not only can LFP identify more than double the number of router IPs (see the previous section), but it also identifies routers in 1,814 additional ASes (+36.8%). This is a substantial contribution of LFP, as it sheds light on previous blind spots in the Internet and contributes to a better estimation of the global router vendor distribution. In Appendix A we demonstrate the efficacy of utilizing LFP to enhance router coverage in a network. Our findings reveal that LFP can identify more than twice the number of routers in large networks, thereby substantially improving coverage, and that it enables a more detailed analysis of router homogeneity.

## 8. Discussion

**Obfuscating remote router vendor fingerprinting:** Our analysis shows that it is possible to hide from remote router fingerprinting. The obvious way is to drop UDP and TCP traffic, especially from non-whitelisted sources. But even if UDP and TCP traffic is not dropped, it is still possible to create rare signatures that are more difficult to map to a specific vendor.
It is also possible to configure a router so that it confuses the classification algorithm (similar to an adversarial attack on classifiers). Some of the features are difficult to change, e.g., the ICMP, TCP, or UDP IPID counters, if they can be configured at all, since they might be implemented directly in the router OS. However, it is easier to change default iTTL values. In Table 6, we present two sample unique signatures, for Juniper (top) and for Cisco (bottom). By changing the default value of the ICMP iTTL (see Table 1 for details) of the Juniper routers from 64 to 255, LFP would misclassify the Juniper router as a Cisco router; a toy demonstration of this effect is given at the end of this section.

**Using additional sources of information for fingerprinting:** Our methodology relies solely on network characteristics and active probing. Other techniques utilize other sources of information, e.g., banners, which offer good coverage (Zhu et al., 2017). Banner data analysis requires the development of heuristics. One of the benefits of using a simple rule-based approach such as LFP, compared to machine learning (ML) techniques, is that it is clear why certain decisions are being made, whereas ML techniques usually suffer from a lack of explainability. Furthermore, complex ML models in networking can suffer from deficits such as shortcut learning, spurious correlations, and vulnerability to out-of-distribution samples (Zhu et al., 2017). Future work should explore the possibility of using explainable ML models for router fingerprinting. Moreover, raw banner data is less accessible and typically proprietary, coming with commercial or limited academic licenses. Nevertheless, banner data analysis can complement our technique and improve fingerprinting coverage and granularity. As part of our future work, we plan to fuse our data with banner data and assess the benefit of using additional information sources for router fingerprinting, especially for vendors with non-unique signatures, and hopefully for finer-grained fingerprinting, e.g., model-level fingerprinting.

**Non-Unique Signatures:** While we only utilize unique signatures in this study, non-unique signatures can offer insights into router deployments. This is particularly relevant when a single vendor dominates a non-unique signature with thousands of instances. Additionally, utilizing non-unique signatures can increase LFP coverage to 64% in the RIPE-5 dataset. We explore the precision and recall of non-unique signatures in Appendix B and intend to investigate additional features to enhance the uniqueness of such signatures in future research.

**Integrating LFP into Nmap:** We also plan to investigate how the insights gained by our study can be transferred and integrated into Nmap (Zhu et al., 2017). Our analysis shows that LFP can achieve better accuracy with ten packets (including the SNMPv3 request) than the default Nmap OS detection mode, which sends up to thousands of probe packets. At least in the case of router fingerprinting, LFP has proven to be more scalable, less intrusive, and more accurate. We are already developing an Nmap variation that will replicate our experiment, and we will share it with the Nmap community to get feedback and comments. This way, we can improve our methodology and enable more researchers and engineers to use our technique.
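Returning to the obfuscation example around Table 6, the toy matcher below uses hypothetical, heavily truncated signature tuples (real signatures span all features of Table 1) to show how flipping a single default value relabels a device:

```python
# A toy rule-based matcher with hypothetical, truncated signatures: only the
# ICMP iTTL differs between the two entries, mirroring the Table 6 example.
SIGNATURES = {
    ("icmp_ittl=64", "tcp_ittl=64", "ipid=counter"): "Juniper",   # illustrative
    ("icmp_ittl=255", "tcp_ittl=64", "ipid=counter"): "Cisco",    # illustrative
}

def classify(signature: tuple) -> str:
    return SIGNATURES.get(signature, "unknown")

print(classify(("icmp_ittl=64", "tcp_ittl=64", "ipid=counter")))   # -> Juniper
# Raising the ICMP iTTL default from 64 to 255 flips the inferred vendor:
print(classify(("icmp_ittl=255", "tcp_ittl=64", "ipid=counter")))  # -> Cisco
```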
**Longitudinal analysis:** As part of our future research agenda, we would like to investigate how we can use our classification methodology and our collected data to perform a large-scale longitudinal analysis of vendor changes over time, vendor changes for a network, or vendor changes per router interface IP. So far, we have collected data that spans more than six months, but the real potential of our technique will be unveiled by collecting data that spans multiple years. We plan to make the tools and data publicly available to the research community and report on our results. We also plan to investigate how geopolitical events, economic changes, security incidents, and vendor strategies may influence the distribution of routers by different vendors across different time scales and geographical regions.

## 9. Conclusion

In this paper, we have shown that only 10 packet probes per router IP are enough to accurately fingerprint up to 64% of routers in the IPv4 Internet. We developed and evaluated LFP, a lightweight fingerprinting technique that sends three probe packets for each of three transport protocols, namely ICMP, TCP, and UDP. By augmenting our traces with labeled router data that relies on SNMPv3 responses, we generated around 90 unique signatures that can accurately identify all major router vendors. To our surprise, more than half of the routers replied to our probe packets. The vast majority of the responsive routers (more than 82%) can be assigned to a single vendor using our classification. Our results showed that, compared to the state of the art, we more than doubled the coverage of routers that we can remotely fingerprint, and we inferred the router vendor more accurately than popular tools like Nmap. All of this is achieved with orders of magnitude fewer probing packets than required by Nmap. Thus, our mechanism is more scalable, less intrusive, and does not rely on external and proprietary data like banner grabs. Our classification provides valuable insights into the deployment of routers within networks and regions, and into the router vendor equipment on a given path. Thus, it can be used to inform routing decisions, to assess router deployment strategies, to analyze hardware manufacturer market share, and to help estimate the potential impact of router vulnerabilities in a network or a region. Finally, to enable further research in the area, we plan to make our LFP tool publicly available.

## Acknowledgements

This work was supported in part by the European Research Council (ERC) under Starting Grant ResolutionNet (ERC-StG-679158).
2309.06108
Baxter Q-operators in Ruijsenaars-Sutherland hyperbolic systems: one- and two-particle cases
In these notes we review the technique of Baxter Q-operators in the Ruijsenaars-Sutherland hyperbolic systems in the cases of one and two particles. Using these operators we show in particular that eigenfunctions of these systems admit two dual integral representations and prove their orthogonality and completeness.
N. Belousov, S. Derkachov, S. Kharchev, S. Khoroshkin
2023-09-12T10:22:33Z
http://arxiv.org/abs/2309.06108v1
###### Abstract

In these notes we review the technique of Baxter \(Q\)-operators in the Ruijsenaars-Sutherland hyperbolic systems in the cases of one and two particles. Using these operators we show in particular that eigenfunctions of these systems admit two dual integral representations and prove their orthogonality and completeness.

**Baxter \(Q\)-operators in Ruijsenaars-Sutherland hyperbolic systems: one- and two-particle cases**

**N. Belousov\({}^{\dagger}\), S. Derkachov\({}^{\dagger}\), S. Kharchev\({}^{\times*}\), S. Khoroshkin\({}^{\circ*}\)**

\({}^{\dagger}\)_Steklov Mathematical Institute, Fontanka 27, St. Petersburg, 191023, Russia;_ \({}^{\times}\)_National Research Center "Kurchatov Institute", Kurchatov Square 1, Moscow, 123182, Russia;_ \({}^{\circ}\)_National Research University Higher School of Economics, Myasnitskaya 20, Moscow, 101000, Russia;_ \({}^{*}\)_Institute for Information Transmission Problems RAS (Kharkevich Institute), Bolshoy Karetny per. 19, Moscow, 127994, Russia_

###### Contents

* 1 Introduction
* 2 Case \(n=1\)
  * 2.1 Definitions of \(Q\)-operators and commutativity
  * 2.2 Eigenfunctions and beta integrals
  * 2.3 Reductions
    * 2.3.1 Commutation relations between \(Q\)-operators
    * 2.3.2 \(Q\Lambda\)-commutation relations
  * 2.4 Regularization and the scalar product
    * 2.4.1 \(Q\)-operator regularization
    * 2.4.2 \(\hat{Q}\)-operator regularization
* 3 Case \(n=2\)
  * 3.1 \(Q\)-operators and \(\Lambda\)-operators as integral operators
  * 3.2 \(Q\Lambda\)-commutation relations
  * 3.3 Eigenfunctions
    * 3.3.1 Mellin transform
    * 3.3.2 Equivalence through dual \(Q\)-operators
  * 3.4 Scalar product
    * 3.4.1 Standard quantum mechanical calculation
    * 3.4.2 Eigenfunctions of the operator \(Q(\lambda)\)
    * 3.4.3 Eigenfunctions of the operator \(\hat{Q}(x)\)
    * 3.4.4 Eigenfunctions of the operators \(Q(\lambda|\boldsymbol{\omega})\) and \(\hat{Q}(x|\boldsymbol{\omega})\)
    * 3.4.5 Orthogonality and completeness
* A The double sine function
  * A.1 Asymptotics with respect to periods
  * A.2 Beta integrals
  * A.3 Asymptotics for large argument
* B Commutativity of \(Q\)-operators
  * B.1 \(n=1\) and arbitrary \(g\)
  * B.2 \(g=1\) and arbitrary \(n\)
* C Delta-sequence

## 1 Introduction

**1.** In recent years, there has been significant progress in the study of hyperbolic Ruijsenaars-Sutherland quantum integrable models. The study of these hyperbolic systems has gone along a path different from that used in the compact trigonometric versions of the corresponding models, where the main role is played by the theory of Macdonald polynomials and of the double affine Hecke algebras of I. Cherednik. Instead, the cornerstone of constructions in hyperbolic systems is the so-called _kernel function_ by S. Ruijsenaars, first introduced and studied in [R2]. The kernel function is factorized into a product of two-point functions and can be regarded as a counterpart of the reproducing kernel in the theory of Macdonald polynomials [NS1, NS2]. With its use M. Hallnas and S.
Ruijsenaars constructed integral representations for the wave functions of the hyperbolic Ruijsenaars-Sutherland models [HR1, HR2]. In our recent papers [BDKK1, BDKK2] we considered the Ruijsenaars system and used the kernel function to construct a commuting family of integral operators called _Baxter \(Q\)-operators_. With these operators at hand we obtained another (dual) integral representation of the wave function, given by integrals over spectral parameters. We also proved that the wave function is symmetric with respect to both space and spectral variables and solves a bispectral problem, that is, it is also an eigenfunction of difference operators acting on spectral variables. As shown in [BDKK2], all these properties stem from the commutativity of \(Q\)-operators, which was proven in [BDKK1]. In a certain limit the Ruijsenaars model degenerates to the Sutherland model, and the whole construction can be repeated for it, as we will show in our future work. The goal of these notes is to demonstrate the basic ideas of the \(Q\)-operator technique in hyperbolic Ruijsenaars-Sutherland models in the cases of one and two particles, where many subtle points can already be seen.

**2.** Let us first demonstrate how \(Q\)-operators appear in these models. Consider the hyperbolic Sutherland model with two particles. The Hamiltonian is given by \[H_{S}=-\partial_{x_{1}}^{2}-\partial_{x_{2}}^{2}+\frac{2g(g-1)}{\sinh^{2}(x_{1}-x_{2})}.\] Performing the similarity transformation \[H=\sinh^{g}|x_{1}-x_{2}|\cdot H_{S}\cdot\frac{1}{\sinh^{g}|x_{1}-x_{2}|}\] we arrive at the operator \[H=-\partial_{x_{1}}^{2}-\partial_{x_{2}}^{2}-2g\coth(x_{1}-x_{2})(\partial_{x_{1}}-\partial_{x_{2}})-2g^{2}. \tag{1.1}\] It commutes with the "full momentum" operator \[P=-i\partial_{x_{1}}-i\partial_{x_{2}}.\] With its use one can reduce the spectral problem for the operator (1.1) to the solution of an ordinary differential equation, which coincides with the classical hypergeometric equation in the independent variable \(t=e^{2(x_{1}-x_{2})}\). Its solution can be represented by means of the Barnes type integral \[\begin{split}\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})&=\frac{2^{2g-3}}{\pi\Gamma^{2}(g)}\int_{\mathbb{R}}d\gamma\,\Gamma\Big{(}\frac{i\lambda_{1}-i\gamma+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\gamma-i\lambda_{1}+g}{2}\Big{)}\\ &\times\Gamma\Big{(}\frac{i\lambda_{2}-i\gamma+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\gamma-i\lambda_{2}+g}{2}\Big{)}\,e^{i(\lambda_{1}+\lambda_{2}-\gamma)x_{2}}\,e^{i\gamma x_{1}}\end{split} \tag{1.2}\] so that \[\begin{split} P\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})&=(\lambda_{1}+\lambda_{2})\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}),\\ H\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})&=(\lambda_{1}^{2}+\lambda_{2}^{2})\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}).\end{split}\] The normalization constant in front of the integral is chosen for later convenience. The solution of the hypergeometric equation also admits an Euler beta integral representation. To find it, apply the Mellin transform to the integral (1.2) (see Section 3.3.1); then (1.2) takes the following form \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\int_{\mathbb{R}}dy\ \text{ch}^{-g}(x_{1}-y)\ \text{ch}^{-g}(x_{2}-y)\,e^{i\lambda_{2}(x_{1}+x_{2}-y)}\,e^{i\lambda_{1}y}.
\tag{1.3}\] Clearly, this integral representation is symmetric with respect to \(x_{j}\), and the first one (1.2) is symmetric with respect to \(\lambda_{j}\) \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Psi_{\lambda_{1},\lambda_{2}}(x_{2},x_{1})=\Psi_{\lambda_{2},\lambda_{1}}(x_{1},x_{2}).\] Besides, one can check using the first representation (1.2) that \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\) as a function of \(\lambda_{1},\lambda_{2}\) solves another (dual) spectral problem \[\begin{split}\mathcal{P}\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})&=e^{2x_{1}+2x_{2}}\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}),\\ \mathcal{H}\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})&=\left(e^{2x_{1}}+e^{2x_{2}}\right)\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\end{split}\] with the operators \[\begin{split}\mathcal{P}&=e^{-2i\partial_{\lambda_{1}}-2i\partial_{\lambda_{2}}},\\ \mathcal{H}&=\frac{\lambda_{1}-\lambda_{2}+2i(g-1)}{\lambda_{2}-\lambda_{1}}e^{-2i\partial_{\lambda_{1}}}+\frac{\lambda_{2}-\lambda_{1}+2i(g-1)}{\lambda_{1}-\lambda_{2}}e^{-2i\partial_{\lambda_{2}}}.\end{split} \tag{1.4}\] Now consider the asymptotic behavior of the eigenfunction \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\) as \(x_{2}-x_{1}\to\infty\). Due to the structure of the Hamiltonian (1.1) it should be given by a combination of plane waves. Its precise form can be obtained from the integral representation (1.2) by residue evaluation in the lower half-plane. The leading behavior of the residue series in the limit \(x_{2}-x_{1}\to\infty\) is given by the poles closest to the integration contour \[\begin{split}&\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=2^{2g-1}\Gamma^{-1}(g)\,e^{-g(x_{2}-x_{1})}\\ &\times\Bigg{[}\Gamma\Big{(}\frac{i\lambda_{2}-i\lambda_{1}}{2}\Big{)}\,\Gamma\Big{(}\frac{i\lambda_{1}-i\lambda_{2}}{2}+g\Big{)}\,e^{i\lambda_{1}x_{1}+i\lambda_{2}x_{2}}\\ &+\Gamma\Big{(}\frac{i\lambda_{1}-i\lambda_{2}}{2}\Big{)}\,\Gamma\Big{(}\frac{i\lambda_{2}-i\lambda_{1}}{2}+g\Big{)}\,e^{i\lambda_{2}x_{1}+i\lambda_{1}x_{2}}\,\Bigg{]}+O\big{(}e^{-(g+2)(x_{2}-x_{1})}\big{)}.\end{split} \tag{1.5}\] In order to separate a single plane wave in the asymptotics we add an imaginary part \(-i\varepsilon\) with \(\varepsilon>0\) to the variable \(\lambda_{2}\) and consider the asymptotic behavior as \(x_{2}\to\infty\) \[\begin{split}\Psi_{\lambda_{1},\lambda_{2}-i\varepsilon}(x_{1},x_{2})&=2^{2g-1}\Gamma^{-1}(g)\,\Gamma\Big{(}\frac{i\lambda_{2}-i\lambda_{1}+\varepsilon}{2}\Big{)}\,\Gamma\Big{(}\frac{i\lambda_{1}-i\lambda_{2}-\varepsilon}{2}+g\Big{)}\\ &\times e^{i(\lambda_{1}-ig)x_{1}+i(\lambda_{2}-i\varepsilon+ig)x_{2}}+O\big{(}e^{-gx_{2}}\big{)}.\end{split} \tag{1.6}\] On the other hand, the same asymptotics of the integral (1.3) is given by \[2^{g}\,e^{gx_{1}+i(\lambda_{2}-i\varepsilon+ig)x_{2}}\int_{\mathbb{R}}dy\ \mathrm{ch}^{-g}(x_{1}-y)\,e^{i(\lambda_{2}-i\varepsilon+ig)(x_{1}-y)}\,e^{i\lambda_{1}y} \tag{1.7}\] since \[\mathrm{ch}^{-g}(x_{2}-y)\sim 2^{g}\,e^{-g(x_{2}-y)},\qquad x_{2}\to\infty.\] The comparison of (1.6) and (1.7) yields the integral identity \[\int_{\mathbb{R}}dy\ \mathrm{ch}^{-g}(x_{1}-y)\,e^{i\lambda_{2}^{\prime}(x_{1}-y)}e^{i\lambda_{1}y}\\ =2^{g-1}\Gamma^{-1}(g)\,\Gamma\Big{(}\frac{i\lambda_{2}^{\prime}-i\lambda_{1}+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\lambda_{1}-i\lambda_{2}^{\prime}+g}{2}\Big{)}e^{i\lambda_{1}x_{1}} \tag{1.8}\] with \(\lambda_{2}^{\prime}=\lambda_{2}+ig-i\varepsilon\). It is equivalent to the classical beta integral.
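The two representations can also be compared numerically. Below is a small sanity-check sketch (assuming SciPy; the parameter values are arbitrary test points, not taken from the text) evaluating the Barnes-type integral (1.2) and the beta-type integral (1.3) side by side:

```python
# A numerical sanity check, assuming SciPy, that the Barnes-type integral (1.2)
# and the beta-type integral (1.3) give the same wave function value.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

g, l1, l2, x1, x2 = 1.3, 0.7, -0.4, 0.2, -0.5  # arbitrary test parameters

def cquad(f, a, b):
    """quad for complex integrands: integrate real and imaginary parts."""
    re, _ = quad(lambda t: f(t).real, a, b)
    im, _ = quad(lambda t: f(t).imag, a, b)
    return re + 1j * im

# Representation (1.3): integral over the coordinate-type variable y.
psi_beta = cquad(
    lambda y: np.cosh(x1 - y) ** -g * np.cosh(x2 - y) ** -g
    * np.exp(1j * l2 * (x1 + x2 - y) + 1j * l1 * y),
    -np.inf, np.inf)

# Representation (1.2): Barnes-type integral over the spectral variable gamma.
def barnes(c):
    return (gamma((1j * l1 - 1j * c + g) / 2) * gamma((1j * c - 1j * l1 + g) / 2)
            * gamma((1j * l2 - 1j * c + g) / 2) * gamma((1j * c - 1j * l2 + g) / 2)
            * np.exp(1j * (l1 + l2 - c) * x2 + 1j * c * x1))

psi_barnes = 2 ** (2 * g - 3) / (np.pi * gamma(g) ** 2) * cquad(barnes, -np.inf, np.inf)

print(psi_beta, psi_barnes)  # the two values should agree to quadrature accuracy
```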
The integrals (1.3) and (1.8) have a similar structure. We can interpret both of them in terms of integral operators. The relation (1.3) says that the two-particle wave function \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\) is obtained from the one-particle function \(\Psi_{\lambda_{1}}(x_{1})=e^{i\lambda_{1}x_{1}}\) by means of the integral operator \[\big{[}\Lambda(\lambda)f\big{]}(x_{1},x_{2})=\int_{\mathbb{R}}dy\ \mbox{ch}^{-g}(x_{1}-y)\ \mbox{ch}^{-g}(x_{2}-y)\,e^{i\lambda(x_{1}+x_{2}-y)}\,f(y),\] which we call the _raising operator_. The relation (1.8) says that the one-particle wave function \(\Psi_{\lambda_{1}}(x_{1})=e^{i\lambda_{1}x_{1}}\) is an eigenfunction of the integral operator \[\big{[}Q(\lambda)f\big{]}(x_{1})=\int_{\mathbb{R}}dy\ \mbox{ch}^{-g}(x_{1}-y)\,e^{i\lambda(x_{1}-y)}\,f(y),\] which we call the _Baxter \(Q\)-operator_, with the eigenvalue \[q(\lambda,\lambda_{1})=2^{g-1}\Gamma^{-1}(g)\,\Gamma\Big{(}\frac{i\lambda-i\lambda_{1}+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\lambda_{1}-i\lambda+g}{2}\Big{)}. \tag{1.9}\] Analogously, the integral (1.2) says that the two-particle wave function \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\) is obtained from the one-particle function \(\Psi_{\lambda_{1}}(x_{1})=e^{i\lambda_{1}x_{1}}\) by means of the _dual_ raising integral operator \[\begin{split}\big{[}\hat{\Lambda}(x)f\big{]}(\lambda_{1},\lambda_{2})&=\frac{2^{2g-3}}{\pi\Gamma^{2}(g)}\int_{\mathbb{R}}d\gamma\,\Gamma\Big{(}\frac{i\lambda_{1}-i\gamma+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\gamma-i\lambda_{1}+g}{2}\Big{)}\\ &\times\Gamma\Big{(}\frac{i\lambda_{2}-i\gamma+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\gamma-i\lambda_{2}+g}{2}\Big{)}\,e^{i(\lambda_{1}+\lambda_{2}-\gamma)x}\,f(\gamma).\end{split} \tag{1.10}\] Due to the structure of the dual Hamiltonian (1.4) we can similarly consider the asymptotic behavior of the wave function as \(\lambda_{2}\to\infty\) (with the help of the Stirling formula for the gamma function). Then it gives the dual \(Q\)-operator \[\big{[}\hat{Q}(x)f\big{]}(\lambda_{1})=\frac{2^{g-2}}{\pi\Gamma(g)}\int_{\mathbb{R}}d\gamma\,\Gamma\Big{(}\frac{i\lambda_{1}-i\gamma+g}{2}\Big{)}\,\Gamma\Big{(}\frac{i\gamma-i\lambda_{1}+g}{2}\Big{)}\,e^{i(\lambda_{1}-\gamma)x}\,f(\gamma). \tag{1.11}\] Again its eigenfunctions are the one-particle wave functions \(\Psi_{\lambda_{1}}(x_{1})=e^{i\lambda_{1}x_{1}}\) with the eigenvalue \[\hat{q}(x,x_{1})=\mbox{ch}^{-g}(x-x_{1}). \tag{1.12}\] The last statement amounts to the Fourier transform of the identity (1.8). Remarkably, these observations generalize to the \(n\)-particle case.
In the work [HR2] Hallnas and Ruijsenaars showed that the \(n\)-particle eigenfunction of the Hamiltonian \[H_{n}=-\sum_{j=1}^{n}\partial_{x_{j}}^{2}-2g\sum_{\begin{subarray}{c}j,k=1\\ j<k\end{subarray}}^{n}\coth(x_{j}-x_{k})(\partial_{x_{j}}-\partial_{x_{k}}) \tag{1.13}\] can be obtained from the \((n-1)\)-particle function \[\Psi_{\lambda_{1},\ldots,\lambda_{n}}(x_{1},\ldots,x_{n})=\Lambda_{n}(\lambda_{n})\,\Psi_{\lambda_{1},\ldots,\lambda_{n-1}}(x_{1},\ldots,x_{n-1})\] using the integral operator \[\big{[}\Lambda_{n}(\lambda)f\big{]}(x_{1},\ldots,x_{n})=\int_{\mathbb{R}^{n-1}}dy_{1}\ldots dy_{n-1}\prod_{\begin{subarray}{c}j,k=1\\ j<k\end{subarray}}^{n-1}\operatorname{sh}^{2g}|y_{j}-y_{k}|\\ \times e^{i\lambda\big{(}\sum\limits_{j=1}^{n}x_{j}-\sum\limits_{j=1}^{n-1}y_{j}\big{)}}\prod_{j=1}^{n}\prod_{k=1}^{n-1}\operatorname{ch}^{-g}(x_{j}-y_{k})\,f(y_{1},\ldots,y_{n-1}).\] As before, considering its asymptotic behavior as \(x_{n}\to\infty\) we arrive at the integral operator \[\big{[}Q_{n-1}(\lambda)f\big{]}(x_{1},\ldots,x_{n-1})=\int_{\mathbb{R}^{n-1}}dy_{1}\ldots dy_{n-1}\prod_{\begin{subarray}{c}j,k=1\\ j<k\end{subarray}}^{n-1}\operatorname{sh}^{2g}|y_{j}-y_{k}|\\ \times e^{i\lambda\big{(}\sum\limits_{j=1}^{n-1}x_{j}-\sum\limits_{j=1}^{n-1}y_{j}\big{)}}\prod_{j,k=1}^{n-1}\operatorname{ch}^{-g}(x_{j}-y_{k})\,f(y_{1},\ldots,y_{n-1}).\] Due to the structure of the Hamiltonian (1.13) the \((n-1)\)-particle wave functions should be its eigenfunctions \[Q_{n-1}(\lambda)\,\Psi_{\lambda_{1},\ldots,\lambda_{n-1}}(x_{1},\ldots,x_{n-1})\\ =q(\lambda,\lambda_{1},\ldots,\lambda_{n-1})\,\Psi_{\lambda_{1},\ldots,\lambda_{n-1}}(x_{1},\ldots,x_{n-1}). \tag{1.14}\] The same holds for the dual integral representation [KK] and the dual operators \(\hat{\Lambda}_{n}\), \(\hat{Q}_{n-1}\) generalizing (1.10), (1.11). The equivalence of the two integral representations in the case of \(n\) particles is a nontrivial fact, but assuming it and comparing their asymptotics we notice that the \(Q\)-operator's eigenvalue should factorize into the functions (1.9) \[q(\lambda,\lambda_{1},\dots,\lambda_{n-1})=(n-1)!\prod_{j=1}^{n-1}q(\lambda,\lambda_{j}). \tag{1.15}\] This can also be guessed from the general principle of \(S\)-matrix factorization in the theory of integrable models. The proof of the \(Q\)-operator diagonalization (1.14) in the case of the \(n\)-particle Sutherland model will be given in our future work. In these notes we trace the path for the first nontrivial case -- the operator \(Q_{2}(\lambda)\). The key ingredient of the proof is the commutativity of \(Q\)-operators \[Q_{n}(\lambda)\,Q_{n}(\rho)=Q_{n}(\rho)\,Q_{n}(\lambda).\] In a certain limit this identity degenerates to an exchange relation between \(Q\)- and \(\Lambda\)-operators, which in turn implies the statement (1.14) together with the factorization (1.15). One more remarkable feature of the above calculation is the form of the eigenvalues (1.9) and (1.12) of the operators \(Q(\lambda)\) and \(\hat{Q}(x)\): the eigenvalue (1.9) is a constituent of the kernel of the dual \(Q\)-operator \(\hat{Q}(x)\), and the eigenvalue (1.12) is a constituent of the kernel of \(Q(\lambda)\). Using this property one can write down one more integral representation for the wave function \[\Psi_{\lambda_{1},\dots,\lambda_{n}}(x_{1},\dots,x_{n})=e^{i\lambda_{n}x_{n}}\,Q_{n-1}(\lambda_{n})\,\hat{Q}_{n-1}(x_{n})\,\Psi_{\lambda_{1},\dots,\lambda_{n-1}}(x_{1},\dots,x_{n-1})\] and prove the equality of the two integral representations mentioned above, found in [11, 12].
**3.** The described program is implemented for the Ruijsenaars hyperbolic system in [1]. This system is governed by the commuting difference operators \[M_{k}=\sum_{\begin{subarray}{c}A\subset\{1,\dots,n\}\\ |A|=k\end{subarray}}\prod_{\begin{subarray}{c}a\in A\\ b\notin A\end{subarray}}\frac{\operatorname{sh}\frac{\pi}{\omega_{1}}\,(x_{a}-x_{b}-ig)}{\operatorname{sh}\frac{\pi}{\omega_{1}}\,(x_{a}-x_{b})}\prod_{a\in A}e^{-i\omega_{2}\partial_{x_{a}}},\qquad k=1,\dots,n. \tag{1.16}\] Here and in what follows we assume that \(g,\omega_{1},\omega_{2}\) are positive constants such that \[0<g<\omega_{1}+\omega_{2}.\] In [BDKK2] a more general case of complex constants is considered. The Sutherland system can be obtained after the scaling \(g\to g\omega_{2}\) in the limit \(\omega_{2}\to 0\). Here is a short list of some results from [HR1, BDKK1, BDKK2]. The building blocks of the construction are the measure function \(\mu_{g}(\boldsymbol{x}_{n})\) and the kernel function \(K_{g}(\boldsymbol{x}_{n},\boldsymbol{y}_{m})\), which are defined for tuples \[\boldsymbol{x}_{n}=(x_{1},\ldots,x_{n}),\qquad\boldsymbol{y}_{m}=(y_{1},\ldots,y_{m})\] via the products \[\mu_{g}(\boldsymbol{x}_{n})=\prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\mu_{g}(x_{i}-x_{j}),\qquad K_{g}(\boldsymbol{x}_{n},\boldsymbol{y}_{m})=\prod_{i=1}^{n}\prod_{j=1}^{m}K_{g}(x_{i}-y_{j})\] where \[\begin{array}{l}\mu_{g}(x)=S_{2}(ix|\boldsymbol{\omega})S_{2}(g-ix|\boldsymbol{\omega}),\\ K_{g}(x)=S_{2}^{-1}\Big{(}\frac{g}{2}+ix\Big{|}\boldsymbol{\omega}\Big{)}S_{2}^{-1}\Big{(}\frac{g}{2}-ix\Big{|}\boldsymbol{\omega}\Big{)}.\end{array} \tag{1.17}\] Here \(S_{2}(z|\boldsymbol{\omega})\) is the double sine function, see Appendix A. Denote also \[g^{*}=\omega_{1}+\omega_{2}-g.\] In these notations the Baxter operator is the integral operator \[\big{[}Q_{n}(\lambda|\boldsymbol{\omega})f\big{]}(\boldsymbol{x}_{n})=d_{n}(g)\,\int_{\mathbb{R}^{n}}d\boldsymbol{y}_{n}\,Q(\boldsymbol{x}_{n},\boldsymbol{y}_{n};\lambda)f(\boldsymbol{y}_{n})\] with the kernel \[Q(\boldsymbol{x}_{n},\boldsymbol{y}_{n};\lambda)=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda(\underline{\boldsymbol{x}}_{n}-\underline{\boldsymbol{y}}_{n})}\,K_{g^{*}}(\boldsymbol{x}_{n},\boldsymbol{y}_{n})\,\mu_{g^{*}}(\boldsymbol{y}_{n})\] and the normalizing constant \[d_{n}(g)=\frac{1}{n!}\left[\sqrt{\omega_{1}\omega_{2}}S_{2}(g|\boldsymbol{\omega})\right]^{-n}.\] Here and below for a tuple \(\boldsymbol{x}_{n}=(x_{1},\ldots,x_{n})\) we use the notation \(\underline{\boldsymbol{x}}_{n}\) for the sum of its components \[\underline{\boldsymbol{x}}_{n}=x_{1}+\ldots+x_{n}.\] The raising operator \(\Lambda_{n}(\lambda)\) is a similar integral operator \[\big{[}\Lambda_{n}(\lambda|\boldsymbol{\omega})f\big{]}(\boldsymbol{x}_{n})=d_{n-1}(g)\int_{\mathbb{R}^{n-1}}d\boldsymbol{y}_{n-1}\,\Lambda(\boldsymbol{x}_{n},\boldsymbol{y}_{n-1};\lambda)f(\boldsymbol{y}_{n-1})\] with the kernel \[\Lambda(\boldsymbol{x}_{n},\boldsymbol{y}_{n-1};\lambda)=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda(\underline{\boldsymbol{x}}_{n}-\underline{\boldsymbol{y}}_{n-1})}\,K_{g^{*}}(\boldsymbol{x}_{n},\boldsymbol{y}_{n-1})\,\mu_{g^{*}}(\boldsymbol{y}_{n-1}).\] It was proved in [HR1] that the function \[\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega})=\Lambda_{n}(\lambda_{n}|\boldsymbol{\omega})\,\Lambda_{n-1}(\lambda_{n-1}|\boldsymbol{\omega})\cdots\Lambda_{2}(\lambda_{2}|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{1}x_{1}} \tag{1.18}\] is a joint eigenfunction of the
operators (1.16) \[M_{k}\,\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega})=e_{k}\big{(}e^{\frac{2\pi\lambda_{1}}{\omega_{1}}},\ldots,e^{\frac{2\pi\lambda_{n}}{\omega_{1}}}\big{)}\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega}),\qquad k=1,\ldots,n\] under the condition \(g<\omega_{1}\). Here \(e_{k}\) are the elementary symmetric functions. Furthermore, in [BDKK2] we showed that it is also an eigenfunction of the Baxter operators \(Q_{n}(\lambda|\boldsymbol{\omega})\) \[Q_{n}(\lambda|\boldsymbol{\omega})\,\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega})=\prod_{j=1}^{n}K_{g}(\lambda-\lambda_{j})\,\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega}).\] In a similar manner the dual Baxter and raising operators are the integral operators \[\big{[}\hat{Q}_{n}(x|\boldsymbol{\omega})f\big{]}(\boldsymbol{\lambda}_{n})=d_{n}(g^{*})\,\int_{\mathbb{R}^{n}}d\boldsymbol{\gamma}_{n}\,\hat{Q}(\boldsymbol{\lambda}_{n},\boldsymbol{\gamma}_{n};x)f(\boldsymbol{\gamma}_{n}),\] \[\big{[}\hat{\Lambda}_{n}(x|\boldsymbol{\omega})f\big{]}(\boldsymbol{\lambda}_{n})=d_{n-1}(g^{*})\int_{\mathbb{R}^{n-1}}d\boldsymbol{\gamma}_{n-1}\,\hat{\Lambda}(\boldsymbol{\lambda}_{n},\boldsymbol{\gamma}_{n-1};x)f(\boldsymbol{\gamma}_{n-1})\] with the kernels \[\hat{Q}(\boldsymbol{\lambda}_{n},\boldsymbol{\gamma}_{n};x)=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\underline{\boldsymbol{\lambda}}_{n}-\underline{\boldsymbol{\gamma}}_{n})}\,K_{g}(\boldsymbol{\lambda}_{n},\boldsymbol{\gamma}_{n})\,\mu_{g}(\boldsymbol{\gamma}_{n}),\] \[\hat{\Lambda}(\boldsymbol{\lambda}_{n},\boldsymbol{\gamma}_{n-1};x)=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\underline{\boldsymbol{\lambda}}_{n}-\underline{\boldsymbol{\gamma}}_{n-1})}K_{g}(\boldsymbol{\lambda}_{n},\boldsymbol{\gamma}_{n-1})\,\mu_{g}(\boldsymbol{\gamma}_{n-1}).\] The duality property established in [BDKK2] implies that the wave function admits, along with (1.18), another integral representation \[\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega})=\hat{\Lambda}_{n}(x_{n}|\boldsymbol{\omega})\,\hat{\Lambda}_{n-1}(x_{n-1}|\boldsymbol{\omega})\cdots\hat{\Lambda}_{2}(x_{2}|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{1}x_{1}},\] so that it solves the spectral problem for the dual Macdonald operators as well, and it is also an eigenfunction of the dual Baxter operators \[\hat{Q}_{n}(x|\boldsymbol{\omega})\,\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega})=\prod_{j=1}^{n}K_{g^{*}}(x-x_{j})\,\Psi_{\boldsymbol{\lambda}_{n}}(\boldsymbol{x}_{n}|\boldsymbol{\omega}).\] We remark that the operators defined here differ from the ones in [1, 2] by a rescaling of the spectral parameters \(\lambda_{j}\to\lambda_{j}/\omega_{1}\omega_{2}\). Such a rescaling simplifies formulas in the case of real constants \(\omega_{i}\) and, conversely, complicates matters in the case of complex \(\omega_{i}\) considered in [1, 2]. After the rescaling \(g\to g\omega_{2}\), in the limit \(\omega_{2}\to 0\) the Ruijsenaars system degenerates to the Sutherland system, so that the measure and kernel functions (1.17) turn into \[\mu(x)=\mbox{sh}^{g}\,|x|,\qquad\qquad\hat{\mu}(\lambda)=2^{1-g}\Gamma(g)\,\Gamma^{-1}\Big{(}\frac{i\lambda}{2}+g\Big{)}\Gamma^{-1}\Big{(}-\frac{i\lambda}{2}\Big{)},\] \[K(x)=\mbox{ch}^{-g}(x),\qquad\quad\hat{K}(\lambda)=2^{g-1}\Gamma^{-1}(g)\,\Gamma\Big{(}\frac{g+i\lambda}{2}\Big{)}\Gamma\Big{(}\frac{g-i\lambda}{2}\Big{)},\] see Section 2.3.
With these functions the Baxter and raising operators and their duals are defined by the same formulas. In our future work we will show that all the statements listed above remain valid for the Sutherland system as well.

**4.** The plan of the paper is as follows. In Section 2 we collect all required calculations for the \(n=1\) case. The basic integral identities for \(n=1\) are the Euler beta integral and its hyperbolic generalization. There are two possible points of view on the non-relativistic model. If one has all needed relativistic formulas, one can try to obtain their non-relativistic analogs by an appropriate reduction. On the other hand, one can try to work out all needed non-relativistic formulas independently. We test both points of view: we reduce the appearing operator identities to the classical integrals independently, and we also trace the degeneration of these identities from the Ruijsenaars to the Sutherland model. Although one-particle wave functions are just plane waves and their completeness and orthogonality are well known, in this section we present a proof suitable for generalization to \(n>1\). Section 3 is devoted to the more nontrivial case \(n=2\). Here the basic operator relations are given by integral identities which implicitly appeared in [10]. The presentation in the case \(n=2\) is close to the one given for \(n=1\). We prove the equivalence of the two integral representations for the eigenfunction in the non-relativistic case by different methods, one of which admits a natural generalization to arbitrary \(n\). The calculation of the scalar product between eigenfunctions of the operator \(Q_{2}(\lambda)\) is also performed by two methods. The first is a standard textbook method, yet we do not know its generalization to the case of arbitrary \(n\). For this reason we also present a calculation of the scalar product using the \(Q\)-operator, which works for all constructed eigenfunctions and admits a natural generalization to arbitrary \(n\). In the last subsection we discuss the completeness of the eigenfunctions. Due to the remarkable property of self-duality in the relativistic case, the completeness relation is in fact equivalent to the orthogonality relation up to the change \(g\rightleftarrows g^{*}\). In the non-relativistic case the completeness relation for the eigenfunctions of the \(Q\)-operator is equivalent to the orthogonality relation for the eigenfunctions of the dual \(\hat{Q}\)-operator.

## 2 Case \(n=1\)

In this section we demonstrate the main relations using the simplest example \(n=1\). The plan, step by step by subsections, is the following:

1. Description of all \(Q\)-operators as integral operators and proof of their commutativity.
2. Diagonalization of \(Q\)-operators in a straightforward way. The calculation of the eigenvalues is based on the beta integral and its generalization.
3. Reductions that relate different \(Q\)-operators. Derivation of the commutation relations between \(Q\)-operators and \(\Lambda\)-operators from the commutation relations between \(Q\)-operators.
4. Calculation of the scalar product for \(Q\)-operator eigenfunctions. For \(n=1\) it is reduced to the proof of the standard formula for the plane waves. For illustration we prove this formula using two nonstandard regularizations which are very useful in the general case.
### 2.1 Definitions of \(Q\)-operators and commutativity

We have three one-parametric families of commuting operators \[\begin{split} Q(\lambda)Q(\mu)&=Q(\mu)Q(\lambda),\\ \hat{Q}(x)\hat{Q}(y)&=\hat{Q}(y)\hat{Q}(x),\\ \hat{Q}(x|\boldsymbol{\omega})\hat{Q}(y|\boldsymbol{\omega})&=\hat{Q}(y|\boldsymbol{\omega})\hat{Q}(x|\boldsymbol{\omega}).\end{split} \tag{2.19}\] The \(Q\)-operator \(Q(\lambda)\) depends on the spectral parameter \(\lambda\) and acts on functions of the variable \(x\): it is the integral operator \[[Q(\lambda)\Psi](x)=\int\limits_{-\infty}^{+\infty}dy\,e^{i\lambda(x-y)}K(x-y)\,\Psi(y)\] with the kernel \(e^{i\lambda(x-y)}K(x-y)\). For the dual \(\hat{Q}\)-operator \(\hat{Q}(x)\) the roles of the variable \(x\) and the spectral parameter \(\lambda\) interchange: the variable \(x\) plays the role of the spectral parameter and the \(\hat{Q}\)-operator acts on functions of the variable \(\lambda\). It is the integral operator \[[\hat{Q}(x)\Psi](\lambda)=\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,\Psi(\gamma)\] with the kernel \(e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\). The explicit expressions for \(K(x)\) and \(\hat{K}(\lambda)\) are given by the formulas \[K(x)=\frac{1}{\text{ch}^{g}(x)},\qquad\hat{K}(\lambda)=\frac{\Gamma\left(\frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2}\right)}{2^{1-g}\Gamma(g)}\] and in explicit form we have \[[Q(\lambda)\Psi](x)=\int\limits_{-\infty}^{+\infty}dy\,\frac{e^{i\lambda(x-y)}}{\text{ch}^{g}(x-y)}\,\Psi(y)\,,\] \[[\hat{Q}(x)\Psi](\lambda)=\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix(\lambda-\gamma)}\frac{\Gamma\left(\frac{g+i(\lambda-\gamma)}{2}\right)\Gamma\left(\frac{g-i(\lambda-\gamma)}{2}\right)}{2^{1-g}\Gamma(g)}\,\Psi(\gamma)\,.\] On the higher relativistic level we have the integral operator, which we supply in this section with the index \(\boldsymbol{\omega}\) to recall its dependence on the periods \(\boldsymbol{\omega}=(\omega_{1},\omega_{2})\), \[[\hat{Q}(x|\boldsymbol{\omega})\Psi](\lambda)=\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\lambda-\gamma)}K_{g}(\lambda-\gamma)\,\Psi(\gamma)\] with the kernel \(e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\lambda-\gamma)}K_{g}(\lambda-\gamma)\) where \[K_{g}(\lambda)=\frac{S\left(\omega_{1}+\omega_{2}+i\lambda-\frac{g}{2}\right)}{S\left(i\lambda+\frac{g}{2}\right)}=\frac{1}{S\left(\frac{g}{2}+i\lambda\right)\,S\left(\frac{g}{2}-i\lambda\right)}.\] Here \(S(z):=S_{2}(z|\boldsymbol{\omega})\) is the double sine function, see its properties in Appendix A. In explicit form we have \[[\hat{Q}(x|\boldsymbol{\omega})\Psi](\lambda)=\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\lambda-\gamma)}\,\frac{S\left(\omega_{1}+\omega_{2}+i(\lambda-\gamma)-\frac{g}{2}\right)}{S\left(i(\lambda-\gamma)+\frac{g}{2}\right)}\,\Psi(\gamma).\] Relations (2.19) are proved uniformly in all cases. Let us consider for definiteness the first relation \(Q(\lambda)Q(\mu)=Q(\mu)Q(\lambda)\). We have to prove that the integral kernels of the operators on both sides of the equality coincide or, equivalently, that the integral kernel of the operator \(Q(\lambda)Q(\mu)\) is symmetric under the exchange \(\lambda\rightleftarrows\mu\). The corresponding kernel is given by the convolution of the kernels of the operators \(Q(\lambda)\) and \(Q(\mu)\).
The needed symmetry is proved by the change of integration variable \(s=z+x-t\) \[\int\limits_{-\infty}^{+\infty}ds\,e^{i\lambda(x-s)}\,K(x-s)\,e^{i\mu(s-z)}\,K(s-z)=\int\limits_{-\infty}^{+\infty}dt\,e^{i\lambda(t-z)}\,K(t-z)\,e^{i\mu(x-t)}\,K(x-t).\]

### 2.2 Eigenfunctions and beta integrals

All \(Q\)-operators commute with the translation operator \(T^{a}f(x)=f(x+a)\) or, equivalently, the integral kernels of the \(Q\)-operators depend only on the difference of coordinates. Eigenfunctions of the operator \(T^{a}\) are the usual plane waves, so that we expect the same for all \(Q\)-operators. This is indeed the case: \[Q(\lambda)\,e^{i\lambda_{1}x}=q(\lambda,\lambda_{1})\,e^{i\lambda_{1}x},\] \[\hat{Q}(x)\,e^{ix_{1}\lambda}=\hat{q}(x,x_{1})\,e^{ix_{1}\lambda},\] \[\hat{Q}(x|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1}\lambda}=\hat{q}(x,x_{1}|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1}\lambda},\] and the eigenvalues coincide with the Fourier transform of the corresponding integral kernel \[q(\lambda,\lambda_{1})=\hat{K}(\lambda-\lambda_{1}),\] \[\hat{q}(x,x_{1})=K(x-x_{1}),\] \[\hat{q}(x,x_{1}|\boldsymbol{\omega})=\sqrt{\omega_{1}\omega_{2}}\,S(g^{*})\,K_{g^{*}}(x-x_{1}).\] The proof is straightforward. Let us start from the operator \(Q(\lambda)\) \[Q(\lambda)\,e^{i\lambda_{1}x}=\int\limits_{-\infty}^{+\infty}dy\,e^{i\lambda(x-y)}\,K(x-y)\,e^{i\lambda_{1}y}=e^{i\lambda_{1}x}\int\limits_{-\infty}^{+\infty}dz\,e^{i(\lambda-\lambda_{1})z}\,K(z).\] After the change of variables \(y=x-z\) in the initial integral, the \(x\)-dependence factorizes out in the form \(e^{i\lambda_{1}x}\). The remaining integral does not depend on \(x\) and gives the explicit expression for the corresponding eigenvalue. The functions \(K(z)\) and \(\hat{K}(\lambda)\) are connected by the Fourier transformation \[\int\limits_{-\infty}^{+\infty}dz\,e^{i\lambda z}\,K(z)=\hat{K}(\lambda),\qquad\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi}\,e^{-i\lambda z}\,\hat{K}(\lambda)=K(z) \tag{2.20}\] so that one obtains \[Q(\lambda)\,e^{i\lambda_{1}x}=\hat{K}(\lambda-\lambda_{1})\,e^{i\lambda_{1}x}\,.\] The first relation in (2.20) in explicit form looks as follows \[\int\limits_{-\infty}^{+\infty}dz\,\frac{e^{i\lambda z}}{\mathrm{ch}^{g}(z)}=\frac{\Gamma\left(\frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2}\right)}{2^{1-g}\Gamma(g)}. \tag{2.21}\] It is equivalent to the Euler beta integral in the form \[\int_{0}^{\infty}\frac{x^{a}}{(1+x)^{a+b}}\frac{dx}{x}=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.\] Indeed, changing the variable \(x=e^{2z}\) in (2.21) we have \[\int\limits_{-\infty}^{+\infty}dz\,\frac{e^{i\lambda z}}{\mathrm{ch}^{g}(z)}=2^{g}\int\limits_{-\infty}^{+\infty}dz\,\frac{e^{i(\lambda-ig)z}}{(1+e^{2z})^{g}}=2^{g-1}\,\int\limits_{0}^{+\infty}\frac{dx}{x}\,\frac{x^{\frac{g+i\lambda}{2}}}{(1+x)^{g}}=\frac{\Gamma\left(\frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2}\right)}{2^{1-g}\Gamma(g)}.\] The second relation in (2.20) explicitly looks as follows \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi}\,e^{-i\lambda z}\,\frac{\Gamma\left(\frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2}\right)}{2^{1-g}\Gamma(g)}=\frac{1}{\mathrm{ch}^{g}(z)}. \tag{2.22}\] It is simply the formula of the inverse Fourier transformation.
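The beta integral (2.21) is easy to verify numerically. The following is a small sketch (assuming SciPy; \(g\) and \(\lambda\) are arbitrary test values) comparing the quadrature with the closed gamma-function form; since \(K(z)\) is even, the cosine transform suffices:

```python
# A numerical check, assuming SciPy, of the beta integral (2.21): the Fourier
# transform of ch^{-g}(z) against its closed gamma-function expression.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

g, lam = 0.8, 1.5  # arbitrary test values, g > 0

# K(z) is even, so the imaginary part of the transform vanishes identically.
val, _ = quad(lambda z: np.cos(lam * z) / np.cosh(z) ** g, -np.inf, np.inf)
closed = gamma((g + 1j * lam) / 2) * gamma((g - 1j * lam) / 2) / (2 ** (1 - g) * gamma(g))
print(val, closed.real)  # |Gamma(a+ib)|^2 is real, and both values should agree
```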
Previous calculations can be almost literally repeated for the dual \(\hat{Q}\)-operator \[\hat{Q}(x)\,e^{ix_{1}\lambda}=\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{ix_{1}\gamma}=e^{ix_{1}\lambda}\,\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{i(x-x_{1})\gamma}\hat{K}(\gamma)=K(x-x_{1})\,e^{ix_{1}\lambda}\,,\] and for the remaining higher-level \(\hat{Q}(x|\boldsymbol{\omega})\)-operator \[\hat{Q}(x|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1}\lambda}=\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\lambda-\gamma)}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1}\gamma}=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1}\lambda}\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\gamma(x-x_{1})}K_{g}(\gamma)=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1}\lambda}\,\sqrt{\omega_{1}\omega_{2}}\,S(g^{*})\,K_{g^{*}}(x-x_{1})\] where \(g^{*}=\omega_{1}+\omega_{2}-g\) and in the last step we used the generalization of the beta integral in the form \[\int\limits_{-\infty}^{+\infty}dz\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}xz}\,K_{g}(z)=\sqrt{\omega_{1}\omega_{2}}\,S(g^{*})\,K_{g^{*}}(x), \tag{2.23}\] or explicitly (see [PT, FKV]) \[\int\limits_{-\infty}^{+\infty}dz\,\frac{e^{\frac{2\pi i}{\omega_{1}\omega_{2}}xz}}{S\left(\frac{g}{2}+iz\right)S\left(\frac{g}{2}-iz\right)}=\frac{\sqrt{\omega_{1}\omega_{2}}\,S(g^{*})}{S\left(\frac{g^{*}}{2}+ix\right)S\left(\frac{g^{*}}{2}-ix\right)}\,.\]

### 2.3 Reductions

Now we demonstrate that the formulas from the higher level are in some sense universal: all formulas for the operators \(Q(\lambda)\) and \(\hat{Q}(x)\) can be obtained by simple reductions from the corresponding formulas for the operator \(\hat{Q}(x|\boldsymbol{\omega})\). The needed reduction is based on the leading asymptotics as \(\omega_{2}\to 0\) \[K_{g\omega_{2}}(\lambda\omega_{2})\to\frac{2^{1-g}\Gamma(g)}{2\pi}\,\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{g-1}\hat{K}(2\lambda),\] \[K_{\omega_{1}+\omega_{2}-g\omega_{2}}(\lambda\omega_{2})\to 2^{g}\,K\left(\frac{\pi\lambda}{\omega_{1}}\right).\] In Appendix A we derive these formulas and demonstrate that the beta integral (2.23) reduces in the corresponding asymptotic regimes to the relations (2.20).

#### 2.3.1 Commutation relations between \(Q\)-operators

We start from the commutation relation for \(Q\)-operators in the relativistic model \[\hat{Q}(x|\boldsymbol{\omega})\hat{Q}(y|\boldsymbol{\omega})=\hat{Q}(y|\boldsymbol{\omega})\hat{Q}(x|\boldsymbol{\omega}) \tag{2.24}\] and derive the commutation relations \[Q(\lambda)Q(\mu)=Q(\mu)Q(\lambda),\qquad\hat{Q}(x)\hat{Q}(y)=\hat{Q}(y)\hat{Q}(x)\] by appropriate reductions. In integral form the relation (2.24) looks as follows \[\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi ix(\lambda-\gamma)}{\omega_{1}\omega_{2}}}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi iy(\gamma-\mu)}{\omega_{1}\omega_{2}}}K_{g}(\gamma-\mu)\\ =\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi iy(\lambda-\gamma)}{\omega_{1}\omega_{2}}}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi ix(\gamma-\mu)}{\omega_{1}\omega_{2}}}K_{g}(\gamma-\mu).
\tag{2.25}\] In this identity we rescale \[g\to g\omega_{2},\quad\lambda\to\frac{\lambda\omega_{2}}{2},\quad\gamma\to\frac{\gamma\omega_{2}}{2},\quad\mu\to\frac{\mu\omega_{2}}{2},\quad x\to\frac{\omega_{1}x}{\pi},\quad y\to\frac{\omega_{1}y}{\pi}\] and then, using the leading asymptotic as \(\omega_{2}\to 0\) \[K_{g\omega_{2}}(\lambda\omega_{2})\to\frac{2^{1-g}\Gamma(g)}{2\pi}\,\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{g-1}\hat{K}(2\lambda),\qquad\hat{K}(\lambda)=\frac{\Gamma\left(\frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2}\right)}{2^{1-g}\Gamma(g)},\] we obtain \[\int\limits_{-\infty}^{+\infty}d\gamma\,e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{iy(\gamma-\mu)}\hat{K}(\gamma-\mu)\\ =\int\limits_{-\infty}^{+\infty}d\gamma\,e^{iy(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{ix(\gamma-\mu)}\hat{K}(\gamma-\mu)\,.\] It is precisely the integral form of the relation \(\hat{Q}(x)\hat{Q}(y)=\hat{Q}(y)\hat{Q}(x)\). For the second reduction we start from the relation (2.25) for the dual coupling constant, that is, \(g\to\omega_{1}+\omega_{2}-g\). Again we rescale \[g\to g\omega_{2},\quad\lambda\to\frac{\lambda\,\omega_{1}\omega_{2}}{\pi},\quad\gamma\to\frac{\gamma\,\omega_{1}\omega_{2}}{\pi},\quad\mu\to\frac{\mu\,\omega_{1}\omega_{2}}{\pi},\quad x\to\frac{x}{2},\quad y\to\frac{y}{2}\] and, using the leading asymptotic as \(\omega_{2}\to 0\) \[K_{\omega_{1}+\omega_{2}-g\omega_{2}}(\lambda\omega_{2})\to 2^{g}\,K\left(\frac{\pi\lambda}{\omega_{1}}\right),\qquad K(x)=\frac{1}{\mbox{ch}^{g}(x)},\] we arrive at \[\int\limits_{-\infty}^{+\infty}d\gamma\,e^{ix(\lambda-\gamma)}K(\lambda-\gamma)\,e^{iy(\gamma-\mu)}K(\gamma-\mu)\\ =\int\limits_{-\infty}^{+\infty}d\gamma\,e^{iy(\lambda-\gamma)}K(\lambda-\gamma)\,e^{ix(\gamma-\mu)}K(\gamma-\mu)\,,\] which is the integral form of the relation \(Q(x)Q(y)=Q(y)Q(x)\) (modulo a simple renaming of the spectral and function variables \(x\,,y\rightleftarrows\lambda\,,\mu\)).

#### 2.3.2 \(Q\Lambda\)-commutation relations

In the general case \(Q\Lambda\)-commutation relations are used to diagonalize \(Q\)-operators and can be derived from the \(QQ\)-commutation relations. Now we are going to demonstrate how this works in the simplest example \(n=1\) for all kinds of operators. The reduction considered in this section can be viewed as a preparation for similar calculations in the case \(n=2\). The \(n=1\) example is in some sense degenerate because the \(\Lambda\)-operators are operators of multiplication by plane waves. In fact, the \(Q\Lambda\)-commutation relations in this case are equivalent to the statement that plane waves are eigenfunctions of the \(Q\)-operators, which has already been checked. First we perform all calculations in the relativistic case, and at the next stage we repeat the same procedure at the lower level, i.e., in the non-relativistic situation. We start from the commutation relation \[\hat{Q}(x|\boldsymbol{\omega})\hat{Q}(y|\boldsymbol{\omega})=\hat{Q}(y|\boldsymbol{\omega})\hat{Q}(x|\boldsymbol{\omega}) \tag{2.26}\] and derive the relation \[\hat{Q}(x|\boldsymbol{\omega})\,\hat{\Lambda}\left(y-\tfrac{ig^{*}}{2}|\boldsymbol{\omega}\right)=\hat{\Lambda}\left(y-\tfrac{ig^{*}}{2}|\boldsymbol{\omega}\right)\,\hat{q}\left(x\,,y-\tfrac{ig^{*}}{2}|\boldsymbol{\omega}\right). \tag{2.27}\] All operators act on functions of the variable \(\lambda\), and the operator \(\hat{\Lambda}(y|\boldsymbol{\omega})\) is the operator of multiplication by \(e^{\frac{2\pi i\lambda}{\omega_{1}\omega_{2}}y}\).
Note that starting from the \(QQ\)-commutation relations one obtains \(\Lambda\)-operators in the \(Q\Lambda\)-commutation relations with shifted spectral parameters \(y\to y-ig^{*}/2\). The shift is universal for all \(n\), so that it is instructive to fix it in the simplest case \(n=1\). In integral form the commutation relation (2.26) looks as follows \[\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi ix(\lambda-\gamma)}{\omega_{1}\omega_{2}}}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi iy(\gamma-\mu)}{\omega_{1}\omega_{2}}}K_{g}(\gamma-\mu)\\ =\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi iy(\lambda-\gamma)}{\omega_{1}\omega_{2}}}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi ix(\gamma-\mu)}{\omega_{1}\omega_{2}}}K_{g}(\gamma-\mu) \tag{2.28}\] and we consider the reduction of this identity as \(\mu\to\infty\) using the following leading asymptotic \[K_{g}(\lambda)\to e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\,\lambda\,\tfrac{ig^{*}}{2}},\qquad\lambda\to\infty,\] see Appendix A.3. On the left-hand side of relation (2.28) we have for \(\mu\to\infty\) \[K_{g}(\gamma-\mu)=K_{g}(\mu-\gamma)\to e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\,(\mu-\gamma)\,\tfrac{ig^{*}}{2}}.\] To obtain the same asymptotic on the right-hand side we have to shift \(\gamma\to\gamma+\mu\) and then send \(\mu\to\infty\) using \[K_{g}(\lambda-\gamma-\mu)\to e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\,(\mu+\gamma-\lambda)\,\tfrac{ig^{*}}{2}}.\] So, the leading asymptotics of both sides give \[\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi ix(\lambda-\gamma)}{\omega_{1}\omega_{2}}}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi iy(\gamma-\mu)}{\omega_{1}\omega_{2}}}e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\,(\mu-\gamma)\,\frac{ig^{*}}{2}}\\ =\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi iy(\lambda-\mu-\gamma)}{\omega_{1}\omega_{2}}}e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\,(\mu+\gamma-\lambda)\,\frac{ig^{*}}{2}}\,e^{\frac{2\pi ix\gamma}{\omega_{1}\omega_{2}}}K_{g}(\gamma)\,,\] so that the \(\mu\)-dependent contributions cancel and we finally obtain the identity \[\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi ix(\lambda-\gamma)}{\omega_{1}\omega_{2}}}K_{g}(\lambda-\gamma)\,e^{\frac{2\pi i\gamma}{\omega_{1}\omega_{2}}\left(y-\frac{ig^{*}}{2}\right)}\\ =e^{\frac{2\pi i\lambda}{\omega_{1}\omega_{2}}\left(y-\frac{ig^{*}}{2}\right)}\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i\gamma}{\omega_{1}\omega_{2}}\left(x-y+\frac{ig^{*}}{2}\right)}K_{g}(\gamma)\,,\] which is exactly the relation (2.27) in explicit form. Now we repeat a similar reduction starting from the commutation relation \[\hat{Q}(x)\hat{Q}(y)=\hat{Q}(y)\hat{Q}(x) \tag{2.29}\] and derive the relation \[\hat{Q}(x)\,\hat{\Lambda}\left(y-\frac{i\pi}{2}\right)=\hat{\Lambda}\left(y-\frac{i\pi}{2}\right)\,\hat{q}\left(x\,,y-\frac{i\pi}{2}\right). \tag{2.30}\] All operators act on functions of the variable \(\lambda\), and \(\hat{\Lambda}(y)\) is the operator of multiplication by \(e^{i\lambda y}\). Note the same rule as in the relativistic case: starting from the \(QQ\)-commutation relations one obtains \(\Lambda\)-operators with shifted spectral parameters \(y\to y-i\pi/2\). Again the shift is universal for all \(n\) and it is instructive to fix it in the simplest case \(n=1\).
The commutation relation (2.29) in integral form looks as follows \[\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{iy(\gamma-\mu)}\hat{K}(\gamma-\mu)\\ =\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{iy(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{ix(\gamma-\mu)}\hat{K}(\gamma-\mu)\,.\] The leading asymptotic of \(\hat{K}(\gamma-\mu)\) for \(\mu\to\infty\) has the following form (see Appendix A) \[\hat{K}(\gamma-\mu)=\frac{\Gamma\left(\frac{g+i(\gamma-\mu)}{2}\right)\Gamma\left(\frac{g-i(\gamma-\mu)}{2}\right)}{2^{1-g}\Gamma(g)}\to\frac{2\pi}{\Gamma(g)}\,\mu^{g-1}\,e^{\frac{\pi}{2}(\gamma-\mu)}.\] The whole reduction is the same as in the previous case: on the left-hand side we simply send \(\mu\to\infty\), but on the right-hand side we shift \(\gamma\to\gamma+\mu\) and then send \(\mu\to\infty\) \[\int\limits_{-\infty}^{+\infty}\!\frac{d\gamma}{2\pi}\,e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{iy(\gamma-\mu)}\,\mu^{g-1}\,e^{\frac{\pi}{2}(\gamma-\mu)}\\ =\int\limits_{-\infty}^{+\infty}\!\frac{d\gamma}{2\pi}\,e^{iy(\lambda-\gamma-\mu)}\,\mu^{g-1}\,e^{\frac{\pi}{2}(\gamma-\lambda-\mu)}\,e^{ix\gamma}\hat{K}(\gamma).\] Again the \(\mu\)-dependent contributions cancel and one obtains \[\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix(\lambda-\gamma)}\hat{K}(\lambda-\gamma)\,e^{i\gamma\left(y-\frac{i\pi}{2}\right)}=e^{i\lambda\left(y-\frac{i\pi}{2}\right)}\int\limits_{-\infty}^{+\infty}\!\frac{d\gamma}{2\pi}\,e^{i\gamma\left(x-y+\frac{i\pi}{2}\right)}\hat{K}(\gamma)\,,\] which is exactly the relation (2.30) in explicit form. The last relation \[Q(\lambda)\,\Lambda\left(\mu-ig\right)=\Lambda\left(\mu-ig\right)\,q(\lambda\,,\mu-ig) \tag{2.31}\] is derived from the commutativity relation \[Q(\lambda)\,Q(\mu)=Q(\mu)\,Q(\lambda)\] in the same way. A more general reduction is presented for the case \(n=2\) below.

### 2.4 Regularization and the scalar product

The orthogonality relation \[\langle\Psi_{\mu}|\Psi_{\lambda}\rangle=\int\limits_{-\infty}^{+\infty}\!dx\,\overline{\Psi_{\mu}(x)}\,\Psi_{\lambda}(x)=2\pi\,\delta(\lambda-\mu)\] and the completeness relation \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi}\,\Psi_{\lambda}(x)\,\overline{\Psi_{\lambda}(y)}=\delta(x-y)\] for the \(Q\)-operators' eigenfunctions \(\Psi_{\lambda}(x)=e^{i\lambda x}\) are reduced to the standard integral \[\int\limits_{-\infty}^{+\infty}dx\,e^{i\lambda x}=2\pi\,\delta(\lambda). \tag{2.32}\] Below we demonstrate on this simplest example the method of calculation of the scalar product which we shall use in the general case of arbitrary \(n\). The integral (2.32) is ill-defined and we shall understand it as an appropriate limit of a regularized integral. The regularization should have two natural properties:

* regularized integrals must be convergent;
* regularized integrals must be calculable in explicit form.

#### 2.4.1 \(Q\)-operator regularization

In this subsection we introduce a regularization such that the regularized integral gives precisely the action of the \(Q\)-operator on an eigenfunction. It appears that in the general case it is possible to introduce a similar regularization preserving the same property: the regularized integral contains the action of the \(Q\)-operator on an eigenfunction and can be calculated in explicit form.
Let us regularize integral adding the external point \(z_{0}\) and inserting additional \(\varepsilon>0\) in exponent \[\int\limits_{-\infty}^{+\infty}dx\,e^{i\lambda x}=\frac{1}{2^{g} }\lim\limits_{z_{0}\to+\infty}\lim\limits_{\varepsilon\to 0}\int\limits_{- \infty}^{+\infty}dx\,\frac{e^{g(z_{0}-x)}\,e^{\varepsilon x}\,\,e^{i\lambda x }}{\mbox{ch}^{g}(x-z_{0})}\,.\] To prove that in the limit \(z_{0}\to+\infty\) the initial integral is reproduced we use the following asymptotic \[\frac{1}{2^{g}}\frac{1}{\mbox{ch}^{g}(x-z_{0})}=\frac{e^{-g(z_{0}- x)}}{(1+e^{-2(z_{0}-x)})^{g}}\to e^{-g(z_{0}-x)},\qquad z_{0}\to+\infty.\] For \(x\to+\infty\) the integrand behaves as \(e^{2gz_{0}}\,e^{(-2g+i\lambda+\varepsilon)x}\), so that decreasing factor \(e^{-2gx}\) guarantees the convergence. For \(x\to-\infty\) the integrand behaves as \(e^{i\lambda x+\varepsilon x}\) and decreasing factor \(e^{\varepsilon x}\) guarantees the convergence for \(\varepsilon>0\). The calculation of the integral follows the same line as calculation of \(Q\)-operator eigenvalue and everything is reduced to the beta integral \[\int\limits_{-\infty}^{+\infty}dx\,\frac{e^{g(z_{0}-x)}\,e^{ \varepsilon x}\,\,e^{i\lambda x}}{\mathrm{ch}^{g}(x-z_{0})} =\{x\to x+z_{0}\}=e^{(\varepsilon+i\lambda)z_{0}}\int\limits_{- \infty}^{+\infty}dx\,\frac{e^{i(\lambda-i\varepsilon+ig)x}}{\mathrm{ch}^{g}x}\] \[=e^{(\varepsilon+i\lambda)z_{0}}\,\frac{\Gamma\left(\frac{g+i( \lambda-i\varepsilon+ig)}{2}\right)\Gamma\left(\frac{g-i(\lambda-i\varepsilon +ig)}{2}\right)}{2^{1-g}\Gamma(g)}\] \[=e^{(\varepsilon+i\lambda)z_{0}}\,\frac{\Gamma\left(\frac{i( \lambda-i\varepsilon)}{2}\right)\Gamma\left(g-\frac{i(\lambda-i\varepsilon)} {2}\right)}{2^{1-g}\Gamma(g)}.\] This formula shows that order of the limits is fixed. If we shall fix \(\varepsilon>0\) then \(\lim_{z_{0}\to+\infty}\) does not exist due to the factor \(e^{\varepsilon z_{0}}\). It is possible to argue that the prescribed order is the right one without explicit calculation. Let us introduce slightly different regularization \[\int\limits_{-\infty}^{+\infty}dx\,e^{i\lambda x}=\frac{1}{2^{g}}\lim_{z_{0} \to+\infty}\,\lim_{\varepsilon\to 0}\,\int\limits_{-\infty}^{+\infty}dx\, \frac{e^{g(z_{0}-x)}\,e^{-\varepsilon|x|}\,\,e^{i\lambda x}}{\mathrm{ch}^{g}(x -z_{0})}\,.\] Due to the factor \(e^{-\varepsilon|x|}\) integral is convergent at \(x\to\pm\infty\). Hence, \(z_{0}\)-regularization is not needed and the order of the limits is not important. But in prescribed order it is possible to change \(e^{-\varepsilon|x|}\to e^{\varepsilon x}\) \[\int\limits_{-\infty}^{+\infty}dx\,e^{i\lambda x} =\frac{1}{2^{g}}\lim_{z_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\, \int\limits_{-\infty}^{+\infty}dx\,\frac{e^{g(z_{0}-x)}\,e^{-\varepsilon|x|}\, \,e^{i\lambda x}}{\mathrm{ch}^{g}(x-z_{0})}\] \[=\frac{1}{2^{g}}\lim_{z_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\, \int\limits_{-\infty}^{+\infty}dx\,\frac{e^{g(z_{0}-x)}\,e^{\varepsilon x}\,\,e ^{i\lambda x}}{\mathrm{ch}^{g}(x-z_{0})}\,.\] because at \(x\to+\infty\) everything is regularized by external \(z_{0}\). 
Using prescribed order of limits we reproduce the standard answer \[\int\limits_{-\infty}^{+\infty}dx\,e^{i\lambda x} =\lim_{z_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,e^{( \varepsilon+i\lambda)z_{0}}\,\frac{\Gamma\left(\frac{i(\lambda-i\varepsilon)}{2} \right)\Gamma\left(g-\frac{i(\lambda-i\varepsilon)}{2}\right)}{2^{g}\,2^{1-g} \Gamma(g)}\] \[=\frac{2}{i}\frac{\Gamma\left(1+\frac{i\lambda}{2}\right)\Gamma \left(g-\frac{i\lambda}{2}\right)}{2\,\Gamma(g)}\lim_{z_{0}\to+\infty}\,\lim_{ \varepsilon\to 0}\,\frac{e^{i\lambda z_{0}}}{\lambda-i\varepsilon}\] \[=\frac{2}{i}\frac{\Gamma\left(1+\frac{i\lambda}{2}\right)\Gamma \left(g-\frac{i\lambda}{2}\right)}{2\,\Gamma(g)}\,2\pi i\delta(\lambda)=2\pi \delta(\lambda)\] where we used the formula \[\lim_{z_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{e^{i\lambda z_{0}}}{ \lambda-i\varepsilon}=2\pi i\delta(\lambda). \tag{2.33}\] It is a relative of the formula \[\lim_{z_{0}\to+\infty}\,\frac{\sin(\lambda z_{0})}{\lambda}=\pi\delta(\lambda)\] and the possible proof is the following. Let us consider the integral with the test function and divide it on two parts: the first integral can be calculated by residues and due to cancelation of singularity at \(\lambda=0\) it is possible to put \(\varepsilon\to 0\) in the second part \[\int_{\mathbb{R}}f(\lambda)\,\frac{e^{i\lambda z_{0}}}{\lambda-i \varepsilon}\,d\lambda =f(0)\,\int_{\mathbb{R}}\,\frac{e^{i\lambda z_{0}}}{\lambda-i \varepsilon}\,d\lambda+\int_{\mathbb{R}}\frac{f(\lambda)-f(0)}{\lambda-i \varepsilon}\,e^{i\lambda z_{0}}\,d\lambda\] \[\xrightarrow{\varepsilon\to 0}2\pi i\,f(0)+\int_{\mathbb{R}} \frac{f(\lambda)-f(0)}{\lambda}\,e^{i\lambda z_{0}}\,d\lambda.\] Due to the Riemann-Lebesgue lemma the second contribution tends to zero in the limit \(z_{0}\to\infty\), so that we obtain after removing both regularizations \[\int_{\mathbb{R}}f(\lambda)\,\frac{e^{i\lambda z_{0}}}{\lambda-i\varepsilon} \,d\lambda\to 2\pi i\,f(0),\] or equivalently \[\frac{e^{i\lambda z_{0}}}{\lambda-i\varepsilon}\to 2\pi i\,\delta(\lambda).\] #### 2.4.2 \(\hat{Q}\)-operator regularization There exists a second variant of regularization. It is possible to introduce regularization, such that resulting integral coincides with the action of the \(\hat{Q}\)-operator on eigenfunction. In general case it is possible to introduce a similar regularization preserving the same property: regularized integral gives action of \(\hat{Q}\)-operator on eigenfunction and can be calculated in explicit form. 
To regularize the integral we introduce two external parameters \(\gamma_{0}\) and \(\varepsilon>0\) \[\begin{split}\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi }\,e^{ix\lambda}\,=\frac{\Gamma(g)}{2\pi}\,\lim\limits_{\gamma_{0}\to+\infty} \,\lim\limits_{\varepsilon\to 0}\,\gamma_{0}^{1-g}&\int\limits_{- \infty}^{+\infty}\frac{d\lambda}{2\pi}\,e^{-\frac{\pi}{2}(\lambda-\gamma_{0}) }\,e^{\varepsilon\lambda}\,e^{ix\lambda}\\ &\times\frac{\Gamma\left(\frac{g+i(\lambda-\gamma_{0})}{2} \right)\Gamma\left(\frac{g-i(\lambda-\gamma_{0})}{2}\right)}{2^{1-g}\Gamma(g )}.\end{split} \tag{2.34}\] To prove that in the limit \(\gamma_{0}\to+\infty\) we reproduce the initial integral we use the following asymptotic as \(\gamma_{0}\to+\infty\) \[\frac{\Gamma\left(\frac{g+i(\lambda-\gamma_{0})}{2}\right)\Gamma\left(\frac{g -i(\lambda-\gamma_{0})}{2}\right)}{2^{1-g}\Gamma(g)}\to\frac{2\pi}{\Gamma(g)} \,\gamma_{0}^{g-1}\,e^{\frac{\pi}{2}(\lambda-\gamma_{0})}\,.\] To check convergence we test behaviour in two regions: \(\lambda\to+\infty\) and \(\lambda\to-\infty\). For \(\lambda\to+\infty\) the integrand behaves as \(e^{(-\pi+ix+\varepsilon)\lambda}\), so that integral converges in this region due to decreasing factor \(e^{-\pi\lambda}\). For \(\lambda\to-\infty\) we have \(e^{(ix+\varepsilon)\lambda}\), so that for \(\varepsilon>0\) integral converges in this region too. The \(\lambda\)-integral can be calculated explicitly (it is easy to recognize in this integral the action of the \(\hat{Q}\)-operator on eigenfunction) \[\begin{split}\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi }\,e^{-\frac{\pi}{2}(\lambda-\gamma_{0})}\,e^{\varepsilon\lambda}\,e^{ix \lambda}&\,\frac{\Gamma\left(\frac{g+i(\lambda-\gamma_{0})}{2} \right)\Gamma\left(\frac{g-i(\lambda-\gamma_{0})}{2}\right)}{2^{1-g}\Gamma(g) }\\ &=e^{i\gamma_{0}(x-i\varepsilon)}\,\int\limits_{-\infty}^{+\infty }\frac{d\lambda}{2\pi}\,e^{i\lambda(x+\frac{i\pi}{2}-i\varepsilon)}\,\frac{ \Gamma\left(\frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2} \right)}{2^{1-g}\Gamma(g)}\\ &=\frac{e^{i\gamma_{0}(x-i\varepsilon)}}{\mathrm{ch}^{g}(x+ \frac{i\pi}{2}-i\varepsilon)}=\frac{e^{i\gamma_{0}(x-i\varepsilon)}\,e^{-i \frac{\pi}{2}g}}{\mathrm{sh}^{g}(x-i\varepsilon)}\end{split}\] so that we have \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi}\,e^{ix\lambda}\,=\,\frac{ \Gamma(g)}{2\pi}\,e^{-i\frac{\pi}{2}g}\,\lim\limits_{\gamma_{0}\to+\infty}\, \lim\limits_{\varepsilon\to 0}\,\gamma_{0}^{1-g}\,\frac{e^{i\gamma_{0}(x-i \varepsilon)}}{\mathrm{sh}^{g}(x-i\varepsilon)}\] \[=\frac{\Gamma(g)}{2\pi}\,e^{-i\frac{\pi}{2}g}\,\lim\limits_{\varepsilon\to 0 }\,\frac{(x-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x-i\varepsilon)}\,\lim\limits _{\gamma_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0}\frac{\gamma_{0}^{1-g}\,e^{ ix\gamma_{0}}}{(x-i\varepsilon)^{g}}\] \[=\lim\limits_{\varepsilon\to 0}\,\frac{(x-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x-i \varepsilon)}\,\,\delta(x)=\delta(x).\] In the last line we used the formula \[\lim\limits_{\gamma_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0}\,\frac{ \gamma_{0}^{1-g}\,e^{ix\gamma_{0}}}{(x-i\varepsilon)^{g}}=\frac{2\pi}{\Gamma( g)}\,e^{i\frac{\pi}{2}g}\,\delta(x). \tag{2.35}\] Note that this formula is reduced to (2.33) in the case \(g=1\). The possible proof of this formula is the following. 
Let us consider the integral with the test function and divide this integral into three parts (\(\delta>0\)) \[\gamma_{0}^{1-g}\,\int\limits_{-\infty}^{+\infty}dx\,\frac{e^{ix\gamma_{0}}}{ (x-i\varepsilon)^{g}}\,f(x)=\,\gamma_{0}^{1-g}\,\int\limits_{-\delta}^{+ \delta}dx\,\frac{e^{ix\gamma_{0}}}{(x-i\varepsilon)^{g}}\,f(x)\\ +\gamma_{0}^{1-g}\,\int\limits_{-\infty}^{-\delta}dx\,\frac{e^{ ix\gamma_{0}}}{(x-i\varepsilon)^{g}}\,f(x)+\gamma_{0}^{1-g}\,\int\limits_{ \delta}^{+\infty}dx\,\frac{e^{ix\gamma_{0}}}{(x-i\varepsilon)^{g}}\,f(x).\] The last two terms do not contain singularity in integration domains, so that it is possible to put \(\varepsilon=0\) and due to the Riemann-Lebesgue lemma both contributions tend to zero in the limit \(\gamma_{0}\to+\infty\). In the first integral we perform the change of variables \(y=\gamma_{0}x\) and then use the standard formula for the Fourier transformation of the generalised function \((y-i0)^{-g}\) \[\lim\limits_{\gamma_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0}\,\gamma_{0}^{1-g} \,\int\limits_{-\delta}^{+\delta}dx\,\frac{e^{ix\gamma_{0}}}{(x-i\varepsilon )^{g}}\,f(x)=\lim\limits_{\gamma_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0 }\,\int\limits_{-\delta\gamma_{0}}^{+\delta\gamma_{0}}dy\,\frac{e^{iy}}{(y-i \varepsilon)^{g}}\,f\left(\frac{y}{\gamma_{0}}\right)\\ =f(0)\,\int\limits_{-\infty}^{+\infty}dy\,\frac{e^{iy}}{(y-i0)^{g }}=f(0)\,\frac{2\pi}{\Gamma(g)}\,e^{i\frac{\pi}{2}g}.\] The used Fourier transformation formula has the following form [GS] \[\int\limits_{-\infty}^{+\infty}dy\,(y-i0)^{-g}\,e^{iyp}=\frac{2\pi}{\Gamma(g)}\,e ^{i\frac{\pi}{2}g}\,p_{+}^{g-1}.\] Case \(n=2\) The plan step by step by subsections is the following: 1. Description of all \(Q\)-operators and \(\Lambda\)-operators as integral operators. 2. Derivation of the commutation relations between \(Q\)-operators and \(\Lambda\)-operators from commutativity of \(Q\)-operators. 3. Construction of eigenfunctions. Demonstration of the equivalence of Mellin-Barnes and Hallnas-Ruijsenaars representations. 4. Calculation of the scalar product between eigenfunctions of all kinds using \(Q\)-operator regularizations. 
### \(Q\)-operators and \(\Lambda\)-operators as integral operators Let us define \(Q\)-operators in the case \(n=2\) as integral operators by the formulas \[[Q_{2}(\lambda)\Psi](x_{1},x_{2})=\int\limits_{-\infty}^{+\infty}dy_{1}dy_{2} \,\operatorname{sh}^{2g}|y_{1}-y_{2}|\,e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}\] \[\times\prod\limits_{i,k=1,2}K(x_{i}-y_{k})\,\Psi(y_{1},y_{2}),\] \[[\hat{Q}_{2}(x)\Psi](\lambda_{1},\lambda_{2})=\int\limits_{-\infty}^{+\infty }\frac{d\gamma_{1}}{2\pi}\,\frac{d\gamma_{2}}{2\pi}\,\mu(\gamma_{1},\gamma_{ 2})\,e^{ix(\lambda_{1}+\lambda_{2}-\gamma_{1}-\gamma_{2})}\] \[\times\prod\limits_{i,k=1,2}\hat{K}(\lambda_{i}-\gamma_{k})\,\Psi(\gamma_{1},\gamma_{2}),\] \[[\hat{Q}_{2}(x|\boldsymbol{\omega})\Psi](\lambda_{1},\lambda_{2})=\int\limits _{-\infty}^{+\infty}d\gamma_{1}\,d\gamma_{2}\,\mu_{g}(\gamma_{1},\gamma_{2}) \,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x(\lambda_{1}+\lambda_{2}-\gamma_{1}- \gamma_{2})}\] \[\times\prod\limits_{i,k=1,2}K_{g}(\lambda_{i}-\gamma_{k})\,\Psi(\gamma_{1}, \gamma_{2}).\] The building blocks are the same as in the case \(n=1\) \[K(x)=\frac{1}{\operatorname{ch}^{g}(x)},\qquad\hat{K}(\lambda)=\frac{\Gamma\left( \frac{g+i\lambda}{2}\right)\Gamma\left(\frac{g-i\lambda}{2}\right)}{2^{1-g} \Gamma(g)},\] \[K_{g}(\lambda)=\frac{1}{S\left(\frac{g}{2}+i\lambda\right)S\left(\frac{g}{2}-i \lambda\right)},\] except the nontrivial measure: \(\operatorname{sh}^{2g}|y_{1}-y_{2}|\) in the simplest case and \[\mu(\gamma_{1},\gamma_{2})=\frac{[2^{1-g}\Gamma(g)]^{2}}{\Gamma\left(g\pm\frac{ i(\gamma_{1}-\gamma_{2})}{2}\right)\Gamma\left(\pm\frac{i(\gamma_{1}-\gamma_{2}) }{2}\right)}, \tag{3.36}\] \[\mu_{g}(\gamma_{1},\gamma_{2})=S(g\pm i(\gamma_{1}-\gamma_{2}))S(\pm i(\gamma _{1}-\gamma_{2})). \tag{3.37}\] Here and in what follows we use notation for the products of functions \[f(a\pm b)=f(a+b)f(a-b).\] **Remark.** Note that in relativistic case Fourier transformation of the function \(K_{g}\) gives the function \(K_{g^{*}}\) (2.23). This means that the transition to the dual \(Q\)-operator is reduced to the change \(g\to g^{*}\) and renaming of arguments \(x\rightleftarrows\lambda\) \[[Q_{2}(\lambda|\boldsymbol{\omega})\Psi](x_{1},x_{2})=\int\limits_{-\infty}^{ +\infty}dy_{1}\,dy_{2}\,\mu_{g^{*}}(y_{1}\,,y_{2})\,e^{\frac{2\pi i}{\omega_{ 1}\omega_{2}}\lambda(x_{1}+x_{2}-y_{1}-y_{2})}\\ \times\prod_{i,k=1,2}K_{g^{*}}(x_{i}-y_{k})\,\Psi(y_{1},y_{2}). \tag{3.38}\] To avoid the simple duplication of formulas we state all the results only for the operator \(Q_{2}(\lambda|\boldsymbol{\omega})\) or \(\hat{Q}_{2}(x|\boldsymbol{\omega})\). Commutation relation \[\hat{Q}_{2}(x|\boldsymbol{\omega})\,\hat{Q}_{2}(y|\boldsymbol{\omega})=\hat{Q} _{2}(y|\boldsymbol{\omega})\,\hat{Q}_{2}(x|\boldsymbol{\omega}) \tag{3.39}\] is proven by residue calculation method in [BDKK1]. The same method (in fact, in a much simpler form) can be used for the proof of the commutation relation \[\hat{Q}_{2}(x)\,\hat{Q}_{2}(y)=\hat{Q}_{2}(y)\,\hat{Q}_{2}(x).\] Unfortunately we do not have at the moment any direct proof of the commutation relation \[Q_{2}(\lambda)\,Q_{2}(\rho)=Q_{2}(\rho)\,Q_{2}(\lambda),\] except the case \(g=1\), see Appendix B. However, the last two commutation relations can be deduced from the first one (3.39) by the appropriate reduction in full analogy with \(n=1\) case, see Section 2.3.1. 
The closest relatives of \(Q\)-operators are integral \(\Lambda\)-operators defined as \[[\Lambda_{2}(\lambda)\Psi](x_{1},x_{2})=\int\limits_{-\infty}^{+ \infty}dy\,e^{i\lambda(x_{1}+x_{2}-y)}\,K(x_{1}-y)\,K(x_{2}-y)\,\Psi(y),\] \[[\hat{\Lambda}_{2}(x)\Psi](\lambda_{1},\lambda_{2})=\int\limits_{ -\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix(\lambda_{1}+\lambda_{2}-\gamma )}\,\hat{K}(\lambda_{1}-\gamma)\,\hat{K}(\lambda_{2}-\gamma)\,\Psi(\gamma),\] \[[\hat{\Lambda}_{2}(x|\boldsymbol{\omega})\Psi](\lambda_{1}, \lambda_{2})=\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i}{\omega_{ 1}\omega_{2}}x(\lambda_{1}+\lambda_{2}-\gamma)}\,K_{g}(\lambda_{1}-\gamma)\,K _{g}(\lambda_{2}-\gamma)\,\Psi(\gamma).\] ### \(Q\Lambda\)-commutation relations In this section we explain how commutation relations between \(Q\)-operators \[Q_{2}(\lambda)\,Q_{2}(\rho)=Q_{2}(\rho)\,Q_{2}(\lambda),\] \[\hat{Q}_{2}(x)\,\hat{Q}_{2}(y)=\hat{Q}_{2}(y)\,\hat{Q}_{2}(x),\] \[\hat{Q}_{2}(x|\boldsymbol{\omega})\,\hat{Q}_{2}(y|\boldsymbol{ \omega})=\hat{Q}_{2}(y|\boldsymbol{\omega})\,\hat{Q}_{2}(x|\boldsymbol{\omega})\] degenerate to the relations between \(Q\) and \(\Lambda\)-operators \[Q_{2}(\lambda)\,\Lambda_{2}(\rho)=2\,q(\lambda,\rho)\,\Lambda_{2}(\rho)\,Q_{ 1}(\lambda), \tag{3.40}\] \[\hat{Q}_{2}(x)\,\hat{\Lambda}_{2}(y)=2\,\hat{q}(x,y)\,\hat{\Lambda}_{2}(y)\, \hat{Q}_{1}(x),\] \[\hat{Q}_{2}(x|\boldsymbol{\omega})\,\hat{\Lambda}_{2}(y|\boldsymbol{\omega})= 2\,\hat{q}(x,y|\boldsymbol{\omega})\,\hat{\Lambda}_{2}(y|\boldsymbol{\omega}) \,Q_{1}(x|\boldsymbol{\omega}).\] where \[q(\lambda,\rho)=\hat{K}(\lambda-\rho),\qquad\hat{q}(x,y)=K(x-y),\] \[\hat{q}(x,y|\boldsymbol{\omega})=\sqrt{\omega_{1}\omega_{2}}\,S(g^{ *})\,K_{g^{*}}(x-y).\] For simplicity we consider the case of operators with hyperbolic functions \(Q_{2}(\lambda)\) and \(\Lambda_{2}(\lambda)\). The derivation for the rest two cases follows the same steps. Baxter and raising operators of all kinds have similar kernels. Furthermore, their building blocks \(K(x)\) and \(\mu(x)\) have nice asymptotics as \(|x|\to\infty\) and the corresponding bounds for \(x\in\mathbb{R}\). In the simplest case of hyperbolic functions \[K(x)=\operatorname{ch}^{-g}(x),\qquad\mu(x)=\operatorname{sh}^{g}|x|\] we have exponential asymptotics and bounds \[K(x)\sim 2^{g}\,e^{-g|x|},\,\mu(x)\sim 2^{-g}\,e^{g|x|}, |x|\to\infty,\] \[|K(x)|\leq e^{-g|x|},\ \ |\mu(x)|\leq 2^{-g}\,e^{g|x|}, x\in\mathbb{R}. \tag{3.41}\] The case of double sines differs only by constants, and the case of gamma functions differs by constants and subleading polynomial growth. Nevertheless, in all three cases these asymptotics and bounds allow to degenerate the kernel of \(Q\)-operator to the kernel of \(\Lambda\)-operator in some limit. For \(n=2\) in the case of hyperbolic functions the kernels are \[Q(x_{1},x_{2},y_{1},y_{2};\lambda) =e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}\operatorname{ch}^{-g}(x_{1 }-y_{1})\operatorname{ch}^{-g}(x_{2}-y_{1})\] \[\times\operatorname{ch}^{-g}(x_{1}-y_{2})\operatorname{ch}^{-g}(x _{2}-y_{2})\operatorname{sh}^{2g}|y_{2}-y_{1}|,\] \[\Lambda(x_{1},x_{2},y_{1};\lambda) =e^{i\lambda(x_{1}+x_{2}-y_{1})}\operatorname{ch}^{-g}(x_{1}-y_{1 })\operatorname{ch}^{-g}(x_{2}-y_{1}).\] They are related in the limit \(y_{2}\to\infty\) \[\lim_{y_{2}\to\infty}e^{gy_{1}+i\lambda y_{2}}\,Q(x_{1},x_{2},y_{1},y_{2}; \lambda)=\Lambda(x_{1},x_{2},y_{1};\lambda-ig). \tag{3.42}\] Note the shift by \(-ig\) in the variable \(\lambda\) from the right. 
The same reduction can be applied to the commutation relation of \(Q\)-operators \[Q_{2}(\lambda)\,Q_{2}(\rho)=Q_{2}(\rho)\,Q_{2}(\lambda).\] Denote by \(\mathcal{Q}\) the kernel of two \(Q\)-operators product \[\mathcal{Q}(x_{1},x_{2},z_{1},z_{2};\lambda,\rho)=\int_{\mathbb{R}^{2}}dy_{1}dy_{ 2}\,Q(x_{1},x_{2},y_{1},y_{2};\lambda)\,Q(y_{1},y_{2},z_{1},z_{2};\rho).\] Then the commutativity is equivalent to the integral identity \[\mathcal{Q}(x_{1},x_{2},z_{1},z_{2};\lambda,\rho)=\mathcal{Q}(x_{1},x_{2},z_{1 },z_{2};\rho,\lambda). \tag{3.43}\] In complete analogy with the limit (3.42), we consider the limit of the left-hand side of identity (3.43) \[\lim_{z_{2}\to\infty}e^{gz_{1}+i\rho z_{2}}\,\mathcal{Q}(x_{1},x_{2},z_{1},z_{ 2};\lambda,\rho)=\mathcal{L}(x_{1},x_{2},z_{1};\lambda,\rho-ig) \tag{3.44}\] and arrive at the integral \(\mathcal{L}\) that represents the kernel of the product \(Q_{2}(\lambda)\,\Lambda_{2}(\rho-ig)\) \[\mathcal{L}(x_{1},x_{2},z_{1};\lambda,\rho-ig)=\int_{\mathbb{R}^{2}}dy_{1}dy_ {2}\,Q(x_{1},x_{2},y_{1},y_{2};\lambda)\,\Lambda(y_{1},y_{2},z_{1};\rho-ig).\] Note again the same shift by \(-ig\). The only subtle point is the interchange of the limit and the integral. To justify it, use bounds (3.41) and dominated convergence theorem. We also remark that if the original integral \(\mathcal{Q}\) is absolutely convergent for \(\mathrm{Im}(\lambda-\rho)\in(-2g,2g)\), the integral \(\mathcal{L}\) is absolutely convergent for \(\mathrm{Im}(\lambda-\rho)\in(-2g,0)\), so that we assume this condition when performing the limit. The limit of the right-hand side of (3.43) is more involved. After multiplying by the same function \(e^{gz_{1}+i\rho z_{2}}\), as in the limit (3.44), we have \[\begin{split} e^{gz_{1}+i\rho z_{2}}&\,\mathcal{Q} (x_{1},x_{2},z_{1},z_{2};\rho,\lambda)=\int\limits_{\mathbb{R}^{2}}dy_{1}dy_{ 2}\,e^{i\rho(x_{1}+x_{2}+z_{2}-y_{1}-y_{2})+gz_{1}+i\lambda(y_{1}+y_{2}-z_{1}- z_{2})}\\ &\times\mathrm{ch}^{-g}(x_{1}-y_{1})\,\mathrm{ch}^{-g}(x_{2}-y_{ 1})\,\mathrm{ch}^{-g}(x_{1}-y_{2})\,\mathrm{ch}^{-g}(x_{2}-y_{2})\\ &\times\mathrm{ch}^{-g}(y_{1}-z_{1})\,\mathrm{ch}^{-g}(y_{2}-z_{ 1})\,\mathrm{ch}^{-g}(y_{1}-z_{2})\,\mathrm{ch}^{-g}(y_{2}-z_{2})\\ &\times\mathrm{sh}^{2g}\,|y_{1}-y_{2}|\,\,\mathrm{sh}^{2g}\,|z_{ 1}-z_{2}|.\end{split} \tag{3.45}\] The integrand contains exponent \(e^{i(\rho-\lambda)z_{2}}\) and therefore doesn't have pointwise limit as \(z_{2}\to\infty\). However, this exponent can be canceled by the shift of variable \(y_{2}\to y_{2}+z_{2}\). Besides, to use dominated convergence theorem we should have the integrand which can be bounded by integrable function independent of \(z_{2}\) (for big enough \(z_{2}\)). To end up with such bound we notice that the integrand before the shift (3.45), say \(F(y_{1},y_{2})\), is symmetric with respect to \(y_{1},y_{2}\), and therefore the integration domain can be reduced to \(y_{2}>y_{1}\) \[\int_{\mathbb{R}^{2}}dy_{1}dy_{2}\,F(y_{1},y_{2})=2\int_{y_{2}>y_{1}}dy_{1}dy_{2 }\,F(y_{1},y_{2}).\] Therefore, after the shift the integral can be rewritten as follows \[e^{gz_{1}+i\rho z_{2}} \,\mathcal{Q}(x_{1},x_{2},z_{1},z_{2};\rho,\lambda)=2\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! The first one is just the beta integral (2.21) \[\int_{\mathbb{R}}dy_{2}\,e^{i(\lambda-\rho+ig)y_{2}}\,\operatorname{ch}^{-g}(y_{2 })=\frac{\Gamma\big{(}\frac{g+i(\lambda-\rho+ig)}{2}\big{)}\,\Gamma\big{(}\frac{ g-i(\lambda-\rho+ig)}{2}\big{)}}{2^{1-g}\Gamma(g)}=q(\lambda,\rho-ig).\] The second integral coincides with the kernel of the operator \(\Lambda_{2}(\rho-ig)\,Q_{1}(\lambda)\). In the limit we have the equivalence of the kernels from the left and right-hand sides \(\mathcal{L}=\mathcal{R}\) and therefore the equivalence of operators \[Q_{2}(\lambda)\,\Lambda_{2}(\rho-ig)=2\,q(\lambda,\rho-ig)\,\Lambda_{2}(\rho- ig)\,Q_{1}(\lambda).\] Shifting \(\rho\to\rho+ig\) we arrive at the stated identity (3.40). ### Eigenfunctions The \(Q\Lambda\)-commutation relation allows to construct eigenfunction of the \(Q\)-operator using corresponding \(\Lambda\)-operator. Let us consider the example from the previous section. We have two commutation relations \[Q_{2}(\lambda)\,\Lambda_{2}(\lambda_{2}) =2\,q(\lambda,\lambda_{2})\,\Lambda_{2}(\lambda_{2})\,Q_{1}( \lambda), \tag{3.46}\] \[Q_{1}(\lambda)\,\Lambda_{1}(\lambda_{1}) =q(\lambda,\lambda_{1})\,\Lambda_{1}(\lambda_{1})\] where the second one is the relation (2.31) from \(n=1\) example. Note that in the section \(n=1\) for simplicity we omit index \(n=1\) in notations of all operators but now we have to restore it. Let us consider the following function \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}) =\Lambda_{2}(\lambda_{2})\,\Lambda_{1}(\lambda_{1}) \tag{3.47}\] \[=\int\limits_{-\infty}^{+\infty}dt\,e^{i\lambda_{2}(x_{1}+x_{2}-t )}\,K(x_{1}-t)\,K(x_{2}-t)\,e^{i\lambda_{1}t}.\] It is easy to show using (3.46) that \(\Psi_{\lambda_{1}\lambda_{2}}(x_{1},x_{2})\) is an eigenfunction of the operator \(Q_{2}(\lambda)\) and calculate the corresponding eigenvalue. Indeed we have \[Q_{2}(\lambda)\,\Lambda_{2}(\lambda_{2})\,\Lambda_{1}(\lambda_{ 1}) =2\,q(\lambda,\lambda_{2})\,\Lambda_{2}(\lambda_{2})\,Q_{1}(\lambda )\,\Lambda_{1}(\lambda_{1})\] \[=2\,q(\lambda,\lambda_{2})\,q(\lambda,\lambda_{1})\,\Lambda_{2}( \lambda_{2})\,\Lambda_{1}(\lambda_{1}),\] or in more detailed notation \[[Q_{2}(\lambda)\,\Psi_{\lambda_{1},\lambda_{2}}](x_{1},x_{2})=2\,q(\lambda,\lambda _{2})\,q(\lambda,\lambda_{1})\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}). \tag{3.48}\] It is clear that it is possible to repeat everything almost literally and construct in the same way eigenfunctions of all \(Q\)-operators. 
In this way we obtain eigenfunctions of the operator \(\hat{Q}_{2}(x)\) \[\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}) =\hat{\Lambda}_{2}(x_{2})\,\hat{\Lambda}_{1}(x_{1})\] \[=\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix_{2}( \lambda_{1}+\lambda_{2}-\gamma)}\,\hat{K}(\lambda_{1}-\gamma)\,\hat{K}( \lambda_{2}-\gamma)\,e^{ix_{1}\gamma},\] \[[\hat{Q}_{2}(x)\,\Phi_{x_{1},x_{2}}](\lambda_{1},\lambda_{2})=2 \,\hat{q}(x,x_{2})\,\hat{q}(x,x_{1})\,\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_ {2}) \tag{3.49}\] and eigenfunctions of the operator \(\hat{Q}_{2}(x|\boldsymbol{\omega})\) \[\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega}) =\hat{\Lambda}_{2}(x_{2}|\boldsymbol{\omega})\,\hat{\Lambda}_{1}(x _{1}|\boldsymbol{\omega})\] \[=\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2\pi i}{\omega _{1}\omega_{2}}x_{2}(\lambda_{1}+\lambda_{2}-\gamma)}\,K_{g}(\lambda_{1}- \gamma)\,K_{g}(\lambda_{2}-\gamma)\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}x_{1 }\gamma},\] \[[\hat{Q}_{2}(x|\boldsymbol{\omega})\,\Phi_{x_{1},x_{2}}](\lambda_ {1},\lambda_{2}|g,\boldsymbol{\omega})=2\,\hat{q}(x,x_{2}|\boldsymbol{\omega} )\,\hat{q}(x,x_{1}|\boldsymbol{\omega})\,\Phi_{x_{1},x_{2}}(\lambda_{1}, \lambda_{2}|g,\boldsymbol{\omega}). \tag{3.50}\] Using the Fourier transformation it is possible to establish connections between eigenfunctions of dual operators \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Phi_{x_{1},x_{2}}( \lambda_{1},\lambda_{2}), \tag{3.51}\] \[\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega} )=S^{2}(g^{*})\,\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}|g^{*},\boldsymbol{ \omega}). \tag{3.52}\] Indeed the relation (2.20) between \(K(z)\) and \(\hat{K}(\lambda)\) and \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda}{2\pi}\,e^{-i\lambda z}\,\hat{K }(\lambda)=K(z)\] allows to transform the integral representation (3.47) to the integral representation (3.49) \[\int\limits_{-\infty}^{+\infty}dt\,e^{i\lambda_{2}(x_{1}+x_{2}-t)}\, K(x_{1}-t)\,K(x_{2}-t)\,e^{i\lambda_{1}t}\\ =\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,e^{ix_{1}( \lambda_{1}+\lambda_{2}-\gamma)}\,\hat{K}(\lambda_{1}-\gamma)\,\hat{K}( \lambda_{2}-\gamma)\,e^{ix_{2}\gamma}. \tag{3.53}\] In the same way relation (2.23) between \(K_{g}(z)\) and \(K_{g^{*}}(\lambda)\) \[\int\limits_{-\infty}^{+\infty}dz\,e^{\frac{2\pi ixz}{\omega_{1}\omega_{2}}}\, K_{g}(z)=\sqrt{\omega_{1}\omega_{2}}\,S(g^{*})\,K_{g^{*}}(x)\] allows to prove similar relation \[\int\limits_{-\infty}^{+\infty}dt\,e^{\frac{2\pi i}{\omega_{1} \omega_{2}}\lambda_{2}(x_{1}+x_{2}-t)}\,K_{g}(x_{1}-t)\,K_{g}(x_{2}-t)\,e^{ \frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{1}t}\\ =S^{2}(g^{*})\int\limits_{-\infty}^{+\infty}d\gamma\,e^{\frac{2 \pi i}{\omega_{1}\omega_{2}}x_{1}(\lambda_{1}+\lambda_{2}-\gamma)}\,K_{g^{*}}( \lambda_{1}-\gamma)\,K_{g^{*}}(\lambda_{2}-\gamma)\,e^{\frac{2\pi i}{\omega_{ 1}\omega_{2}}x_{2}\gamma}. \tag{3.54}\] Consider the first relation (3.53). 
In the left hand side of (3.53) substitute \(K(x_{k}-t)\) with their Fourier representations, then \(t\)-integral gives \(\delta\)-function, which can be simply integrated; after that change the remaining integration variable \(\gamma_{1}=\gamma-\lambda_{1}\) and use \(\hat{K}(\lambda)=\hat{K}(-\lambda)\): \[\int\limits_{-\infty}^{+\infty}dt\,e^{i\lambda_{2}(x_{1}+x_{2}-t)+i \lambda_{1}t}\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,e^{-i \gamma_{1}(x_{1}-t)}\,\hat{K}(\gamma_{1})\int\limits_{-\infty}^{+\infty}\frac {d\gamma_{2}}{2\pi}\,e^{-i\gamma_{2}(x_{2}-t)}\,\hat{K}(\gamma_{2})\] \[=\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,\frac{ d\gamma_{2}}{2\pi}\,\hat{K}(\gamma_{1})\,\hat{K}(\gamma_{2})\,e^{ix_{1}(\lambda_{2}- \gamma_{1})+ix_{2}(\lambda_{2}-\gamma_{2})}\,2\pi\delta(\lambda_{1}-\lambda_{ 2}+\gamma_{1}+\gamma_{2})\] \[=\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,\hat{K }(\gamma_{1})\,\hat{K}(\lambda_{2}-\lambda_{1}-\gamma_{1})\,e^{ix_{1}(\lambda_ {2}-\gamma_{1})+ix_{2}(\lambda_{1}+\gamma_{1})}\] \[=\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,\hat{K}( \lambda_{1}-\gamma)\,\hat{K}(\lambda_{2}-\gamma)\,e^{ix_{1}(\lambda_{1}+ \lambda_{2}-\gamma)+ix_{2}\gamma}.\] The relation (3.54) can be proved in a similar way. Let us summarize our results: * The relation \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Phi_{x_{1},x_{2}}(\lambda_{1}, \lambda_{2})\) states the equivalence of the Hallnas-Ruijsenaars and Mellin-Barnes representations for the eigenfunctions \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}) =\int\limits_{-\infty}^{+\infty}dt\,\frac{e^{i\lambda_{1}(x_{1}+ x_{2}-t)}\,e^{i\lambda_{2}t}}{\operatorname{ch}^{g}(x_{1}-t)\operatorname{ch}^{g}(x_{2 }-t)}\] \[=\int\limits_{-\infty}^{+\infty}\frac{d\gamma}{2\pi}\,\frac{ \Gamma\left(\frac{g+i(\lambda_{1}-\gamma)}{2}\right)\Gamma\left(\frac{g+i( \gamma-\lambda_{1})}{2}\right)}{2^{1-g}\Gamma(g)}\] \[\quad\times\frac{\Gamma\left(\frac{g+i(\lambda_{2}-\gamma)}{2} \right)\Gamma\left(\frac{g+i(\gamma-\lambda_{2})}{2}\right)}{2^{1-g}\Gamma(g )}\,e^{ix_{1}(\lambda_{1}+\lambda_{2}-\gamma)}\,e^{ix_{2}\gamma}.\] * The Hallnas-Ruijsenaars representation is evidently has symmetry \(x_{1}\rightleftarrows x_{2}\), and the Mellin-Barnes representation is invariant under \(\lambda_{1}\rightleftarrows\lambda_{2}\) \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}) =\Psi_{\lambda_{1},\lambda_{2}}(x_{2},x_{1})=\Psi_{\lambda_{2}, \lambda_{1}}(x_{1},x_{2}).\] #### 3.3.1 Mellin transform In this subsection we demonstrate another proof of the relation \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Phi_{x_{1},x_{2}}(\lambda_{1}, \lambda_{2})\] using Mellin transform. 
First, we separate variables in both functions shifting integration variables \(y\to y+(x_{1}+x_{2})/2\) and \(\gamma\to\gamma+(\lambda_{1}+\lambda_{2})/2\) in the corresponding representations \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=e^{i\frac{\lambda_{1}+\lambda_{2 }}{2}(x_{1}+x_{2})}\int\limits_{-\infty}^{\infty}\!dy\,K\Big{(}\frac{x_{1}-x_{ 2}}{2}-y\Big{)}\] \[\times K\Big{(}\frac{x_{2}-x_{1}}{2}-y\Big{)}\,e^{i(\lambda_{1}-\lambda_{2}) y}, \tag{3.55}\] \[\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})=e^{i\frac{\lambda_{1}+\lambda_{2 }}{2}(x_{1}+x_{2})}\int\limits_{-\infty}^{\infty}\!\frac{d\gamma}{2\pi}\, \hat{K}\Big{(}\frac{\lambda_{1}-\lambda_{2}}{2}-\gamma\Big{)}\] \[\times\hat{K}\Big{(}\frac{\lambda_{2}-\lambda_{1}}{2}-\gamma\Big{)}\,e^{i \gamma(x_{1}-x_{2})}.\] The exponents behind the integrals coincide, so it is left to prove the equivalence of integrals. Let \(x=x_{1}-x_{2}\) and \(\lambda=(\lambda_{1}-\lambda_{2})/2\). The second integral in explicit form \[\phi_{x}(\lambda)=\frac{2^{2g-2}}{\Gamma^{2}(g)}\int\limits_{- \infty}^{\infty}\!\frac{d\gamma}{2\pi}\,\Gamma\Big{(}\frac{i\lambda-i\gamma+g }{2}\Big{)}\,\Gamma\Big{(}\frac{-i\lambda+i\gamma+g}{2}\Big{)} \tag{3.56}\] \[\times\Gamma\Big{(}\frac{-i\lambda-i\gamma+g}{2}\Big{)}\,\Gamma \Big{(}\frac{i\lambda+i\gamma+g}{2}\Big{)}\,e^{i\gamma x}.\] Next denote by \(M[f](s)\) and \(M^{\prime}[F](z)\) direct and inverse Mellin transforms respectively \[M[f](s)=\int\limits_{0}^{\infty}\frac{dz}{z}\,z^{s}\,f(z),\qquad M^{\prime}[F ](z)=\frac{1}{2\pi i}\int\limits_{c-i\infty}^{c+i\infty}ds\,z^{-s}\,F(s).\] We also write these relations in the form \[f(z):\stackrel{{ M}}{{=}}M[f](s),\qquad F(s):\stackrel{{ M^{\prime}}}{{=}}M^{\prime}[F](z).\] Then the integral (3.56) after change of variable \(s=i\gamma/2\) can be written as the following inverse Mellin transform \[\phi_{x}(\lambda)=\frac{2^{2g-1}}{\Gamma^{2}(g)}\,M^{\prime}\biggl{[} \Gamma\Bigl{(}\frac{i\lambda+g}{2}-s\Bigr{)}\,\Gamma\Bigl{(}\frac{-i\lambda+g}{ 2}+s\Bigr{)}\\ \times\Gamma\Bigl{(}\frac{-i\lambda+g}{2}-s\Bigr{)}\,\Gamma \Bigl{(}\frac{i\lambda+g}{2}+s\Bigr{)}\biggr{]}(e^{-2x}). \tag{3.57}\] The integration contour lies in the strip \(|\mathrm{Re}\,s|<g/2\). Beta integral, written in a form \[\int\limits_{0}^{\infty}\frac{dz}{z}\,\frac{z^{a}}{(1+z)^{a+b}}=\frac{\Gamma( a)\Gamma(b)}{\Gamma(a+b)}\] says that the Mellin transform of the function \(\frac{z^{a}}{(1+z)^{a+b}}\) is \[\frac{z^{a}}{(1+z)^{a+b}}:\stackrel{{ M}}{{=}}\frac{1}{\Gamma(a+b)} \Gamma(a-s)\Gamma(b+s),\] so that the inverse formula reads as \[\frac{1}{\Gamma(a+b)}\Gamma(a-s)\Gamma(b+s):\stackrel{{ M^{ \prime}}}{{=}}\frac{z^{a}}{(1+z)^{a+b}}. \tag{3.58}\] The integration contour lies in the strip \(-\mathrm{Re}\,b<\mathrm{Re}\,s<\mathrm{Re}\,a\). 
Now we use the property \[f*g(z):\stackrel{{ M}}{{=}}F[s]\cdot G[s]\] where \[f*g(z)=\int\limits_{0}^{\infty}\frac{dt}{t}\,f(t)g(z/t).\] Due to (3.57) and (3.58) it says that \[\phi_{x}(\lambda)=2^{2g-1}\,\frac{z^{\frac{i\lambda+g}{2}}}{(1+z)^{g}}*\frac{ z^{\frac{-i\lambda+g}{2}}}{(1+z)^{g}}\,(e^{-2x}).\] Let us write the last formula in explicit form \[\phi_{x}(\lambda)=2^{2g-1}\int\limits_{0}^{\infty}\frac{dt}{t}\,\frac{t^{\frac{ \lambda+q}{2}}}{(1+t)^{g}}\,\frac{(e^{-2x}/t)^{\frac{-i\lambda+g}{2}}}{(1+e^{-2 x}/t)^{g}}.\] After the change of integration variable \(t=e^{2y-x}\) this integral takes the form \[\phi_{x}(\lambda)=\int\limits_{-\infty}^{\infty}dy\,\frac{e^{2i\lambda y}}{ \operatorname{ch}^{g}(x/2-y)\,\operatorname{ch}^{g}(x/2+y)}\] which coincides with the integral in (3.55) after identifying \(x=x_{1}-x_{2}\), \(\lambda=(\lambda_{1}-\lambda_{2})/2\). #### 3.3.2 Equivalence through dual \(Q\)-operators There is yet another way to establish the equivalence of two integral representations \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Lambda_{2}(\lambda_{2})\,e^{i \lambda_{1}x_{1}}=\hat{\Lambda}_{2}(x_{2})\,e^{i\lambda_{1}x_{1}}\] using both dual \(Q\)-operators. This particular way can be generalized to the case of \(n\) particles, this is done for relativistic case in [BDKK2]. First, note that the raising and Baxter operators are connected as \[\Lambda_{2}(\lambda_{2})=e^{i\lambda_{2}x_{2}}\,Q_{1}(\lambda_{2})\,K(x_{2}-x _{1}),\qquad\hat{\Lambda}_{2}(x_{2})=e^{i\lambda_{2}x_{2}}\,\hat{Q}_{1}(x_{2}) \,\hat{K}(\lambda_{2}-\lambda_{1}).\] Recall also how one-particle \(Q\)-operators act on plane waves \[\hat{Q}_{1}(x_{2})\,e^{i\lambda_{1}x_{1}}=K(x_{2}-x_{1})\,e^{i\lambda_{1}x_{1 }},\qquad Q_{1}(\lambda_{2})\,e^{i\lambda_{1}x_{1}}=\hat{K}(\lambda_{2}- \lambda_{1})\,e^{i\lambda_{1}x_{1}}.\] Hence, the first integral representation can be written as \[\Lambda_{2}(\lambda_{2})\,e^{i\lambda_{1}x_{1}} =e^{i\lambda_{2}x_{2}}\,Q_{1}(\lambda_{2})\,K(x_{2}-x_{1})\,e^{i \lambda_{1}x_{1}}\] \[=e^{i\lambda_{2}x_{2}}\,Q_{1}(\lambda_{2})\,\hat{Q}_{1}(x_{2})\,e ^{i\lambda_{1}x_{1}}.\] Since dual \(Q\)-operators act on different variables, they can be interchanged (the corresponding double integral is absolutely convergent). After that a similar chain of equations leads to the claim \[\Lambda_{2}(\lambda_{2})\,e^{i\lambda_{1}x_{1}} =e^{i\lambda_{2}x_{2}}\,\hat{Q}_{1}(x_{2})\,Q_{1}(\lambda_{2})\,e ^{i\lambda_{1}x_{1}}\] \[=e^{i\lambda_{2}x_{2}}\,\hat{Q}_{1}(x_{2})\,\hat{K}(\lambda_{2}- \lambda_{1})\,e^{i\lambda_{1}x_{1}}=\hat{\Lambda}_{2}(x_{2})\,e^{i\lambda_{1} x_{1}}.\] ### Scalar product As it was outlined in introduction, if we trasform \(Q\)-operator's eigenfunction \(\Psi_{\lambda_{1},\lambda_{2}}\) (3.47) \[\Psi^{S}_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\operatorname{sh}^{g}\left|x_{1 }-x_{2}\right|\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\] it becomes an eigenfunction of the Sutherland Hamiltonian \(H_{S}\) \[H_{S}\,\Psi^{S}_{\lambda_{1},\lambda_{2}}=(\lambda_{1}^{2}+\lambda_{2}^{2})\, \Psi^{S}_{\lambda_{1},\lambda_{2}},\qquad H_{S}=-\partial_{x_{1}}^{2}-\partial _{x_{2}}^{2}+\frac{2g(g-1)}{\operatorname{sh}^{2}(x_{1}-x_{2})},\] which is symmetric with respect to the measure \(dx_{1}dx_{2}\). 
Therefore, the scalar product between \(Q\)-operator's eigenfunctions contains a nontrivial measure \(\operatorname{sh}^{2g}\left|x_{1}-x_{2}\right|dx_{1}dx_{2}\) \[\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,\overline{\Psi^{S}_ {\lambda_{1},\lambda_{2}}(x_{1},x_{2})}\,\Psi^{S}_{\rho_{1},\rho_{2}}(x_{1},x _{2})\\ =\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\operatorname{sh}^{2 g}\!\left|x_{1}-x_{2}\right|\overline{\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x _{2})}\,\Psi_{\rho_{1},\rho_{2}}(x_{1},x_{2}). \tag{3.59}\] The goal of this section is to calculate this scalar product by two methods: the standard method from textbooks and using the \(Q\)-operator. Furthermore, using dual operators \(\hat{Q}(x)\) and \(\hat{Q}(x|\boldsymbol{\omega})\) we calculate the corresponding scalar products between their eigenfunctions \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})\) and \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega})\) from the previous section. #### 3.4.1 Standard quantum mechanical calculation In this subsection we calculate the scalar product between the transformed functions \(\Psi^{S}_{\lambda_{1},\lambda_{2}}\) following the way written in textbooks (for example see [FY], SS36). It says that once we have two functions \(\psi_{1}(x)\) and \(\psi_{2}(x)\), which are the eigenfunctions of the Hamiltonian on the line with real potential \[H=-\frac{d^{2}}{dx^{2}}+V(x),\qquad H\psi_{1}=k_{1}^{2}\psi_{1},\qquad H\psi_ {2}=k_{2}^{2}\psi_{2},\] then multpying the first equation in the system \[-\bar{\psi}^{\prime\prime}_{1}+V\bar{\psi}_{1}=k_{1}^{2}\bar{\psi}_{1},\] \[-\psi^{\prime\prime}_{2}+V\psi_{2}=k_{1}^{2}\psi_{2},\] by \(\psi_{2}\), the second by \(\bar{\psi}_{1}\) and subtracting them we get the equality \[(k_{1}^{2}-k_{2}^{2})\bar{\psi}_{1}\psi_{2}=\bar{\psi}_{1}\psi_{2}^{\prime \prime}-\bar{\psi}_{1}^{\prime\prime}\psi_{2}=(\bar{\psi}_{1}\psi_{2}^{\prime}- \bar{\psi}_{1}^{\prime}\psi_{2})^{\prime}.\] Integrating it we arrive at the scalar product of eigenfunctions \[\int\limits_{-\infty}^{\infty}dx\,\bar{\psi}_{1}\psi_{2}=\lim\limits_{x\to \infty}\frac{W(\bar{\psi}_{1},\psi_{2})\big{|}_{-x}^{x}}{k_{1}^{2}-k_{2}^{2}} \tag{3.60}\] where we introduced the Wronskian \[W(\bar{\psi}_{1},\psi_{2})=\bar{\psi}_{1}\psi_{2}^{\prime}-\bar{\psi}_{1}^{ \prime}\psi_{2}.\] Calculation of the scalar product therefore reduces to the calculation of Wronskian asymptotics. Returning to our case, first, we use Mellin-Barnes representation of the eigenfunction and factor out the "center of mass" part changing the integration variable \(\gamma\to\gamma+(\lambda_{1}+\lambda_{2})/2\) \[\Psi_{\lambda_{1},\lambda_{2}}^{S}(x_{1},x_{2})=\mbox{sh}^{g}\,|x_{1}-x_{2}| \int\limits_{-\infty}^{\infty}\frac{d\gamma}{2\pi}\,\hat{K}(\lambda_{1}- \gamma)\,\hat{K}(\lambda_{2}-\gamma)\,e^{i(\lambda_{1}+\lambda_{2}-\gamma)x_{ 2}+i\gamma x_{1}}\] \[=e^{i\frac{\lambda_{1}+\lambda_{2}}{2}(x_{1}+x_{2})}\,\,\mbox{sh}^{g}\,|x_{1} -x_{2}|\int\limits_{-\infty}^{\infty}\frac{d\gamma}{2\pi}\,\hat{K}\Big{(} \frac{\lambda_{1}-\lambda_{2}}{2}-\gamma\Big{)}\] \[\times\hat{K}\Big{(}\frac{\lambda_{2}-\lambda_{1}}{2}-\gamma\Big{)}\,e^{i \gamma(x_{1}-x_{2})}.\] Introducing \[\psi_{\lambda}(x)=\mbox{sh}^{g}\,|x|\int\limits_{-\infty}^{\infty}\frac{d \gamma}{2\pi}\,\hat{K}(\lambda-\gamma)\,\hat{K}(-\lambda-\gamma)\,e^{i\gamma x}\] we separate the variables in the eigenfunction \[\Psi_{\lambda_{1},\lambda_{2}}^{S}(x_{1},x_{2})=e^{i\frac{\lambda_{1}+\lambda _{2}}{2}(x_{1}+x_{2})}\,\psi_{\frac{\lambda_{1}-\lambda_{2}}{2}}(x_{1}-x_{2}). 
\tag{3.61}\] The scalar product between plane waves is known. Hence, the scalar product between \(\Psi^{S}_{\lambda_{1},\lambda_{2}}\) reduces to the product between functions \(\psi_{\lambda}\), which solve the equation \[\biggl{(}-\partial_{x}^{2}+\frac{g(g-1)}{\operatorname{sh}^{2}x}\biggr{)}\psi_ {\lambda}(x)=\lambda^{2}\psi_{\lambda}(x).\] By (3.60) the latter product can be evaluated through the Wronskian asymptotics. Note that \(\psi_{\lambda}(x)\) is an even function of \(x\) so that we only need its asymptotics as \(x\to\infty\) \[\int\limits_{-\infty}^{\infty}dx\,\overline{\psi_{\lambda}(x)}\,\psi_{\rho}(x )=2\lim_{x\to\infty}\frac{W(\psi_{\lambda}(x),\psi_{\rho}(x))}{\lambda^{2}- \rho^{2}}.\] In the last expression we also used the fact that the eigenfunction is real \(\overline{\psi_{\lambda}}=\psi_{\lambda}\). The asymptotic of \(\psi_{\lambda}(x)\) as \(x\to\infty\) can be deduced from the asymptotic of two-particle eigenfunction (1.5) calculated in introduction. It is given by \[\psi_{\lambda}(x)=\frac{2^{g-1}}{\Gamma(g)}\,\Bigl{[}\,\Gamma(i\lambda)\, \Gamma(g-i\lambda)\,e^{i\lambda x}+\Gamma(-i\lambda)\,\Gamma(g+i\lambda)\,e^{ -i\lambda x}\,\Bigr{]}(1+O(e^{-2x})).\] Consequently, its derivative has asymptotic \[\psi^{\prime}_{\lambda}(x)=\frac{2^{g-1}}{\Gamma(g)}\,i\lambda\Bigl{[}\, \Gamma(i\lambda)\,\Gamma(g-i\lambda)\,e^{i\lambda x}-\Gamma(-i\lambda)\, \Gamma(g+i\lambda)\,e^{-i\lambda x}\,\Bigr{]}(1+O(e^{-2x})).\] Combining them we calculate the asymptotic of the Wronskian \[2\frac{W(\psi_{\lambda}(x),\psi_{\rho}(x))}{\lambda^{2}-\rho^{ 2}} =\frac{2^{2g-1}}{i\Gamma^{2}(g)}\,\Gamma(i\lambda)\,\Gamma(-i\lambda)\, \Gamma(g+i\lambda)\,\Gamma(g-i\lambda)\] \[\times\left(\frac{e^{i(\lambda-\rho)x}-e^{i(\rho-\lambda)x}}{ \lambda-\rho}+\frac{e^{i(\lambda+\rho)x}-e^{-i(\lambda+\rho)x}}{\lambda+\rho} +o(x)\right)\] where by \(o(x)\) we mean terms which tend to zero in a sense of distributions of \(\lambda,\rho\) as \(x\to\infty\). Using the well-known identity \[\lim_{x\to\infty}\frac{\sin kx}{k}=\pi\delta(k)\] we arrive at the expression for the scalar product \[\int\limits_{-\infty}^{\infty}dx\,\overline{\psi_{\lambda}(x)}\,\psi_{\rho}(x)= \frac{2^{2g}\pi}{\Gamma^{2}(g)}\,\Gamma(i\lambda)\,\Gamma(-i\lambda)\,\Gamma(g+i \lambda)\,\Gamma(g-i\lambda)\,\big{(}\delta(\lambda-\rho)+\delta(\lambda+\rho) \big{)}.\] Finally, using it together with the factorization (3.61) we evaluate the scalar product between the original eigenfunctions \[\int\limits_{-\infty}^{\infty}dx_{1}dx_{2}\,\overline{\Psi^{S}_{ \lambda_{1},\lambda_{2}}(x_{1},x_{2})}\Psi^{S}_{\rho_{1},\rho_{2}}(x_{1},x_{2}) \\ =C\,\big{(}\delta(\lambda_{1}-\rho_{1})\delta(\lambda_{2}-\rho_{2} )+\delta(\lambda_{1}-\rho_{2})\delta(\lambda_{2}-\rho_{1})\big{)} \tag{3.62}\] where the coefficient is given by \[C=\frac{2^{2g+1}\pi^{2}}{\Gamma^{2}(g)}\,\Gamma\Big{(}\frac{i\lambda_{1}-i \lambda_{2}}{2}\Big{)}\,\Gamma\Big{(}g+\frac{i\lambda_{1}-i\lambda_{2}}{2} \Big{)}\,\Gamma\Big{(}\frac{i\lambda_{2}-i\lambda_{1}}{2}\Big{)}\,\Gamma \Big{(}g+\frac{i\lambda_{2}-i\lambda_{1}}{2}\Big{)}.\] #### 3.4.2 Eigenfunctions of the operator \(Q(\lambda)\) In this subsection we calculate the same scalar product (3.59) using the fact that \(\Psi_{\lambda_{1},\lambda_{2}}\) is an eigenfunction of the operator \(Q_{2}(\lambda)\). This calculation is universal in two ways. 
Firstly, translation to the operators \(\hat{Q}_{2}(x)\), \(\hat{Q}_{2}(x|\boldsymbol{\omega})\) and scalar products between their eigenfunctions reduces to different expressions for the main building blocks -- the kernel and measure functions \(K\) and \(\mu\). Secondly, this calculation can be generalized to the general case of \(n\) particles, as we will show in our future work. Expressions for the eigenfunction and conjugated eigenfunction have the form \[\Psi_{\rho_{1},\rho_{2}}(x_{1},x_{2})=\int\limits_{-\infty}^{+ \infty}dt\,e^{i\rho_{1}(x_{1}+x_{2}-t)}\,e^{i\rho_{2}t}\,K(x_{1}-t)\,K(x_{2}-t),\] \[\overline{\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})}=\int \limits_{-\infty}^{+\infty}dt\,e^{-i\lambda_{1}(x_{1}+x_{2}-t)}\,e^{-i\lambda _{2}t}\,K(x_{1}-t)\,K(x_{2}-t).\] and integral over \(t\) absolutely converges for \(g>0\). Denote \(x_{12}=x_{1}-x_{2}\). The scalar product is \[\langle\Psi_{\lambda_{1},\lambda_{2}}|\Psi_{\rho_{1},\rho_{2}}\rangle =\int\limits_{-\infty}^{+\infty}\!dx_{1}dx_{2}\,\mathrm{sh}^{2g}|x_{12}| \,\overline{\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})}\,\Psi_{\rho_{1},\rho _{2}}(x_{1},x_{2})\\ =\int\limits_{-\infty}^{+\infty}\!dx_{1}dx_{2}\,\mathrm{sh}^{2g}| x_{12}|\,\int\limits_{-\infty}^{+\infty}\!dt_{1}\,\int\limits_{-\infty}^{+ \infty}\!dt_{2}\,e^{-i\lambda_{1}(x_{1}+x_{2}-t_{1})}\\ \times e^{-i\lambda_{2}t_{1}}\,e^{i\rho_{1}(x_{1}+x_{2}-t_{2})}\, e^{i\rho_{2}t_{2}}\prod\limits_{i,j=1}^{2}K(x_{i}-t_{j}).\] Note that the ordering of integrals is fixed from the very beginning: at the first step we integrate over \(t_{1}\) and \(t_{2}\) (absolutely convergent integrals) and then integrate over \(x_{1},x_{2}\). The \(x_{1},x_{2}\)-integral separately \[\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,\frac{e^{i(\rho_{1}-\lambda_{1}) (x_{1}+x_{2})}\,\mathrm{sh}^{2g}\,|x_{1}-x_{2}|}{\mathrm{ch}^{g}(x_{1}-t_{1}) \,\mathrm{ch}^{g}(x_{2}-t_{1})\,\mathrm{ch}^{g}(x_{1}-t_{2})\,\mathrm{ch}^{g}( x_{2}-t_{2})}\] doesn't converge. Let us use regularize it adding external point \(t_{0}\) and small \(\varepsilon>0\) in exponent \[\int\limits_{-\infty}^{+\infty}\!dx_{1}dx_{2}\,e^{i(\rho_{1}- \lambda_{1})(x_{1}+x_{2})}\,\mathrm{sh}^{2g}|x_{12}|\,\prod\limits_{i,j=1}^{2} K(x_{i}-t_{j})\\ =\frac{1}{2^{2g}}\lim\limits_{t_{0}\to+\infty}\lim\limits_{ \varepsilon\to 0}\int\limits_{-\infty}^{+\infty}\!dx_{1}dx_{2}\,\mathrm{sh}^{2g}| x_{12}|\,e^{-g(x_{1}-t_{0})-g(x_{2}-t_{0})}\\ \times e^{\varepsilon(x_{1}+x_{2})}\,\,e^{i(\rho_{1}-\lambda_{1} )(x_{1}+x_{2})}\,\prod\limits_{i=1}^{2}\prod\limits_{j=0}^{2}K(x_{i}-t_{j})\,.\] To prove that in the limit \(t_{0}\to+\infty\) we reproduce the initial integrand use the following asymptotic \[\frac{1}{2^{2g}}\,K(x_{1}-t_{0})\,K(x_{2}-t_{0})\to e^{g(x_{1}-t_{0})+g(x_{2}- t_{0})},\qquad t_{0}\to+\infty.\] The regularized \(x_{1},x_{2}\)-integral is already convergent. Furthermore, the chosen regularization completes the \(\Lambda_{2}\)-operator to the corresponding \(Q_{2}\)-operator and this allows to calculate everything in a closed form. 
To do this we convert \(t_{2}\)-integral back to the eigenfunction \(\Psi_{\rho_{1},\rho_{2}}(x_{1},x_{2})\) and rearrange exponents \[\langle\Psi_{\lambda_{1}\lambda_{2}}|\Psi_{\rho_{1}\rho_{2}}\rangle =\frac{1}{2^{2g}}\lim_{t_{0}\to+\infty}\lim_{\varepsilon\to 0}e^{-i( \lambda_{1}+ig+i\varepsilon)t_{0}}\int\limits_{-\infty}^{+\infty}dt_{1}\,e^{-i( \lambda_{2}-ig+i\varepsilon)t_{1}}\] \[\quad\times\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,e^{i( \lambda_{1}-ig+i\varepsilon)(t_{1}+t_{0}-x_{1}-x_{2})}\,\mathrm{sh}^{2g}|x_{1 2}|\] \[\qquad\qquad\qquad\qquad\times\prod_{i=1}^{2}K(x_{i}-t_{1})K(x_{ i}-t_{0})\Psi_{\rho_{1},\rho_{2}}(x_{1},x_{2}).\] Now in the second line it is easy to recognize the action of the \(Q\)-operator on the eigenfunction \[[Q_{2}(\lambda_{1}-ig+i\varepsilon)\Psi_{\rho_{1},\rho_{2}}](t_{ 1},t_{0})\\ =\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,e^{i(\lambda_{1}- ig+i\varepsilon)(t_{1}+t_{0}-x_{1}-x_{2})}\,\mathrm{sh}^{2g}|x_{12}|\\ \times\prod_{i=1}^{2}K(x_{i}-t_{1})K(x_{i}-t_{0})\Psi_{\rho_{1}, \rho_{2}}(x_{1},x_{2})\\ =2\,q(\lambda_{1}-ig+i\varepsilon,\rho_{1})\,q(\lambda_{1}-ig+i \varepsilon,\rho_{2})\,\Psi_{\rho_{1},\rho_{2}}(t_{1},t_{0}).\] So, we have managed to calculate \(x_{1},x_{2}\)-integrals in explicit form and it remains to calculate \(t_{1},t_{2}\)-integrals \[e^{-i(\lambda_{1}+ig+i\varepsilon)t_{0}}\int\limits_{-\infty}^{+ \infty}dt_{1}\,e^{-i(\lambda_{2}-ig+i\varepsilon)t_{1}}\,\Psi_{\rho_{1},\rho_{2 }}(t_{1},t_{0})\\ =e^{-i(\lambda_{1}+ig+i\varepsilon)t_{0}}\int\limits_{-\infty}^{ +\infty}dt_{1}\,e^{-i(\lambda_{2}-ig+i\varepsilon)t_{1}}\\ \times\int\limits_{-\infty}^{+\infty}dt_{2}\,e^{i\rho_{1}(t_{1}+ t_{0}-t_{2})}\,K(t_{1}-t_{2})K(t_{0}-t_{2})\,e^{i\rho_{2}t_{2}}.\] The \(t_{1}\)-integral reproduces the action of the \(Q\)-operator on the eigenfunction \(\Psi_{\rho_{1}}(t_{1})=e^{i\rho_{1}t_{1}}\) and can be calculated explicitly \[[Q_{1}(\lambda_{2}-ig+i\varepsilon)\Psi_{\rho_{1}}](t_{2}) =\int\limits_{-\infty}^{+\infty}dt_{1}\,e^{i(\lambda_{2}-ig+i \varepsilon)(t_{2}-t_{1})}\,K(t_{2}-t_{1})\,e^{i\rho_{1}t_{1}}\] \[=q(\lambda_{2}-ig+i\varepsilon,\rho_{1})\,e^{i\rho_{1}t_{2}}.\] The last step -- calculation of the \(t_{2}\)-integral. After simple rearrangements of the exponents we obtain \[e^{-i(\lambda_{1}+\lambda_{2}-\rho_{1}+2i\varepsilon)t_{0}}\int\limits_{- \infty}^{+\infty}dt_{2}\,e^{i(\lambda_{2}-ig+i\varepsilon)(t_{0}-t_{2})}\,K(t _{0}-t_{2})\,e^{i\rho_{2}t_{2}},\] so that the \(t_{2}\)-integral also gives the action of the \(Q\)-operator on eigenfunction \(\Psi_{\rho_{2}}(t_{2})=e^{i\rho_{2}t_{2}}\) \[[Q_{1}(\lambda_{2}-ig+i\varepsilon)\,\Psi_{\rho_{2}}]\,(t_{0}) =\int\limits_{-\infty}^{+\infty}dt_{2}\,e^{i(\lambda_{2}-ig+i \varepsilon)(t_{0}-t_{2})}\,K(t_{0}-t_{2})\,e^{i\rho_{2}t_{2}}\] \[=q(\lambda_{2}-ig+i\varepsilon,\rho_{2})\,e^{i\rho_{2}t_{0}}.\] Collecting everything together we obtain the following expression for the regularized scalar product \[\langle\Psi_{\lambda_{1},\lambda_{2}}|\Psi_{\rho_{1},\rho_{2}}\rangle=2^{1-2g} \lim\limits_{t_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0}\,e^{-i(\lambda_{1}+ \lambda_{2}-\rho_{1}-\rho_{2}+2i\varepsilon)t_{0}}\prod\limits_{k,j=1}^{2}q( \lambda_{k}-ig+i\varepsilon,\rho_{j}).\] It is left to show that the function on the right is a delta-sequence. 
Let us write it explicitly and factor out a singular part \[\langle\Psi_{\lambda_{1},\lambda_{2}}|\Psi_{\rho_{1},\rho_{2}}\rangle=2^{1-2g} \lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,e^{i(\rho_{1}+\rho_{2}- \lambda_{1}-\lambda_{2})t_{0}}\] \[\times\prod_{i,k=1,2}\frac{\Gamma\left(g+\frac{i(\lambda_{i}-\rho_{k})- \varepsilon}{2}\right)\,\Gamma\left(\frac{i(\rho_{k}-\lambda_{i})+\varepsilon }{2}\right)}{2^{1-g}\Gamma(g)}\] \[=\frac{2^{2g-3}}{\Gamma^{4}(g)}\prod_{i,k=1,2}\Gamma\left(g+\frac{i(\lambda_{i }-\rho_{k})}{2}\right)\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,e^{i(\rho_{1}+ \rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}\] \[\times\prod_{i,k=1,2}\Gamma\left(\frac{i(\rho_{k}-\lambda_{i})+\varepsilon}{ 2}\right)\] \[=\frac{2^{2g-3}}{\Gamma^{4}(g)}\,\prod_{i,k=1,2}\Gamma\left(g+\frac{i(\lambda _{i}-\rho_{k})}{2}\right)\Gamma\left(1+\frac{i(\rho_{k}-\lambda_{i})}{2}\right)\] \[\times\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{e^{i(\rho_{1}+ \rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}}{\prod_{i,k=1,2}\frac{i(\rho_{k}- \lambda_{i})+\varepsilon}{2}}.\] Next we transform the last expression to the more simple form \[\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{e^{i(\rho_{1}+\rho_{2}- \lambda_{1}-\lambda_{2})t_{0}}}{\prod_{i,k=1,2}\frac{i(\rho_{k}-\lambda_{i})+ \varepsilon}{2}}=\left(\frac{2}{i}\right)^{4}\lim_{t_{0}\to+\infty}\,\lim_{ \varepsilon\to 0}\,\frac{e^{i(\rho_{1}+\rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}}{ \prod_{i,k=1,2}(\rho_{k}-\lambda_{i}-i\varepsilon)}\] and use relation proven in Appendix C \[\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{e^{i(\rho_{1}+ \rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}}{\prod_{i,k=1,2}(\rho_{k}-\lambda_{i}- i\varepsilon)}\\ =\frac{(2\pi)^{2}}{\left(\lambda_{1}-\lambda_{2}\right)^{2}} \Bigl{[}\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})+\delta( \lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\Bigr{]}. \tag{3.63}\] Thus, for the scalar product we obtain \[\langle\Psi_{\lambda_{1},\lambda_{2}}|\Psi_{\rho_{1},\rho_{2}}\rangle=\frac{2^{2g-3 }}{\Gamma^{4}(g)}\Gamma^{2}(g)\,\Gamma\left(g\pm\frac{i\lambda_{12}}{2}\right) \Gamma\left(1\pm\frac{i\lambda_{12}}{2}\right)\frac{2^{4}(2\pi)^{2}}{\lambda_{1 2}^{2}}\] \[\times\Big{[}\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})+\delta (\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\Big{]}\] \[=\frac{2^{2g+1}\pi^{2}}{\Gamma^{2}(g)}\,\Gamma\left(g\pm\frac{i\lambda_{12}}{2 }\right)\Gamma\left(\pm\frac{i\lambda_{12}}{2}\right)\] \[\times\Big{[}\delta(\lambda_{1}-\rho_{1})\delta(\lambda_{2}-\rho_{2})+\delta (\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\Big{]}.\] This result coincides with the one obtained by the standard method (3.62). #### 3.4.3 Eigenfunctions of the operator \(\hat{Q}(x)\) Now we perform all calculations from previous subsection in the case of the dual \(\hat{Q}\)-operators. Expressions for the eigenfunction and conjugated eigenfunction have the form \[\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})=\int\limits_{-\infty}^{+\infty} \frac{d\gamma}{2\pi}\,e^{ix_{1}(\lambda_{1}+\lambda_{2}-\gamma)}\,e^{ix_{2} \gamma}\,\hat{K}(\lambda_{1}-\gamma)\,\hat{K}(\lambda_{2}-\gamma),\] \[\overline{\Phi_{y_{1},y_{2}}(\lambda_{1},\lambda_{2})}=\int\limits_{-\infty}^{+ \infty}\frac{d\gamma}{2\pi}\,e^{-iy_{1}(\lambda_{1}+\lambda_{2}-\gamma)}\,e^{ -iy_{2}\gamma}\,\hat{K}(\lambda_{1}-\gamma)\,\hat{K}(\lambda_{2}-\gamma).\] The integral over \(\gamma\) converges for \(g>0\). 
The scalar product is \[\langle\Phi_{y_{1},y_{2}}|\Phi_{x_{1},x_{2}}\rangle=\int\limits_{-\infty}^{+ \infty}\frac{d\lambda_{1}}{2\pi}\frac{d\lambda_{2}}{2\pi}\,\mu(\lambda_{1}, \lambda_{2})\,\overline{\Phi_{y_{1},y_{2}}(\lambda_{1},\lambda_{2})}\,\Phi_{x _{1},x_{2}}(\lambda_{1},\lambda_{2})\] \[=\int\limits_{-\infty}^{+\infty}\frac{d\lambda_{1}}{2\pi}\frac{d\lambda_{2}}{2 \pi}\,\mu(\lambda_{1},\lambda_{2})\,\int\limits_{-\infty}^{+\infty}\frac{d \gamma_{1}}{2\pi}\,\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{2}}{2\pi}\,e^ {-iy_{1}(\lambda_{1}+\lambda_{2}-\gamma_{1})}\,e^{-iy_{2}\gamma_{1}}\] \[\times e^{ix_{1}(\lambda_{1}+\lambda_{2}-\gamma_{2})}\,e^{ix_{2}\gamma_{2}}\, \prod\limits_{i,j=1}^{2}\hat{K}(\lambda_{i}-\gamma_{j}).\] The integration measure (3.36) \[\mu(\lambda_{1},\lambda_{2})=\frac{[2^{1-g}\Gamma(g)]^{2}}{\Gamma\left(g\pm\frac{ i(\lambda_{1}-\lambda_{2})}{2}\right)\Gamma\left(\pm\frac{i(\lambda_{1}-\lambda_{2}) }{2}\right)}.\] Again \(\gamma_{1},\gamma_{2}\)-integral is absolutely convergent, but \(\lambda_{1},\lambda_{2}\)-integral diverges \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda_{1}}{2\pi}\frac{d\lambda_{2}}{2 \pi}\,e^{i(x_{1}-y_{1})(\lambda_{1}+\lambda_{2})}\,\mu(\lambda_{1}\,,\lambda_{ 2})\,\prod\limits_{i,j=1}^{2}\hat{K}(\lambda_{i}-\gamma_{j}).\] We regularize it adding external point \(\gamma_{0}\) and small \(\varepsilon>0\) in exponent \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda_{1}}{2\pi}\frac{d \lambda_{2}}{2\pi}\,e^{i(x_{1}-y_{1})(\lambda_{1}+\lambda_{2})}\,\mu(\lambda_ {1},\lambda_{2})\,\prod\limits_{i,j=1}^{2}\hat{K}(\lambda_{i}-\gamma_{j})\] \[=\lim\limits_{\gamma_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0 }\,\left[\frac{2\pi}{\Gamma(g)}\,\gamma_{0}^{g-1}\right]^{-2}\int\limits_{- \infty}^{+\infty}\frac{d\lambda_{1}}{2\pi}\frac{d\lambda_{2}}{2\pi}\,\mu( \lambda_{1},\lambda_{2})\,e^{-\frac{\pi}{2}(\lambda_{1}-\gamma_{0})-\frac{\pi }{2}(\lambda_{2}-\gamma_{0})}\] \[\times e^{\varepsilon(\lambda_{1}+\lambda_{2})}\,e^{i(x_{1}-y_{1 })(\lambda_{1}+\lambda_{2})}\,\prod\limits_{i=1}^{2}\prod\limits_{j=0}^{2}\hat {K}(\lambda_{i}-\gamma_{j}).\] In the limit \(\gamma_{0}\to+\infty\) we reproduce the initial integrand, because \[\hat{K}(\lambda-\gamma_{0})\to\frac{2\pi}{\Gamma(g)}\,\gamma_{0}^{g-1}\,e^{ \frac{\pi}{2}(\lambda-\gamma_{0})},\qquad\gamma_{0}\to+\infty,\] so that \[\left[\frac{2\pi}{\Gamma(g)}\,\gamma_{0}^{g-1}\right]^{-2}\,\hat{K}(\lambda_{ 1}-\gamma_{0})\,\hat{K}(\lambda_{2}-\gamma_{0})\to e^{\frac{\pi}{2}(\lambda_{ 1}-\gamma_{0})+\frac{\pi}{2}(\lambda_{2}-\gamma_{0})},\qquad\gamma_{0}\to+\infty.\] Th regularized integral is absolutely convergent. 
Next we convert \(\gamma_{2}\)-integral back to the eigenfunction \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})\) and rearrange some factors \[\langle\Phi_{y_{1},y_{2}}|\Phi_{x_{1},x_{2}}\rangle=\lim\limits_{ \gamma_{0}\to+\infty}\,\lim\limits_{\varepsilon\to 0}\,\left[\frac{2\pi}{\Gamma(g)}\, \gamma_{0}^{g-1}\right]^{-2}\,e^{-i(y_{1}+\frac{i\pi}{2}+i\varepsilon)\gamma_{ 0}}\\ \times\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,e^ {-i(y_{2}-\frac{i\pi}{2}+i\varepsilon)\gamma_{1}}\,\int\limits_{-\infty}^{+ \infty}\frac{d\lambda_{1}}{2\pi}\frac{d\lambda_{2}}{2\pi}\,\mu(\lambda_{1}, \lambda_{2})\,e^{i(y_{1}-\frac{i\pi}{2}+i\varepsilon)(\gamma_{1}+\gamma_{0}- \lambda_{1}-\lambda_{2})}\\ \times\prod\limits_{i=1,2}\hat{K}(\lambda_{i}-\gamma_{1})\hat{K} (\lambda_{i}-\gamma_{0})\,\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})\] In the last line it is easy to recognize the action of the \(\hat{Q}\)-operator on eigenfunction \(\Phi_{x_{1}x_{2}}(\lambda_{1}\,,\lambda_{2})\) \[[\hat{Q}_{2}(y_{1}-\tfrac{i\pi}{2}+i\varepsilon)\Phi_{x_{1},x_{2}}](\gamma_{1}, \gamma_{0})\\ =\int\limits_{-\infty}^{+\infty}\frac{d\lambda_{1}}{2\pi}\frac{d \lambda_{2}}{2\pi}\,e^{i(y_{1}-\frac{i\pi}{2}+i\varepsilon)(\gamma_{1}+\gamma _{0}-\lambda_{1}-\lambda_{2})}\,\mu(\lambda_{1},\lambda_{2})\\ \times\prod_{i=1,2}\hat{K}(\lambda_{i}-\gamma_{1})\hat{K}( \lambda_{i}-\gamma_{0})\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})\\ =2\,\hat{q}(y_{1}-\tfrac{i\pi}{2}+i\varepsilon,x_{1})\,\hat{q}(y _{1}-\tfrac{i\pi}{2}+i\varepsilon,x_{2})\,\Phi_{x_{1},x_{2}}(\gamma_{1},\gamma _{0})\,.\] Thus, we have managed to calculate \(\lambda_{1},\lambda_{2}\)-integrals in explicit form and it remains to calculate \(\gamma_{1},\gamma_{2}\)-integrals \[e^{-i(y_{1}+\tfrac{i\pi}{2}+i\varepsilon)\gamma_{0}}\int\limits_ {-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,e^{-i(y_{2}-\tfrac{i\pi}{2}+i \varepsilon)\gamma_{1}}\,\Phi_{x_{1},x_{2}}(\gamma_{1},\gamma_{0})\\ =e^{-i(y_{1}+\tfrac{i\pi}{2}+i\varepsilon)\gamma_{0}}\int\limits_ {-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,e^{-i(y_{2}-\tfrac{i\pi}{2}+i \varepsilon)\gamma_{1}}\\ \times\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{2}}{2\pi}\,e ^{ix_{1}(\gamma_{1}+\gamma_{0}-\gamma_{2})}\hat{K}(\gamma_{1}-\gamma_{2})\hat{ K}(\gamma_{0}-\gamma_{2})\,e^{ix_{2}\gamma_{2}}.\] The \(\gamma_{1}\)-integral represents action of the \(\hat{Q}\)-operator on the eigenfunction \(\Phi_{x_{1}}(\gamma_{1})=e^{ix_{1}\gamma_{1}}\) \[[\hat{Q}_{1}(y_{2}-\tfrac{i\pi}{2}+i\varepsilon)\Phi_{x_{1}}]( \gamma_{2}) =\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{1}}{2\pi}\,e^{i(y_ {2}-\tfrac{i\pi}{2}+i\varepsilon)(\gamma_{2}-\gamma_{1})}\,\hat{K}(\gamma_{2} -\gamma_{1})\,e^{ix_{1}\gamma_{1}}\] \[=\hat{q}(y_{2}-\tfrac{i\pi}{2}+i\varepsilon,x_{1})\,e^{ix_{1} \gamma_{2}}\] The last step is to calculate the \(\gamma_{2}\)-integral. 
After simple rearrangements of the exponents we obtain \[e^{-i(y_{1}+y_{2}-x_{1}+2i\varepsilon)\gamma_{0}}\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{2}}{2\pi}\,e^{i(y_{2}-\tfrac{i\pi}{2}+i\varepsilon)(\gamma_{0}-\gamma_{2})}\,\hat{K}(\gamma_{0}-\gamma_{2})\,e^{ix_{2}\gamma_{2}}\] and it is again the action of the \(\hat{Q}\)-operator on the eigenfunction \(\Phi_{x_{2}}(\gamma_{2})=e^{ix_{2}\gamma_{2}}\) \[[\hat{Q}_{1}(y_{2}-\tfrac{i\pi}{2}+i\varepsilon)\Phi_{x_{2}}](\gamma_{0}) =\int\limits_{-\infty}^{+\infty}\frac{d\gamma_{2}}{2\pi}\,e^{i(y_{2}-\frac{i\pi}{2}+i\varepsilon)(\gamma_{0}-\gamma_{2})}\,\hat{K}(\gamma_{0}-\gamma_{2})\,e^{ix_{2}\gamma_{2}}\] \[=\hat{q}(y_{2}-\tfrac{i\pi}{2}+i\varepsilon,x_{2})\,e^{ix_{2}\gamma_{0}}.\] Collecting everything together we obtain the following expression for the scalar product \[\langle\Phi_{y_{1},y_{2}}|\Phi_{x_{1},x_{2}}\rangle =\lim_{\gamma_{0}\to+\infty}\,\lim_{\varepsilon\to 0}2\left[\frac{2\pi}{\Gamma(g)}\,\gamma_{0}^{g-1}\right]^{-2}\] \[\times e^{-i(y_{1}+y_{2}-x_{1}-x_{2}+2i\varepsilon)\gamma_{0}}\prod_{k,j=1}^{2}\hat{q}(y_{k}-\tfrac{i\pi}{2}+i\varepsilon,x_{j}).\] Let us write it in explicit form and factor out a singular part \[\langle\Phi_{y_{1},y_{2}}|\Phi_{x_{1},x_{2}}\rangle=2\,\frac{\Gamma^{2}(g)}{(2\pi)^{2}}\,e^{-2\pi ig}\,\lim_{\gamma_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{\gamma_{0}^{2(1-g)}\,e^{i(x_{1}+x_{2}-y_{1}-y_{2})\gamma_{0}}}{\prod_{k,j=1}^{2}\mathrm{sh}^{g}(x_{j}-y_{k}-i\varepsilon)}\] \[=2\,\frac{\Gamma^{2}(g)}{(2\pi)^{2}}\,e^{-2\pi ig}\lim_{\varepsilon\to 0}\,\prod_{k,j=1}^{2}\frac{(x_{j}-y_{k}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{j}-y_{k}-i\varepsilon)}\] \[\times\lim_{\gamma_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{\gamma_{0}^{2(1-g)}\,e^{i(x_{1}+x_{2}-y_{1}-y_{2})\gamma_{0}}}{\prod_{k,j=1}^{2}(x_{j}-y_{k}-i\varepsilon)^{g}}\] where we used \[\hat{q}(y-\tfrac{i\pi}{2}+i\varepsilon,x)=\frac{1}{\mathrm{ch}^{g}(y-x-\tfrac{i\pi}{2}+i\varepsilon)}=\frac{e^{-i\frac{\pi}{2}g}}{\mathrm{sh}^{g}(x-y-i\varepsilon)}\] and the fact that the function \(z^{-1}\,\mathrm{sh}(z)\) is regular at the point \(z=0\).
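The phase identity for \(\hat{q}\) used in this factorization can be confirmed directly with principal-branch complex powers. The following snippet is our own sanity check (not from the paper); the parameter values are arbitrary and we take \(x>y\) for definiteness.

```python
# Our check of  q^(y - iπ/2 + iε, x) = 1/ch^g(y - x - iπ/2 + iε) = e^{-iπg/2}/sh^g(x - y - iε)
# with numpy's principal-branch complex powers; arbitrary parameters, x > y for definiteness.
import numpy as np

g, x, y, eps = 0.7, 1.1, 0.3, 1e-6
lhs = 1.0 / np.cosh(y - x - 1j*np.pi/2 + 1j*eps)**g
rhs = np.exp(-1j*np.pi*g/2) / np.sinh(x - y - 1j*eps)**g
print(abs(lhs - rhs))   # agreement to machine precision
```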
Now we have to use the analog of the formulas (2.35) and (3.63) (we postpone the proof to the next paper) \[\lim_{\gamma_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{\gamma_{0}^{2(1-g)}\,e^{i(x_{1}+x_{2}-y_{1}-y_{2})\gamma_{0}}}{\prod_{i,k=1,2}(x_{i}-y_{k}-i\varepsilon)^{g}}\] \[=\frac{(2\pi)^{2}}{\Gamma^{2}(g)}\,e^{2\pi ig}\frac{1}{|x_{1}-x_{2}|^{2g}}\,\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}.\] Then we have \[\lim_{\varepsilon\to 0}\,\prod_{k,j=1}^{2}\frac{(x_{k}-y_{j}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{k}-y_{j}-i\varepsilon)}\,\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})\] \[=\lim_{\varepsilon\to 0}\,\frac{(x_{1}-y_{1}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{1}-y_{1}-i\varepsilon)}\,\frac{(x_{1}-y_{2}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{1}-y_{2}-i\varepsilon)}\,\frac{(x_{2}-y_{1}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{2}-y_{1}-i\varepsilon)}\] \[\times\frac{(x_{2}-y_{2}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{2}-y_{2}-i\varepsilon)}\,\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})\] \[=\lim_{\varepsilon\to 0}\,\frac{(-i\varepsilon)^{g}}{\mathrm{sh}^{g}(-i\varepsilon)}\,\frac{(x_{1}-x_{2}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{1}-x_{2}-i\varepsilon)}\,\frac{(x_{2}-x_{1}-i\varepsilon)^{g}}{\mathrm{sh}^{g}(x_{2}-x_{1}-i\varepsilon)}\] \[\times\frac{(-i\varepsilon)^{g}}{\mathrm{sh}^{g}(-i\varepsilon)}\,\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})=\left(\frac{x_{12}}{\mathrm{sh}\,x_{12}}\right)^{2g}\,\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2}).\] The second contribution with \(\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\) produces the same coefficient, so that collecting everything together we obtain \[\langle\Phi_{y_{1},y_{2}}|\Phi_{x_{1},x_{2}}\rangle =2\,\frac{\Gamma^{2}(g)}{(2\pi)^{2}}\,\left(\frac{x_{12}}{\mathrm{sh}\,x_{12}}\right)^{2g}\,\frac{(2\pi)^{2}}{\Gamma^{2}(g)}\,\frac{1}{|x_{12}|^{2g}}\] \[\times\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}\] \[=\frac{2}{\mathrm{sh}^{2g}|x_{12}|}\,\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}.\] #### 3.4.4 Eigenfunctions of the operators \(Q(\lambda|\boldsymbol{\omega})\) and \(\hat{Q}(x|\boldsymbol{\omega})\) In this subsection we repeat the same calculation in the relativistic case. Due to the relation (3.52) the transition to the dual \(Q\)-operator reduces to the change \(g\to g^{*}\) and a renaming of the variables \(x\rightleftarrows\lambda\). In this subsection we shall work with the eigenfunctions of the operator \(Q_{2}(\lambda|\boldsymbol{\omega})\) (3.38).
Expressions for the eigenfunction and the conjugated eigenfunction have the form \[\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2})=\int\limits_{-\infty}^{+\infty}dt\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{1}(x_{1}+x_{2}-t)}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t}\,K_{g^{*}}(x_{1}-t)\,K_{g^{*}}(x_{2}-t),\] \[\overline{\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})}=\int\limits_{-\infty}^{+\infty}dt\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{1}(x_{1}+x_{2}-t)}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{2}t}\,K_{g^{*}}(x_{1}-t)\,K_{g^{*}}(x_{2}-t).\] The scalar product is \[\langle\Phi_{\lambda_{1},\lambda_{2}}|\Phi_{\rho_{1},\rho_{2}}\rangle=\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,\mu_{g^{*}}(x_{1},x_{2})\,\overline{\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})}\,\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2})\] \[=\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,\mu_{g^{*}}(x_{1},x_{2})\,\int\limits_{-\infty}^{+\infty}dt_{1}\,\int\limits_{-\infty}^{+\infty}dt_{2}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{1}(x_{1}+x_{2}-t_{1})}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}\lambda_{2}t_{1}}\] \[\times e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{1}(x_{1}+x_{2}-t_{2})}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t_{2}}\,\prod\limits_{i,j=1}^{2}K_{g^{*}}(x_{i}-t_{j})\] where \[\mu_{g^{*}}(x_{1},x_{2})=S(\pm ix_{12})S(\pm ix_{12}+g^{*}).\] The \(x_{1},x_{2}\)-integrals diverge \[\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\rho_{1}-\lambda_{1})(x_{1}+x_{2})}\,\mu_{g^{*}}(x_{1},x_{2})\,\prod\limits_{i,j=1}^{2}K_{g^{*}}(x_{i}-t_{j})\,,\] so we regularize them by adding an external point \(t_{0}\) and a small \(\varepsilon>0\) in the exponent \[\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\rho_{1}-\lambda_{1})(x_{1}+x_{2})}\,\mu_{g^{*}}(x_{1},x_{2})\,\prod\limits_{i,j=1}^{2}K_{g^{*}}(x_{i}-t_{j})\] \[=\lim\limits_{t_{0}\to+\infty}\lim\limits_{\varepsilon\to 0}\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,\mu_{g^{*}}(x_{1},x_{2})e^{-\frac{\pi g}{\omega_{1}\omega_{2}}(x_{1}-t_{0})-\frac{\pi g}{\omega_{1}\omega_{2}}(x_{2}-t_{0})}\] \[\times e^{\frac{2\pi}{\omega_{1}\omega_{2}}\varepsilon(x_{1}+x_{2})}\,\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\rho_{1}-\lambda_{1})(x_{1}+x_{2})}\,\prod\limits_{i=1}^{2}\prod\limits_{j=0}^{2}K_{g^{*}}(x_{i}-t_{j})\,.\] To prove that in the limit \(t_{0}\to+\infty\) we reproduce the initial integrand, we use the following asymptotic \[K_{g^{*}}(x_{1}-t_{0})\,K_{g^{*}}(x_{2}-t_{0})\to e^{\frac{\pi g}{\omega_{1}\omega_{2}}(x_{1}-t_{0})+\frac{\pi g}{\omega_{1}\omega_{2}}(x_{2}-t_{0})},\qquad t_{0}\to+\infty.
\tag{3.64}\] Next we convert the \(t_{2}\)-integral back to the function \(\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2})\) and rearrange the exponents \[\langle\Phi_{\lambda_{1},\lambda_{2}}|\Phi_{\rho_{1},\rho_{2}}\rangle=\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}+\frac{i}{2}g+i\varepsilon)t_{0}}\,\int\limits_{-\infty}^{+\infty}dt_{1}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{2}-\frac{i}{2}g+i\varepsilon)t_{1}}\\ \int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}-\frac{i}{2}g+i\varepsilon)(t_{1}+t_{0}-x_{1}-x_{2})}\,\mu_{g^{*}}(x_{1},x_{2})\\ \times\prod_{i=1,2}K_{g^{*}}(x_{i}-t_{1})K_{g^{*}}(x_{i}-t_{0})\,\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2}).\] In the last line the \(x_{1},x_{2}\)-integral represents the action of the \(Q\)-operator on the eigenfunction \(\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2})\) \[[Q_{2}(\lambda_{1}-\tfrac{i}{2}g+i\varepsilon|\boldsymbol{\omega})\Phi_{\rho_{1},\rho_{2}}](t_{1},t_{0})\\ =\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}-\frac{i}{2}g+i\varepsilon)(t_{1}+t_{0}-x_{1}-x_{2})}\,\mu_{g^{*}}(x_{1},x_{2})\\ \times\prod_{i=1,2}K_{g^{*}}(x_{i}-t_{1})K_{g^{*}}(x_{i}-t_{0})\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2})\\ =2\,q(\lambda_{1}-\tfrac{i}{2}g+i\varepsilon,\rho_{1}|\boldsymbol{\omega})\,q(\lambda_{1}-\tfrac{i}{2}g+i\varepsilon,\rho_{2}|\boldsymbol{\omega})\,\Phi_{\rho_{1},\rho_{2}}(t_{1},t_{0}),\] where \[q(\lambda,\rho|\boldsymbol{\omega})=\sqrt{\omega_{1}\omega_{2}}S(g)K_{g}(\lambda-\rho)=\frac{\sqrt{\omega_{1}\omega_{2}}\,S(g)}{S\left(\tfrac{g}{2}+i(\lambda-\rho)\right)S\left(\tfrac{g}{2}-i(\lambda-\rho)\right)}. \tag{3.65}\] In fact we have managed to calculate the \(x_{1},x_{2}\)-integrals in explicit form and it remains to calculate the \(t_{1},t_{2}\)-integrals \[e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}+\frac{i}{2}g+i\varepsilon)t_{0}}\int\limits_{-\infty}^{+\infty}dt_{1}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{2}-\frac{i}{2}g+i\varepsilon)t_{1}}\,\Phi_{\rho_{1},\rho_{2}}(t_{1},t_{0})\\ =e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}+\frac{i}{2}g+i\varepsilon)t_{0}}\int\limits_{-\infty}^{+\infty}dt_{1}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{2}-\frac{i}{2}g+i\varepsilon)t_{1}}\\ \times\int\limits_{-\infty}^{+\infty}dt_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{1}(t_{1}+t_{0}-t_{2})}\,K_{g^{*}}(t_{1}-t_{2})\,K_{g^{*}}(t_{0}-t_{2})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t_{2}}.\] The \(t_{1}\)-integral represents the action of the \(Q\)-operator on the eigenfunction \(\Phi_{\rho_{1}}(t_{1})=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{1}t_{1}}\) \[[Q_{1}(\lambda_{2}-\tfrac{i}{2}g+i\varepsilon|\boldsymbol{\omega})\Phi_{\rho_{1}}](t_{2})\\ =\int\limits_{-\infty}^{+\infty}dt_{1}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{2}-\frac{i}{2}g+i\varepsilon)(t_{2}-t_{1})}\,K_{g^{*}}(t_{2}-t_{1})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{1}t_{1}}\\ =q(\lambda_{2}-\tfrac{i}{2}g+i\varepsilon,\rho_{1}|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{1}t_{2}}.\] After simple rearrangements of the exponents \[e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}+\lambda_{2}-\rho_{1}+2i\varepsilon)t_{0}}\int\limits_{-\infty}^{+\infty}dt_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{2}-\frac{i}{2}g+i\varepsilon)(t_{0}-t_{2})}\,K_{g^{*}}(t_{0}-t_{2})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t_{2}}\] we see that the \(t_{2}\)-integral also represents the action of the
\(Q\)-operator on the eigenfunction \(\Phi_{\rho_{2}}(t_{2})=e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t_{2}}\) \[[Q_{1}(\lambda_{2}-\tfrac{i}{2}g+i\varepsilon|\boldsymbol{\omega})\Phi_{\rho_{2}}](t_{0})\\ =\int\limits_{-\infty}^{+\infty}dt_{2}\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{2}-\frac{i}{2}g+i\varepsilon)(t_{0}-t_{2})}\,K_{g^{*}}(t_{0}-t_{2})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t_{2}}\\ =q(\lambda_{2}-\tfrac{i}{2}g+i\varepsilon,\rho_{2}|\boldsymbol{\omega})\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}\rho_{2}t_{0}}.\] Collecting everything together and using the exact representation (3.65) for the eigenvalue we obtain the following expression for the regularized scalar product \[\langle\Phi_{\lambda_{1},\lambda_{2}}|\Phi_{\rho_{1},\rho_{2}}\rangle =2\,\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,e^{-\frac{2\pi i}{\omega_{1}\omega_{2}}(\lambda_{1}+\lambda_{2}-\rho_{1}-\rho_{2}+2i\varepsilon)t_{0}}\\ \times\prod_{k,j=1}^{2}q(\lambda_{k}-\tfrac{i}{2}g+i\varepsilon,\rho_{j}|\boldsymbol{\omega}),\] or explicitly \[\langle\Phi_{\lambda_{1},\lambda_{2}}|\Phi_{\rho_{1},\rho_{2}}\rangle =\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,2\,e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\rho_{1}+\rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}\] \[\times\prod_{k,j=1,2}\frac{\sqrt{\omega_{1}\omega_{2}}S(g)}{S\left(i(\lambda_{k}-\rho_{j}+i\varepsilon)+g\right)S\left(i(\rho_{j}-\lambda_{k}-i\varepsilon)\right)}\] \[=2\,\lim_{\varepsilon\to 0}\,\prod_{k,j=1,2}\frac{\sqrt{\omega_{1}\omega_{2}}S(g)\left(\rho_{j}-\lambda_{k}-i\varepsilon\right)}{S\left(i(\lambda_{k}-\rho_{j}+i\varepsilon)+g\right)S\left(i(\rho_{j}-\lambda_{k}-i\varepsilon)\right)}\] \[\times\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\rho_{1}+\rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}}{\prod_{j,k=1,2}(\rho_{j}-\lambda_{k}-i\varepsilon)}.\] In the last line we used the fact that the function \(z^{-1}\,S_{2}(z)\) is regular at the point \(z=0\) and extracted the singular part arising at coinciding arguments \(\rho_{j}=\lambda_{k}\).
Next we use formula (3.63) \[\lim_{t_{0}\to+\infty}\,\lim_{\varepsilon\to 0}\,\frac{e^{\frac{2\pi i}{\omega_{1}\omega_{2}}(\rho_{1}+\rho_{2}-\lambda_{1}-\lambda_{2})t_{0}}}{\prod_{j,k=1,2}(\rho_{j}-\lambda_{k}-i\varepsilon)}\\ =\frac{(2\pi)^{2}}{\lambda_{12}^{2}}\left[\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})+\delta(\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\right]\] and calculate the contribution arising from the first term \(\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})\) \[\lim_{\varepsilon\to 0}\,\prod_{k,j=1,2}\frac{\sqrt{\omega_{1}\omega_{2}}S(g)\left(\rho_{j}-\lambda_{k}-i\varepsilon\right)}{S\left(i(\lambda_{k}-\rho_{j}+i\varepsilon)+g\right)S\left(i(\rho_{j}-\lambda_{k}-i\varepsilon)\right)}\,\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})\\ =\frac{(\omega_{1}\omega_{2})^{3}S^{2}(g)}{(2\pi)^{2}}\,\frac{\lambda_{12}^{2}}{S\left(\pm i\lambda_{12}+g\right)S\left(\pm i\lambda_{12}\right)}\,\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})\] where we used the formula (see Appendix A) \[\lim_{z\to 0}z^{-1}\,S(z)=\frac{2\pi}{\sqrt{\omega_{1}\omega_{2}}}.\] The second contribution with \(\delta(\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\) enters with the same coefficient so that finally one obtains \[\langle\Phi_{\lambda_{1},\lambda_{2}}|\Phi_{\rho_{1},\rho_{2}}\rangle=\frac{2\,\left(\omega_{1}\omega_{2}\right)^{3}S^{2}(g)}{S\left(\pm i\lambda_{12}+g\right)S\left(\pm i\lambda_{12}\right)}\\ \times\Big{[}\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})+\delta(\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\Big{]}.\] The scalar product for the eigenfunctions \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega})\) of the operator \(\hat{Q}(x|\boldsymbol{\omega})\) is obtained from the previous formula by the change \(g\rightleftarrows g^{*}\) and a renaming of variables \[\langle\Phi_{x_{1},x_{2}}|\Phi_{y_{1},y_{2}}\rangle=\frac{2\,\left(\omega_{1}\omega_{2}\right)^{3}S^{2}(g^{*})}{S\left(\pm ix_{12}+g^{*}\right)S\left(\pm ix_{12}\right)}\\ \times\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}.\] #### 3.4.5 Orthogonality and completeness In the previous sections we have established orthogonality relations for all sets of eigenfunctions. Due to the duality properties \[\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}),\] \[\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega})=S^{2}(g^{*})\,\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}|g^{*},\boldsymbol{\omega}),\] the completeness relations for these eigenfunctions coincide with the orthogonality relations for the dual ones. Let us list the corresponding formulas in explicit form using the exact expressions for the measures.
We have the orthogonality relation for the eigenfunctions \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\) of the operator \(Q_{2}(\lambda)\) \[\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,\operatorname{sh}^{2g}|x_{1}-x_{2}|\,\overline{\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})}\,\Psi_{\rho_{1},\rho_{2}}(x_{1},x_{2})\] \[=\frac{2^{2g+1}\pi^{2}}{\Gamma^{2}(g)}\,\Gamma\left(g\pm\frac{i\lambda_{12}}{2}\right)\Gamma\left(\pm\frac{i\lambda_{12}}{2}\right)\] \[\times\Big{[}\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})+\delta(\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\Big{]}\] and due to the relation \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})\) it is possible to rewrite the orthogonality relations for the eigenfunctions \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})\) of the operator \(\hat{Q}(x)\) in terms of the functions \(\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\). In this way one obtains the needed completeness relation \[\int\limits_{-\infty}^{+\infty}\frac{d\lambda_{1}}{2\pi}\frac{d\lambda_{2}}{2\pi}\,\frac{[2^{1-g}\Gamma(g)]^{2}}{\Gamma\left(g\pm\frac{i(\lambda_{1}-\lambda_{2})}{2}\right)\Gamma\left(\pm\frac{i(\lambda_{1}-\lambda_{2})}{2}\right)}\,\overline{\Psi_{\lambda_{1},\lambda_{2}}(y_{1},y_{2})}\,\Psi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\] \[=\frac{2}{\operatorname{sh}^{2g}\!|x_{1}-x_{2}|}\,\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}.\] These relations are compatible with each other. To verify this, integrate the first one over \(\rho_{1},\rho_{2}\) against the eigenfunction \(\overline{\Psi_{\rho_{1},\rho_{2}}(y_{1},y_{2})}\) and the measure \(\mu(\rho_{1},\rho_{2})\) from the second relation. Then, using the second relation together with the symmetry of the eigenfunctions with respect to the variables \(\rho_{1},\rho_{2}\) and \(y_{1},y_{2}\), we obtain the obvious identity \(4\overline{\Psi_{\lambda_{1},\lambda_{2}}(y_{1},y_{2})}=4\overline{\Psi_{\lambda_{1},\lambda_{2}}(y_{1},y_{2})}\).
In the relativistic case we have the orthogonality relations for the eigenfunctions \(\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}|g^{*},\boldsymbol{\omega})\) of the operator \(Q(\lambda|\boldsymbol{\omega})\) \[\int\limits_{-\infty}^{+\infty}dx_{1}dx_{2}\,S(\pm ix_{12})S(\pm ix_{12}+g^{*})\,\overline{\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})}\,\Phi_{\rho_{1},\rho_{2}}(x_{1},x_{2})=\] \[\frac{2\,\left(\omega_{1}\omega_{2}\right)^{3}S^{2}(g)}{S\left(\pm i\lambda_{12}+g\right)S\left(\pm i\lambda_{12}\right)}\Big{[}\delta(\lambda_{1}-\rho_{1})\,\delta(\lambda_{2}-\rho_{2})+\delta(\lambda_{1}-\rho_{2})\,\delta(\lambda_{2}-\rho_{1})\Big{]}.\] The scalar product for the eigenfunctions \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega})\) of the operator \(\hat{Q}(x|\boldsymbol{\omega})\) is obtained from the previous formula by the change \(g\rightleftarrows g^{*}\) and a renaming of variables \[\int\limits_{-\infty}^{+\infty}d\lambda_{1}d\lambda_{2}\,S(\pm i\lambda_{12})S(\pm i\lambda_{12}+g)\,\overline{\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2})}\,\Phi_{y_{1},y_{2}}(\lambda_{1},\lambda_{2})=\\ \frac{2\,\left(\omega_{1}\omega_{2}\right)^{3}S^{2}(g^{*})}{S\left(\pm ix_{12}+g^{*}\right)S\left(\pm ix_{12}\right)}\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}.\] The relation \(\Phi_{x_{1},x_{2}}(\lambda_{1},\lambda_{2}|g,\boldsymbol{\omega})=S^{2}(g^{*})\,\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2}|g^{*},\boldsymbol{\omega})\) allows one to rewrite this formula as the completeness relation for the eigenfunctions \(\Phi_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})\) \[\int\limits_{-\infty}^{+\infty}d\lambda_{1}d\lambda_{2}\,S(\pm i\lambda_{12})S(\pm i\lambda_{12}+g)\,\overline{\Phi_{\lambda_{1}\lambda_{2}}(x_{1},x_{2})}\,\Phi_{\lambda_{1}\lambda_{2}}(y_{1},y_{2})=\\ \frac{2\,\left(\omega_{1}\omega_{2}\right)^{3}S^{2}(g)}{S\left(\pm ix_{12}+g^{*}\right)S\left(\pm ix_{12}\right)}\Big{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})+\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\Big{]}.\] ## Acknowledgments The work of N. Belousov and S. Derkachov was supported by Russian Science Foundation, project No 23-11-00311, used for the proof of statements of Section 3. The work of S. Khoroshkin was supported by Russian Science Foundation, project No 23-11-00150, used for the proof of statements of Sections 2.1-2.3. The work of S. Kharchev was supported by Russian Science Foundation, project No 20-12-00195, used for the proof of statements of Section 2.4 and Appendices A, B, C. The authors also thank the Euler International Mathematical Institute for hospitality during the PDMI and HSE joint workshop on quantum integrability, where they had the opportunity to discuss many subtle points of this work.
## Appendix A The double sine function The double sine function \(S(z):=S_{2}(z):=S_{2}(z|\mathbf{\omega})\), see [Ku] and references therein, is a meromorphic function that satisfies the two functional relations \[\frac{S_{2}(z)}{S_{2}(z+\omega_{1})}=2\sin\frac{\pi z}{\omega_{2}},\qquad\frac{S_{2}(z)}{S_{2}(z+\omega_{2})}=2\sin\frac{\pi z}{\omega_{1}}\] and the inversion relation \[S_{2}(z)S_{2}(-z)=-4\sin\frac{\pi z}{\omega_{1}}\sin\frac{\pi z}{\omega_{2}},\] or equivalently \[S_{2}(z)S_{2}(\omega_{1}+\omega_{2}-z)=1.\] The function \(S_{2}(z)\) has poles at the points \[z=m\omega_{1}+k\omega_{2},\qquad m,k\geq 1\] and zeros at \[z=-m\omega_{1}-k\omega_{2},\qquad m,k\geq 0.\] For \(\omega_{1}/\omega_{2}\not\in\mathbb{Q}\) all poles and zeros are simple. In the analytic region \(\mathop{\rm Re}\nolimits z\in(0,\mathop{\rm Re}\nolimits\left(\omega_{1}+\omega_{2}\right))\) we have the following integral representation for the logarithm of \(S_{2}(z)\) \[\ln S_{2}(z)=\int_{0}^{\infty}\frac{dt}{2t}\left(\frac{\mathop{\rm sh}\nolimits\left[(2z-\omega_{1}-\omega_{2})t\right]}{\mathop{\rm sh}\nolimits(\omega_{1}t)\mathop{\rm sh}\nolimits(\omega_{2}t)}-\frac{2z-\omega_{1}-\omega_{2}}{\omega_{1}\omega_{2}t}\right).\] It is clear from this representation that the double sine function is homogeneous \[S_{2}(\gamma z|\gamma\omega_{1},\gamma\omega_{2})=S_{2}(z|\omega_{1},\omega_{2}),\qquad\gamma\in(0,\infty)\] and invariant under permutation of the periods \[S_{2}(z|\omega_{1},\omega_{2})=S_{2}(z|\omega_{2},\omega_{1}).\] The double sine function can be expressed through the Barnes double Gamma function \(\Gamma_{2}(z|\mathbf{\omega})\) [B], \[S_{2}(z|\mathbf{\omega})=\Gamma_{2}(\omega_{1}+\omega_{2}-z|\mathbf{\omega})\Gamma_{2}^{-1}(z|\mathbf{\omega}),\] and its properties follow from the corresponding properties of the double Gamma function. It is also connected to the Ruijsenaars hyperbolic Gamma function \(G(z|\boldsymbol{\omega})\) [R1] \[G(z|\boldsymbol{\omega})=S_{2}\Big{(}\imath z+\frac{\omega_{1}+\omega_{2}}{2}\,\Big{|}\,\boldsymbol{\omega}\Big{)}\] and to the Faddeev quantum dilogarithm \(\gamma(z|\boldsymbol{\omega})\) [F] \[\gamma(z|\boldsymbol{\omega})=S_{2}\Big{(}-\imath z+\frac{\omega_{1}+\omega_{2}}{2}\,\Big{|}\,\boldsymbol{\omega}\Big{)}\exp\Bigl{(}\frac{\imath\pi}{2\omega_{1}\omega_{2}}\Big{[}z^{2}+\frac{\omega_{1}^{2}+\omega_{2}^{2}}{12}\Big{]}\Big{)}.\] Both \(G(z|\boldsymbol{\omega})\) and \(\gamma(z|\boldsymbol{\omega})\) were investigated independently.
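The integral representation lends itself to a direct numerical test of the listed properties. The following sketch is our own illustration (not part of the paper): it evaluates \(\ln S_{2}(z)\) on the real axis inside the analytic strip and checks the first functional relation, the inversion relation, and the limit \(\lim_{z\to 0}z^{-1}S(z)=2\pi/\sqrt{\omega_{1}\omega_{2}}\) quoted above; the large-\(t\) guard is ours, added purely for numerical stability.

```python
# Our numerical test of the integral representation: ln S_2(z) on the real axis
# inside the analytic strip 0 < z < ω1 + ω2, with a large-t guard for the sinh ratio.
import numpy as np
from scipy.integrate import quad

def ln_S2(z, w1, w2):
    a = 2*z - w1 - w2
    def f(t):
        if t < 1e-8:                        # analytic t → 0 limit of the integrand
            return (a**3 - a*(w1**2 + w2**2)) / (12.0*w1*w2)
        if (w1 + w2 - abs(a))*t > 30.0:     # sinh ratio ~ e^{-(ω1+ω2-|a|)t}, negligible here
            return -a / (2.0*w1*w2*t*t)
        return (np.sinh(a*t)/(np.sinh(w1*t)*np.sinh(w2*t)) - a/(w1*w2*t)) / (2.0*t)
    return quad(f, 0.0, np.inf, limit=400)[0]

w1, w2, z = 1.0, 1.3, 0.7
print(ln_S2(z, w1, w2) - ln_S2(z + w1, w1, w2), np.log(2*np.sin(np.pi*z/w2)))  # functional relation
print(ln_S2(z, w1, w2) + ln_S2(w1 + w2 - z, w1, w2))                           # inversion: ≈ 0
print(np.exp(ln_S2(0.1, w1, w2))/0.1, 2*np.pi/np.sqrt(w1*w2))                  # z^{-1} S(z) → 2π/√(ω1ω2), roughly
```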
### Asymptotics with respect to periods The asymptotics of the double sine function as \(\omega_{2}\to\infty\) is given by [R1, Prop III.6] \[S_{2}(u|\boldsymbol{\omega})=\sqrt{2\pi}\,\biggl{(}\frac{2\pi\omega_{1}}{\omega_{2}}\biggr{)}^{\frac{1}{2}-\frac{u}{\omega_{1}}}\,\Gamma^{-1}\Bigl{(}\frac{u}{\omega_{1}}\Bigr{)}\Bigl{\{}1+O(\omega_{2}^{-1})\Bigr{\}}.\] (A.1) We also use another limiting formula: \[\lim_{\omega_{2}\to\infty}\frac{S_{2}(\frac{\omega_{2}}{\pi}u+h|\boldsymbol{\omega})}{S_{2}(\frac{\omega_{2}}{\pi}u+g|\boldsymbol{\omega})}=(2\sin u)^{\frac{g-h}{\omega_{1}}}.\] It agrees with [R1, Prop III.7]: \[\lim_{\omega_{2}\to 0}\frac{S_{2}(u+h\omega_{2}|\boldsymbol{\omega})}{S_{2}(u+g\omega_{2}|\boldsymbol{\omega})}=\Bigl{(}2\sin\frac{\pi u}{\omega_{1}}\Bigr{)}^{g-h}\] (A.2) since, due to the automorphy property \(S_{2}(u|\omega_{1},\omega_{2})=S_{2}(\lambda u|\lambda\omega_{1},\lambda\omega_{2})\) for \(\lambda\in(0,\infty)\), one has \[S_{2}\Bigl{(}\frac{\omega_{2}}{\pi}u+g\Big{|}\boldsymbol{\omega}\Bigr{)}=S_{2}\Bigl{(}u+\frac{g}{\omega_{1}}\cdot\frac{\pi\omega_{1}}{\omega_{2}}\Big{|}\pi,\frac{\pi\omega_{1}}{\omega_{2}}\Bigr{)}.\] ### Beta integrals Here we show that the integrals (2.21) and (2.22) can be obtained by reduction from the higher level beta integral \[\int\limits_{-\infty}^{+\infty}dz\,e^{\frac{2\pi ixz}{\omega_{1}\omega_{2}}}\,\frac{S\left(\frac{\omega_{1}+\omega_{2}}{2}+iz+\frac{g}{2}\right)}{S\left(\frac{\omega_{1}+\omega_{2}}{2}+iz-\frac{g}{2}\right)}=\frac{\sqrt{\omega_{1}\omega_{2}}\,S(g)}{S\left(\frac{g}{2}+ix\right)S\left(\frac{g}{2}-ix\right)}.\] (A.3) The first reduction: we substitute \(g\to\omega_{2}g\) and \(x\to\omega_{2}x\) and then send \(\omega_{2}\to 0\) using in the left hand side the following formula obtained from (A.2) \[\frac{S\left(\frac{\omega_{1}+\omega_{2}}{2}+iz+\frac{g\omega_{2}}{2}\right)}{S\left(\frac{\omega_{1}+\omega_{2}}{2}+iz-\frac{g\omega_{2}}{2}\right)}\to\frac{1}{2^{g}\cos^{g}\left(\frac{i\pi z}{\omega_{1}}\right)}.\] (A.4) In the right hand side we need the following asymptotic for \(\omega_{2}\to 0\) \[S\left(x\omega_{2}\right)\to\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-x}}{\Gamma(x)}.\] To derive the last formula we use the following relation, which can be obtained from the integral representation of the double sine function (see (A.1)) \[\lim_{\omega_{2}\to\infty}\sqrt{2\pi}\left(\frac{2\pi\omega_{1}}{\omega_{2}}\right)^{\frac{1}{2}-\frac{x}{\omega_{1}}}\,S_{2}^{-1}(x|\omega_{1},\omega_{2})=\Gamma\left(\frac{x}{\omega_{1}}\right).\] (A.5) We use the modular property \[S_{2}(x\omega_{2}|\omega_{1},\omega_{2})=S_{2}\left(x|\omega_{1}/\omega_{2},1\right)=S_{2}\left(x|1,\omega_{1}/\omega_{2}\right)\] and after that we are ready to apply (A.5), where now the role of \(\omega_{1}\) in (A.5) is played by \(1\) and the role of \(\omega_{2}\) is played by \(\omega_{1}/\omega_{2}\).
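As a quick numerical cross-check of the limiting formula (A.2), here is our own illustration; it reuses the `ln_S2` helper from the sketch above, and the parameter values are arbitrary.

```python
# Our sanity check of (A.2): S_2(u+hω2)/S_2(u+gω2) → (2 sin(πu/ω1))^{g-h} as ω2 → 0.
# Reuses ln_S2 from the previous sketch; arbitrary parameter values.
import numpy as np

w1, u, h, g = 1.0, 0.3, 0.2, 0.5
target = (2*np.sin(np.pi*u/w1))**(g - h)
for w2 in [0.2, 0.1, 0.05]:
    ratio = np.exp(ln_S2(u + h*w2, w1, w2) - ln_S2(u + g*w2, w1, w2))
    print(w2, ratio, target)   # the ratio approaches the target as ω2 decreases
```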
Performing the needed reductions (A.4) in the left hand side and (A.5) in the right hand side of (A.3) we obtain \[\int\limits_{-\infty}^{+\infty}dz\,e^{\frac{2\pi ixz}{\omega_{1}}}\,\frac{1}{2^{g}\cos^{g}\left(\frac{i\pi z}{\omega_{1}}\right)}\] \[=\frac{\sqrt{\omega_{1}\omega_{2}}\,\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-g}}{\Gamma(g)}}{\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-\frac{g}{2}-ix}}{\Gamma(\frac{g}{2}+ix)}\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-\frac{g}{2}+ix}}{\Gamma(\frac{g}{2}-ix)}}=\frac{\omega_{1}}{2\pi}\frac{\Gamma(\frac{g}{2}-ix)\Gamma(\frac{g}{2}+ix)}{\Gamma(g)}.\] Next we rescale \(z\to\frac{\omega_{1}}{\pi}z\) and \(x\to\frac{x}{2}\) and obtain the relation \[\int\limits_{-\infty}^{+\infty}dz\,e^{ixz}\,\frac{1}{2^{g}\mathrm{ch}^{g}z}=\frac{1}{2}\frac{\Gamma(\frac{g}{2}-\frac{ix}{2})\Gamma(\frac{g}{2}+\frac{ix}{2})}{\Gamma(g)}\] which coincides exactly with (2.21). The second reduction: first of all we switch to the dual coupling constant \(g\to g^{*}=\omega_{1}+\omega_{2}-g\) in (A.3) \[\int\limits_{-\infty}^{+\infty}dz\,e^{\frac{2\pi ixz}{\omega_{1}\omega_{2}}}\,\frac{S\left(\omega_{1}+\omega_{2}+iz-\frac{g}{2}\right)}{S\left(iz+\frac{g}{2}\right)}=\frac{\sqrt{\omega_{1}\omega_{2}}\,S(\omega_{1}+\omega_{2}-g)}{S\left(\frac{\omega_{1}+\omega_{2}}{2}-\frac{g}{2}+ix\right)S\left(\frac{\omega_{1}+\omega_{2}}{2}-\frac{g}{2}-ix\right)}\] (A.6) then rescale \(g\to g\omega_{2}\) and \(z\to z\omega_{2}\) and transform the integrand in the following way \[\frac{S\left(\omega_{1}+\omega_{2}(1+iz-\frac{g}{2})\right)}{S\left(\omega_{2}(iz+\frac{g}{2})\right)}=\frac{1}{2\sin\pi(1+iz-\frac{g}{2})}\,\frac{S\left(\omega_{2}(1+iz-\frac{g}{2})\right)}{S\left(\omega_{2}(iz+\frac{g}{2})\right)}\] \[\xrightarrow{\omega_{2}\to 0}\frac{1}{2\sin\pi(1+iz-\frac{g}{2})}\,\frac{\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-(1+iz-\frac{g}{2})}}{\Gamma(1+iz-\frac{g}{2})}}{\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-(iz+\frac{g}{2})}}{\Gamma(iz+\frac{g}{2})}}\] \[=\frac{1}{2\sin\pi(1+iz-\frac{g}{2})}\frac{\Gamma(iz+\frac{g}{2})}{\Gamma(1+iz-\frac{g}{2})}\,\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{g-1}\] \[=\frac{1}{2\pi}\Gamma\left(\frac{g}{2}+iz\right)\Gamma\left(\frac{g}{2}-iz\right)\,\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{g-1}.\] In the right hand side we have \[\frac{\sqrt{\omega_{1}\omega_{2}}\,S(\omega_{1}+\omega_{2}(1-g))}{S\left(\frac{\omega_{1}+\omega_{2}}{2}-\frac{g\omega_{2}}{2}+ix\right)S\left(\frac{\omega_{1}+\omega_{2}}{2}-\frac{g\omega_{2}}{2}-ix\right)}\] \[=\frac{\sqrt{\omega_{1}\omega_{2}}\,S(\omega_{2}(1-g))}{2\sin\pi(1-g)}\frac{S\left(\frac{\omega_{1}+\omega_{2}}{2}+\frac{g\omega_{2}}{2}+ix\right)}{S\left(\frac{\omega_{1}+\omega_{2}}{2}-\frac{g\omega_{2}}{2}+ix\right)}\] \[\xrightarrow{\omega_{2}\to 0}\frac{\sqrt{\omega_{1}\omega_{2}}\,\frac{\sqrt{2\pi}\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{\frac{1}{2}-(1-g)}}{\Gamma(1-g)}}{2\sin\pi(1-g)}\frac{1}{2^{g}\cos^{g}(\frac{i\pi x}{\omega_{1}})}=\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{g-1}\frac{\omega_{2}\,\Gamma(g)}{2^{g}\cos^{g}(\frac{i\pi x}{\omega_{1}})}.\] The factor \(\left(\frac{2\pi\omega_{2}}{\omega_{1}}\right)^{g-1}\) appears on both sides of our relation, and the same is true for the factor \(\omega_{2}\) (it appears in the integral due to the change of variables \(z\to z\omega_{2}\)), so that both factors can be cancelled and
we obtain \[\int\limits_{-\infty}^{+\infty}dz\,e^{\frac{2\pi ixz}{\omega_{1}}}\,\frac{1}{2\pi}\Gamma\left(\frac{g}{2}+iz\right)\Gamma\left(\frac{g}{2}-iz\right)=\frac{\Gamma(g)}{2^{g}\cos^{g}(\frac{i\pi x}{\omega_{1}})}.\] This relation is reduced to (2.22) by the rescaling \(z\to\frac{z}{2}\) and \(x\to\frac{\omega_{1}x}{\pi}\). Finally, we have checked that the integral (A.3) reproduces in the appropriate limits both integral relations (2.21) and (2.22). ### Asymptotics for large argument Let us consider the asymptotics of the needed functions for large argument. The key formula is \[S(z|\boldsymbol{\omega})\to e^{\pm\frac{i\pi}{2}B_{2,2}(z|\boldsymbol{\omega})}\] for \(\pm\operatorname{Im}(z)>0\) and \(|z|\to\infty\). The polynomial \(B_{2,2}(z|\boldsymbol{\omega})\) is given by the formula \[B_{2,2}(z|\boldsymbol{\omega})=\frac{z^{2}}{\omega_{1}\omega_{2}}-\frac{\omega_{1}+\omega_{2}}{\omega_{1}\omega_{2}}\,z+\frac{\omega_{1}^{2}+3\omega_{1}\omega_{2}+\omega_{2}^{2}}{6\omega_{1}\omega_{2}}.\] We have for \(\lambda\to\pm\infty\) \[K_{g}(\lambda)=\frac{S\left(\omega_{1}+\omega_{2}+i\lambda-\frac{g}{2}\right)}{S\left(i\lambda+\frac{g}{2}\right)}\\ \to e^{\pm\frac{i\pi}{2}\left(B_{2,2}(\omega_{1}+\omega_{2}+i\lambda-\frac{g}{2}|\omega_{1},\omega_{2})-B_{2,2}(i\lambda+\frac{g}{2}|\omega_{1},\omega_{2})\right)}=e^{\pm\frac{2\pi i}{\omega_{1}\omega_{2}}\,\lambda\,\frac{ig^{*}}{2}}\] where \(g^{*}=\omega_{1}+\omega_{2}-g\). The leading asymptotic of \(\hat{K}(\gamma-\mu)\) for \(\mu\to+\infty\) has the following form \[\hat{K}(\gamma-\mu)=\frac{\Gamma\left(\frac{g+i(\gamma-\mu)}{2}\right)\Gamma\left(\frac{g-i(\gamma-\mu)}{2}\right)}{2^{1-g}\Gamma(g)}\\ \to\frac{2\pi i\,e^{-\frac{i\pi g}{2}}}{2^{1-g}\Gamma(g)}\,\left(\frac{i\mu}{2}\right)^{g-1}e^{\frac{\pi}{2}(\gamma-\mu)}=\frac{2\pi}{\Gamma(g)}\,\mu^{g-1}\,e^{\frac{\pi}{2}(\gamma-\mu)}.\] (A.7) To derive this asymptotic we start from the useful textbook formula \[\frac{\Gamma(\Lambda+a)}{\Gamma(\Lambda+b)}\xrightarrow{\Lambda\to\infty}\Lambda^{a-b}.\] We shall also use the reflection formula \[\Gamma\left(\frac{i\mu}{2}+a\right)\Gamma\left(-\frac{i\mu}{2}+b\right)=\frac{\Gamma(\frac{i\mu}{2}+a)}{\Gamma(\frac{i\mu}{2}+1-b)}\frac{\pi}{\sin\pi(b-\frac{i\mu}{2})}\\ \to\left(\frac{i\mu}{2}\right)^{a+b-1}2\pi i\,e^{-\frac{\mu\pi}{2}}e^{-i\pi b}.\] In our case \(a=\frac{g-i\gamma}{2}\) and \(b=\frac{g+i\gamma}{2}\), so that we obtain (A.7). ## Appendix B Commutativity of \(Q\)-operators Let us formulate the commutativity of \(Q\)-operators as an integral relation which we have to prove. The \(Q\)-operator is the integral operator \[[Q(u)\Psi](x_{1}\dots x_{n})=\int\limits_{-\infty}^{+\infty}\prod\limits_{i=1}^{n}dt_{i}\prod\limits_{i<k}\operatorname{sh}^{2g}(t_{i}-t_{k})\,\frac{e^{iu\sum_{i=1}^{n}(x_{i}-t_{i})}}{\prod_{i,k=1}^{n}\operatorname{ch}^{g}(x_{i}-t_{k})}\,\Psi(t_{1}\dots t_{n}).\] We shall rewrite everything in new variables, and the reason is very simple: working with ordinary rational functions instead of hyperbolic ones seems simpler, although of course this is just a matter of habit.
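Before changing variables, here is a standalone numerical corroboration of the beta integral (2.21) recovered above. This is our own sketch, independent of the rationalized variables introduced next; the parameter values are arbitrary.

```python
# Our standalone numerical corroboration of the beta integral (2.21):
#   ∫ e^{ixz} (2 ch z)^{-g} dz = (1/2) Γ(g/2 - ix/2) Γ(g/2 + ix/2) / Γ(g).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

g, x = 1.5, 0.8
# only the cosine part of the phase survives, since the rest of the integrand is even
lhs = 2*quad(lambda z: np.cos(x*z) / (2*np.cosh(z))**g, 0, np.inf, limit=200)[0]
rhs = 0.5 * gamma(g/2 - 1j*x/2) * gamma(g/2 + 1j*x/2) / gamma(g)
print(lhs, rhs.real)   # rhs is real: the two Γ factors are complex conjugates
```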
We have for \(x=\ln z_{1}\) and \(y=\ln z_{2}\) \[\operatorname{ch}(x-y)=\frac{e^{x-y}+e^{y-x}}{2}=\frac{z_{1}^{2}+z_{2}^{2}}{2z_{1}z_{2}},\quad\operatorname{sh}(x-y)=\frac{e^{x-y}-e^{y-x}}{2}=\frac{z_{1}^{2}-z_{2}^{2}}{2z_{1}z_{2}}\] so that for the rationalization of our integral we perform the change of variables \[x_{k}=\ln\sqrt{z_{k}},\quad t_{k}=\ln\sqrt{s_{k}},\quad dt_{k}=\frac{ds_{k}}{2s_{k}}\] and switch to the new function \(\Psi(\ln\sqrt{s_{1}}\ldots\ln\sqrt{s_{n}})=\Phi(s_{1}\ldots s_{n})\) \[[Q(u)\Phi](z_{1}\ldots z_{n})=\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}\frac{ds_{k}}{2s_{k}}\prod\limits_{i<k}\left(\frac{s_{i}-s_{k}}{2\sqrt{s_{i}s_{k}}}\right)^{2g}\\ \times\prod\limits_{i=1}^{n}\,\frac{z_{i}^{\frac{iu}{2}}}{s_{i}^{\frac{iu}{2}}}\prod\limits_{i,k=1}^{n}\frac{\left(2\sqrt{z_{i}s_{k}}\right)^{g}}{(z_{i}+s_{k})^{g}}\,\Phi(s_{1}\ldots s_{n})\\ =2^{n(g-1)}\,\prod\limits_{i=1}^{n}\,z_{i}^{\frac{iu+gn}{2}}\,\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}ds_{k}\,s_{k}^{g-1-\frac{iu+ng}{2}}\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}}\,\Phi(s_{1}\ldots s_{n}).\] We are going to consider the commutativity relation \(Q(u)Q(v)=Q(v)Q(u)\) as an integral relation for the kernels of the \(Q\)-operators. We have \[[Q(u)Q(v)\Phi](z_{1}\ldots z_{n})=2^{n(g-1)}\,\prod\limits_{i=1}^{n}\,z_{i}^{\frac{iu+gn}{2}}\,\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}ds_{k}\,s_{k}^{g-1-\frac{iu+ng}{2}}\\ \times\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}}\,2^{n(g-1)}\prod\limits_{i=1}^{n}\,s_{i}^{\frac{iv+gn}{2}}\,\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}dt_{k}\,t_{k}^{g-1-\frac{iv+ng}{2}}\\ \times\frac{\prod_{i<k}\left(t_{i}-t_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(s_{i}+t_{k})^{g}}\,\Phi(t_{1}\ldots t_{n})\\ =2^{2n(g-1)}\,\prod\limits_{i=1}^{n}\,z_{i}^{\frac{iu+gn}{2}}\,\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}ds_{k}\,s_{k}^{g-1-i\frac{u-v}{2}}\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}(t_{i}+s_{k})^{g}}\\ \times\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}dt_{k}\,t_{k}^{g-1-\frac{iv+ng}{2}}\prod\limits_{i<k}\left(t_{i}-t_{k}\right)^{2g}\,\Phi(t_{1}\ldots t_{n})\] so that the commutativity relation \(Q(u)Q(v)=Q(v)Q(u)\) is equivalent to the following integral relation \[\prod_{i=1}^{n}\,z_{i}^{i\frac{u-v}{2}}\,\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{g-1-i\frac{u-v}{2}}\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}(t_{i}+s_{k})^{g}}\\ =\prod_{k=1}^{n}t_{k}^{i\frac{v-u}{2}}\,\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{g-1-i\frac{v-u}{2}}\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}(t_{i}+s_{k})^{g}}.\] Let us use the notation \(\lambda=\frac{u-v}{2}\) for simplicity, so that the main relation takes the form \[\left(z_{1}\cdots z_{n}\right)^{i\lambda}\,\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{g-1-i\lambda}\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}(t_{i}+s_{k})^{g}}\\ =\left(t_{1}\cdots t_{n}\right)^{-i\lambda}\,\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{g-1+i\lambda}\frac{\prod_{i<k}\left(s_{i}-s_{k}\right)^{2g}}{\prod_{i,k=1}^{n}(z_{i}+s_{k})^{g}(t_{i}+s_{k})^{g}}.\] ### \(n=1\) and arbitrary \(g\) Of course it is natural to start from the simplest case \(n=1\) and to check everything explicitly \[z^{i\lambda}\,\int\limits_{0}^{+\infty}ds\,s^{g-1-i\lambda}\frac{1}{(z+s)^{g}
(t+s)^{g}}=t^{-i\lambda}\,\int\limits_{0}^{+\infty}ds\,s^{g-1+i\lambda}\frac{1}{(z+s)^{g}(t+s)^{g}}.\] (B.1) Here the main steps are more or less evident: we use the inversion \(s\to\frac{1}{s}\) and the dilatation \(s\to\frac{s}{tz}\) \[z^{i\lambda}\,\int\limits_{0}^{+\infty}ds\,s^{g-1-i\lambda}\frac{1}{(z+s)^{g}(t+s)^{g}}=z^{i\lambda}\,\int\limits_{0}^{+\infty}\frac{ds}{s^{2}}\,s^{-g+1+i\lambda}\frac{s^{2g}}{(sz+1)^{g}(st+1)^{g}}\] \[=z^{i\lambda}\,\int\limits_{0}^{+\infty}ds\,s^{g-1+i\lambda}\frac{1}{(sz+1)^{g}(st+1)^{g}}=z^{i\lambda}\,(zt)^{-g-i\lambda}\,\int\limits_{0}^{+\infty}ds\,s^{g-1+i\lambda}\frac{(zt)^{g}}{(s+t)^{g}(s+z)^{g}}\] \[=t^{-i\lambda}\,\int\limits_{0}^{+\infty}ds\,s^{g-1+i\lambda}\frac{1}{(z+s)^{g}(t+s)^{g}}.\] ### \(g=1\) and arbitrary \(n\) Now we are going to reformulate everything in determinant form. To do that we use the Cauchy determinant identity (\(z_{kj}=z_{k}-z_{j}\), and so on) \[\frac{\prod_{k<j}z_{kj}\,s_{kj}}{\prod_{k,j=1}^{n}(z_{k}+s_{j})}=\det\left(\frac{1}{z_{i}+s_{k}}\right),\quad\frac{\prod_{k<j}t_{kj}\,s_{kj}}{\prod_{k,j=1}^{n}(t_{k}+s_{j})}=\det\left(\frac{1}{t_{i}+s_{k}}\right)\] and rewrite the main relation in a very suggestive form \[(z_{1}\cdots z_{n})^{i\lambda}\,\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{g-1-i\lambda}\left(\det\left(\frac{1}{z_{i}+s_{k}}\right)\det\left(\frac{1}{t_{i}+s_{k}}\right)\right)^{g}\\ =(t_{1}\cdots t_{n})^{-i\lambda}\,\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{g-1+i\lambda}\left(\det\left(\frac{1}{z_{i}+s_{k}}\right)\det\left(\frac{1}{t_{i}+s_{k}}\right)\right)^{g}.\] In the case \(g=1\) this reformulation in fact solves the problem. Now it is possible to convert each \(n\)-fold integral to the determinant of the matrix constructed from the one-dimensional integrals \[\int\limits_{0}^{+\infty}\prod_{k=1}^{n}ds_{k}\,s_{k}^{-i\lambda}\det\left(\frac{1}{z_{i}+s_{k}}\right)\det\left(\frac{1}{t_{i}+s_{k}}\right)\\ =n!\,\det\left(\int\limits_{0}^{+\infty}ds\,s^{-i\lambda}\frac{1}{(z_{i}+s)(s+t_{k})}\right)\] and then everything is reduced to the \(n=1\) case.
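The \(n=1\) identity (B.1) is also easy to corroborate numerically. The snippet below is our own sketch (arbitrary parameter values); the complex integrand is split into real and imaginary quadratures.

```python
# Our numerical check of the n = 1 identity (B.1); arbitrary parameter values.
import numpy as np
from scipy.integrate import quad

def side(sign, z, t, g, lam):
    # ∫_0^∞ s^{g-1} e^{i·sign·λ ln s} (z+s)^{-g} (t+s)^{-g} ds
    f = lambda s: s**(g - 1) * np.exp(1j*sign*lam*np.log(s)) / ((z + s)**g * (t + s)**g)
    re = quad(lambda s: f(s).real, 0, np.inf, limit=400)[0]
    im = quad(lambda s: f(s).imag, 0, np.inf, limit=400)[0]
    return re + 1j*im

z, t, g, lam = 1.3, 0.4, 1.2, 0.9
lhs = z**(1j*lam) * side(-1, z, t, g, lam)
rhs = t**(-1j*lam) * side(+1, z, t, g, lam)
print(lhs, rhs, abs(lhs - rhs))   # both sides agree to quadrature accuracy
```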
Indeed we have \[\left(z_{1}\cdots z_{n}\right)^{i\lambda}\,\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}ds_{k}\,s_{k}^{-i\lambda}\det\left(\frac{1}{z_{i}+s_{k}}\right)\det\left(\frac{1}{t_{i}+s_{k}}\right)\] \[\qquad\qquad=n!\,\det\left(\int\limits_{0}^{+\infty}ds\,s^{-i\lambda}\frac{z_{i}^{i\lambda}}{(z_{i}+s)(s+t_{k})}\right)\] \[\qquad\qquad=n!\,\det\left(\int\limits_{0}^{+\infty}ds\,s^{i\lambda}\frac{t_{k}^{-i\lambda}}{(z_{i}+s)(s+t_{k})}\right)\] \[\qquad\qquad=\left(t_{1}\cdots t_{n}\right)^{-i\lambda}\,\int\limits_{0}^{+\infty}\prod\limits_{k=1}^{n}ds_{k}\,s_{k}^{i\lambda}\det\left(\frac{1}{z_{i}+s_{k}}\right)\det\left(\frac{1}{t_{i}+s_{k}}\right)\] where we used the identity (B.1) for the one-dimensional integrals (for \(g=1\)) \[\int\limits_{0}^{+\infty}ds\,s^{-i\lambda}\frac{z_{i}^{i\lambda}}{(z_{i}+s)(s+t_{k})}=\int\limits_{0}^{+\infty}ds\,s^{i\lambda}\frac{t_{k}^{-i\lambda}}{(z_{i}+s)(s+t_{k})}.\] ## Appendix C Delta-sequence We are going to show that in the sense of distributions the following identity holds \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\frac{e^{i\lambda\sum_{a=1}^{n}\left(x_{a}-y_{a}\right)}}{\prod_{a,b=1}^{n}\left(x_{a}-y_{b}-i\varepsilon\right)}=\frac{(-1)^{\frac{n(n-1)}{2}}(2\pi i)^{n}n!}{\prod_{a<b}^{n}\left(x_{a}-x_{b}\right)^{2}}\,\delta\big{(}\boldsymbol{x}_{n},\boldsymbol{y}_{n}\big{)},\] (C.1) where \[\delta\big{(}\boldsymbol{x}_{n},\boldsymbol{y}_{n}\big{)}=\frac{1}{n!}\,\sum\limits_{w\in S_{n}}\prod_{k=1}^{n}\delta(x_{k}-y_{w(k)}).\] This identity is written in a compact formal way and should be understood in the following sense: for any test function \(f(x_{1},\ldots,x_{n})\) we have \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\cdots dx_{n}\prod_{a<b}^{n}(x_{a}-x_{b})^{2}\,f(x_{1},\ldots,x_{n})\,\frac{e^{i\lambda\sum_{a=1}^{n}(x_{a}-y_{a})}}{\prod_{a,b=1}^{n}\,(x_{a}-y_{b}-i\varepsilon)}\\ =(-1)^{\frac{n(n-1)}{2}}(2\pi i)^{n}\,\sum_{w\in S_{n}}f\big{(}y_{w(1)},\ldots,y_{w(n)}\big{)}.\] (C.2) First of all we are going to prove the equivalent identity \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}e^{i\lambda\sum_{a=1}^{n}(x_{a}-y_{a})}\,\frac{\prod_{a<b}^{n}\,(x_{a}-x_{b})\,(y_{b}-y_{a})}{\prod_{a,b=1}^{n}\,(x_{a}-y_{b}-i\varepsilon)}\\ =(2\pi i)^{n}\,\sum_{w\in S_{n}}(-1)^{s(w)}\prod_{k=1}^{n}\,\delta(x_{k}-y_{w(k)})\] (C.3) where \(s(w)\) is the sign of the permutation \(w\). Let us start from the simplest example \(n=1\). We have to prove that \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\frac{e^{i\lambda(x-y)}}{(x-y-i\varepsilon)}=2\pi i\,\delta(x-y),\] or equivalently \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}}f(x)\,\frac{e^{i\lambda(x-y)}}{x-y-i\varepsilon}\,dx=2\pi i\,f(y).\] First of all we transform the integral with the test function.
We divide the integral into two parts: the first integral can be calculated by residues, and due to the cancellation of the singularity at \(x=y\) it is possible to put \(\varepsilon\to 0\) in the second part \[\int_{\mathbb{R}}f(x)\,\frac{e^{i\lambda(x-y)}}{x-y-i\varepsilon}\,dx=f(y)\,\int_{\mathbb{R}}\,\frac{e^{i\lambda(x-y)}}{x-y-i\varepsilon}\,dx+\int_{\mathbb{R}}\frac{f(x)-f(y)}{x-y-i\varepsilon}\,e^{i\lambda(x-y)}\,dx\\ =2\pi i\,f(y)\,e^{-\varepsilon\lambda}+\int_{\mathbb{R}}\frac{f(x)-f(y)}{x-y-i\varepsilon}\,e^{i\lambda(x-y)}\,dx\\ \xrightarrow{\varepsilon\to 0}2\pi i\,f(y)+\int_{\mathbb{R}}\frac{f(x)-f(y)}{x-y}\,e^{i\lambda(x-y)}\,dx.\] Due to the Riemann-Lebesgue lemma the second contribution tends to zero in the limit \(\lambda\to\infty\), so that after removing the \(\varepsilon\)-regularization and taking \(\lambda\to\infty\) we obtain \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}}f(x)\,\frac{e^{i\lambda(x-y)}}{x-y-i\varepsilon}\,dx=2\pi i\,f(y).\] The whole consideration in the case \(n=2\) is almost identical to the case of general \(n\). We have to prove the following relation \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\frac{e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}\,x_{12}\,y_{21}}{\prod_{a,b=1}^{2}\left(x_{a}-y_{b}-i\varepsilon\right)}\\ =\left(2\pi i\right)^{2}\bigl{[}\delta(x_{1}-y_{1})\,\delta(x_{2}-y_{2})-\delta(x_{1}-y_{2})\,\delta(x_{2}-y_{1})\bigr{]}.\] First of all we use the Cauchy determinant identity \[\frac{x_{12}\,y_{21}}{\prod_{a,b=1}^{2}\left(x_{a}-y_{b}-i\varepsilon\right)}\\ =\frac{1}{\left(x_{1}-y_{1}-i\varepsilon\right)\left(x_{2}-y_{2}-i\varepsilon\right)}-\frac{1}{\left(x_{1}-y_{2}-i\varepsilon\right)\left(x_{2}-y_{1}-i\varepsilon\right)}.\] (C.4) Let us consider the convolution of the first term with the test function \[\int dx_{1}\,dx_{2}\,f(x_{1},x_{2})\,\frac{e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}}{\left(x_{1}-y_{1}-i\varepsilon\right)\left(x_{2}-y_{2}-i\varepsilon\right)}\] and introduce two commuting operators \(X_{1}\) and \(X_{2}\) acting on the test function \[X_{1}f(x_{1},x_{2})=f(y_{1},x_{2}),\quad X_{2}f(x_{1},x_{2})=f(x_{1},y_{2}).\] As a consequence of the evident identity \[1=(1-X_{1}+X_{1})(1-X_{2}+X_{2})\\ =(1-X_{1})(1-X_{2})+X_{1}(1-X_{2})+X_{2}(1-X_{1})+X_{1}X_{2}\] and the explicit formulas \[X_{1}(1-X_{2})f(x_{1},x_{2})=f(y_{1},x_{2})-f(y_{1},y_{2})\,;\] \[X_{2}(1-X_{1})f(x_{1},x_{2})=f(x_{1},y_{2})-f(y_{1},y_{2})\,;\] \[(1-X_{1})(1-X_{2})f(x_{1},x_{2})=(1-X_{1})\,[f(x_{1},x_{2})-f(x_{1},y_{2})]\] \[=f(x_{1},x_{2})-f(x_{1},y_{2})-f(y_{1},x_{2})+f(y_{1},y_{2})\] we obtain the following useful representation for the function \(f(x_{1},x_{2})\) \[f(x_{1},x_{2})=f(y_{1},y_{2})+\left[f(y_{1},x_{2})-f(y_{1},y_{2})\right]+\left[f(x_{1},y_{2})-f(y_{1},y_{2})\right]\] \[+\left[f(x_{1},x_{2})-f(x_{1},y_{2})-f(y_{1},x_{2})+f(y_{1},y_{2})\right].\] Note that the first term does not depend on \(x_{1}\) and \(x_{2}\), the second term does not depend on \(x_{1}\) and is equal to zero at the point \(x_{2}=y_{2}\), and the third term does not depend on \(x_{2}\) and is equal to zero at the point \(x_{1}=y_{1}\). The Taylor expansion of the last term in the vicinity of the point \(x_{1}=y_{1},x_{2}=y_{2}\) starts from the contribution \(\sim(x_{1}-y_{1})(x_{2}-y_{2})\) because it vanishes at \(x_{1}=y_{1}\) and at \(x_{2}=y_{2}\) independently.
In the first three terms the corresponding integrals can be calculated by residues and we obtain \[\int dx_{1}\,dx_{2}\,\frac{f(x_{1},x_{2})\,e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}}{(x_{1}-y_{1}-i\varepsilon)\,(x_{2}-y_{2}-i\varepsilon)}\] \[=f(y_{1},y_{2})\,(2\pi i)^{2}\,e^{-2\varepsilon\lambda}+2\pi i\,e^{-\varepsilon\lambda}\,\int dx_{2}\,\frac{\left[f(y_{1},x_{2})-f(y_{1},y_{2})\right]\,e^{i\lambda(x_{2}-y_{2})}}{x_{2}-y_{2}-i\varepsilon}\] \[+2\pi i\,e^{-\varepsilon\lambda}\,\int dx_{1}\,\frac{\left[f(x_{1},y_{2})-f(y_{1},y_{2})\right]\,e^{i\lambda(x_{1}-y_{1})}}{x_{1}-y_{1}-i\varepsilon}\] \[+\int dx_{1}\,dx_{2}\,\frac{\left[f(x_{1},x_{2})-f(x_{1},y_{2})-f(y_{1},x_{2})+f(y_{1},y_{2})\right]\,e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}}{(x_{1}-y_{1}-i\varepsilon)\,(x_{2}-y_{2}-i\varepsilon)}.\] Inside the remaining integrals all singularities of the integrand cancel, so that it is possible to take the limit \(\varepsilon\to 0\). Due to the Riemann-Lebesgue lemma all contributions with integrals tend to zero in the limit \(\lambda\to\infty\), and after removing the \(\varepsilon\)-regularization and taking \(\lambda\to\infty\) we have \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\,dx_{2}\,\frac{f(x_{1},x_{2})\,e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}}{(x_{1}-y_{1}-i\varepsilon)\,(x_{2}-y_{2}-i\varepsilon)}=(2\pi i)^{2}\,f(y_{1},y_{2}).\] The second term in (C.4) is obtained by \(y_{1}\rightleftarrows y_{2}\) so that finally one obtains the stated result \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\,dx_{2}\,\frac{f(x_{1},x_{2})\,e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}x_{12}y_{21}}{(x_{1}-y_{1}-i\varepsilon)\,(x_{1}-y_{2}-i\varepsilon)\,(x_{2}-y_{1}-i\varepsilon)\,(x_{2}-y_{2}-i\varepsilon)}\\ =(2\pi i)^{2}\,\left[f(y_{1},y_{2})-f(y_{2},y_{1})\right].\] It is evident that the symmetric part of the function \(f(x_{1},x_{2})\) does not contribute, so that the nontrivial contribution is due to the antisymmetric part of the function \(f(x_{1},x_{2})\). The antisymmetric part of the test function \(f(x_{1},x_{2})\) must vanish at \(x_{1}=x_{2}\), and without loss of generality it is possible to use the representation \(f(x_{1},x_{2})=(x_{1}-x_{2})\phi(x_{1},x_{2})\), where \(\phi(x_{1},x_{2})\) can be generic because the antisymmetric part of the function \(\phi(x_{1},x_{2})\) does not contribute. Finally one obtains the formula (C.2) in the case \(n=2\) \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\,dx_{2}\,\frac{x_{12}^{2}\,\phi(x_{1},x_{2})\,e^{i\lambda(x_{1}+x_{2}-y_{1}-y_{2})}}{\left(x_{1}-y_{1}-i\varepsilon\right)\left(x_{1}-y_{2}-i\varepsilon\right)\left(x_{2}-y_{1}-i\varepsilon\right)\left(x_{2}-y_{2}-i\varepsilon\right)}\\ =\left(2\pi i\right)^{2}\,\left[\phi(y_{1},y_{2})+\phi(y_{2},y_{1})\right].\] In the general case we again use the Cauchy determinant identity in the form \[\frac{\prod_{k<j}x_{kj}\,y_{jk}}{\prod_{k,j=1}^{n}(x_{k}-y_{j}-i\varepsilon)} =\det\left(\frac{1}{x_{k}-y_{j}-i\varepsilon}\right)\] \[=\sum_{\sigma\in S_{n}}(-1)^{s(\sigma)}\prod_{k=1}^{n}\frac{1}{x_{k}-y_{\sigma(k)}-i\varepsilon}.\] In analogy with \(n=2\) we shall prove that \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\,\cdots dx_{n}\,\frac{f(x_{1},\ldots,x_{n})\,e^{i\lambda\sum_{k}(x_{k}-y_{k})}}{\prod_{k}\left(x_{k}-y_{k}-i\varepsilon\right)}=\left(2\pi i\right)^{n}f(y_{1},\ldots,y_{n})\] and then use the same identity with the evident permutations.
We introduce the natural generalization of the operators \(X_{k}\) \[X_{k}f(x_{1},\ldots,x_{k},\ldots,x_{n})=f(x_{1},\ldots,y_{k},\ldots,x_{n})\] and the main expansion \[1=\prod_{k=1}^{n}(1-X_{k}+X_{k})=\prod_{k=1}^{n}(1-X_{k})\\ +\sum_{k=1}^{n}X_{k}\prod_{i\neq k}(1-X_{i})+\sum_{k<p}X_{k}X_{p}\prod_{i\neq k,p}(1-X_{i})+\ldots+\prod_{k=1}^{n}X_{k}.\] Due to the Riemann-Lebesgue lemma all contributions containing at least one factor \((1-X_{i})\) applied to \(f(x_{1},\ldots,x_{n})\) have integrands that are regular at the corresponding points, so that the corresponding integrals tend to zero in the limit \(\lambda\to\infty\). In the needed limit only the term \(\prod_{k=1}^{n}X_{k}\) survives and produces \((2\pi i)^{n}\,f(y_{1},\ldots,y_{n})\), in full analogy with the case \(n=2\). Then for the whole sum we obtain \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\cdots dx_{n}\,f(x_{1},\ldots,x_{n})\,\frac{\prod_{k<j}x_{kj}\,y_{jk}\,e^{i\lambda\sum_{a=1}^{n}(x_{a}-y_{a})}}{\prod_{a,b=1}^{n}\left(x_{a}-y_{b}-i\varepsilon\right)}\\ =(-1)^{\frac{n(n-1)}{2}}(2\pi i)^{n}\,\sum_{w\in S_{n}}(-1)^{s(w)}\,f\bigl{(}y_{w(1)},\ldots,y_{w(n)}\bigr{)}\] (C.5) and this identity is equivalent to (C.3). The next step is very similar to the case \(n=2\). Indeed, only the antisymmetric part of the test function \(f(x_{1},\ldots,x_{n})\) gives a nontrivial contribution, so that without loss of generality it is possible to use the following representation for the test function \(f(x_{1},\ldots,x_{n})=\Delta(x_{1},\ldots,x_{n})\,\phi(x_{1},\ldots,x_{n})\), where \(\Delta(x_{1},\ldots,x_{n})=\prod_{k<j}x_{kj}\). We have the evident relation \[\Delta\bigl{(}x_{w(1)},\ldots,x_{w(n)}\bigr{)}=(-1)^{s(w)}\,\Delta\bigl{(}x_{1},\ldots,x_{n}\bigr{)}\] and as a consequence one obtains (C.2) \[\lim_{\lambda\to\infty}\lim_{\epsilon\to 0^{+}}\int dx_{1}\cdots dx_{n}\,\phi(x_{1},\ldots,x_{n})\,\frac{\prod_{k<j}x_{kj}^{2}\,e^{i\lambda\sum_{a=1}^{n}(x_{a}-y_{a})}}{\prod_{a,b=1}^{n}\left(x_{a}-y_{b}-i\varepsilon\right)}\\ =(-1)^{\frac{n(n-1)}{2}}(2\pi i)^{n}\,\sum_{w\in S_{n}}\,\phi\bigl{(}y_{w(1)},\ldots,y_{w(n)}\bigr{)}.\]
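The \(n=1\) statement is simple to probe numerically. The sketch below is our own illustration (not from the paper): it convolves the regularized kernel with a Gaussian test function at small fixed \(\varepsilon\) and increasing \(\lambda\), and compares the result with \(2\pi i\,f(y)\); agreement is limited only by the quadrature tolerances.

```python
# Our numerical illustration of the n = 1 delta-sequence:
#   ∫ f(x) e^{iλ(x-y)} / (x - y - iε) dx → 2πi f(y)   (ε → 0, then λ → ∞).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x*x)              # Gaussian test function
y, eps = 0.3, 1e-5

def conv(lam):
    h = lambda x: f(x) * np.exp(1j*lam*(x - y)) / (x - y - 1j*eps)
    re = quad(lambda x: h(x).real, -8, 8, points=[y], limit=2000)[0]
    im = quad(lambda x: h(x).imag, -8, 8, points=[y], limit=2000)[0]
    return re + 1j*im

print(2j*np.pi*f(y))                    # target value 2πi f(y)
for lam in [5.0, 20.0, 60.0]:
    print(lam, conv(lam))               # approaches the target as λ grows
```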
2309.06097
Fidelity-Induced Interpretable Policy Extraction for Reinforcement Learning
Deep Reinforcement Learning (DRL) has achieved remarkable success in sequential decision-making problems. However, existing DRL agents make decisions in an opaque fashion, hindering the user from establishing trust and scrutinizing weaknesses of the agents. While recent research has developed Interpretable Policy Extraction (IPE) methods for explaining how an agent takes actions, their explanations are often inconsistent with the agent's behavior and thus frequently fail to explain. To tackle this issue, we propose a novel method, Fidelity-Induced Policy Extraction (FIPE). Specifically, we start by analyzing the optimization mechanism of existing IPE methods, elaborating on the issue of ignoring consistency while increasing cumulative rewards. We then design a fidelity-induced mechanism by integrating a fidelity measurement into the reinforcement learning feedback. We conduct experiments in the complex control environment of StarCraft II, an arena typically avoided by current IPE methods. The experiment results demonstrate that FIPE outperforms the baselines in terms of interaction performance and consistency, while remaining easy to understand.
Xiao Liu, Wubing Chen, Mao Tan
2023-09-12T10:03:32Z
http://arxiv.org/abs/2309.06097v1
# Fidelity-Induced Interpretable Policy Extraction for Reinforcement Learning ###### Abstract Deep Reinforcement Learning (DRL) has achieved remarkable success in sequential decision-making problems. However, existing DRL agents make decisions in an opaque fashion, hindering the user from establishing trust and scrutinizing weaknesses of the agents. While recent research has developed Interpretable Policy Extraction (IPE) methods for explaining how an agent takes actions, their explanations are often inconsistent with the agent's behavior and thus frequently fail to explain. To tackle this issue, we propose a novel method, Fidelity-Induced Policy Extraction (FIPE). Specifically, we start by analyzing the optimization mechanism of existing IPE methods, elaborating on the issue of ignoring consistency while increasing cumulative rewards. We then design a fidelity-induced mechanism by integrating a fidelity measurement into the reinforcement learning feedback. We conduct experiments in the complex control environment of StarCraft II, an arena typically avoided by current IPE methods. The experiment results demonstrate that FIPE outperforms the baselines in terms of interaction performance and consistency, while remaining easy to understand. Machine Learning, Reinforcement Learning, Explainable Artificial Intelligence, Policy Extraction. ## I Introduction Deep Reinforcement Learning (DRL) has showcased impressive achievements in various sequential decision-making domains. For example, it has been employed to train agents that outperform professional players in complex games [1, 2, 3, 4] and control robots for intricate tasks [5, 6, 7]. Despite its capacity to learn complex mappings between inputs and outputs, DRL represents this knowledge in an opaque fashion. This lack of transparency hides the key factors influencing decision-making and exacerbates the difficulty of identifying and rectifying errors [8, 9, 10, 11, 12]. Consequently, the practical application of DRL methods is limited, particularly in cost-sensitive domains like autonomous driving, healthcare, and finance. To address the problem of opaque knowledge representation, previous research [13, 14, 15, 16, 17, 18] proposes to extract self-interpretable policies from the interaction trajectories of a DRL policy. At a high level, these approaches leverage the transparent structure of self-interpretable models to unveil the hidden knowledge within DRL systems (see Figure 1 (a)). Specifically, the self-interpretable models are trained on the interaction data, consisting of state-action pairs, collected during the interaction between agents and their environment. When the self-interpretable model is provided with the same state as the agent, it reveals the key factors by showcasing the rules that are activated within the self-interpretable model. However, despite their ability to identify key features in specific observations, these methods suffer from a problem of **inconsistent explanation**. Specifically, the goal of interpretable policy extraction is to describe the DRL agent's decisions accurately with an interpretable structure. When the extracted policy diverges from the decisions made by the DRL policy, the output explanations fail to interpret the agent's decisions. According to Figure 1 (b), the consistency of current IPE policies is affected by the difficulty of the task. A typical example is that, when dealing with the \(2s\_vs\_1sc\) task in StarCraft II, the success rate of the existing method drops to about 40%.
In conclusion, existing IPE methods aim to maximize rewards rather than the fidelity of the extracted policies throughout the entire lifecycle, which deviates from the goal of explaining the DRL policy, particularly in complex tasks like those in the StarCraft II platform. This paper proposes a novel method called Fidelity-Induced Policy Extraction (FIPE) to address the problem of inconsistent explanation. At a high level, FIPE introduces a fidelity-induced mechanism that integrates the fidelity metric into the reinforcement learning feedback, thus guiding the model toward improving its consistency with the teacher. Specifically, we first analyze the preference towards higher rewards exhibited by current methods from a theoretical perspective, highlighting their deviation from the original intent of interpretable policy extraction. Subsequently, we design a fidelity-induced mechanism to rectify this preference, and estimate the upper limits of this mechanism. Finally, we propose an approximate solution for the fidelity-induced mechanism, simplifying the calculation of the FIPE mechanism, especially in complex tasks. We conduct experiments to evaluate the proposed method within three complex multi-agent reinforcement learning tasks in the StarCraft II platform, i.e., \(3m\), \(2s\_vs\_1sc\), and \(8m\). According to the experiment results, the proposed method outperforms the baselines in interactive performance and consistency. Furthermore, we demonstrate use cases of FIPE compatible with other self-explainable structures. Overall, the proposed method exhibits greater applicability in complex tasks. ## II Related Works Interpretable Policy Extraction (IPE) is based on the concept of imitation learning [19, 20, 21]. At a high level, IPE aims to mimic the behavior of DRL agents by modeling the interaction trajectories using a self-explainable model structure. Some works propose novel interpretable structures to replace deep neural networks [22, 23, 24]. The primary advantage of these emerging structures lies in their inherent interpretability, which summarizes the decision logic of the DRL agent. However, it is challenging to balance accuracy and interpretability in a model, i.e., the accuracy-interpretability trade-off [25, 26, 27, 28, 29]. Empirically, more flexible methods that are capable of estimating more complex shapes for the unknown function are far less interpretable. Consequently, these innovative model structures are often more complex than traditional interpretable structures, making them more difficult to understand. Some works [14, 15, 30, 31] improve the traditional interpretable model structure. This approach avoids using uninterpretable components and thus provides concise explanations for the decisions of DRL agents. Nevertheless, traditional interpretable structures (e.g., decision trees) struggle to generalize to complex tasks due to their unstable performance. Recent studies have shifted their focus towards optimizing the extraction process [13, 16, 18, 22, 32, 33]. These methods involve selectively retaining interactive data that is deemed more "significant" for generating self-explainable outputs. This process improves the performance of self-explainable rule-based models. While they perform well in simple environments (e.g., Gym), these methods encounter issues of inconsistent explanation when deployed in challenging tasks (e.g., StarCraft II) due to the disparity between interpretable and original policies (refer to Figure 1 (b)).
## III Preliminary This section provides background on producing interpretable representations of the "black-box" decision logic of deep reinforcement learning models. At a high level, existing policy extraction methods employ a supervised learning paradigm to fit the sampled data of the model, i.e., state, action, and \(Q\)-value, denoted as \((s,a,q)\). This process transforms the model into an interpretable sequence of rules. Technically, existing approaches implicitly extract high-quality samples from the sampled data, enabling the extracted rule-based policies to achieve greater rewards within the environment. To elaborate on this process, we discuss the feedback and optimization mechanisms of reinforcement learning from the standpoint of Interpretable Policy Extraction. Table I summarizes the relevant symbols and variables of this paper. ### _Implicit Knowledge in Reinforcement Learning Feedback_ To maximize the cumulative rewards obtained from the interaction between an agent and its environment, an action-reward function is used to guide the intelligent agent in learning the optimal policy. When sampling with a well-trained policy, the \(Q\)-value in the interaction samples reflects the expectation of discounted rewards for a specific action. Technically, in a finite-horizon Markov Decision Process (MDP) defined by the tuple \((S,A,P,R)\), where \(S\) represents the state set, \(A\) represents the action set, \(P:S\times A\times S\to[0,1]\) (i.e., \(P(s,a,s^{\prime})=p(s^{\prime}|s,a)\)) represents the environment transition probability function, and \(R:S\to\mathbb{R}\) represents the reward function, we consider that rewards closer to the current time are more valuable than future rewards, which may not materialize. Therefore, we often use the discounted return to express a preference for future expected returns, denoted as \(U=\mathbb{E}[\sum_{k=t+1}^{+\infty}\gamma^{k-t-1}r_{k}]\), where \(\gamma\in[0,1]\) is the discount factor. Smaller values of \(\gamma\) assign less importance to future rewards. Using the discounted reward, we can define the action value at time \(t\), representing the quality of action \(a_{t}\) in state \(s_{t}\) and allowing us to evaluate actions. Specifically, the action value function (also known as the \(Q\) function) is formulated as \[Q_{\pi}(s_{t},a_{t})=\mathbb{E}[U_{t}|S_{t}=s_{t},A_{t}=a_{t}]. \tag{1}\] Current policy extraction methods leverage supervised learning to train models using sampled data, which includes states, actions, and \(Q\)-values, denoted as \((s,a,q)\). Among these, the \(Q\)-value signifies the anticipated discounted reward for a given sample. As a result, policy extraction methods enhance the performance of interpretable rule-based policies in tasks by preserving diverse samples with high expected rewards. ### _Optimization Approach for Interpretable Policy Extraction_ Interpretable Policy Extraction aims to distill rule-based policies that accurately represent the behavior of the original agent's policy [13, 16]. Technically, given a teacher policy model \(\pi^{*}\) and a student policy model \(\tilde{\pi}\), for a specific state \(s\), the extracted rule-based policy selects action \(a\), incurring a cost relative to the teacher policy \(\pi^{*}\), denoted as \(C_{s}(\pi^{*},\tilde{\pi})\). Generally, \(C_{s}(\pi^{*},\tilde{\pi})\) is adaptable, and different definitions of \(C_{s}(\pi^{*},\tilde{\pi})\) result in policies with different preferences. Fig. 1: (a) Explaining an agent’s behavior using self-interpretable rules. 
(b) Success rate of explanation using existing policy extraction methods (using VIPER [16] as an example). The vertical axis represents the tasks, and the horizontal axis represents the success rate of explanations. Typically, policy extraction methods optimize losses based on rewards, denoted as \(l(s,\tilde{\pi})\). We have: \[C_{s}(\pi^{*},\tilde{\pi}) =\mathbb{E}_{a\sim\tilde{\pi}(s)}[C(s,\tilde{\pi})] \tag{2}\] \[=\mathbb{E}_{a\sim\tilde{\pi}(s)}[l(s,\tilde{\pi})]. \tag{3}\] At this point, the overall cost-to-go function of \(\tilde{\pi}\) with respect to \(\pi^{*}\) over horizon \(T\) is expressed as: \[J(\tilde{\pi}) =\sum_{t=1}^{T}\mathbb{E}_{s\sim d^{\tilde{\pi}}}[C_{s}(\pi^{*},\tilde{\pi})] \tag{4}\] \[=T\mathbb{E}_{s\sim d^{\tilde{\pi}}}[C_{s}(\pi^{*},\tilde{\pi})]\] (5) \[=T\mathbb{E}_{s\sim d^{\tilde{\pi}}}[l(s,\tilde{\pi})]. \tag{6}\] To address the issue of accumulating sequential decision errors [13], a common approach involves using a mixed strategy between the student and teacher to generate interaction trajectories. The mixed strategy is represented as: \[\pi_{i}=\beta\pi^{*}+(1-\beta)\tilde{\pi}, \tag{7}\] where \(\pi_{i}\) represents the mixed strategy for the \(i\)-th round of interaction, and \(\beta\in[0,1]\) is a decreasing parameter. As a result, the interaction trajectory obtained in each round is denoted as \(\tau=\{(s_{t},\pi_{i}(s_{t}))\}_{t=1:T}\). The goal of policy extraction is to find a student policy \(\tilde{\pi}^{*}\) that minimizes the average cost under the induced state distribution, formally represented as: \[\tilde{\pi}^{*} =\arg\min_{\tilde{\pi}}J(\tilde{\pi}) \tag{8}\] \[=\arg\min_{\tilde{\pi}}T\mathbb{E}_{s\sim d^{\tilde{\pi}}}[l(s,\tilde{\pi})]\] (9) \[=\arg\min_{\tilde{\pi}}\mathbb{E}_{s\sim d^{\tilde{\pi}}}[l(s,\tilde{\pi})]. \tag{10}\] The student policy \(\tilde{\pi}\) is typically represented with structured and semantically clear rules, e.g., decision trees or nearest neighbor models, making it easy to understand.
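To make this optimization concrete, the sketch below implements the \(\beta\)-mixed data collection of Eq. (7) with a decision-tree student refit on aggregated teacher-labelled states, approximately minimising the surrogate objective of Eq. (10). It is a minimal DAgger-style illustration, not any specific IPE method; the Gym-style `env` and the `teacher` object exposing `act(s)` are hypothetical stand-ins.

```python
# Minimal mixed-policy extraction loop following Eqs. (7)-(10).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_policy(env, teacher, n_rounds=10, horizon=200, seed=0):
    rng = np.random.default_rng(seed)
    states, actions = [], []                 # aggregated dataset
    student = None
    for i in range(n_rounds):
        beta = 1.0 - i / n_rounds            # decreasing mixing parameter
        s = env.reset()
        for _ in range(horizon):
            a_teacher = teacher.act(s)
            # Execute pi_i = beta * pi* + (1 - beta) * pi~  (Eq. 7)
            if student is None or rng.random() < beta:
                a = a_teacher
            else:
                a = student.predict(np.asarray(s).reshape(1, -1))[0]
            states.append(np.asarray(s))
            actions.append(a_teacher)        # always label with the teacher
            s, _, done, _ = env.step(a)
            if done:
                break
        # Refit the interpretable student on all states visited so far.
        student = DecisionTreeClassifier(max_depth=6).fit(states, actions)
    return student
```

As Section III notes, optimizing only this reward-derived loss favours high-return behaviour; nothing in the loop above measures fidelity to the teacher, which is the gap FIPE targets.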
2307.16728
Valeriepieris Circles for Spatial Data Analysis
The Valeriepieris circle is the smallest circle that can be drawn on the globe containing half of the world's population. The Valeriepieris (VP) circle acts as a spatial median, effectively splitting spatial data into two halves in a unique way. In this paper the idea of the VP circle is generalised and a fast algorithm and corresponding software package to compute it are described. The VP circle is compared to other measures of centre and dispersion for population distributions and is shown to reflect expected differences between countries and changes over time. By studying the VP circle as a function of the included population fraction a new way of representing population distributions is constructed, as well as a mathematical model of its expected behaviour. Finally a measure of population `centralisation' is constructed which measures the tendency of a territory to be dominated by a single population centre or to have a more even distribution of population.
Rudy Arthur
2023-07-31T14:50:41Z
http://arxiv.org/abs/2307.16728v1
# Valeriepieris Circles for Spatial Data Analysis ###### Abstract The Valeriepieris circle is the smallest circle that can be drawn on the globe containing half of the world's population. The Valeriepieris (VP) circle acts as a spatial median, effectively splitting spatial data into two halves in a unique way. In this paper the idea of the VP circle is generalised and a fast algorithm and corresponding software package to compute it are described. The VP circle is compared to other measures of centre and dispersion for population distributions and is shown to reflect expected differences between countries and changes over time. By studying the VP circle as a function of the included population fraction a new way of representing population distributions is constructed, as well as a mathematical model of its expected behaviour. Finally a measure of population 'centralisation' is constructed which measures the tendency of a territory to be dominated by a single population centre or to have a more even distribution of population. ## 1 Introduction Finding the 'centre' of a spatial distribution is common not only in demographics (Niedomysl et al., 2017; Hall et al., 2019), but has applications in any field that deals with spatial data, from epidemiology (El Deeb, 2021) to economics (Grether and Mathys, 2010; Quah, 2011). Rogerson (2015) gives an entertaining account of the often contentious and highly political history of geographic centres in the United States and Murray (2018) discusses various 'complexities' in defining the centre of any spatial object or distribution. Studying centres over time has been especially common in North America (Plane and Rogerson, 2015; Breau et al., 2018; Rogerson, 2021) where the location of the population or economic centre of gravity over time shows the westward drift of settler populations in those countries. There are numerous approaches to finding the 'centre' of the world's or a region's population. One approach, taken by the US Census Bureau (Geography Division U.S. Census Bureau, 2011), is to compute the **centre of population** from \[\hat{\phi} =\frac{\sum_{i}w_{i}\phi_{i}}{\sum_{i}w_{i}} \tag{1}\] \[\hat{\lambda} =\frac{\sum_{i}w_{i}\lambda_{i}\cos(\phi_{i})}{\sum_{i}w_{i}\cos(\phi_{i})}\] where the sum is over \(i\) areas with population \(w_{i}\) and latitude and longitude \((\phi_{i},\lambda_{i})\) to yield \((\hat{\phi},\hat{\lambda})\), the latitude and longitude of the centre. Barmore (1991) notes that this gives the point at which a flat map would balance if the population was placed on a sinusoidally (Sanson-Flamsteed) projected map with central meridian at longitude \(\hat{\lambda}\). Barmore (1991) suggests the centre of population be defined instead as the balance point on a map projected using the Azimuthal Equidistant projection centred at the centre of population. This would keep distances of populations to the centre undistorted but must be computed iteratively. The dependence of the above on the map projection has provoked alternative definitions. An influential one is given by Aboufadel and Austin (2006) who locate population mass using three dimensional vectors originating at the Earth's centre and compute the weighted average vector \[\hat{x} =\frac{\sum_{i}w_{i}\vec{x}_{i}}{\sum_{i}w_{i}} \tag{2}\] \[\bar{x} =\frac{\hat{x}}{|\hat{x}|}\] Here \(\vec{x}_{i}\) are the vectors locating each of the populated areas and \(\hat{x}\) is the population weighted average vector. Barmore (1991) and others e.g. 
Grether and Mathys (2010) note that the weighted average \(\hat{x}\) produces points not on the surface of the earth, though \(\bar{x}\) will be on the surface. I will refer to this point as the **3d centre of population**. In practice (Rogerson, 2021) finds that this point and Barmore (1991)'s iterative Azimuthal centre are very similar, though the 3d point is much easier to compute. There are other definitions of centre which do not use a weighted sum of population. The intersection of the parallels of latitude and longitude that divide the target population in half is called the median centre or cross-median (Plane and Rogerson, 2015). This definition depends on the orientation of the latitude and longitude grid and the longitude median is undefined for the entire globe. The **geometric median** is another possible definition - this is the point for which the sum of distances to each individual is minimised, however it is somewhat complicated to compute (Beck and Sabach, 2015). The geographic centre is a related idea - this is the point where the sum of squared distances from the centre to all points in the region is minimised. Rogerson (2015) provides a method to calculate this, which is similar to the Azimuthal iterative method of Barmore (1991). As pointed out by Barmore (1991), we want a statistic that indicates where people are, not where they could easily travel to. Likewise we want a statistic that doesn't depend on arbitrary map constructions - including how we draw grid lines or how we project onto the plane. The centre of population fits most of these criteria, however the centre of population is often far from where most people actually are. The centre of the US population is in southern Missouri, the global centre of population is in the Arabian desert (see the following sections for the exact coordinates). For showing trends over time the actual location of the centre may not be so important, only the relative change from year to year matters, however as a 'representative point' the centres above all leave something to be desired. ### The Valeriepieris Circle The Valeriepieris circle1 is the circle of smallest radius containing half the Earth's population. The idea gained popular attention e.g. (Fisher, 2013) and some academics, notably economist Danny Quah (Quah, 2015, 2016), discussed the idea. The Valeriepieris circle for the whole earth is centred in the north of Yunnan province, China with a radius of around 3300km. The radius is remarkably small due to the very high populations of northern India, southern China and the rest of south east Asia which together constitute half the world population. Footnote 1: Valeriepieris is the Reddit username of Ken Myers who first suggested the idea there: [https://www.reddit.com/r/MapPorn/comments/1dqh7d/after_seeing_a_recent_post_about_the_population/](https://www.reddit.com/r/MapPorn/comments/1dqh7d/after_seeing_a_recent_post_about_the_population/) The idea behind the Valeriepieris (hereafter VP) circle can naturally be generalised. The VP circle is the answer to the question: _What is the smallest circle containing at least a fraction \(f\) of the population of the area \(c\)?_ The original VP circle has \(f=0.5\) and \(c\) equal to the whole Earth, however we can use any fraction \(f\) and any populated area: Europe, England, Exeter etc. for \(c\). I will refer to the radius of this circle as the **VP f-radius** for \(c\), denoted by \(R(f)\). 
Similarly call the centre of the circle the **VP f-centre**, denoted \((\phi_{VP}(f),\lambda_{VP}(f))\). I will refer to the special case with \(f=0.5\) as _the_ VP circle and if the \(f\) value is not specified it is assumed to be \(0.5\) by default. The VP radius can be used as a measure of population dispersion. A small radius indicates a population concentrated around a central point, a large one indicates a more uniform spread. Some other measures of population dispersion are discussed in Rogerson (2021). The simplest and only one I consider is the **Bachi standard distance** (Bachi, 1963) \[s(y)=\sqrt{\frac{\sum_{i}w_{i}d_{i}(y)^{2}}{\sum_{i}w_{i}}}, \tag{3}\] where \(d_{i}\) is the distance between the location \(i\) and the centre \(y\). In this paper I will argue that the VP centres and radii provide an interesting and useful summary of spatial data while avoiding some of the problems mentioned above. In Section 2 I give a method to compute VP circles for any area and compare the VP circle to some of the other centre definitions mentioned above. In Section 3 I look at the VP radius as a function of time, a popular use case for population centre statistics. Section 4 looks at VP circles as a function of \(f\). This yields an interesting statistic that reduces spatial population distributions from two dimensions to one, which can be fit with a simple mathematical model. I also give a statistic which characterises patterns of population density in a novel way. I conclude in Section 5. ## 2 Finding Valeriepieris Circles I will use gridded population data from SEDAC (Center for International Earth Science Information Network - CIESIN - Columbia University, 2018). This data is available at grid resolutions from 1 degree (\(\sim 111\)km) to 30 seconds (\(\sim 1\)km) for the years 2000, 2005, 2010, 2015 and 2020. Having data on a regular grid is not necessary for finding VP circles, but makes a number of computational efficiencies possible. The algorithm below will find a VP circle, with any kind of population data, by an exhaustive search. 1. Compute the total population \(P\) of \(c\) and the target population \(t=fP\). Choose a starting point \((\phi_{0},\lambda_{0})\). 2. Using a binary search, grow a circle centred at \((\phi_{0},\lambda_{0})\) to find the minimum radius where the population within the circle is \(\geq t\). Call this radius \(R_{0}\). 3. Shift the circle to the next location and calculate the population within a distance \(R_{0}\) from this point. 4. If the population is \(\geq t\) find the radius \(R_{1}\) (again by binary search) such that \(R_{1}\) is the smallest possible radius where the population is still \(\geq t\). 5. After checking all the locations, the final radius \(R(f)\) will be the VP f-radius and the grid points where the population within the circle is \(\geq t\) will be the VP f-centres. I refer to VP centres, plural, because in some cases, typically when \(f\) is very small and the grid is coarse, there may be multiple minima. In practice, for any reasonably fine grid and \(f\gtrapprox 0.05\), this doesn't happen and the VP circle found by the above process is unique. The binary search proceeds in the obvious way: starting from, say, \((r_{min},r_{max})=(0,R_{c})\), where \(R_{c}\) is the greatest distance between any two points in \(c\), compute the population, \(P_{0}\), within a distance \(r_{0}=\frac{r_{min}+r_{max}}{2}\) from the centre. If \(P_{0}<t\) then set \(r_{min}=r_{0}\), otherwise set \(r_{max}=r_{0}\) and recalculate the population. 
The division is repeated until \(r_{max}-r_{min}<\epsilon\), i.e. we have narrowed down the radius to within some threshold \(\epsilon\). Once this happens set \(R=r_{max}\). I have chosen the threshold \(\epsilon\) to be 1km which is smaller than the grid resolutions I will use and means this process usually converges in fewer than 10 iterations. When the data forms a regular grid the search can be made significantly more efficient: * When shifting the centre to a neighbouring grid point, only a small number of population counts change. By keeping track of which grid points are on the edges of the circle we can add and remove only the necessary data. * Using the symmetry of the sphere, distances only need to be recalculated when the latitude of the centre point changes, otherwise, at fixed latitude, the same distance matrix can be shifted and reused. * If the VP circle is first found on a coarse grid we can narrow down the search area significantly e.g. only search within, say, \(\pm 5\) degrees of the coarse centre on the fine grid. * Since the distance calculations are the same every time, if multiple \(f\) values are required, do all of them at the same time. I have implemented the above algorithm in Cython (Behnel et al., 2011) and made it available as a Python package2. On a single core (Intel i7 1.9 GHz) using the SEDAC 15 minute resolution data, which covers the globe with a \(720\times 1440\) grid, the VP circle takes about 16.5 seconds to compute. If the coarse grid is used first to roughly locate the centre, the whole process is significantly quicker, completing in less than 1 second. I have also implemented a number of other popular methods to calculate population centres Footnote 2: [https://pypi.org/project/valeriepieris/](https://pypi.org/project/valeriepieris/) 1. **Centre of population**, see equation 1 2. **3d Centre of population**, see equation 2 3. **Geometric median** I calculate the geometric median using Weiszfeld's Method (Weiszfeld, 1937) modified after Vardi and Zhang (2001) to work when the centre is at a grid point. As found in the original Reddit post, the VP circle for the entire globe is centred between India and China, see Figure 1. The exact position and radius are given in Table 1 which also gives co-ordinates of the other centres. As most of the world's population is in south east Asia, all 4 centres cluster around there, and interestingly, all 4 centres are within 10 degrees of latitude of each other. The Centre of population and 3d Centre of population are furthest west, influenced by sizeable populations in Africa, Europe and the Americas. The geometric median is closest to the VP centre, which is expected - the VP circle is also a kind of spatial median, half the data is inside the circle and half outside, and median statistics are less influenced by the extremes of the distribution they summarise. The VP radius is about half of the Bachi standard distance. \begin{table} \begin{tabular}{|c|c|c|} \hline Method & (Lat, Lon) & Bachi standard distance (km) \\ \hline Centre of population & \((22.125,51.375)\) & 6583 \\ 3d Centre of population & \((36.625,66.875)\) & 6364 \\ Geometric median & \((24.625,72.125)\) & 6629 \\ \hline \hline & \((\phi_{VP},\lambda_{VP})\) & \(R\) (km) \\ \hline VP 0.5-circle & \((28.375,100.625)\) & 3386 \\ \hline \end{tabular} \end{table} Table 1: Positions of the various centres and radii for the globe. All lat, lon values have been ‘snapped’ to the nearest grid point and radii are only reported to the nearest km. 
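The search procedure described above can be condensed into a short sketch. This is a minimal version operating on flat arrays of grid-cell coordinates and populations, without the grid-reuse efficiencies; the function and variable names are illustrative and not the released package's API.

```python
# Exhaustive VP search with per-centre binary search on the radius.
import numpy as np

EARTH_R = 6371.0  # mean Earth radius in km

def haversine(lat0, lon0, lats, lons):
    # Great-circle distance from (lat0, lon0) to arrays of points, in km.
    p0, p = np.radians(lat0), np.radians(lats)
    dp, dl = p - p0, np.radians(lons - lon0)
    a = np.sin(dp / 2) ** 2 + np.cos(p0) * np.cos(p) * np.sin(dl / 2) ** 2
    return 2 * EARTH_R * np.arcsin(np.sqrt(a))

def vp_circle(lats, lons, pop, f=0.5, eps=1.0):
    target = f * pop.sum()
    best_r, best_centre = np.inf, None
    for lat0, lon0 in zip(lats, lons):       # step 3: shift to each location
        d = haversine(lat0, lon0, lats, lons)
        if pop[d <= best_r].sum() < target:  # cannot beat the current best
            continue
        r_min, r_max = 0.0, min(best_r, d.max())
        while r_max - r_min > eps:           # steps 2/4: binary search
            r0 = 0.5 * (r_min + r_max)
            if pop[d <= r0].sum() >= target:
                r_max = r0
            else:
                r_min = r0
        if r_max < best_r:
            best_r, best_centre = r_max, (lat0, lon0)
    return best_centre, best_r               # step 5: VP f-centre and f-radius
```

The skip test plays the role of step 3: a candidate centre whose population within the current best radius falls short of \(t\) cannot yield a smaller circle, so no binary search is needed there.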
The VP circles for a number of countries in Europe are shown in Figure 2. Each one takes a few seconds to compute; in fact the bottleneck is usually the intersection of the polygon data (obtained from (World Bank, 2023)) with the population grid. Most countries' VP centres are close to that country's capital/largest city. Most are quite a bit smaller than their country, some very much so e.g. Iceland's VP radius is only 15 km. We will look at some specific countries in more detail in the next section and study the behaviour of the VP radius as a function of \(f\) in Section 4. ## 3 Centres of Population over Time One of the most prominent uses of population centres has been to track the change in population in the US over time, in particular its westward movement (Rogerson, 2015; Plane and Rogerson, 2015; Rogerson, 2021). Population centres of the UK and its constituent countries have also attracted interest. Dorling and Atkins (1995) suggest that the population centre of Great Britain has been moving south at about 100 meters per year. Figure 3 shows the VP circles, Centre of Population, 3d Centre of Population and Geometric median for the continental United States (i.e. not including Alaska and Hawai'i). First note again that the VP centre is significantly further east and closer to the main 'mass' of population than the others (it is near Craigsville, West Virginia). The VP radius is around 950km (see Table 2 for precise values) and encompasses the major cities of the north east, great lakes and eastern sun belt. All the centres show a gradual southwest drift, though between 2015 and 2020 the VP centre makes a sudden southward jump while moving back east slightly. This is less significant in light of the growing VP radius, which increased by around 90km over two decades. The VP circle puts the centre of the US in the east, with New York at the north east boundary, Figure 1: The Valeriepieris circle (small circle, solid outline, blue fill) for the entire globe. The centre is positioned roughly half-way between India and China, see Table 1 for the exact location. Other population centres are shown, as well as a circle of radius equal to the Bachi standard distance with origin at the Centre of population (large circle, dashed outline, pink fill). Using SEDAC 15 minute resolution, \(\sim 30\)km, data. \begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Continental US} & \multicolumn{2}{|c|}{Great Britain} \\ \hline \hline Year & \((\phi_{VP},\lambda_{VP})\) & \(R\) (km) \\ \hline 2000 & \((39.7708,-80.6875)\) & 863 \\ 2005 & \((39.4792,-80.6042)\) & 878 \\ 2010 & \((39.7292,-80.9375)\) & 901 \\ 2015 & \((39.4792,-81.2708)\) & 923 \\ 2020 & \((38.3958,-80.6875)\) & 949 \\ \hline \multicolumn{3}{|c|}{Mongolia} \\ \hline \hline Year & \((\phi_{VP},\lambda_{VP})\) & \(R\) (km) \\ \hline 2000 & \((47.7292,104.4792)\) & 227 \\ 2005 & \((48.7708,105.7708)\) & 167 \\ 2010 & \((48.4375,105.6875)\) & 134 \\ 2015 & \((48.0625,107.4792)\) & 83 \\ 2020 & \((47.9792,106.8542)\) & 34 \\ \hline \end{tabular} \end{table} Table 2: VP centre and VP radius for 3 areas over time. All lat, lon values have been ‘snapped’ to the nearest grid point and radii are only reported to the nearest km. Figure 2: The Valeriepieris circles for countries in ‘Europe’ here equivalent to a bounding box with lower left lat, lon \((34,-25)\) and upper right lat, lon \((80,35)\). Only the portions of countries that intersect with this bounding box are used in the VP calculations (e.g. only western Russia). 
Each country is treated individually, so even if the VP circle crosses a border, population in the other country is not counted. Using SEDAC 2.5 minute resolution, \(\sim 5\)km, data. Chicago at the north west and Atlanta at the south. The median American lives in this circle. The changing demographics of the US are reflected not as much by the position of the centre as by the growing VP radius. Proportionally higher population growth in the south and west means that progressively larger radii are required to 'capture' 50% of the population, even if the centre doesn't move as far west. The US is an unusual case, with heavily populated east and west coasts separated by a large expanse of sparsely populated territory. Figure 4 shows the UK over the same period. Compared to the US there is very little change over time. None of the centres move very much; in fact, the VP centre is located in the same grid cell for all dates. The radius decreases slightly, by about 2 km every 5 years, see Table 2 for precise values. This is generally in line with the rather slow evolution of the UK population in the 20th century as reported by Dorling and Atkins (1995). The US is the country with the largest increase in VP radius from 2000 to 2020 at 86km, indicating a more even dispersal of population across the territory. At the opposite extreme, the largest reduction in VP radius, indicating population concentration, is found in Mongolia whose VP radius has reduced by 193km, see Figure 5 and Table 2 for precise values. Mongolia has experienced huge rural to urban migration in recent years (International Organization for Migration (IOM) et al., 2021) which is shown quite cleanly by the reduction in VP radius. The examples of the US and Mongolia suggest that changes in VP radius over time are indicative of changing demographic patterns. Figure 4: Valeriepieris circles for Great Britain between 2000 and 2020 overlaid on 2020 population data. Also shown are the other centres of population in the same period. Using SEDAC 2.5 minute resolution, \(\sim 5\)km, data. Figure 5: Valeriepieris circles for Mongolia between 2000 and 2020 overlaid on 2020 population data. Also shown are the other centres of population in the same period. Using SEDAC 2.5 minute resolution, \(\sim 5\)km, data. ## 4 VP Profiles At \(f=1\) the VP circle must contain the entire population, so its radius is determined not by the population distribution but only by the shape of the populated area. Imagine a circular island of radius \(R_{I}\) with a radially symmetric population density \(\rho(r)\). The population at \(R\) is \[P(R)=2\pi\int_{0}^{R}rdr\rho(r)\] Set \[\rho(r)=\begin{cases}\frac{\rho_{0}R_{0}^{2-a}}{(r+R_{0})^{2-a}},&\text{if }r\leq R_{I}\\ 0,&\text{otherwise}\end{cases}\] The population density equals \(\rho_{0}\) at the centre, \(r=0\), and increases or decreases like a power of the radius as we move away from the centre up to some max or min at the boundary of the island, \(R_{I}\). We then have \[P(R)=2\pi\rho_{0}R_{0}^{2-a}\int_{0}^{R}r\left(r+R_{0}\right)^{a-2}dr=2\pi\rho_{0}R_{0}^{2-a}(r+R_{0})^{a}\left(\frac{1}{a}-\frac{R_{0}}{(r+R_{0})(a-1)}\right)\bigg{|}_{0}^{R}\] The special cases of \(a=0\) and \(a=1\) have to be treated separately, where the integral gives a log. Ignoring this complication, if the total population is \(P=P(R_{I})\) then solving the above for \(R\) with \(P(R)=fP\) gives the VP f-radius. For large R we have approximately \[\left(R(f)+R_{0}\right)^{a}\simeq\frac{afP}{2\pi\rho_{0}R_{0}^{2-a}}\] or even more approximately: \(R\sim f^{1/a}\). 
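A quick numerical check of this disc model is straightforward (the parameter values below are arbitrary): integrate the power-law density by quadrature, invert \(P(R)\) for the f-radius, and confirm the square-root behaviour of the uniform case \(a=2\).

```python
# Numerical disc model: P(R) by trapezoidal quadrature, then R(f) by inversion.
import numpy as np

def vp_radius_disc(a=2.0, R0=10.0, R_I=1000.0, n=10001):
    r = np.linspace(0.0, R_I, n)
    rho = R0 ** (2 - a) / (r + R0) ** (2 - a)      # rho0 = 1 cancels below
    g = 2 * np.pi * r * rho                        # integrand of P(R)
    dP = 0.5 * (g[1:] + g[:-1]) * np.diff(r)       # trapezoidal rule
    P = np.concatenate([[0.0], np.cumsum(dP)])
    f = np.linspace(0.01, 1.0, 100)
    return f, np.interp(f * P[-1], P, r)           # solve P(R) = f * P_total

f, R_f = vp_radius_disc(a=2.0)                     # uniform density
assert np.allclose(R_f, np.sqrt(f) * R_f[-1], atol=1.0)  # R(f) = sqrt(f) R(1)
```

Running the same check with \(a<2\) shows the radius growing more slowly than \(\sqrt{f}\), consistent with the \(R\sim f^{1/a}\) approximation above.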
It is convenient to normalise by \(R(1)\) to create a function that is bounded between 0 and 1, which allows a direct comparison of places with very different areas. Define \[\tau(f)=\frac{R(f)}{R(1)} \tag{4}\] The special case of a uniform population density corresponds to \(a=2\) and can be solved exactly where the (normalised) VP f-radius grows like the square root of \(f\). For \(a<2\) the radius grows more slowly, since the population is decaying more rapidly than the circle's area grows as a function of \(R\). I will call the normalised VP f-radius, considered as a function of the population fraction, \(\tau(f)\), the **VP profile** for some area c. Figure 6 shows the VP profile for the United Kingdom as well as some of its VP f-circles. The global best fit corresponds to \(a\sim 1\) though the graph of \(\tau\) has some interesting inflection points indicating there is structure in the data not captured by the simple disc model. From \(f=0\) to \(f\sim 0.15\) the VP circle is entirely contained in London and \(\tau\) increases at a rate close to \(a=2\). Once \(f\) is large enough that we need to go outside of London, the slope changes abruptly to cover the gap between the densely populated areas of England in the south east and north west. Once the north west of England is reached, between \(f\sim 0.33\) and \(f\sim 0.8\) the radius increases more slowly until much bigger jumps are required to reach the remaining population in the north east of England and then Scotland. Figure 7 shows Mongolia, a rather clear example, where around 50% of the population live at high densities in the capital and the rest are extremely dispersed. This results in a sharp inflection point around \(f\sim 0.5\). The opposite extreme is Germany, Figure 8, which has a comparatively uniform population distribution and is lacking obvious inflection points, so the disc model does a good job fitting this data. Even in cases (e.g. UK and Mongolia) where the global fit is bad, a model where the radial decay rate \(a\) abruptly changes still fits the data well. While the VP profile gives the most information and the slope parameter \(a\) summarises some information about the population distribution, it is useful to have a single, model-independent statistic to measure this tendency of areas to spread or concentrate their population. Define the statistic \[C_{100f}=1-\frac{\tau(f)}{\sqrt{f}} \tag{5}\] as a measure of what I will call **centralisation**; I will use \(C_{50}\) for simplicity. Large values (close to 1) correspond to highly centralised countries and smaller values to more dispersed ones. A uniform population on a disc gives \(C_{50}=0\) (the factor of \(\sqrt{f}\) is to normalise this case). A population highly concentrated in the centre gives \(C_{50}\simeq 1\). Table 3 gives the centralisation scores for some countries of interest. The UK score is above average and the US score is below. The average nation's \(C_{50}\) was relatively static between 2000 and 2020. Figure 6: Left: VP 0.25-,0.5-,0.75- and 1-circles for Great Britain. Right: VP profile with best fit lines, note the log axes. Using SEDAC 2.5 minute resolution, \(\sim 5\)km, data. Figure 7: Left: VP 0.25-,0.5-,0.75- and 1-circles for Mongolia. Right: VP profile with best fit lines, note the log axes. Using SEDAC 2.5 minute resolution, \(\sim 5\)km, data. Figure 8: Left: VP 0.25-,0.5-,0.75- and 1-circles for Germany. Right: VP profile with best fit lines, note the log axes. Using SEDAC 2.5 minute resolution, \(\sim 5\)km, data.
As expected, Mongolia has a very high score and Germany a low one. Note that this is not simply a proxy for density: there are countries with low density e.g. Mongolia (2 people / km\({}^{2}\)) and Kazakhstan (7 people / km\({}^{2}\)) with both high and low \(C_{50}\), 0.96 and 0.48 respectively. Likewise there are densely populated countries e.g. India (460 people / km\({}^{2}\)) and Japan (344 people / km\({}^{2}\)) with very different \(C_{50}\), 0.48 and 0.84 respectively. ## 5 Discussion The VP circle (the standard version with \(f=0.5\)) is a kind of spatial median. Half of the people are inside the circle, the rest are outside. Like the one dimensional median, it is less affected by extreme values than the mean. Thus the VP centre tends to be closer to the main mass of the population, e.g. the north eastern US or south eastern UK. The VP radius provides a useful measure of population dispersion. Section 3 showed that this radius increases over time for the US, likely due to disproportionate growth of population in the more sparsely populated west and south. Mongolia shows the opposite behaviour: population movement from rural to urban areas results in a rapidly decreasing radius. VP profiles, defined in Section 4, provide a simple way to reduce two dimensional population maps to a one dimensional function while retaining much of the important information. A summary statistic, \(C_{50}\), was also defined which measures the tendency of countries to spread their populations evenly or concentrate them in a central city \begin{table} \begin{tabular}{|c|c|c|} \hline Country & Year & \(C_{50}\) \\ \hline \hline Mongolia (max) & 2020 & 0.96 \\ Mongolia & 2000 & 0.73 \\ \hline Japan & 2020 & 0.84 \\ \hline United Kingdom & 2020 & 0.70 \\ \hline Global & 2020 & 0.68 \\ \hline Average Nation & 2020 & 0.62 \\ Average Nation & 2000 & 0.61 \\ \hline China & 2020 & 0.6 \\ \hline Kazakhstan & 2020 & 0.48 \\ \hline India & 2020 & 0.48 \\ \hline Continental US & 2000 & 0.48 \\ Continental US & 2020 & 0.43 \\ \hline Germany & 2020 & 0.35 \\ \hline Sierra Leone (min) & 2020 & 0.27 \\ \hline \end{tabular} \end{table} Table 3: The centralisation \(C_{50}\) metric for various countries mentioned previously as well as the max and min values for non-island nations (island nations can have very large \(R(1)\) values) found in the SEDAC 2020, 2.5 minute data. 
The VP circle of economic activity would be an area containing half the world's economic output. Following Grether and Mathys (2010) or Quah (2011) this would be interesting with regards to tracking the relative growth of India and China versus the historical centres of Europe and the US. In the study of disease, the VP circle of infected people (the minimum area containing half of the infected population) could be a useful way to locate the epicentre and geographic extent of an outbreak and give a good idea of where to concentrate resources. This paper has aimed to show that the VP circle and statistics derived from it can be of use in the analysis of spatial data. Footnote 3: [https://en.wikipedia.org/wiki/Center_of_population](https://en.wikipedia.org/wiki/Center_of_population) Accessed 28/07/23 Footnote 4: [https://en.wikipedia.org/wiki/Centre_points_of_the_United_KingdomAccessed](https://en.wikipedia.org/wiki/Centre_points_of_the_United_KingdomAccessed) 28/07/23 Footnote 5: [https://en.wikipedia.org/wiki/List_of_geographic_centers_of_the_United_States](https://en.wikipedia.org/wiki/List_of_geographic_centers_of_the_United_States) Accessed 28/07/23 Footnote 6: [https://en.wikipedia.org/wiki/Geographic_center_of_the_United_States](https://en.wikipedia.org/wiki/Geographic_center_of_the_United_States) Accessed 28/07/23 Footnote 7: [https://en.wikipedia.org/wiki/Mean_center_of_the_United_States_population](https://en.wikipedia.org/wiki/Mean_center_of_the_United_States_population) Accessed 28/07/23 Footnote 8: [https://en.wikipedia.org/wiki/Median_center_of_United_States_population](https://en.wikipedia.org/wiki/Median_center_of_United_States_population) Accessed 28/07/23
2309.08904
Stylized Table Tennis Robots Skill Learning with Incomplete Human Demonstrations
In recent years, Reinforcement Learning (RL) is becoming a popular technique for training controllers for robots. However, for complex dynamic robot control tasks, RL-based methods often produce controllers with unrealistic styles. In contrast, humans can learn well-stylized skills under supervision. For example, people learn table tennis skills by imitating the motions of coaches. Such reference motions are often incomplete, e.g. without the presence of an actual ball. Inspired by this, we propose an RL-based algorithm to train a robot that can learn the playing style from such incomplete human demonstrations. We collect data through the teaching-and-dragging method. We also propose data augmentation techniques to enable our robot to adapt to balls of different velocities. We finally evaluate our policy in different simulators with varying dynamics.
Xiang Zhu, Zixuan Chen, Jianyu Chen
2023-09-16T07:11:15Z
http://arxiv.org/abs/2309.08904v1
# Stylized Table Tennis Robots Skill Learning with Incomplete Human Demonstrations ###### Abstract In recent years, Reinforcement Learning (RL) is becoming a popular technique for training controllers for robots. However, for complex dynamic robot control tasks, RL-based methods often produce controllers with unrealistic styles. In contrast, humans can learn well-stylized skills under supervision. For example, people learn table tennis skills by imitating the motions of coaches. Such reference motions are often incomplete, e.g. without the presence of an actual ball. Inspired by this, we propose an RL-based algorithm to train a robot that can learn the playing style from such incomplete human demonstrations. We collect data through the teaching-and-dragging method. We also propose data augmentation techniques to enable our robot to adapt to balls of different velocities. We finally evaluate our policy in different simulators with varying dynamics. ## I Introduction Reinforcement Learning (RL) has emerged as a powerful technique for training intelligent agents in various domains, including sports games. Table tennis, often referred to as ping pong, is a sport demanding rapid reflexes and dynamic control. This poses a significant challenge for RL algorithms, which can exhibit unrealistic behaviors, unlike human players. Traditionally, humans acquire the skill of table tennis through fragmentary and incomplete demonstrations. For instance, in the context of instruction, a coach might manipulate a trainee's arm to exemplify the appropriate striking mechanism, notwithstanding the absence of a ball during this demonstration. Yet, humans are still able to extract the essence of such motions to proficiently play the game. These demonstrations provide valuable insights into the desired style of playing, that is, the swing motion used to hit the ball. Our research aims to use these partial demonstrations to develop an RL-based system for a robotic arm to play table tennis. While previous research has utilized RL techniques for playing ping-pong, none of them have specifically focused on learning the style of play from incomplete and partial demonstrations. Most existing approaches rely on direct RL training, where the agent learns to optimize its actions through trial and error, based solely on reward functions designed manually by humans. While [1] made strides in addressing style, their defined style is not readily extensible. Our proposed approach differs by integrating knowledge from human demonstrations, allowing the agent to adopt specific playing styles. In this paper we create a stylized table tennis agent that learns from partial demonstrations, even without ball trajectory data. Using generative adversarial learning techniques and insights from the teaching-and-dragging method, our goal is to develop an agent that demonstrates unique playing styles and efficiently returns the ball. We gather demonstrations by guiding the robotic arm through various swing motions and subsequently use these demonstrations to train an RL policy. We illustrate that our policy has the potential to be transferred to a real robot through a sim2sim task. ## II Related Work **Reinforcement Learning for Table Tennis** In recent years, the application of reinforcement learning (RL) to robotic arm control in table tennis has garnered significant attention. Comprehensive overviews are provided by [2] and [3], highlighting the advancements in high-speed table tennis RL systems. 
Numerous studies, including [4, 5, 6, 7], and [8], have focused on designing RL algorithms to improve ball-return accuracy and speed, specifically by predicting the ball's state at time \(t\) with learned models to determine optimal paddle positions for striking. [1] focused on using model-free RL algorithms to develop control policies for robotic arm joint velocities in table tennis. [9] employed a one-step environment with well-defined action space and controller, aiming for high sample-efficiency. [10] introduced a robotic simulation environment, rooted in mathematical modeling, to predict ball spin velocities using RL. In a novel approach, [11] utilized pneumatic artificial muscles (PAMs) and end-to-end RL to formulate hitting strategies. They also introduced a hybrid simulation-real training (HYSR) method to optimize safety and sample efficiency. [12] aimed to return balls to specific landing positions, while [13] implemented an iterative sim2real system, enabling trained agents to compete against humans. **Reinforcement Learning from Demonstrations** Advancements in robotics have led to numerous applications demanding intelligent systems capable of decision-making and practical physical movements. Nonetheless, sub-optimal parameter configurations or algorithmic constraints might prevent a learning agent from achieving the desired behavior [14], thus hindering real-world deployment. While task learning can technically be addressed as an optimization through meticulous reward engineering, integrating expert-driven prior knowledge is often deemed more efficient than starting anew [15]. One recognized approach to learning from demonstrations is inverse reinforcement learning [16], wherein an agent's goals or preferences are inferred from its actions rather than from direct reward feedback. In the era of deep learning, by combining the power of generative neural networks and imitation learning [17], generative adversarial imitation learning (GAIL) [18] was proposed to directly extract a policy from demonstration data. In robotics, demonstrations are a very common and useful form of reference motion. They can be learned through tracking objectives that minimize pose error between simulated characters and target motions [19][20][21]. However, GAIL struggles in environments with substantial deviations from demonstrations, given the challenge of balancing imitation with task accomplishment [22]. To this end, [23] introduced the adversarial motion prior (AMP) that utilizes a discriminator to capture the distribution of input mocap without exact imitation. Subsequent works [24] and [25] demonstrated AMP's efficacy on real robots and showed that it can be powerful for solving locomotion tasks. Further enhancements were made by [26], integrating partial demonstrations and augmenting training stability. We have noticed that prior research has predominantly focused on the ball-return task of table tennis, overlooking the significance of playing style and the specific actions involved. Existing methods, typically based on imitation learning or inverse RL, necessitate demonstrations with synchronized actions, speeds, and the ball's presence. In contrast, our study aims to infer playing styles from demonstrations captured by the arm's slow dragging movements in the absence of a ball, a context that may not reflect genuine gameplay dynamics. ## III Method Our objective is to derive robotic arm motion styles from human-guided dragging-and-teaching demonstrations to guide subsequent RL agent behaviors. 
We initially gather seven motion clips by manually dragging a UR5 robot. This data is then augmented by accelerating the reference motions at 2x, 3x, and 5x rates. Following this, we utilize the augmented motion data to train the discriminator in conjunction with policy and value networks based on the predefined task reward. A schematic representation of our methodology is presented in Figure 1. ### _Collecting Demonstration by Dragging_ We use Forward Dynamics Compliance Control (FDCC) [27] to achieve the dragging-and-teaching procedure. FDCC combines the principles of admittance, impedance, and force control to achieve compliance control in Cartesian space. The core concept revolves around utilizing dynamic simulations to directly manipulate robotic manipulators through virtual and measured forces applied to their end effectors. By employing forward dynamics as a well-suited solver, the translation of effort from task space to motion commands in joint space is efficiently achieved. Here we define a Cartesian motion control target: \[F_{\text{t}}=K_{P}(x_{d}-x)-K_{D}\dot{x} \tag{1}\] where \(F_{t}\) is the Cartesian task force, \(x_{d}\) is the desired pose and \(x\) is the current end effector pose. We then employ forward dynamics to determine the joint control targets. \[\ddot{q}=\text{ForwardDynamics}(\text{model},q,\dot{q},F_{\text{ext}}-F_{ \text{t}}) \tag{2}\] where \(q\) is the joint position, \(F_{\text{ext}}\) is the external force. Then we can use integration to get \(q\) and \(\dot{q}\). To collect demonstrations via dragging, we simply set the parameter \(K_{P}\) to 0, while the damping can be controlled by modifying the parameter \(K_{D}\). Following this, we guide the robotic arm to execute a specific swinging motion, capturing the robot's state \(s_{\text{arm}}\), including the joint positions \(q\) and velocities \(\dot{q}\). This accumulated data functions as reference motion for subsequent training. It is worth noting that, given the robot's intrinsic damping, the control strategy we adopted, and the absence of a ball during the dragging phase, a direct replay of the data from the dragging-and-teaching procedure would not result in a successful ping-pong ball return. The data collected comprise three primary components: forward, rightward, and leftward movements. ### _Learning Styles from Partial Demonstrations_ In the context of the underlying Markov Decision Process (MDP), the state can be seen as consisting of two components: the robotic arm state and the ball state, represented as \(s=(s_{arm},s_{ball})\). Utilizing the generative adversarial learning framework, a style reward function is inferred from demonstrated transitions \((s_{arm},s^{\prime}_{arm})\) observed in a dragging-and-teaching demonstration. Within this framework, the policy functions as a generative model and stands in opposition to a discriminator. The role of the discriminator is to differentiate between transitions drawn from the reference demonstration distribution \(d^{\mathcal{M}}\) and those generated by the policy \(d^{\pi}\). In our approach, we adopt the AMP [23] framework, which employs the least-squares GAN (LSGAN) [28] loss, to train the discriminator \(D(s_{arm},s^{\prime}_{arm})\). Distinct from the traditional GAN that commonly leverages a sigmoid cross-entropy loss function, the LSGAN loss has been shown to enhance training stability and produce superior results in image synthesis tasks. 
For clarity in subsequent discussions, we omit the subscript from the discriminator input. \[\operatorname*{arg\,min}_{D} \mathbb{E}_{d^{\mathcal{M}}(\text{s},\text{s}^{\prime})}\left[ \left(D\left(\text{ s},\text{ s}^{\prime}\right)-1\right)^{2}\right]+ \tag{3}\] \[\mathbb{E}_{d^{\pi}(\text{s},\text{s}^{\prime})}\left[\left(D \left(\text{ s},\text{ s}^{\prime}\right)+1\right)^{2}\right]. \tag{4}\] In our method the discriminator is trained using the aforementioned loss, with the objective of predicting a score of \(1\) for samples sourced from the demonstrations and a score of \(-1\) for those generated by the policy. A commonly encountered challenge during GAN training is mode collapse, whereby the outputs produced by the generator demonstrate limited diversity, capturing only a constrained spectrum of possibilities. To mitigate this issue, we incorporate a gradient penalty [29]. Additionally, due to the robotic arm's inherent limited degrees of freedom, the slow speed of the dragging-and-teaching process and its confined range of motion, the difference between two successive states can be quite small, which can also negatively affect the training of the discriminator, making it more likely to get trapped in local minima. We therefore take a state sequence of length \(L\) as the input for the discriminator, denoted as \(D(s_{t-L+1},...,s_{t})\). \[\operatorname*{arg\,min}_{D} \mathbb{E}_{d^{\mathcal{M}}(s_{t-L+1},...,s_{t})}\left[\left(D\left( s_{t-L+1},...,s_{t}\right)-1\right)^{2}\right]+ \tag{5}\] \[\mathbb{E}_{d^{\pi}(s_{t-L+1},...,s_{t})}\left[\left(D \left(s_{t-L+1},...,s_{t}\right)+1\right)^{2}\right]+\] \[\omega^{gp}\mathbb{E}_{d^{\mathcal{M}}(s_{t-L+1},...,s_{t})}\left[ \left\|\nabla_{\phi}D(\phi)\right\|^{2}\right].\] The loss function for the discriminator is defined by Equation 5. Weight decay is also applied to stabilize the training procedure. ### _Data Augmentation_ The reference motion obtained from drag-and-teach methods tends to be slow, making it ineffective against high-speed balls. Such a speed discrepancy can compromise the efficiency of imitation learning. To ensure the system effectively learns the style from a slower reference motion while still performing optimally in high-speed ball interactions, we augment the original reference motion data. Let the original motion reference be represented as a sequence of joint poses and velocities, denoted by \(\{\hat{q}_{t},\hat{v}_{t}\}\), whose length is \(n\). To increase the motion speed by a factor of \(k\), we sample \(\frac{n}{k}\) joint poses at equidistant intervals to form a new motion reference, where the corresponding velocity is also scaled by \(k\). This yields a new reference motion, \(\{\hat{q}^{\prime}_{t},\hat{v}^{\prime}_{t}\}\). Both the augmented and the original motion references are later included in our demonstration dataset, collectively forming our augmented demonstration set. ### _Reward Functions_ To facilitate the agent's proficiency in directing the table tennis ball to a specified location on the opposite side of the table, we have formulated a reward system comprising three primary elements: ball hitting, smoothing & regularization, and penalties for illegal behaviors. **Ball Hitting** This reward encourages the agent to strike the ball and redirect it to the designated goal position. 
\[r_{\text{hit}}=\begin{cases}\alpha_{\text{bat}}\exp\left(-\|x_{\text{eff}_{y}}- x_{\text{ball}_{y}}\|/\sigma_{\text{bat}}\right),&\text{\bf before hitting},\\ \alpha_{\text{goal}}\exp\left(-\|g-x_{\text{ball}}\|/\sigma_{\text{goal}} \right),&\text{\bf after hitting},\end{cases} \tag{6}\] where \(x_{\text{eff}_{y}}\) represents the \(y\) position of the arm's end effector, \(x_{\text{ball}_{y}}\) denotes the \(y\) position of the ping-pong ball, and \(g\) represents the goal, which is a predetermined ball landing location. **Smoothing & Regularization** These rewards aim to smooth and regularize the agent's actions. * Acceleration penalty, where \(\ddot{q}\) is the joint acceleration of the robot. \[r_{\text{acc}}=\alpha_{\text{acc}}\exp{(\|\ddot{q}\|)}.\] (7) * DoF regularization, where \(\hat{q}\) denotes the default joint positions, and \(\sigma_{\text{dof}}\) is a predefined hyperparameter. \[r_{\text{dof}}=\alpha_{\text{dof}}\exp{(\|q-\hat{q}\|/\sigma_{\text{dof}})}\] (8) * Action rate, where \(a\) is the current action generated by the policy. \[r_{\text{ar}}=\alpha_{\text{ar}}\exp{(\|a_{t}-a_{t-1}\|)}\] (9) * Action penalty, where \(|\;a\;|\) is the absolute value of the current action. \[r_{\text{ap}}=\alpha_{\text{ap}}\max{|\;a\;|}\] (10) Fig. 1: Overview of our method. We first collect reference motion data through manually guiding a robotic arm in a real-world environment. This data undergoes augmentation, producing our training reference motion \(\mathcal{M}\). Subsequently, this augmented reference motion, together with a predefined task, is utilized to concurrently train the discriminator and our policy. **Illegal Penalties** We also penalize illegal behaviors, such as robot self-collision and illegal ball bounces. The task reward \(r_{t}^{G}\) is the summation of the above reward functions. The final reward consists of the task reward and the style reward. \[r_{t}=\omega_{\text{task}}r_{t}^{G}+\omega_{\text{style}}r_{t}^{S}, \tag{11}\] in which \(r_{t}^{S}=-\log{(D\left(s_{t-L+1},...,s_{t}\right))}\). ### _Domain Transfer_ To enhance the training process and strengthen our agent's adaptability to varied dynamics, we employ an asymmetric actor-critic network [30] with distinct policy and value inputs. The policy's input encompasses the information required and obtainable during the actual robot deployment. However, the value function incorporates additional privileged information, including the specific position and velocity of the agent, the number of bounces between the ball and the table, physical parameters of the environment, noise levels, damping values for robot velocity control, and delays in action and observation. These pieces of information are difficult to acquire during real robot deployment, hence we include them as additional inputs for the value function only. To achieve success during domain transfer, we incorporate extensive randomization of the environment's dynamics parameters. We primarily consider four kinds of errors that could cause issues when transferring the policy to a real robot, including: discrepancies between the robot and environment models, errors in motor parameters, sensor noise, and communication delays. To enable the policy to account for discrepancies between the robot and environment models, we randomize the friction and restitution of the table, bat, and ball, as well as gravity. We randomize the damping and stiffness parameters for the low-level position controller. 
We also heavily randomize the sensor noise fed into the policy and consider the communication and computation delays between the observations and the policy, as well as between the policy and the actions. ## IV Experiment We begin by detailing our experimental setup. Subsequently, we will: a) compare our method with the baseline PPO method both quantitatively and visually; b) illustrate the enhancements achieved through reference motion data augmentation; and c) conduct a sim2sim experiment to show the successful transfer of the policy learned via our method to a distinct domain, demonstrating its potential for real-world deployment. ### _Experiment Setup_ In our experiments, we choose the Universal Robots UR5 as our robotic arm. Our simulation environment is constructed with IsaacGym [31]. For the experiment's setup, a table tennis bat is affixed to the UR5 robot's end-effector. The simulation environment is reset under any of the following conditions: if the ball exits the bounds of the table, if there's an illegal bounce, if any part of the robotic arm other than the bat makes contact with the ball, or if the robotic arm collides with the table. We also define a goal region with a radius of \(0.25\)m. Upon every environment reset, this region is resampled across the entire half-length of the table. The primary objective of adding this goal region is to direct the robot to strike the ball across the broadest possible area on the table. The simulation operates at a frequency of 120Hz, whereas the policy operates at 60Hz. 4096 environments are simulated in parallel on a single Nvidia RTX3090 GPU, while the interactions between the ball, bat, and table are simulated on the CPU. For modeling the policy, value function, and discriminator, we utilize three separate multi-layer perceptrons (MLPs). The Exponential Linear Unit (ELU) activation function [32] is employed. The hidden dimensions are [512, 256, 128] for both the policy and value function, while for the discriminator, they are [512, 256]. We employ the proximal policy optimization method [33] to train our policy. Over the course of training, it processes around 131 million samples. This translates to roughly 606 hours of simulated time, which corresponds to about 6 hours in real-world time. ### _Playing Style Comparison_ **Visual Comparison** We first trained two policies using PPO and our method with the augmented reference motion dataset separately. Both policies were then tested in simulation with randomly incoming balls. Figure 2 illustrates the playing styles of the policy trained using PPO from scratch versus that trained with AMP. From the visualization, it is evident that the policy trained from scratch using PPO can display unrealistic behaviors. Even though this policy can manage to return the ball in the simulation, its implausible trajectory indicates that deploying this policy in the real world is nearly impossible. Conversely, the policy trained with our method showcases more feasible and logical motions, and it also captures the style of our reference motion clip. Due to the smaller range of the actions it displays, deploying this policy in the real world is much more practical. We can also see that our trained policy has successfully captured the essence of the input reference motions and is able to adapt its velocity to downstream tasks. **Quantitative Comparison** To quantitatively assess these two policies, we evaluated their success rates in returning randomly spawned balls. 
We executed each policy in an identical environment using 10 distinct random seeds, amounting to \(200\) spawnings for each seed. The success rate was then averaged across these 10 seeds to obtain the mean success rate. The findings are presented in Table I. From the results, it is evident that the policy trained with our method is not only robust but can also adeptly return balls from varied directions and velocities. Its success rate of \(93\%\) closely parallels the \(97\%\) of the PPO policy, demonstrating that performance is maintained quite well. To assess the similarity between the states produced by our trained policy and the reference motion, we adopt Dynamic Time Warping (DTW) as the metric [34]. We choose DTW due to the considerable variation in length and speed between the actions generated by our trained policy and the reference motion; DTW is designed precisely to measure the similarity between two sequences with potential differences in speed or length. In our experiment, we input the sequences of joint positions of our policy and the reference motion, respectively. Our reference motion comprises three main components: moving forward, leftward, and rightward. Accordingly, we introduce three types of balls moving in these directions and evaluate the states against the respective reference motions. The findings are presented in Table II. The DTW distance, compared to policies trained from scratch using PPO, is notably lower, demonstrating the effectiveness of our approach. Both visual and quantitative analyses clearly indicate that our method enables the robot to exhibit realistic and plausible behaviors, while performance levels remain largely consistent. ### _Data Augmentation_ We evaluated the efficacy of our motion data augmentation technique using success rate metrics. The overall results are provided in Table I. A noticeable improvement is discernible after augmentation. The primary motivation behind our data augmentation is to condition our trained agent to adeptly handle balls traveling at high speeds. We subsequently measured the success rates of the two agents when presented with low-speed balls (\(4.0\sim 4.8\) \(m/s\)) and high-speed balls (\(5.8\sim 6.6\) \(m/s\)). These findings are presented in Table III. For balls traveling at a reduced velocity, both the original AMP and our augmented method manage to return the ball with commendable efficiency, producing success rates that surpass \(90\%\). In contrast, when faced with balls spawned at augmented speeds, the agent trained via the original AMP exhibits a significant decline in its success rate, to approximately \(70\%\). Our method, conversely, showcases minimal variation in performance: its success rate of around \(86\%\) differs little from that on low-speed balls. The supplementary video further demonstrates the resilience and adaptability of our method to balls of varying velocities. \begin{table} \begin{tabular}{c c c c} \hline \hline & **Forward** & **Leftward** & **Rightward** \\ **PPO** & \(148.23\) & \(263.77\) & \(403.85\) \\ **Our Method** & **62.73** & **133.22** & **243.56** \\ \hline \hline \end{tabular} \end{table} TABLE II: Comparison of DTW distance for PPO and our method. Lower is better.
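For reference, the DTW distance between two joint-position sequences can be computed with the standard dynamic-programming recursion sketched below; this is a textbook formulation of the metric, not necessarily the exact implementation behind Table II.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """DTW distance between two joint-position sequences.

    seq_a: (T1, D) array; seq_b: (T2, D) array. DTW tolerates the
    differences in length and speed between the two sequences."""
    t1, t2 = len(seq_a), len(seq_b)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local distance
            # Best of a match, an insertion, or a deletion step.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[t1, t2])
```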
\begin{table} \begin{tabular}{c c c c} \hline \hline **Method** & **Total Attempts** & **Mean Success** & **Success Rate** \\ PPO & \(200\) & \(192.53\) & \(96.27\%\) \\ AMP & \(200\) & \(163.22\) & \(81.61\%\) \\ Our Method & \(200\) & \(180.64\) & \(90.32\%\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Success rate comparison of policies trained with different methods. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Speed** & **Total Attempts** & **Mean Success** & **Success Rate** \\ AMP & Low & \(200\) & \(184.29\) & \(92.15\%\) \\ Our Method & Low & \(200\) & \(183.89\) & \(91.95\%\) \\ AMP & High & \(200\) & \(138.00\) & \(69.00\%\) \\ Our Method & High & \(200\) & \(171.00\) & \(85.91\%\) \\ \hline \hline \end{tabular} \end{table} TABLE III: Success rate comparison of policies trained with the original AMP and our method on different ball speeds. (_If the agent properly returns the spawned ball to the opposing side of the table, it is considered a success, regardless of whether the ball lands in the goal region._) Fig. 2: Visualization comparing the playing styles of PPO and our method. We also display a motion clip to show that AMP has learned the style from the reference motion. ### _Sim2Sim Transfer_ To demonstrate the potential of our method for real-world applications, we transferred it to an environment constructed on PyBullet [35]. A visualization of the two environments is shown in Figure 3. Specifically, we maintained uniformity in the key dynamics parameters, mirroring those used in IsaacGym. However, the differences between the simulation engines, their low-level controllers, and their simulation methodologies lead to a gap in the exhibited dynamics. This difference effectively mirrors the main discrepancy between simulated and real-world dynamics. With the previously detailed domain randomization, we trained a policy with our method and subsequently tested this policy within the PyBullet framework. The success rates are shown in Table IV. We can see that the transfer causes a considerable decrease in performance compared with our policy in the original domain. Nevertheless, the policy still maintains fairly good performance, with a success rate of about \(40\%\). ## V Conclusions and Limitations In this study, we constructed a stylized table tennis agent using partial demonstrations derived from dragging-and-teaching methods. This agent produces actions that are more realistic than those generated by agents trained from scratch with PPO. We further incorporated data augmentation on the reference motion data, enhancing the agent's adaptability to balls of varying speeds. Additionally, we successfully transferred this agent to a domain with distinct dynamics, implemented using PyBullet. Though we have effectively transferred the policy to a different domain, the evaluation of our policy on a real robot remains open. Time constraints and our selection of high-speed cameras have so far precluded its transfer to an actual robot, which we consider our next step. Additionally, our current demonstrations, sourced from human-assisted dragging, are somewhat restricted compared to the diverse demonstrations available from online videos. Thus, there is potential in developing methodologies that enable our robot to learn styles from online table tennis match videos.
2305.19504
Self-Replicating Hierarchical Structures Emerge in a Binary Cellular Automaton
We have discovered a novel transition rule for binary cellular automata (CA) that yields self-replicating structures across two spatial and temporal scales from sparsely populated random initial conditions. Lower-level, shapeshifting clusters frequently follow a transient attractor trajectory, generating new clusters, some of which periodically self-duplicate. When the initial distribution of live cells is sufficiently sparse, these clusters coalesce into larger formations that also self-replicate. These formations may further form the boundaries of an expanding complex on an even larger scale. This rule, dubbed ``Outlier,'' is rotationally symmetric and applies to 2D Moore neighborhoods. It was evolved through Genetic Programming during an extensive automated search for rules that foster open-ended evolution in CA. While self-replicating structures, both crafted and emergent, have been created in CA with state sets intentionally designed for this purpose, the Outlier may be the first known rule to facilitate emergent self-replication across two spatial scales in simple binary CA.
Bo Yang
2023-05-31T02:30:29Z
http://arxiv.org/abs/2305.19504v1
# Self-Replicating Hierarchical Structures Emerge in a Binary Cellular Automaton ###### Abstract We have discovered a novel transition rule for binary cellular automata (CA) that yields self-replicating structures across two spatial and temporal scales from sparsely populated random initial conditions. Lower-level, shapeshifting clusters frequently follow a transient attractor trajectory, generating new clusters, some of which periodically self-duplicate. When the initial distribution of live cells is sufficiently sparse, these clusters coalesce into larger formations that also self-replicate. These formations may further form the boundaries of an expanding complex on an even larger scale. This rule, dubbed "Outlier," is rotationally symmetric and applies to 2D Moore neighborhoods. It was evolved through Genetic Programming during an extensive automated search for rules that foster open-ended evolution in CA. While self-replicating structures, both crafted and emergent, have been created in CA with state sets intentionally designed for this purpose, the Outlier may be the first known rule to facilitate emergent self-replication across two spatial scales in simple binary CA. ## 1 Background and Introduction Self-replication, a hallmark of biological life, represents a significant milestone in the pursuit of artificial life across various mediums. In the digital realm, Von Neumann, in tandem with the earliest cellular automaton, envisioned self-replicating machines as a pathway towards universal construction [1]. His automaton involved cells transitioning among 29 states, and the initial structure spanned hundreds of thousands of cells, necessitating meticulous design. As simpler cellular automata (CA) demonstrated their unique utility, self-replication--regardless of potential applications in universal construction--emerged as a distinct area of interest. Most notably, Langton constructed an eight-state automaton with an 86-cell initial loop structure capable of self-replication [2]. Subsequent research has sought to simplify this further or enhance its capabilities [3]. Traditionally, most self-replicating structures were manually designed. However, CA capable of forming self-replicating structures from random initial conditions could expand our understanding of self-organization and emergence. In [4], this was accomplished using CA with 8-bit state sets, each segmented into four parts to facilitate elements of the replication process, such as signal flow, movement, bonding, and detachment. Self-replicating loops, composed of 2 by 2 cells, emerge from random initial conditions, and their sizes may increase over time. In this paper, we report the discovery of a novel two-state CA rule that enables the spontaneous assembly of larger self-replicating "formations" from smaller, shapeshifting "clusters" that themselves emerge from random initial conditions. An increasing number of these replicating formations often subsequently form an expanding superstructure, or "complex," on an even larger scale. Figure 1 illustrates the hierarchical arrangement of these structures. We have named this rule the "Outlier," as it generates the most seemingly complex behaviors among all the interesting rules we have encountered. Figure 1: Sample outcome from the Outlier rule starting with a sparse random initial condition. (a) Two clusters on the smallest scale; (b) A self-replicating formation, assembled from a few clusters; (c) On the largest scale, an expanding complex with a semi-chaotic interior, bordered by replicating formations.
## 2 Discovery of the Outlier Rule The Outlier rule was serendipitously discovered during an extensive automated search for CA rules that would support open-ended evolution (OEE), defined as the continuous emergence of novel and increasingly complex behaviors [5]. Although often associated with OEE, self-replication was never an explicit search goal in that study. Consequently, we will defer most implementation details to a future report, focusing here on two aspects of the methods relevant to the characteristics of the rules selected for evaluations. The particular search runs that led to the Outlier were conducted within the space of \(2^{140}\) rotationally symmetric rules on Moore neighborhoods of 2D binary CA, \(\mathcal{R}\). Mirror parity was not required. To render the search tractable, we employed Genetic Algorithm (GA), Genetic Programming (GP), and various forms of bit representation of the rules as genotypes in several phases of the project. In general, the details of genotype representation in GA and GP searches modulate the probability distribution of random sampling in the parameter space, thereby shaping the search path. This fact is particularly significant in our project, as the entirety of all rules ever evaluated will comprise an infinitesimally small fraction of \(\mathcal{R}\). The Outlier rule was discovered during a Genetic Programming search run wherein each genotype, or rule, was represented as a tree structure of bitwise logic operations, \((G_{1},\ldots,G_{L})\). Specifically, each node \(G_{i}\) in the tree of length \(L\) is a tuple of three integers: \[G_{i}=(f_{i},i_{1},i_{2})\text{ where }i_{1}\leq i_{2}<i\leq L,\text{ and }f_{i}=0\text{ or }1\] For each cell, we then compute its new state by traversing the entire tree, starting from its neighborhood states: \[N_{i} =\begin{cases}N_{i_{1}}\oplus N_{i_{2}},&\text{if }f_{i}=1\\ N_{i_{1}}\wedge N_{i_{2}},&\text{if }f_{i}=0\end{cases}\text{ for }i>9,\] \[N_{i} =S_{i},\text{ for }i\leq 9\] Here, \(S_{1}\) through \(S_{9}\) represent the current states of the cells in the Moore neighborhood, with \(S_{1}\) at the center. The center cell is then updated to \(N_{L}\) for the next step, and all other cells in the automaton are updated in the same manner. Additional procedures were added before and after tree traversals to enforce rotational symmetry. The traditional lookup table representation of each rule can be mapped to several logical trees expressed in this way, and they are computationally equivalent. The choice of this representation was initially motivated by computational efficiency, crucial to CA rule search, as a single fitness evaluation often necessitates the calculation of hundreds of billions of cell updates. With a one-bit-per-cell memory representation, many adjacent cells can be loaded into long word registers and updated in parallel via consecutive bitwise logical operations specified by the aforementioned trees. Modern CPUs and GPUs can update hundreds to billions of bits concurrently in this manner, with excellent memory locality. For instance, the GP search in 2019 that resulted in the Outlier rule ran on a 14-core Xeon CPU, capable of updating thousands of cells concurrently with AVX-512 support in each core.
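As a minimal sketch of this bit-parallel evaluation (our own illustration; the helper names and the 64-bit word width are assumptions), each genotype node combines two earlier values with XOR or AND, and every machine word carries the states of 64 cells, so a single tree traversal updates 64 cells at once:

```python
import numpy as np

def evaluate_tree(genome, neighborhood_words):
    """Evaluate a bitwise logic tree over 64 cells in parallel.

    genome: list of (f, i1, i2) tuples for nodes 10..L, in order; each node
            may only reference earlier nodes (i1 <= i2 < i), per the spec.
    neighborhood_words: nine np.uint64 words S_1..S_9, one bit per cell,
            holding the Moore-neighborhood state planes of 64 cells.
    """
    nodes = [np.uint64(0)]             # dummy entry so node ids are 1-indexed
    nodes += list(neighborhood_words)  # nodes[1..9] = S_1..S_9
    for f, i1, i2 in genome:
        if f == 1:
            nodes.append(nodes[i1] ^ nodes[i2])  # bitwise XOR
        else:
            nodes.append(nodes[i1] & nodes[i2])  # bitwise AND
    return nodes[-1]  # N_L: the 64 new center-cell states
```

The pre- and post-processing that enforces rotational symmetry would wrap around this traversal.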
In later iterations on GPUs, the bitwise logical operation trees were tweaked to keep most, if not all, operations in the GPU register files, thanks to optimizing kernel compilers. This often resulted in a speedup of two orders of magnitude. The second implementation detail pertinent to our findings is the fitness function, which measures the complexity or "open-endedness" of the phenotypes, which in our case are the CA bitmaps generated by each rule. As often happens in GA/GP searches, fitness functions derived directly from spatial and temporal analysis are prone to "cheating," wherein rules maximize the fitness score with surprisingly novel yet simplistic behaviors. In the later stages of the project, we adopted "novelty search" as first developed by Lehman and Stanley [6]. This approach rewards new phenotypes that bring "novelty" relative to all previously evaluated phenotypes. In our implementation, we extract a feature vector, \(\mathbf{F}\), for each rule by quantifying the complexity of CA bitmaps in the later stages of convergence. For each new rule, a novelty score is calculated from the distances from \(\mathbf{F}\) to its \(k\) nearest neighbors in the space of all (or a large sample) of previously computed \(\mathbf{F}\). This implementation of novelty search was somewhat successful, yielding a few rules with intriguing behaviors not seen with other fitness functions. This includes the Outlier, which was algorithmically tagged as sufficiently "novel" for visual inspection. However, nothing substantially more complex has been observed thus far, and our search for OEE in CA continues. ## 3 Cluster, Formation, and Complex As listed in Figure 2, the Outlier rule observes rotational symmetry but lacks mirror symmetry. Similar to many solutions produced by GA/GP, it does not possess a clearly recognizable structure or definable formulation. Notably, its rule table representation has 220 live entries out of 512, which is denser than Conway's Game of Life with its 140. Figure 2: The Outlier Rule. The center cell in each of the boxed neighborhoods and their three quarter-turn rotations stays alive. Filled/empty circles stand for live/dead cells, respectively. Under this rule, three categories of trajectories typically follow random initial configurations. Although each individual outcome is probabilistic, the statistical likelihoods are highly dependent on the initial density of live cells, \(D_{0}\), and the grid size. A 1024 by 1024 grid is more likely to become completely empty when \(D_{0}<0.02\), semi-chaotic when \(D_{0}>0.15\), and likely to support replicating formations when \(D_{0}\) falls in between these values. We will refer to these three types of outcomes as "barren," "dense," and "sparse," respectively. These cutoff values for \(D_{0}\) are grid-size dependent. For grids smaller than 512 by 512, replicating formations do not occur at all. We will explain the dependency of the likelihoods on \(D_{0}\) in the next section. Regardless of \(D_{0}\), shape-shifting clusters, each composed of a few dozen live cells at most, form in fewer than a hundred steps. A "cluster" is formally characterized as an evolving shape composed of live cells that are topologically connected; two live cells are considered adjacent if they are in each other's Moore neighborhood. Each cluster continuously shape-shifts, sometimes splitting into two, or interacting with another cluster through collision or merging. Without these interactions, most of these clusters would disappear within a hundred steps.
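Under this definition, clusters can be extracted with an ordinary connected-component pass over the grid using 8-neighbor (Moore) adjacency; the sketch below is our own illustration and ignores periodic boundaries for brevity.

```python
import numpy as np
from collections import deque

def extract_clusters(grid: np.ndarray):
    """Return clusters as lists of (row, col) live cells; two live cells
    are adjacent if each lies in the other's Moore neighborhood."""
    visited = np.zeros_like(grid, dtype=bool)
    clusters = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] and not visited[r, c]:
                queue, cluster = deque([(r, c)]), []
                visited[r, c] = True
                while queue:                    # breadth-first flood fill
                    y, x = queue.popleft()
                    cluster.append((y, x))
                    for dy in (-1, 0, 1):       # scan the Moore neighborhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and grid[ny, nx] and not visited[ny, nx]):
                                visited[ny, nx] = True
                                queue.append((ny, nx))
                clusters.append(cluster)
    return clusters
```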
However, on a "sparse" grid that is sufficiently large, a small fraction of the clusters can survive and grow into larger self-replicating formations by spawning new clusters. Each of these formations consists of groups of clusters, with the numbers fluctuating around ten. A replicating formation expands its territory by creating copies of itself while slowly traversing, until it collides with another formation or cluster outside its territory. Collisions break down a formation back to clusters, which then change shape and interact among themselves continuously and in a chaotic manner, eventually occupying the entire grid. As shown in Figure 3, a dense grid will transition directly into this semi-chaotic phase, before any formation has the opportunity to emerge. When a single replicating formation survives in the middle of a sufficiently large empty area, it periodically generates new formations that form an even larger structure, or a "complex," at a still higher scale. Each boundary region of a complex's four edges consists mostly of replicating formations. These are mostly identical to each other Figure 3: A densely populated random grid transitions to a semi-chaotic phase. The grid is 256 by 256 cells, with periodic boundary conditions, and an initial density of 50%. The step numbers are shown as labeled. and shape-shift synchronously. The initial expansion of the boundaries appears to be driven by a formation protruding out of the rectangular boundary, as shown in Figure 1. Given that all replicating formations are spaced (52, 172) or (172, 52) cells apart from neighboring formations on each side of the complex, the edges of a complex form a rectangle that is tilted counter-clockwise from the axes by \(\arctan(13/43)\), or approximately 16.8214 degrees. The interior of a complex is occupied by surplus clusters, or "debris," that are generated by the replication process but are not part of the replicating formation themselves. These evolve in the same semi-chaotic manner as on a dense grid, and interact with the bordering formations without affecting the integrity of the formations. A complex continuously expands until it occupies all available space or until it collides with other structures outside its territory. Upon such collision, formations break down, and their clusters continue their dynamic transformations on a lower scale, similar to a dense grid. ## 4 Temporal Loops and Transient Attractors To understand the dynamics underlying the replicating formations, we conducted experiments by initializing an empty grid with a single isolated 3 by 3 cluster in the center. Results are displayed in Figure 4. Out of the 140 possible initial configurations, considering rotational symmetry, two (\(\mathbf{c}_{0}\) and \(\mathbf{c}_{2}\)) evolve into replicating formations, while all others die out. \(\mathbf{c}_{0}\) updates into \(\mathbf{c}_{2}\) in two steps and thus follows the same trajectory thereafter. We refer to them as "seed" clusters and their trajectory as \(\mathcal{A}_{s}\). In short, any isolated 3 by 3 initial cluster either disappears or follows \(\mathcal{A}_{s}\). A detailed examination of \(\mathcal{A}_{s}\) reveals that \(\mathbf{c}_{2}\) and some of its follow-up clusters reappear periodically, rotated 90 degrees counter-clockwise each time, with a period of 143 steps. The first period begins when \(\mathbf{c}_{2}\) is initialized and ends when it reappears in \(\mathbf{c}_{145}\), rotated and translated, amongst several clusters spun off during the period. 
This happens again after another 143 steps, and the formation grows larger. Another two-period run starts at \(\mathbf{c}_{391}\), with two and then three rotated copies of \(\mathbf{c}_{2}\). Each new reappearance of the rotated \(\mathbf{c}_{2}\) introduces a new sub-trajectory into \(\mathcal{A}_{s}\) if the new cluster is sufficiently isolated from the rest of the formation and can thus seed its own trajectory. Because the rule is deterministic and rotationally symmetric, all the structures appearing in the first period, such as \(\mathbf{c}_{11}\) and \(\mathbf{c}_{42}\), reappear and sometimes self-replicate with the same 143-step period. We also identify these as "seed clusters," and each time one materializes outside the existing trajectory, it adds a new branch, or sub-trajectory, onto \(\mathcal{A}_{s}\). These sub-trajectories are only partially self-similar to the original \(\mathcal{A}_{s}\), as collisions restrict their growth when the vicinity becomes crowded. Hence, four-period reappearances rarely occur. For example, in \(\mathbf{c}_{820}\), which is 143 steps after \(\mathbf{c}_{677}\), four copies of \(\mathbf{c}_{42}\) appear instead of \(\mathbf{c}_{2}\). \(\mathbf{c}_{391}\) appears to be around the time when the accumulation of new clusters suppresses the 143-step-period dynamics by crowding the empty space, and inter-cluster interactions form parallel dynamics on a longer timescale, embodied in larger structures emerging at the formation scale, some of which can self-replicate and can thus be identified visually. Figure 4: Seed Trajectory \(\mathcal{A}_{s}\). Configurations are numbered with step counts. \(\mathbf{c}_{2}\) reappears periodically every 143 steps in \(\mathbf{c}_{2}\) - \(\mathbf{c}_{145}\) - \(\mathbf{c}_{288}\), and \(\mathbf{c}_{391}\) - \(\mathbf{c}_{534}\) - \(\mathbf{c}_{677}\). The formation in \(\mathbf{c}_{391}\) is already capable of self-replication, as shown in the next figure. Note that rotations of \(\mathbf{c}_{11}\) appear in Figure 1. The formation in the third 143-step period, for example the one in Figure 4 \(\mathbf{c}_{391}\), as a whole and as a higher-level structure, starts to reappear with a period of 1556 steps. In fact, it evolves into the "protruding arm" of the larger complex, as illustrated in Figure 5. Visually, it appears to be shape-shifting while slowly moving away from its original position, producing a new replicating formation "behind" itself in each period. A closer inspection reveals that it shares many clusters with the adjacent replicating formation that is being formed, and the boundary between the formations shifts constantly and lacks a clear definition. Many identical clusters are components of both the protruding arm and the replicating formations, and they shape-shift in sync; most of these clusters and formations reappear every 1556 steps. Figure 5: Formations self-replicating every 1556 steps. Step counts: (a) 391, (b) 1947, (c) 3503, (d) 5059, (e) 6615. A similar replicating process later starts at the diagonally opposite corner of the complex, with the corner formation appearing to be "caved in" rather than protruding. In Figure 5, the replicating process first starts at the left edge of the complex, then at each following edge in clockwise order, spaced by a time lag characterized by the same 1556-step period. The bottom edge forms last and is least defined. As the complex expands, one new replicating formation is added to each edge every 1556 steps.
On each edge, most of the formations shape-shift in perfect sync and repeat with the same 1556-step period. Under sparse initial random conditions, our close examination of the updates revealed a clear pattern: the majority of arbitrarily formed clusters eventually vanish. The rare survivors consistently follow the same trajectory \(\mathcal{A}_{s}\), aside from a small period-four "spinner" that rotates 90 degrees per step and usually gets absorbed by other clusters. For example, the automaton in Figure 1 has one surviving cluster that enters \(\mathcal{A}_{s}\) as a seed cluster \(\mathbf{c}_{2}\). When more than one cluster survives, their individual evolutions along \(\mathcal{A}_{s}\) derail when there is a collision between clusters originating from different seeds. Consequently, for the evolution into replicating formations, surviving clusters need to maintain sufficient spatial separation. This explains how the density of the initial random configuration determines the likelihood that replicating formations come to exist. Neither random initial conditions nor 3 by 3 initial seeds cleanly generate "pure" replicating formations, as they always produce additional clusters, or "debris," in their vicinities. Out of curiosity, we initialized a grid with nothing but an isolated replicating formation, without the debris, and found the subsequent behavior to be similar, as it self-replicates and then grows into a complex. Additionally, we isolated each individual component cluster of a replicating formation and successively used each as an individual seed for initialization, and found that about half of them disappear, while the others evolve into full formations. In short, \(\mathcal{A}_{s}\) appears to be dominant, even though it is not robust. In conclusion, a cellular automaton operating under the Outlier rule transitions into one of three phases: empty, semi-chaotic, or replication at the formation level. The last phase is characterized by a trajectory that resembles an expanding transient attractor. Reappearances of both clusters and formations attach sub-loops to the trajectory, even though they have different characteristic period lengths: 143 steps for clusters, and 1556 steps for formations. However, this attractor is only transient, as eventually the complex runs out of empty space to expand into, or it collides with other structures, and the semi-chaotic phase takes over. But between that eventuality and the initial randomness, replicating formations can exist for a great many steps. ## 5 Discussion The "building blocks" one level down from the replicating structures in previously constructed self-replicating CAs are the cells themselves, each in one of multiple states, the number of which ranges from 8 to the hundreds [2][3][4]. Each of these states or their subcomponents was assigned a primary "role," such as information storage, replication trigger, structural protection, collision avoidance or inducement. Similar to the components of engineered machinery, a state often plays multiple roles but never all of them. In contrast, emergent self-replication in binary CAs has to be more complex than complicated, as, apart from its rule encodable in slightly more than a hundred bits, everything else must emerge on its own. Each cell in a binary CA carries minimal information. The "building blocks," therefore, must be clusters of cells that emerge from the rule. In this context, emergence on multiple scales becomes a necessity for self-replication.
However, in the case of the Outlier rule, it is unclear whether each of the clusters carries a role that is specific to the assembly of a replicating formation. The clusters seem to be different yet among equals, and perhaps they lack such specificity due to the constraints imposed by the shared cell-level updating rule. Instead, processes on a higher scale emerge from interactions among clusters in proximity, which in turn supports the continued existence and evolution of the clusters. We have observed similar inter-cluster dynamics with other CA rules, but the higher-level processes always appear chaotic. The Outlier rule is exceptional in that some of its emergent processes self-repeat, embodied in self-replicating formations. Interestingly, the larger complex generated by the Outlier rule, although not self-replicating as a whole, presents a boundary shape that superficially resembles the "loop"-shaped self-replicators designed in [2][3] or emergent in [4]: specifically, a rectangle with a protruding arm. Whether this resemblance is coincidental or substantial warrants further investigation. The Outlier is the only rule that can generate replicating formations amongst the few hundred thousand rules we examined. Yet its composition looks irregular and arbitrary, which raises the question of how common similarly capable rules are in \(\mathcal{R}\). We performed one-bit flips, or single-configuration mutations of the representation in Figure 2, and found no such capability in any of the mutations. It appears that the Outlier is unique at least in its immediately adjacent rule space. This, of course, helps very little in answering the question. Nevertheless, the fact that nontrivial emergent behavior occurs on multiple scales in a simple binary cellular automaton can be intriguing, and this author hopes it is illustrative as well. Finally, inching towards open-ended evolution in CA, a logical adjacent step would be to identify a rule that supports not only emergent replication but also adaptation and structural evolution, and that is robust to collisions. Some such capabilities have already been showcased with specially designed states, as seen in the nine-state automaton initialized with loop structures discussed in [3]. The prospect of such rules existing in simpler CA remains uncertain. But certainly, only an infinitesimally small fraction of the vast rule space has ever been explored thus far. ## 6 Acknowledgements The author would like to thank Bert Chan and Hiroki Sayama for their encouragement to write this down before venturing away further.
2309.11299
A Cost-Aware Mechanism for Optimized Resource Provisioning in Cloud Computing
Due to the recent wide use of computational resources in cloud computing, new resource provisioning challenges have emerged. Resource provisioning techniques must keep total costs to a minimum while meeting the requirements of the requests. Given the wide usage of cloud services, it becomes ever more challenging to develop effective schemes for provisioning services cost-effectively; we have proposed a novel learning-based resource provisioning approach that achieves cost-reduction guarantees for demands. The contributions of our optimized resource provisioning (ORP) approach are as follows. Firstly, it is designed to provide a cost-effective method to efficiently handle the provisioning of requested applications; while most of the existing models allow only workflows in general, which care about the dependencies of the tasks, ORP operates on the services of which applications are comprised and cares about their efficient provisioning as a whole. Secondly, it is a learning automata-based approach which selects the most proper resources for hosting each service of the demanded application; our approach considers both cost and service requirements together for deploying applications. Thirdly, a comprehensive evaluation is performed for three typical workloads: data-intensive, process-intensive and normal applications. The experimental results show that our method adapts to most of the requirements efficiently, and furthermore the resulting performance meets our design goals.
Safiye Ghasemi, Mohammad Reza Meybodi, Mehdi Dehghan Takht Fooladi, Amir Masoud Rahmani
2023-09-20T13:27:30Z
http://arxiv.org/abs/2309.11299v1
# A Cost-Aware Mechanism for Optimized Resource Provisioning in Cloud Computing ###### Abstract Due to the recent wide use of computational resources in cloud computing, new resource provisioning challenges have emerged. Resource provisioning techniques must keep total costs to a minimum while meeting the requirements of the requests. Given the wide usage of cloud services, it becomes ever more challenging to develop effective schemes for provisioning services cost-effectively; we have proposed a novel learning-based resource provisioning approach that achieves cost-reduction guarantees for demands. The contributions of our optimized resource provisioning (ORP) approach are as follows. Firstly, it is designed to provide a cost-effective method to efficiently handle the provisioning of requested applications; while most of the existing models allow only workflows in general, which care about the dependencies of the tasks, ORP operates on the services of which applications are comprised and cares about their efficient provisioning as a whole. Secondly, it is a learning automata-based approach which selects the most proper resources for hosting each service of the demanded application; our approach considers both cost and service requirements together for deploying applications. Thirdly, a comprehensive evaluation is performed for three typical workloads: data-intensive, process-intensive and normal applications. The experimental results show that our method adapts to most of the requirements efficiently, and furthermore the resulting performance meets our design goals. Cloud computing, cost, learning automata, resource provisioning, services, virtual machine. ## 1 Introduction Cloud computing is a technology that provides various services, such as Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS), via related providers [1, 2]; these services are provided to cloud users on a pay-per-use basis. Nowadays, enterprises prefer to lease their required services, such as applications, from related providers as online services instead of buying them as on-premise ones [3], especially in cases where the services are not needed for long-term use. Thus, a large number of applications that used to run on users' desktops have been transformed into SaaS services which run on IaaS cloud providers [4]. The automated provisioning of these applications poses many challenges [5, 6]; it is a significant step towards satisfying the quality of service (QoS), which can increase the satisfaction of users as well. The cost of the provided services is a dominant part of the providers' and users' satisfaction [7], which makes resource provisioning strategies all the more critical. The server cost is the most influential factor in the price of provided services [8]; as the hosting infrastructure costs are considered the largest share of the provisioning cost [9], cloud systems put the main burden on providers, and hence how to reduce the costs is an urgent issue for providers while provisioning. None of the current cloud provisioning mechanisms provides a cost-effective pay-per-use model for SaaS applications [1]. IaaS is a computational service model that offers computing resources, such as processing power, storage, and network bandwidth [10], to SaaS providers with the aid of virtualization technologies, which are among the core technologies of cloud computing. Virtual machines (VMs), which are used to host the requested applications, share the available physical resources provided by IaaS providers [11].
The problem of VM placement has been introduced as a crucial one [10, 2, 12]. Optimizing the process of resource management is an effective way of decreasing service costs [8, 13, 14]. Resource management related problems include resource allocation, resource adaptation, resource brokering, resource discovery, resource mapping, resource modeling, resource provisioning and resource scheduling; these are discussed in [14]. Some of the approaches that tackle this crucial issue include the following. Dynamic SLA mapping is considered to restrict the number of resource types [15] so as to optimize costs with respect to the public SLA template. Resource provisioning approaches in cloud systems are addressed in different studies. In [4], the provisioning of continuous-write applications in a cloud of clouds is proposed. In [16], cost-aware solutions for dealing with MapReduce workloads are discussed. To make an optimal provisioning decision in [10], the price uncertainty from cloud providers and the demand uncertainty from users are taken into account to adjust the tradeoff between on-demand and over-subscribed costs. Optimizing the resource provisioning task by shortening the completion time of the users' tasks while minimizing the associated cost is performed by a dynamic resource provisioning and monitoring approach in [17]. Moreover, that approach includes a new virtual machine selection algorithm called the host fault detection algorithm. Finally, in [9] a cooperative resource provisioning solution for four heterogeneous workloads, namely parallel batch jobs, web servers, search engines and MapReduce jobs, is introduced. Live migration [18], an important component of the cloud computing paradigm, provides great versatility of management. It is applied in load balancing approaches [19] to improve the resilience and availability of services [20], but it comes at the price of degraded service performance during migration. Transferring the memory image of a service from the source host to the destination host constitutes the migration process. It can be evaluated by computing the expected degradation in service level due to the bandwidth limitations while migrating services between VMs for load balancing. The cost of the live migration process consists of determining when to migrate, deciding which services to migrate, determining where to migrate these services, the pre-copy phase cost, the downtime cost, and the cost of the amount of bandwidth to be used for the migration in each step [19, 20]. We do not address this issue, as our main goal is to efficiently manage the cost of provisioning the requests. It is to be noted that the existing optimization-based provisioning approaches for cloud solutions cannot deal with application demands; besides, they are cost-aware resource provisioning approaches based on per-job optimization, without considering the different attributes of individual requirements for VM placement. In this paper, we apply a learning-based provisioning approach that can allocate the available VMs of cloud SaaS providers to application demands. The providers, which denote SaaS providers in this article, deploy the applications on their particular platforms to serve the demanding users [21]. As providers are unaware of the conditions of upcoming requests, learning automata (LAs) are used in the provisioning process to tackle this lack of information. We propose variable-structure LAs in the provisioning process of each provider to handle requests.
Given that applications are comprised of different services, such as security services, database services, etc., the LAs find the optimal combination of VMs for hosting each service of the demanded application. All services of an application must be deployed on proper VMs to enable the execution of the application. Thus, it is necessary to have the minimum requirements of each service before an application is provisioned. In other words, if we have a request of \(n\) applications \(App_{i}\), \(i=1,\ldots,n\), each with \(s_{i}\) services, then the resource provisioning approach must consider the total requirements of \(\sum_{i=1}^{n}s_{i}\) services. Considering such a property of applications makes us care about this structure, in contrast with the two-tier structure of workflows. A resource provisioning approach for a workflow, which comprises \(n\) processes and the dependencies between them, produces the order of the \(n\) processes based on their dependencies [22]. The required resources of the processes are allocated based on the order determined by the scheduler; it is not necessary to consider the requirements of all processes together [17], while in the provisioning of an application there is no order between its services, and they must be deployed based on their minimum requirements. Therefore, the provisioning approach of an application searches among the available VMs and finds the most proper VM for each service of the application, which is performed by an LA. The main aim of this article is to present a dynamic mechanism that facilitates the optimized resource provisioning process by the use of LAs. The unique benefits of our optimized resource provisioning (ORP) approach are as follows. First, unlike existing models that allow only workflows in the general form of jobs and their dependencies, we provide a cost-effective resource provisioning solution for applications by considering the fact that each application comprises different services; the main challenge is to provision all the required services of an application together. Secondly, ORP is a learning automata-based approach, which selects the most proper computing resources, in the form of VMs, for hosting each service of the demanded application. It considers both cost and computing requirements, as the formalization of measures, for deploying applications based on their attributes; these are applied to the LAs to assess the performance of the approach. Finally, comprehensive evaluations are performed for three typical application types: data-intensive, process-intensive and normal applications. The simulations of ORP show that it adapts to most requirements efficiently while reducing the expected costs; furthermore, the resulting performance meets our design goals as well. The rest of this paper is organized as follows. Section 2 presents the body of the article, i.e., the system model and assumptions of the cloud computing environment. Section 3 formalizes the resource allocation problem and introduces the proposed algorithm. The experimental setup and simulations for the performance evaluation of ORP are described in Section 4. Finally, Section 5 summarizes and concludes this article. ## 2 System model The proposed optimized resource provisioning approach, named ORP, significantly improves the cost-effectiveness of providing cloud services to users in the form of applications. Cloud providers deploy the demanded applications of users on their particular infrastructures [21]. It is to be noted that a provider does not know the upcoming requests in a cloud environment.
Therefore, it must make decisions based on the current situation, without any accurate long-term planning. Thus, a decision maker is required to overcome such limits of variable and unknown situations. Our proposed approach makes optimal provisioning decisions based on the current conditions by the use of LAs. As the process proceeds, the provider performs the optimal provisioning decisions based on the requests. According to the current requests, the provider determines its way of resource provisioning so as to maximize its profit while satisfying users. In this section, we first describe the primitives of learning automata in Section 2.1; then, in Section 2.2, the proposed provisioning mechanism is presented in general; finally, the details of the mechanism are discussed, i.e., the performance factor of ORP while using LAs is formulated in Section 2.3. ### Learning Automata A learning automaton [23] is an automatic learning model whose learning relates to the way it collects and uses knowledge during its decision making. The learning process of each learning automaton has three main components: the _LA_, the _Environment_ and the _Reward/Penalty_ structure. They are briefly explained as follows. 1) LA: The LA can be modeled in the form of a quintuple \(\langle Q,a,B,F(\cdot,\cdot),H(\cdot,\cdot)\rangle\) [23], where: * \(Q\) is a finite set of internal states of the LA, \(\{q_{1},q_{2},\ldots,q_{s}\}\), where \(q_{i}\) is the state of the LA at instant \(t\). * \(a\) is a finite set of actions of the LA, \(\{a_{1},a_{2},\ldots,a_{s}\}\), where \(a_{i}\) is the action that the automaton has performed at instant \(t\); note that \(a\) is the output of the LA. * \(B\) is a finite set of replies of the environment after the LA applies an action; \(B=\{\beta_{1},\beta_{2},\ldots,\beta_{n}\}\), where \(\beta_{i}\) is the response of the environment at instant \(t\); in other words, it is the input of the LA. * \(F\) is a mapping function that maps the current state and the input of the LA to the next state, i.e., \(Q\times B\to Q\). * \(H\) is a mapping function that generates an action according to the current state and the input of the LA, i.e., \(Q\times B\to a\). 2) Environment: An environment is the medium in which the automaton functions. The environment can be mathematically modeled in the form of a triple \(\langle a,B,C\rangle\) [23]; \(a\) and \(B\) are the sets of inputs and outputs of the environment, respectively; \(C\) is a set of penalty probabilities that the environment considers for each of its inputs, \(\{c_{1},c_{2},\ldots,c_{d}\}\). The LA interacts with the environment in a feedback loop, as depicted in Fig. 1; in this interaction, the input of the LA is the output of the environment and vice versa. The environment replies to the LA based on the selected action. The LA updates the probabilities of its actions according to the environment's responses. Updating is performed with a particular reinforcement scheme; negative values of reinforcement indicate punishment, and positive values express reward. There are several models of LA, defined based on the response set of the environment. Models in which the output of the environment can take only one of two values, 0 or 1, are referred to as P-models. In this case, the value of 1 corresponds to an unfavorable response, meaning failure or penalty, while an output of 0 denotes that the action of the LA is favorable.
A further generalization of the environment, called Q-models, allows finite response sets that take a finite number of values in an interval \([a,b]\). When the output of the environment is a continuous random variable that assumes values in an interval such as \([0,1]\), the model is referred to as an S-model.
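As an illustration of how a variable-structure LA updates its action probabilities against a P-model environment, a minimal linear reward-penalty (\(L_{R-P}\)) scheme is sketched below; the learning rates and the update rule are generic textbook choices, not necessarily the exact scheme adopted by ORP.

```python
import numpy as np

class VariableStructureLA:
    """Variable-structure LA with a linear reward-penalty (L_R-P) update
    over a P-model environment (response beta in {0, 1}; 0 is favorable).
    Assumes at least two actions."""

    def __init__(self, n_actions: int, reward_rate: float = 0.1,
                 penalty_rate: float = 0.1, seed: int = 0):
        self.p = np.full(n_actions, 1.0 / n_actions)  # action probabilities
        self.a, self.b = reward_rate, penalty_rate
        self.rng = np.random.default_rng(seed)

    def choose(self) -> int:
        """Select an action according to the current probability vector."""
        return int(self.rng.choice(len(self.p), p=self.p))

    def update(self, action: int, beta: int) -> None:
        n = len(self.p)
        if beta == 0:   # favorable response: reward the chosen action
            self.p = (1 - self.a) * self.p
            self.p[action] += self.a
        else:           # unfavorable response: penalize the chosen action
            self.p = (1 - self.b) * self.p + self.b / (n - 1)
            self.p[action] -= self.b / (n - 1)
```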
The considered cloud market is depicted in Fig. 2. Users demand applications from a provider. The provider hosts a set of applications on its virtual infrastructures, named virtual machines (VMs). Before the requests are served, computing resources have to be provisioned from IaaS providers. IaaS providers package the resource requirements of SaaS providers into VMs [10, 2, 3], and then services can be deployed on VMs in the same way as on physical machines [3]. This way of provisioning enables independent services [27]. In other words, IaaS providers offer the requested VMs to the SaaS providers. SaaS providers can buy new VMs during resource provisioning. This assumption introduces some delay into provisioning; the delay is the time that IaaS providers require to prepare new VMs for SaaS providers, which is discussed in detail in [16]. A user may demand different applications from a provider; in fact, users simply submit their requests for applications [10, 16]. Furthermore, they can specify quality concerns through a service level agreement (SLA). A large number of commercial providers may enter the cloud market and offer different types of applications [15]. Clearly, users choose the providers that offer the application at a lower price with acceptable performance. On the other hand, since providers pursue profit, they try to attract as many users as possible, and thus must offer affordable prices with satisfactory performance. Providers have complete control over the attributes of the virtual resources devoted to the requests. In this paper we focus on the resource provisioning process of providers, aiming to minimize the infrastructure cost by minimizing the cost of the VMs required to handle the requests. 
Obviously, providers must decrease their costs in order to offer acceptable prices for services without losing profit. The proposed resource provisioning model, which applies to the providers of the considered cloud market, is presented in Fig. 3. The model depicted in Fig. 3 is a SaaS provider, such as the one presented in Fig. 2, with users and IaaS providers omitted; in place of the users and IaaS providers of Fig. 2, a _Request Pool_ and a _Virtual Resources_ frame are considered, respectively. The Request Pool receives the requests from users; the Virtual Resources frame receives VMs from the IaaS provider and stores them in the provider for hosting the applications. As mentioned before, the provider's goal is to maximize its profit while providing satisfactory services to the users at affordable prices and with acceptable performance. The proposed approach reaches this goal by reducing the infrastructural costs, which is performed by the Provisioning System (Fig. 3). According to [8, 9], the server cost contributes the largest proportion of the total cost of provisioning services. Users determine their demanded software requirements, e.g., operating systems and applications [3]; they specify the performance of their requests via predefined parameters determined by the SLA. The provisioning system receives the requests, which are stored in the Request Pool, by communicating with the Request Interface. The Request Interface is placed under the Request Pool in the model depicted in Fig. 3. As previously mentioned, applications are hosted on VMs, which are provided on a pay-per-use basis by IaaS providers. Each application comprises several services; the application runs by means of these services. For instance, a CRM application, which is provided by different vendors such as Salesforce.com or Microsoft Dynamics, may need services such as database, security, calculation, and accounting to be able to run. Requests correspond to application demands, and they are formalized as follows:
\[Req=\langle AppID,\ s,\ Srv,\ \tau\rangle, \tag{3}\]
where _Req_ denotes a request stored in the Request Pool; _AppID_ is the identification of the application requested in _Req_; \(s\) is the number of services of which the request is comprised; the list of the services of the demanded application of _Req_ is stored in _Srv_, which is modeled as _Srv_ = [_VMSrv_\({}_{1}\), _VMSrv_\({}_{2}\), ..., _VMSrv_\({}_{s}\)]; finally, \(\tau\) denotes the deadline of the application, which is determined by the user. The services can be located on different VMs based on the _VMSrv_ entries of _Srv_. Each of these services is supposed to be hosted on an individual VM. Since VMs have different properties, cloud providers have a limited set of available configurations [12]. Such a configuration, determined by _VMSrv_ in _Srv_, includes the following properties: \(\langle\)_VM type_, _Core_, _Memory_, _Storage_, _Throughput_, _Hour cost_\(\rangle\); e.g., a user may request a VM as \(\langle\)_'large'_, _3_, _30 MB_, _2048 MB_, _100 KB/s_, _\$3.400_\(\rangle\). We consider the following formulation for introducing a VM, which is stored in the Virtual Resources layer of a provider:
\[VM=\langle Size,\ Core,\ Memory,\ Storage,\ Throughput,\ HourCost\rangle. \tag{4}\]
The requests, stored in the Request Pool, are forwarded to the Provisioning System via the Request Interface; the Request Interface acts as a gateway that handles the requests and the replies. 
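To make the formalization of Eqs. (3) and (4) concrete, the following is a minimal Python sketch of the request and VM records; the class and field names are illustrative choices, not identifiers from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VMConfig:
    """A VM configuration as in Eq. (4): <Size, Core, Memory, Storage, Throughput, HourCost>."""
    size: str          # VM type, e.g., "large"
    core: int          # number of cores
    memory: float      # memory capacity
    storage: float     # storage capacity
    throughput: float  # read/write network throughput
    hour_cost: float   # price per hour

@dataclass
class Request:
    """A request as in Eq. (3): <AppID, s, Srv, tau>."""
    app_id: str                                        # identification of the demanded application
    srv: List[VMConfig] = field(default_factory=list)  # required VM configuration per service
    deadline: float = 0.0                              # tau, the user-specified deadline

    @property
    def s(self) -> int:
        # Number of services the request comprises.
        return len(self.srv)
```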
The provisioning system decides to accept a request if sufficient resources are available in the Virtual Resources of the provider, given the parameters of the request; if the provider accepts a request, it must specify the most appropriate VMs to deploy it.

### Performance Factor

The proposed provisioning system depicted in Fig. 3 has to specify the list of VMs best suited to host the application. The process operates by means of a learning scheme consisting of LAs. We use variable-structure learning automata (LAs) in each SaaS provider to find the best virtual machines (VMs) for deploying the current request, including its services. Each LA updates the probabilities of taking its actions via learning algorithms to reach the best probability values for the current request. Since the requests, as well as the VMs available in the provider, change very quickly, using LAs to find the best VMs for each request is a suitable approach in this scenario, as each automaton quickly converges to its optimal action. The LAs explore a limited number of states, namely the placements of the services of the request on the VMs of the provider. The Analyzer plays the role of the environment described in Section 2.1; it communicates with the Resource Manager and the Request Interface to obtain the available Virtual Resources of the provider and the properties of the current request. The actions of the LAs, \(a\), are the available VMs of the Virtual Resources. The learning system comprises LAs that select VMs according to the probabilities of the action set, one for each of the services in the requested application. The selected VMs, \(a_{t}\), are passed to the Analyzer as the action of the LA at instant \(t\). The Analyzer then evaluates the action of the LA and generates a reinforcement signal as its output. To produce this reinforcement, the performance factor \(\rho\) is computed first; it is then evaluated by comparison with a predefined threshold. Since the proposed LAs are of the P-model type, if \(\rho\) is less than the threshold, the selected VM is rejected by the learning system; otherwise, the learning system accepts the selected action. Suppose that the LAs select VMs 1 to \(s\) for hosting services 1 to \(s\) of the demanded application in the current request. The learning system passes \(a_{t}=VM_{1},\ldots,VM_{s}\), at instant \(t\), to the Analyzer; the Analyzer evaluates the action by comparing each selected VM \(i\) with the requirements of service \(j\); the result of the comparison is considered the performance factor \(\rho\), depicted in Eq. (5). 
\[\rho_{i}^{j}=Normalize\Big(\sum_{k=1}^{5}v_{k}\cdot Adapted\big(VM_{i}.attr_{k},\ Srv_{j}.attr_{k}\big)\Big), \tag{5}\]
where \(attr_{1},\ldots,attr_{5}\) denote the size, core, memory, storage, and throughput attributes, \(Adapted(\cdot,\cdot)\) measures the compatibility of a VM attribute with the corresponding service requirement, \(v_{1},\ldots,v_{5}\) are balancing coefficients, and \(Normalize(\cdot)\) maps the result into \([0,1]\) (see Eq. (6)); these components are detailed in the description of Algorithm 1 in Section 3.

## 3 The Provisioning Mechanism Formulation

As previously mentioned, our proposed provisioning system uses LAs to find the most proper VM for hosting each service in the request. After receiving a request through the Request Interface, it is forwarded to the Provisioning System; each request, _Req_, consists of \(s\) services, and the Provisioning System must run LAs for the services in _Req_. Note that the LAs search the _VMList_ of the provider to find the best VM for each service. Thus, the problem involves finding \(s\) VMs among the \(r\) VMs of different properties existing in the Virtual Resources of the provider, \(1\leq i\leq r\), where each VM is available in a certain configuration _VMSrv_\({}_{i}\), as introduced in Eq. (4). The problem is to fill the requirements of _Req_, which consists of \(s\) fixed services, with _SelectedVMList_ = [_VM_\({}_{1}\), ..., _VM_\({}_{s}\)] so as to yield a minimal cost while satisfying the requirements. First, the Provisioning System must decide which service is better to host first. Then, it must find the best VM for hosting that service. Since the aim of ORP is to optimize the performance and the cost of provisioned resources in the cloud market, a performance-and-cost compatibility model is defined in the form of Eq. (5) to reach this goal. The equation takes the attributes of the selected VM and the attributes of the user's request as input variables to quantify the performance and the cost. The details of this process are presented in this section. As previously mentioned, a provider must offer satisfactory levels of performance guarantee for deploying demanded applications. In addition to performance concerns, the provider's profit is a considerable factor in resource provisioning and in providing the requested applications. The requests of users are thus the main revenue source of providers. The mechanism of our proposed provisioning system is applied in order to obtain the optimal VM selection and increase the providers' profits: ORP finds the VMs closest to the requirements of the services with the lowest prices, which yields an optimal utilization of resources. Initially, the probabilities of the actions of the LAs are equal, signifying that any of the VMs is equally likely to be selected by the provisioning system. Thus, VMs are selected at random; then, the Analyzer calculates the performance of the selected action and sends a reinforcement signal to the LA. The probability of the selected VM is increased when the environment sends a favorable response to the LA, i.e., 
if VM \(i\) of the provider is chosen for service \(j\) and \(\rho_{i}^{j}\) is better than the threshold value, the probability of the selected VM is increased while the probabilities of the other VMs are decreased. Otherwise, the probability of the selected VM is reduced while the probabilities of the other VMs are increased, since the response of the environment is unfavorable. Eqs. (1) and (2) are used for updating the probabilities of the actions of the LA. The provisioning strategy is presented in Algorithm 1. Table 1 summarizes the key notations used in the algorithm. The providers that can deploy the requested application on their VMs run the algorithm. A provider can deploy the request if it has sufficient virtual resources for the requirements of the demanded application; otherwise, the provider can buy new VMs from IaaS providers to serve the request. In this case, the Virtual Resources of the provider change; thus, the action set \(a\) changes as well. On the other hand, the user then suffers a delay for virtual-resource preparation, which might not be acceptable in comparison with the time required by other providers to serve his/her request. As mentioned before, a large number of commercial providers currently exist in the cloud market, offering a number of different types of applications [15]. The algorithm of the Provisioning System consists of the process of interaction between the LA and the Analyzer (Figs. 1 and 3). The provider runs Algorithm 1 after receiving a request. At the end of Algorithm 1, a list of VMs, named _SelectedVMList_, is sent to the Request Interface (Fig. 3) as the output of the algorithm. First, the provider checks whether its available virtual resources, called VMs, can tackle the current request, _Req_; in case they cannot (i.e., \(\sim\)_Tackle(VMs, Req)_ in Line 5), meaning that the provider requires additional resources, negotiation with IaaS providers via the function _Negotiation(IaaS)_ starts. After provisioning the new virtual resources, they are added to the current VMs in Line 7 by the function _Renew()_. Then the main part of the provisioning of requests starts; the algorithm runs for each service, named _Srv_, of the demanded application in _Req_, from Line 10. The process is executed in a loop that iterates until the model converges (Lines 9-29) for each service in _Req_. The model has converged if the probability of a selected VM exceeds 95%, or \(\rho\) remains unchanged for several iterations. Otherwise, the process stops after a maximum number of iterations. When the probability of a VM converges to 1, the selected action of the LA is optimal, and therefore the selected VM is the one that must be part of the output of the Provisioning System. In Line 12, the function _select(VMList)_ randomly selects a VM from the list of virtual resources of the provider, named _VMList_, according to the probabilities of the VMs; this selection is the action of the LA. Then, in Lines 14-19, the selected action is evaluated so as to generate a reinforcement signal in Lines 24-27. The function _Adapted(p,q)_ compares the values of \(p\) and \(q\) and returns their difference as the compatibility of \(p\) to \(q\); \(p\) relates to the attributes of the selected VM and \(q\) relates to the attributes of the considered service. The considered compatibility parameters _size_, _core_, _mmry_, _strg_, and _trgp_, which stand for size, core, memory, storage, and throughput, respectively, denote the compatibility of the attributes of the allocated VM with the attributes of the considered service of the request; a minimal code sketch of this evaluate-and-update loop is given after this paragraph. 
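The following Python sketch illustrates the select-evaluate-update loop of Algorithm 1. The linear reward-penalty update used here is a standard scheme for variable-structure LAs and stands in for Eqs. (1) and (2), which lie outside this excerpt; the helper names, weights, bounds, and thresholds are illustrative assumptions rather than the paper's exact implementation.

```python
import random
from types import SimpleNamespace as NS

def adapted(p, q):
    # Compatibility of a VM attribute p with a service requirement q,
    # taken here as their signed difference, following Adapted(p, q).
    return p - q

def performance_factor(vm, srv, weights, lo, hi):
    # Weighted sum over the numeric attributes (the "Total" of Line 19),
    # normalized into [0, 1] as in Eq. (6); the size/type attribute would
    # be handled analogously via an ordinal encoding (omitted here).
    attrs = ["core", "memory", "storage", "throughput"]
    total = sum(w * adapted(getattr(vm, a), getattr(srv, a))
                for w, a in zip(weights, attrs))
    return (total - lo) / (hi - lo)

def provision_service(vm_list, srv, weights, lo, hi,
                      alpha=0.8, beta=0.05, threshold=0.5, max_iter=200):
    # One LA per service: action probabilities start uniform over VMList.
    r = len(vm_list)
    probs = [1.0 / r] * r
    for _ in range(max_iter):
        i = random.choices(range(r), weights=probs)[0]   # select(VMList)
        rho = performance_factor(vm_list[i], srv, weights, lo, hi)
        if rho > threshold:
            # Favorable response: linear reward for action i (cf. Eq. (1)).
            probs = [p + alpha * (1 - p) if k == i else (1 - alpha) * p
                     for k, p in enumerate(probs)]
        else:
            # Unfavorable response: linear penalty for action i (cf. Eq. (2)).
            probs = [(1 - beta) * p if k == i
                     else beta / max(1, r - 1) + (1 - beta) * p
                     for k, p in enumerate(probs)]
        if max(probs) > 0.95:                            # convergence test
            break
    return vm_list[probs.index(max(probs))]

# Illustrative usage: two candidate VMs, one service requirement.
vms = [NS(core=2, memory=8, storage=32, throughput=100),
       NS(core=4, memory=15, storage=80, throughput=100)]
srv = NS(core=2, memory=8, storage=32, throughput=100)
best = provision_service(vms, srv, weights=[0.25] * 4, lo=-200, hi=200)
```

With \(\alpha=0.8\) and \(\beta=0.05\), as selected in the sensitivity analysis of Section 4.1.2, such an automaton typically concentrates its probability mass on a compatible VM within a few hundred iterations.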
Then, in Line 19, a variable named _Total_ is computed from the values of the compatibility parameters. The coefficients \(v_{1}\), \(v_{2}\), ..., \(v_{5}\) balance the compatibility parameters _size_, _core_, _mmry_, _strg_, and _trgp_ according to the type of the requested application; e.g., in a data-intensive application, storage, memory, and throughput are more important than the other factors. In other words, these coefficients enable us to compute a summation of compatibility parameters that have different types. _Total_ measures the compatibility of the chosen VM with the considered service based on all attributes of the requirements. After assessing the chosen VM, in Line 20, the algorithm computes the performance factor presented in Eq. (5). The \(\textit{Normalize}(p)\) function in Line 20 converts the performance factor into the range of 0 to 1, as follows:
\[Normalize(p)=\frac{p-A}{B-A}, \tag{6}\]
where \(p\) is the value of the performance factor before normalization, and \(A\) and \(B\) are the minimum and the maximum values that \(p\) may take, respectively. The values of \(A\) and \(B\) are calculated from the minimum and maximum attribute values of the VMs in the \(\textit{VMList}\) of the provider. After normalizing the performance factor of the selected VM, the probabilities of the available actions are updated in Lines 25 and 27 for favorable and unfavorable selections, respectively. As mentioned before, favorable and unfavorable selections are determined by comparison with thresholds \(a>0\) and \(b<1\), which are determined according to the model in the experiments.

## 4 Performance Evaluations

In this section, ORP is evaluated in terms of its economical resource provisioning decisions; its performance is compared with a variety of systems. Section 4.1 introduces the setting of the simulated cloud market environment in a quantitative manner, with descriptions of the parameter settings. In Section 4.2, the efficiency of the learning system used by the provisioning system is analyzed first; then, comparisons with other approaches are discussed.

### 4.1 Experimental Setup

In this section, the local test bed of the experiments, including the architecture of the simulation model and the status of the providers, VMs, IaaS providers, and requests, is introduced first; then the parameters of the LAs are discussed.

#### 4.1.1 Local test bed environment

We have modeled cloud computing providers in a discrete-event simulation to evaluate the performance of the proposed approach. The simulation model is shown in Fig. 
4; it consists of a unit for the arrival of requests, named the Request Generator, which forwards the requests to the Request Pool of the providers; an IaaS provider, which provides the infrastructural requirements of the providers in the form of VMs; and several SaaS providers equipped with different resource provisioning techniques, with two outputs: one for evaluating the performance of the LAs and one for the Comparison Unit, which compares the ORP approach with the other provisioning techniques applied to the other SaaS providers. These techniques include GORPA [4], OCRP [10], and ORP without LA, which selects VMs randomly.

\begin{table}
\begin{tabular}{l l}
\hline \hline
Variable & Description \\
\hline
_Req_ & A request in the Request Pool of the provider \\
\(Srv_{i}\) & Service \(i\) in the list _Srv_ of _Req_ \\
\(VMSrv_{i}\) & Infrastructural requirements of service \(i\) \\
\(s\) & Number of services that _Req_ comprises \\
\(VM_{i}\) & VM \(i\) in the Virtual Resources of the provider \\
\(VM_{i}.Size\) & Type of \(VM_{i}\) \\
\(VM_{i}.Memory\) & Memory of \(VM_{i}\) \\
\(VM_{i}.Core\) & Number of cores of \(VM_{i}\) \\
\(VM_{i}.Storage\) & Storage of \(VM_{i}\) \\
\(VM_{i}.Throughput\) & Read/write network throughput of \(VM_{i}\) \\
\(VM_{i}.HourCost\) & Cost of \(VM_{i}\) per hour \\
\(\rho_{i}^{j}\) & Performance factor of selecting \(VM_{i}\) for deploying \(Srv_{j}\) \\
\(b\) & Parameter of penalty \\
\(a\) & Parameter of reward \\
\hline \hline
\end{tabular}
\end{table} Table 1: Parameter definitions and their values

Figure 4: Simulation model

For the sake of simplicity, we have assumed that IaaS providers offer the computing resources to the available providers with the configurations of Amazon EC2 instances (Feb. 2017) depicted in Table 2. There is a rapidly growing number of SaaS providers that provide the applications users require [11]. Users can easily find an up-to-date list of SaaS providers offering software solutions in their area of interest. For our experiments, 15 providers are defined as the ones that use the ORP approach for resource provisioning. In particular, when the algorithm converges, the required statistics are calculated to indicate the behavior of ORP on average. These providers initially own a predefined number of VMs with the configurations depicted in Table 2; the number of VMs in each provider is a random variable drawn from the uniform distribution (20, 50). It is assumed that the IaaS provider offers an unlimited amount of resources in terms of VMs, so the simulations do not face a shortage of VMs. As mentioned in Section 3, _VMList_ denotes the set of VMs available in the provider. It is assumed that each VM hosts a distinct service of the requested application (e.g., some VMs for database services and another for an individual web service); however, the solution can also be extended by omitting this assumption. Note that a certain amount of physical resources is required for hosting the VMs provided by IaaS providers. The VM instance is determined according to the required amount of resources of a VM [10]; some instances are presented in Table 2. The prices in Table 2 are defined in dollars per resource unit per hour for Windows usage. 
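Under this setup, the simulated market can be initialized along the following lines; the catalog entries stand in for a few rows of Table 2, and the helper names are illustrative, not the authors' code.

```python
import random

# A few illustrative (type, vCPU, memory GB, storage, $/hour) entries
# standing in for the EC2 catalog of Table 2.
CATALOG = [
    ("t2.small",   1,  2.0,  "1x4",  0.026),
    ("m4.large",   2,  8.0,  "1x32", 0.1041),
    ("c4.2xlarge", 8, 15.0,  "2x80", 0.412),
]

def make_provider(rng: random.Random):
    # Each SaaS provider initially owns Uniform(20, 50) VMs drawn from the catalog.
    n_vms = rng.randint(20, 50)
    return {"vm_list": [rng.choice(CATALOG) for _ in range(n_vms)]}

rng = random.Random(42)                               # fixed seed for reproducibility
providers = [make_provider(rng) for _ in range(15)]   # the 15 ORP providers
```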
In our experiment, no probability distribution for the arrival of requests is considered, since they are stored in the Request Pool and are sent to the providers one request at a time. We use the Grid Workloads Archive (GWA), GWA-T-12 Bitbrains, from Delft University of Technology ([http://gwa.ewi.tudelft.nl](http://gwa.ewi.tudelft.nl)) as our workload traces [28]. It contains the performance metrics of 1,750 VMs from a distributed datacenter of Bitbrains, a service provider that specializes in managed hosting and business computation for enterprises. Each file of GWA-T-12 contains the performance metrics of one VM. In our experiments, the fastStorage trace is applied. The workload enters the model through the Request Generator. GWA consists of the different VM requirements of 1,750 requests. Our evaluations use some of these requests; they are stored in the Request Generator, sent to the providers in an offline manner one at a time, and stored in the Request Pool of the provider. The format of each application request in GWA is compatible with the metrics introduced in Eqs. (3) and (4). As previously mentioned, the requests are in the form of applications; applications are software packages that consist of different services, e.g., an operating system, a database, and other utility services. For simplicity, we consider 20 types of applications, in GWA, that a user can demand. The licenses of the applications are assumed to be purchased from software vendors by the providers; thus, users, instead of buying licenses, prefer to lease an application from the providers to save their budgets. The leased applications need to be hosted on the proper virtual resources of the providers [10], namely VMs. There is not a large number of different VM types offered by providers; for instance, Amazon introduces only a few derivations of its basic resource type [15]; Table 2 depicts the VM instances offered in our experiments. The proper VMs are the ones that are compatible with the requirements of the demanded application. The user pays for the application according to the license cost of the application per running VM [10]. The cost of a licensed application is determined by the selected VMs. The VMs required by each service of the application vary following GWA.

#### 4.1.2 Sensitivity analysis

First, we perform a sensitivity analysis on the learning parameters \(\alpha\) and \(\beta\) in order to study their effects on the performance of ORP and to find their best values. To this end, an input set with 50 requests is considered. The two main parameters of the ORP algorithm are the reward and the penalty parameters of the LAs, \(\alpha\) and \(\beta\), respectively. To achieve more certainty and to accelerate the convergence of the algorithm, these parameters are varied from 0.7 to 0.9 and from 0 to 0.1, respectively, in different experiments, and the reported values are depicted in Fig. 5. The axes of the chart in Fig. 5, marked with the \(\alpha\), \(\beta\), and _Iteration_ labels, indicate the reward and penalty parameters of the LAs and the average number of iterations required for the convergence of the LAs over 10 different providers, respectively. Fig. 5 shows that the best values for \(\alpha\) and \(\beta\), yielding the least number of iterations, are 0.8 and 0.05, respectively. Furthermore, the maximum number of steps of the LAs must be determined in order to stop the algorithm when the other convergence conditions are not reached. According to Fig. 
5, it can be concluded that the LAs find the solution and converge after approximately 200 steps on average.

\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline
Type & vCPU & Memory (GB) & Storage (GB) & Price per VM \\
\hline
t2.small & 1 & 2 & 1\(\times\)4 & \$0.026 / Hour \\
t2.medium & 2 & 4 & 1\(\times\)4 & \$0.052 / Hour \\
m3.medium & 1 & 3.75 & 1\(\times\)4 & \$0.070 / Hour \\
m4.large & 2 & 8 & 1\(\times\)32 & \$0.1041 / Hour \\
c3.large & 2 & 3.75 & 2\(\times\)16 & \$0.141 / Hour \\
c4.large & 4 & 7.5 & 2\(\times\)40 & \$0.2067 / Hour \\
c4.2xlarge & 8 & 15 & 2\(\times\)80 & \$0.412 / Hour \\
r3.large & 2 & 15 & 1\(\times\)32 & \$0.175 / Hour \\
i3.large & 2 & 15.25 & 1\(\times\)32 & \$0.109 / Hour \\
i3.xlarge & 4 & 30.5 & 1\(\times\)80 & \$0.218 / Hour \\
i3.2xlarge & 8 & 61 & 1\(\times\)160 & \$0.436 / Hour \\
\hline
\end{tabular}
\end{table} Table 2: Properties of VMs with their prices

### 4.2 Experimental results and analysis

At the beginning of the simulation, the generated requests are sent to the Request Pool. The requests are then forwarded to the Request Interface of the providers. Each provider tries to find the best combination of VMs for hosting the requests. At the end of the processing of all requests, statistical data such as the average number of rejected requests, the utilization of VMs, and the cost of the provisioned resources are generated. We compare the performance of the resource provisioning mechanisms depicted in Fig. 4, including ORP, GORPA [4], OCRP [10], and ORP without LA, which selects VMs randomly. The results of the experiments include a baseline as well; the input of the baseline experiments is a set of predefined requests sent to the Request Pool. Our performance evaluation is measured according to three metrics. The first is the number of requests that are denied processing by the approaches because no VM is available; in this case the algorithms run without adding new VMs. The second is the utilization of VMs, and the third is the total provisioning cost. The experiments are designed to evaluate these metrics as the outputs of the Comparison Unit in Fig. 4; Section 4.2.1 discusses the results of these evaluations. In addition, a comprehensive evaluation of ORP is performed for three typical workloads: data-intensive, process-intensive, and normal applications; it is discussed in Section 4.2.2.

#### 4.2.1 Comparison with other resource provisioning mechanisms

The experiments use the parameter values shown in Tables 1 and 2, with the same workload traces from GWA, for all provisioning approaches depicted in Fig. 4. Note that for evaluating the throughput and the QoS violation of our provisioning approach, a fixed number of VMs is assumed, to enable a more accurate comparison of the approaches [4]. However, in the other experiments performed to validate the proposed approach, such as costs and utilization, the SaaS providers can take advantage of using VMs in an elastic fashion. We first compare the throughput and the QoS violation of ORP against GORPA, introduced in [4], OCRP [10], and the LA-omitted ORP approach. 
In these two experiments, the same amount of virtual infrastructure in the form of VMs, based on Table 2, is assumed for the providers. Although the whole point of using VMs is to virtualize the infrastructure and to request and release VMs on demand in an elastic fashion to adapt to the workload, in these experiments it is assumed that the providers cannot take advantage of this, in order to verify the throughput of the approaches and the average number of requests that cannot be processed by them. Requests are classified into three classes depending on their resource requirements, as in [4]: small, medium, and high demand; some of the requests applied in the experiments are presented in Table 3, derived from GWA-T-12 Bitbrains. The results of this comparison are depicted in the bar chart of Fig. 7. It shows that the throughput of ORP exceeds that of the other approaches. The reason is that in ORP the LAs find the most proper VMs among the Virtual Resources of the provider; thus, the compatibility of the attributes of the VMs and the requests is well considered, and many of the requests can be provisioned with a fixed amount of virtual resources. When an LA is used for finding the most proper VMs for a request, only a small number of requests cannot be processed and are rejected. The performance is better than both GORPA and OCRP: GORPA is designed for continuous-write applications and considers the shortest path in terms of data transmission cost between VMs, while OCRP provisions based on reservations, which does not perform well in these experiments in comparison with ORP. Utilization is another parameter used to evaluate the performance of the proposed approach. To compute the utilization of the VMs of a provider, the average CPU, memory, and disk usage for the set of VMs selected for each request is used. Note that if a provider allocates VMs to requests without considering the requirements, its utilization will be low; on the other hand, if the compatibility of the requirements of the requests and the selected VMs is high, the provisioning approach will achieve a high utilization of its virtual resources. In this section we compare the utilization of the virtual resources of providers using the ORP approach against OCRP [10], GORPA [4], and ORP without LA. The results of this comparison, generated by the Comparison Unit of the simulation model depicted in Fig. 4, are presented in Fig. 8. Fig. 8 demonstrates that the utilization of the ORP approach is higher than that of the others; the reason is that ORP accounts for all properties of a VM, such as CPU cycles and memory allocation, achieving high resource utilization: in each iteration, the Provisioning System in ORP chooses the VM most adapted to the service requirements, as introduced in Eq. (5). None of the other approaches considers the attributes of the VMs in addition to their hourly costs when allocating a VM to a service of the request. The last metric considered in our experiments is the total cost of the VMs selected for deploying the requests. The cost comparisons of provisioned VMs are performed between ORP, OCRP [10], and GORPA [4]. Although we used the VM attributes presented in Table 2 for the previous experiments, for the cost comparison in this experiment, the prices introduced in [10] are applied in order to compare the techniques under the same conditions. The cost comparison is generated by the Comparison Unit of the simulation model depicted in Fig. 
4; the results are presented in Fig. 9. It is obvious from the figure that our proposed resource provisioning approach, ORP, obtains better costs compared with both OCRP and GORPA. The increase of costs with the growth of the number of requests, visible in Fig. 9, is expected. As the requests increase, the differences between the total costs of ORP and the others decrease; the reason is that ORP chooses VMs without considering any future model of requests, and provisioning is performed so that the most proper VMs are chosen based on the current requests. Therefore, the provider must buy new resources, and for this the provider may incur additional costs, such as the cost of new virtual resources offered by IaaS providers and the cost of the time spent waiting for the preparation of new VMs [4]. Thus, the total costs might be higher for new requests, as depicted in Fig. 9.

Fig. 7: QoS evaluation (a) Throughput (b) Number of rejected requests

Fig. 8: Comparison between utilization of VMs

#### 4.2.2 Impact of ORP on typical application types: data-intensive, process-intensive and normal applications

In this section, the resource provisioning method is evaluated for typical application types. We consider three general types of requests, i.e., requests for data-intensive applications, requests for process-intensive applications, and requests for normal applications. Specifically, we want to show that our model can effectively assign proper VMs to each type of request individually, with respect to the application requirements, while keeping the costs low. First, the requirements of each application type are discussed; then, a simple scenario to evaluate the performance of ORP based on the type of the requests is presented. For each type of application, a set of services with different requirements is needed. Applications that devote most of their execution time to computation are deemed process-intensive, whereas applications that require large volumes of data and devote most of their processing time to I/O and data manipulation are deemed data-intensive. Normal applications have both data-processing and computational requirements. Our traced workloads of these types are presented in Table 4. In Fig. 10, the evolution of the total costs for the requests processed by ORP is presented. As expected, for demands of normal applications the growth of costs is smoother, while for data-intensive and process-intensive demands the costs increase faster. Furthermore, since the virtual resource capacity of each provider is limited, the number of VMs that each provider dedicates to the requests is bounded; this makes the total cost stop increasing after a number of requests have been processed. In Fig. 10, more requests can be processed when the requests are of the normal application type than for the two other types. The reason is that in the two latter cases, the VMs proper for data-intensive and process-intensive applications run out earlier than for normal application demands, since special VMs are required in these two types. Therefore, a larger increase of costs is seen for data-intensive and process-intensive demands in comparison with normal demands, and the number of processed requests decreases as well.

## 5 Conclusion

Cloud computing has enabled new technologies for Software-as-a-Service (SaaS) providers and Infrastructure-as-a-Service (IaaS) providers to offer applications online with a pay-per-use model. 
These technologies make computing resources more powerful, and thus more efficient resource provisioning techniques must be involved. Current research on resource provisioning approaches lacks application granularity; in this paper, we have proposed an optimized resource provisioning (ORP) approach in order to provide applications, which consist of different services, to users via virtual resources. Providers try to provide the application at an affordable cost while keeping the performance satisfactory. ORP uses LAs in each provider to deploy each request on the best combination of VMs while saving infrastructural cost. Our proposed approach focuses on how to lower the resource provisioning cost without severely degrading the performance metrics of the services. A comprehensive evaluation is performed for three typical workloads: data-intensive, process-intensive, and normal applications. The experimental results show that ORP efficiently adapts the infrastructural requirements, and the resulting performance meets our design goals as well. In general, the averages of utilization and cost were improved; in addition, the number of requests that can be processed by ORP is optimized.

\begin{table}
\begin{tabular}{|c|l|c|c|c|}
\hline
Request & Services & vCPU & Memory (GB) & Storage (GB) \\
\hline
\multirow{3}{*}{Data-intensive} & VMSrv\({}_{1}\) & 1 & 15 & 2\(\times\)40 \\
 & VMSrv\({}_{2}\) & 1 & 30 & 1\(\times\)32 \\
 & VMSrv\({}_{3}\) & 2 & 60 & 1\(\times\)80 \\
\hline
\multirow{3}{*}{Process-intensive} & VMSrv\({}_{1}\) & 4 & 2 & 1\(\times\)4 \\
 & VMSrv\({}_{2}\) & 8 & 4 & 1\(\times\)4 \\
 & VMSrv\({}_{3}\) & 8 & 8 & 2\(\times\)16 \\
\hline
\multirow{3}{*}{Normal} & VMSrv\({}_{1}\) & 1 & 4 & 1\(\times\)4 \\
 & VMSrv\({}_{2}\) & 2 & 8 & 1\(\times\)32 \\
 & VMSrv\({}_{3}\) & 4 & 15 & 2\(\times\)80 \\
\hline
\end{tabular}
\end{table} Table 4: Properties of requests used for the experiments of Fig. 10

Figure 9: Cost comparison

Figure 10: Evolution of total costs in different types of demands with constant number of VMs
2309.08910
Total-effect Test May Erroneously Reject So-called "Full" or "Complete" Mediation
The procedure for establishing mediation, i.e., determining that an independent variable X affects a dependent variable Y through some mediator M, has been under debate. The classic causal steps require that a "total effect" be significant, now also known as statistically acknowledged. It has been shown that the total-effect test can erroneously reject competitive mediation and is superfluous for establishing complementary mediation. Little is known about the last type, indirect-only mediation, aka "full" or "complete" mediation, in which the indirect (ab) path passes the statistical partition test while the direct-and-remainder (d) path fails. This study 1) provides proof that the total-effect test can erroneously reject indirect-only mediation, including both sub-types, assuming least square estimation (LSE) F-test or Sobel test; 2) provides a simulation to duplicate the mathematical proofs and extend the conclusion to LAD-Z test; 3) provides two real-data examples, one for each sub-type, to illustrate the mathematical conclusion; 4) in view of the mathematical findings, proposes to revisit concepts, theories, and techniques of mediation analysis and other causal dissection analyses, and showcase a more comprehensive alternative, process-and-product analysis (PAPA).
Tingxuan Han, Luxi Zhang, Xinshu Zhao, Ke Deng
2023-09-16T07:26:34Z
http://arxiv.org/abs/2309.08910v2
# Total-Effect Test May Erroneously Reject So-Called "Full" or "Complete" Mediation

###### Abstract

The procedure for establishing mediation, i.e., determining that an independent variable \(X\) affects a dependent variable \(Y\) through some mediator \(M\), has been under debate. The classic causal steps require that a "total effect" be significant, now also known as statistically acknowledged. It has been shown that the total-effect test can erroneously reject competitive mediation and is superfluous for establishing complementary mediation. Little is known about the last type, indirect-only mediation, aka "full" or "complete" mediation, in which the indirect (\(ab\)) path passes the statistical partition test while the direct-and-remainder (\(d\)) path fails. This study 1) provides proof that the total-effect test can erroneously reject indirect-only mediation, including both sub-types, assuming least square estimation (LSE) \(F\)-test or Sobel test; 2) provides a simulation to duplicate the mathematical proofs and extend the conclusion to LAD-\(Z\) test; 3) provides two real-data examples, one for each sub-type, to illustrate the mathematical conclusion; 4) in view of the mathematical findings, proposes to revisit concepts, theories, and techniques of mediation analysis and other causal dissection analyses, and showcase a more comprehensive alternative, process-and-product analysis (PAPA).

Hypothesis testing; indirect-only mediation; mediation analysis; total-effect test.

## 1 Introduction

The procedure to establish mediation, i.e., how an independent variable \(X\) affects a dependent variable \(Y\) through some mediator \(M\), has been under debate. The classic _causal-steps_ procedure requires that the _total effect_ \((c)\), i.e., the effect of \(X\) on \(Y\) without controlling \(M\), be significant, now known as statistically acknowledged (Baron and Kenny, 1986). It has been shown that the total-effect test can erroneously reject competitive mediation and is superfluous for establishing complementary mediation (Jiang et al., 2021; Hayes, 2009; Zhao et al., 2010). Little is known about the third and the last type, the indirect-only mediation, in which the indirect \((ab)\) path passes the statistical test while the direct-and-remainder \((d)\) path fails. Roughly equivalent to "full mediation" aka "complete mediation" in the classic quasi-topology of _full, partial, and no mediation_, indirect-only mediation is believed to be the strongest form of mediation (Baron and Kenny, 1986; Hayes, 2022, p. 126). While a revised procedure of causal steps (Kenny et al., 1998; Kenny, 2008, 2021) allows researchers to "suspend" or "relax" the total-effect test when _suppression_, aka _inconsistent_ or _competitive mediation_, is suspected, full (indirect-only) mediation and partial (complementary) mediation do not qualify for the relief. As of today, the total-effect test and, most importantly, the underlying conception of "mediation" and "effect" remain at the core of the criteria for establishing mediation across disciplines and languages (e.g., Kenny (2021); Mathieu and Taylor (2006); Rose et al. (2004); Wen et al. (2004); Wen et al. (2005); Wen and Ye (2014); Wen et al. (2022)). Section 2 below provides a brief review of the debate over the total-effect test. This study is assigned several tasks. 1a) Provide a mathematical proof that the total-effect test can erroneously reject indirect-only mediation, including both sub-types, assuming least square estimation (LSE) and \(F\)-test. 
1b) Provide a derivation to show that the same results can be obtained assuming the Sobel test. 2) Provide a simulation to duplicate the mathematical proofs and extend the conclusion to the LAD-\(Z\) test. 3) Provide two real-data examples, one for each sub-type, to illustrate the mathematical proof and the simulation outcomes. 4a) In light of the mathematical findings, propose revisions to the concepts, theories, and techniques of mediation analysis and other causal dissection analyses. 4b) Introduce the principles of a more comprehensive, i.e., more encompassing and more informative, alternative, _process-and-product analysis_ (PAPA).

## 2 Debate on Total-Effect Test for Establishing Mediation

A mediation model suggests a causal chain where an independent variable \(X\) affects a dependent variable \(Y\) through a third variable \(M\), known as a _mediator_. In a classic work that influenced generations of researchers, Baron and Kenny (1986) defines mediation as a linear regression model:
\[M = i_{M}+aX+\varepsilon_{M}, \tag{2.1}\]
\[Y = i_{Y}+bM+dX+\varepsilon_{Y}, \tag{2.2}\]
where the errors are assumed to follow independent normal distributions:
\[\varepsilon_{M}\sim N(0,\sigma_{M}^{2}),\quad\varepsilon_{Y}\sim N(0,\sigma_{Y}^{2}).\]
As shown in Figure 1, the model involves two paths: 1) the _indirect path_ "\(X\to M\to Y\)" indicates the _mediated effect_ of \(X\) on \(Y\) via the _mediator_ \(M\), which equals \(a\times b\), and 2) the so-called _direct path_ "\(X\to Y\mid M\)" indicates the _direct-and-remainder effect_ of \(X\) on \(Y\) while \(M\) is controlled, represented by \(d\).

Figure 1: Mediation model proposed by Baron and Kenny (1986).

Reorganizing models (2.1) and (2.2), we have:
\[Y=i_{Y}^{*}+cX+\varepsilon_{Y}^{*}, \tag{2.3}\]
where \(i_{Y}^{*}=i_{Y}+bi_{M}\), \(\varepsilon_{Y}^{*}=\varepsilon_{Y}+b\varepsilon_{M}\), and \(c=a\times b+d\) represents the _total effect_ of \(X\) on \(Y\). A formal typology has been established that features three types of mediation (Zhao et al., 2010, 2011): (1) _Complementary mediation_, where the mediated effect and the direct effect both pass the statistical partition tests and bear the same sign, i.e., \(a\times b\times d>0\); (2) _Competitive mediation_, where the mediated effect and the direct effect both pass the tests and bear opposite signs, i.e., \(a\times b\times d<0\); and (3) _Indirect-only mediation_, where the mediated effect passes the test while the direct effect fails, i.e., \(a\times b\neq 0\) but \(d=0\). Almost all experts accept and adopt the above definition of mediation, namely a statistically acknowledged \(a\times b\). The "causal-steps" procedure, however, adds another test, a statistically significant total effect, \(c=a\times b+d\). This "total-effect test" is necessary because, according to this dominant doctrine, \(c\) represents _the effect to be mediated_; a statistically non-significant \(c\) indicates there is nothing to mediate, hence no mediation is possible. Therefore, in the causal-steps procedure, if \(c\) fails to pass the statistical test, the mediation hypothesis is declared a failure and further analysis is stopped. Although the causal-steps doctrine has dominated mediation analysis across disciplines, whether and under what conditions the total-effect test should be required became a subject of discussion and debate. The opinions may be organized into three groups.

_1) Complete acceptance_: The seminal Baron and Kenny (1986) requires the total-effect test as the first bar to pass and allows no exception for establishing mediation of any type. 
The procedure and the total-effect test as the first criterion have been recommended time and again by mediation experts across disciplines (e.g., Judd and Kenny (1981b); Mathieu and Taylor (2006); Rose et al. (2004)). As we write, the total-effect test, and more importantly the underlying conception of "mediation" and "effect," remain part of the standard guidelines for establishing mediation across disciplines and languages (e.g., Kenny (2021); Mathieu and Taylor (2006); Rose et al. (2004); Wen et al. (2004, 2005); Wen and Ye (2014); Wen et al. (2022)).

_2) Conditional Suspension._ Even before Baron and Kenny (1986), statisticians had recognized "suppression", aka "inconsistent models", "confounding", and, more recently, "competition", where the mediated path \(a\times b\) and the direct-and-remainder path \(d\) have opposite signs (Breslow et al., 1980; Davis, 1985; Judd and Kenny, 1981a; Lord et al., 1968; McFatter, 1979; Velicer, 1978). The subject came up more often after Baron and Kenny (1986), as shown in numerous studies discussing it (Cliff and Earleywine, 1994; Cohen et al., 2013; Conger, 1974; Collins et al., 1998; Hamilton, 1987; Hayes, 2009; Horst et al., 1941; Kenny et al., 1998; Kenny, 2021; MacKinnon et al., 2000; Rucker et al., 2011; Sharpe and Roberts, 1997; Shrout and Bolger, 2002; Tzelgov and Henik, 1991; Zhao et al., 2010). It was Collins et al. (1998) who proposed suspending the total-effect test for inconsistent mediation. Only a special type of the "inconsistent" models - those with dichotomous variables - would qualify for the relief. Other authors, at about the same time or shortly after, suggested suspending or dropping the total-effect test when there is _a priori_ belief of suppression (Kenny et al., 1998; Kenny, 2021; MacKinnon, 2000; MacKinnon et al., 2000, 2002; Shrout and Bolger, 2002). Shrout and Bolger (2002, p. 438) also added a type, proposing to relax the total-effect test for distal processes and expectedly small effect sizes. These authors often stressed the importance of retaining the total-effect test for all other types of mediation. Shrout and Bolger (2002, p. 430), for example, argued that the total-effect test has conceptual usefulness, because "clearly, (researchers) need to first establish that the effect exists".

_3) Complete repeal_: Hayes (2009, p. 414) pointed out that suppression is a regular occurrence and recommended "researchers not require a significant total effect". Zhao et al. (2010, p. 199) re-conceptualized "suppression" as "competitive mediation", and presented a real-data example in which the competition caused a non-significant \(c\) even though the mediated path was strong. They hence concluded "There need not be a significant zero-order effect of \(X\) on \(Y,\dots\), to establish mediation". Rucker et al. (2011, p. 18) agreed, concluding that "focusing on the significance of the \(X\to Y\) relationship before or after examining a mediator might be unnecessarily restrictive".

The discussions and debates, however, were conducted mostly on conceptual and philosophical levels without mathematically rigorous evidence. In fact, the subjects under discussion were not formulated as mathematical problems until recently. Zhao et al. (2010) and Zhao et al. (2011) replaced the traditional one-dimension conception of mediation with the two-dimension framework, which allowed the authors to replace the dominant full-partial-no quasi-topology with the five-type typology. 
Through the prism of the new typology, and armed with the new vernaculars, the authors observed that the total-effect test may 1) be superfluous for establishing complementary mediation, 2) erroneously reject competitive mediation, and 3) erroneously reject indirect-only mediation. Although not backed by rigorous proof or systematic evidence other than the one real-data example (Zhao et al., 2010), the three observations fostered three questions: Does the total-effect test help, harm, or neither for establishing each of the three types, i.e., complementary, competitive, or indirect-only mediation? Jiang et al. (2021) turned one observation, that the total-effect test is superfluous, into a mathematical conjecture, and produced a proof for the conjecture. They did so after transforming the task into a series of mathematical problems, namely verifying the geometry of the rejection regions of the hypothesis tests for \(a\), \(b\), \(d\) and \(c\). Employing theoretical analyses, mathematical proofs, Monte Carlo simulation, and real-data examples, Jiang et al. (2021) demonstrated that the total-effect test is indeed superfluous for establishing complementary mediation when the paths are estimated by the least square estimators and tested by \(F\)- or Sobel test. We are to extend the advancement to the other two types. The geometric analysis developed by Jiang et al. (2021) potentially can be applied to build a mathematical framework for analyzing mediation of all types. The potential, however, has yet to be realized. This study is to fill the gap. Section 3 reviews the key elements of the geometric analysis, and extends the analysis to establishing complementary mediation under the LSE-Sobel frameworks. Sections 4 and 5 utilize the geometric approach to analyze indirect-only and competitive mediation under LSE-\(F\) and LSE-Sobel frameworks. The results are validated numerically in Section 6 via simulation. Section 7 applies the main results to two real-data examples. Section 8 summarizes and discusses the main findings.

## 3 A Geometric Perspective to Study the Total-Effect Test

### Geometric Representation of Criteria for Establishing Mediation

In data analysis, researchers use hypothesis tests to determine whether the direct, indirect, and total effects each pass the partition threshold (Kenny et al., 1998; Zhao et al., 2010). Denoting the rejection regions of appropriate hypothesis tests with significance level \(\alpha\) as \(\mathcal{R}_{a\times b}(\alpha)\), \(\mathcal{R}_{d}(\alpha)\), and \(\mathcal{R}_{c}(\alpha)\), and using estimators \((\hat{a},\hat{b},\hat{d},\hat{c})\), we can claim a statistically significant causal effect based on the sign of its estimator. For example, a positive direct causal effect is claimed when \(\hat{d}>0\) and the observed data fall within \(\mathcal{R}_{d}(\alpha)\). Similar definitions apply to the total and indirect effects. The _process-and-product approach_ (PAPA) defines three types of mediation, namely complementary (\(\mathbf{C}_{+}\)), indirect-only (\(\mathbf{C}_{0}\)), and competitive (\(\mathbf{C}_{-}\)), at a given significance level \(\alpha\) (Jiang et al., 2021; Liu et al., 2023; Zhao et al., 2010, 2011). The _causal steps_ approach, however, requires also a statistically significant total effect (\(c\)) to claim any type of mediation (Baron and Kenny, 1986). 
The two sets of requirements together would imply the following:
\[\mathbf{C}_{+}: \mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\mathcal{R}_{c}(\alpha)\text{ and }\hat{a}\times\hat{b}\times\hat{d}>0; \tag{3.4}\]
\[\mathbf{C}_{0}: \mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha)\cap\mathcal{R}_{c}(\alpha); \tag{3.5}\]
\[\mathbf{C}_{-}: \mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\mathcal{R}_{c}(\alpha)\text{ and }\hat{a}\times\hat{b}\times\hat{d}<0; \tag{3.6}\]
where \(\mathcal{D}\) represents the observed data and \(\bar{\mathcal{R}}\) stands for the complement of a rejection region \(\mathcal{R}\). If the total-effect test fails, mediation cannot be established. The PAPA analysts, however, propose alternative criteria that do not involve the total effect \(c\):
\[\mathbf{C}_{+}^{*}: \mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\text{ and }\hat{a}\times\hat{b}\times\hat{d}>0; \tag{3.7}\]
\[\mathbf{C}_{0}^{*}: \mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha); \tag{3.8}\]
\[\mathbf{C}_{-}^{*}: \mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\text{ and }\hat{a}\times\hat{b}\times\hat{d}<0. \tag{3.9}\]
Therefore, the need for the total-effect test in establishing mediation can be rationalized by examining the geometric relationships of the corresponding rejection regions. Specifically, if we demonstrate that
\[\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)=\emptyset\text{ whenever }\hat{a}\times\hat{b}\times\hat{d}>0, \tag{3.10}\]
we establish \(\mathbf{C}_{+}\Leftrightarrow\mathbf{C}_{+}^{*}\), indicating that the total-effect test is unnecessary for complementary mediation. Similarly, if we show that
\[\mathcal{R}_{a\times b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset, \tag{3.11}\]
then \(\mathbf{C}_{0}\) and \(\mathbf{C}_{0}^{*}\) are not equivalent, suggesting that the total-effect test may lead to misleading results and erroneously reject indirect-only mediation when we consider criterion \(\mathbf{C}_{0}^{*}\) to be more appropriate. Likewise, if we demonstrate that
\[\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset\text{ whenever }\hat{a}\times\hat{b}\times\hat{d}<0, \tag{3.12}\]
then \(\mathbf{C}_{-}\) and \(\mathbf{C}_{-}^{*}\) are not equivalent, implying that the total-effect test may incorrectly reject competitive mediation. In the following sections, we provide a detailed implementation of the above geometric analysis.

### Estimating and Testing Mediation Effects

The LSE-\(F\) framework proposed by Judd and Kenny (1981b) estimates the parameters \((a,b,d,c)\) using least squares estimators (LSEs) and tests their significance using \(F\)-tests. The LSEs are defined by
\[(\hat{i}_{M},\hat{a})=\boldsymbol{M}^{T}\mathcal{D}_{1,X}\big(\mathcal{D}_{1,X}^{T}\mathcal{D}_{1,X}\big)^{-1},\quad(\hat{i}_{Y},\hat{b},\hat{d})=\boldsymbol{Y}^{T}\mathcal{D}_{1,M,X}\big(\mathcal{D}_{1,M,X}^{T}\mathcal{D}_{1,M,X}\big)^{-1},\quad\hat{c}=\hat{a}\times\hat{b}+\hat{d},\]
where \(\mathcal{D}_{1,X}=(\boldsymbol{1},\boldsymbol{X})\) and \(\mathcal{D}_{1,M,X}=(\boldsymbol{1},\boldsymbol{M},\boldsymbol{X})\) are the data matrices without and with the mediator, respectively.
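As a minimal illustration (not part of the original analysis), the following Python sketch computes these LSEs from the two regressions; the function name and the toy data are ours. With intercepts included, the identity \(\hat{c}=\hat{a}\times\hat{b}+\hat{d}\) recovers the total-effect estimate, which coincides with the slope of the separate regression of \(Y\) on \((\boldsymbol{1},\boldsymbol{X})\).

```python
import numpy as np

def mediation_lses(X, M, Y):
    """LSEs (a_hat, b_hat, d_hat, c_hat) for the classic mediation model:
    a_hat from M ~ 1 + X; b_hat and d_hat from Y ~ 1 + M + X;
    c_hat recovered through the identity c = a*b + d."""
    one = np.ones_like(X)
    _, a_hat = np.linalg.lstsq(np.column_stack([one, X]), M, rcond=None)[0]
    _, b_hat, d_hat = np.linalg.lstsq(np.column_stack([one, M, X]), Y, rcond=None)[0]
    return a_hat, b_hat, d_hat, a_hat * b_hat + d_hat

# Hypothetical toy data, for illustration only
rng = np.random.default_rng(1)
X = rng.standard_normal(500)
M = 0.5 * X + rng.standard_normal(500)            # true a = 0.5
Y = 0.4 * M - 0.3 * X + rng.standard_normal(500)  # true b = 0.4, d = -0.3
print(mediation_lses(X, M, Y))
```

One can check numerically that the returned \(\hat{c}\) matches the slope obtained by regressing \(Y\) directly on \((\boldsymbol{1},\boldsymbol{X})\).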
The rejection regions for the \(F\)-tests on \((a,b,d,c)\) with significance level \(\alpha\) are defined as follows:
\[\mathcal{R}_{a}(\alpha) = \left\{\frac{\|\boldsymbol{M}_{1,X}-\boldsymbol{M}_{1}\|^{2}/(2-1)}{\|\boldsymbol{M}-\boldsymbol{M}_{1,X}\|^{2}/(n-2)}>\lambda_{1,n-2}(\alpha)\right\}, \tag{3.13}\]
\[\mathcal{R}_{b}(\alpha) = \left\{\frac{\|\boldsymbol{Y}_{1,M,X}-\boldsymbol{Y}_{1,X}\|^{2}/(3-2)}{\|\boldsymbol{Y}-\boldsymbol{Y}_{1,M,X}\|^{2}/(n-3)}>\lambda_{1,n-3}(\alpha)\right\}, \tag{3.14}\]
\[\mathcal{R}_{d}(\alpha) = \left\{\frac{\|\boldsymbol{Y}_{1,M,X}-\boldsymbol{Y}_{1,M}\|^{2}/(3-2)}{\|\boldsymbol{Y}-\boldsymbol{Y}_{1,M,X}\|^{2}/(n-3)}>\lambda_{1,n-3}(\alpha)\right\}, \tag{3.15}\]
\[\mathcal{R}_{c}(\alpha) = \left\{\frac{\|\boldsymbol{Y}_{1,X}-\boldsymbol{Y}_{1}\|^{2}/(2-1)}{\|\boldsymbol{Y}-\boldsymbol{Y}_{1,X}\|^{2}/(n-2)}>\lambda_{1,n-2}(\alpha)\right\}, \tag{3.16}\]
where \(\boldsymbol{Y}=(Y_{1},\ldots,Y_{n})^{T}\), a subscripted vector such as \(\boldsymbol{Y}_{1,X}\) represents the projection of the vector onto the linear space spanned by the subscript variables, and \(\lambda_{t,s}(\alpha)\) is the upper \(\alpha\)-quantile of the \(F\)-distribution with degrees of freedom \((t,s)\). Additionally, to claim the significance of the indirect effect \(a\times b\), we reject when the tests for both \(a\) and \(b\) reject, i.e., \(\mathcal{R}_{a\times b}(\alpha)\) is replaced with \(\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\).

Alternatively, Baron and Kenny (1986) suggested the LSE-Sobel framework, which is similar to the LSE-\(F\) framework but uses the Sobel test to examine the indirect effect \(a\times b\). The test statistic \(S\) is defined as:
\[S=\frac{\hat{a}\hat{b}}{\Big(\hat{a}^{2}\text{Var}(\hat{b})+\hat{b}^{2}\text{Var}(\hat{a})\Big)^{1/2}}. \tag{3.17}\]
Under the null hypothesis of \(a\times b=0\), \(S\) asymptotically follows a standard normal distribution. The rejection region for the Sobel test is given by:
\[\mathcal{R}_{a\times b}(\alpha)=\left\{|S|>z_{\alpha/2}\right\}, \tag{3.18}\]
where \(z_{\alpha}\) represents the upper \(\alpha\)-quantile of the standard normal distribution. The LSE-Sobel framework provides a direct inference of the indirect effect \(a\times b\) using a single test, but it is not an exact test, as the exact distribution of the test statistic \(S\) depends on the values of \(a\) and \(b\).

The LSE-based frameworks assume normal distributions for the noise terms \(\varepsilon_{M}\) and \(\varepsilon_{Y}\). In case of non-normality, an alternative is the LAD-\(Z\) framework (Pollard, 1991). It compares the test statistic \(z=|\tilde{\beta}|/sd(\tilde{\beta})\) against the standard normal distribution, where \(\tilde{\beta}\) is the _least absolute deviance_ (LAD) estimator of \(\beta\) for \(\beta\in\{a,b,d,c\}\), and \(sd(\tilde{\beta})\) is the estimated standard deviation of \(\tilde{\beta}\).

### Transforming the Observed Data for Simpler Representation

The LSE estimators \((\hat{a},\hat{b},\hat{d},\hat{c})\) and the corresponding rejection regions in the LSE-\(F\) and LSE-Sobel frameworks involve complex components. However, a simpler mathematical formulation emerges once the original data matrix is properly transformed, as inspired by Jiang et al. (2021). Lemma 1 in Jiang et al. (2021) demonstrated that the LSE estimators \((\hat{a},\hat{b},\hat{d},\hat{c})\) and the rejection regions for the \(F\)-tests are invariant to scale and orthogonal transformations of the observed data. The following lemma extends the original lemma in Jiang et al. (2021) by including the invariance of the Sobel test for \(a\times b\); its proof can be found in Section 1 (S1) of the Supplementary Material.
**Lemma 1**.: _Consider the regression model in Eq. (2.1)-(2.3) with data matrix \(\mathcal{D}=(\mathbf{1},\boldsymbol{M},\boldsymbol{X},\boldsymbol{Y})\). Let \(\tilde{\mathcal{D}}=\gamma\Gamma\mathcal{D}\) be the transformed data matrix under scale and orthogonal transformations, where \(\gamma>0\) and \(\Gamma\) is an orthogonal matrix. In the transformed problem, the LSE \(\tilde{\beta}\) and the rejection regions \(\tilde{\mathcal{R}}_{\beta}(\alpha)\) for the \(F\)- and Sobel tests (\(\beta\in\{a,b,d,c,a\times b\}\)) remain the same as in the original problem._

The above lemma suggests that we can transform the original data matrix to obtain simpler rejection regions. As highlighted by Jiang et al. (2021), for a classic mediation model with data matrix \(\mathcal{D}\) satisfying \(\text{rank}(\mathcal{D})=4\), there always exist an \(n\times n\) real orthogonal matrix \(\Gamma\) and a global scale parameter \(\gamma>0\) such that the transformed data matrix \(\tilde{\mathcal{D}}=(\tilde{\mathbf{1}},\tilde{\boldsymbol{M}},\tilde{\boldsymbol{X}},\tilde{\boldsymbol{Y}})\) satisfies \(\tilde{\mathbf{1}}=(1,0,\ldots,0)^{T}\), \(\tilde{\boldsymbol{X}}=(x_{1},x_{2},0,\ldots,0)^{T}\), \(\tilde{\boldsymbol{M}}=(m_{1},m_{2},m_{3},0,\ldots,0)^{T}\), and \(\tilde{\boldsymbol{Y}}=(y_{1},y_{2},y_{3},y_{4},0,\ldots,0)^{T}\) with \(x_{2}>0\), \(m_{3}>0\), and \(y_{4}>0\). Evidently, the transformed data \(\tilde{\mathcal{D}}\) simplify the LSE estimators and rejection regions. The following lemma summarizes the results, including the explicit form of \(\mathcal{R}_{a\times b}\), which was not provided in Jiang et al. (2021).

**Lemma 2**.: _If \(\text{rank}(\tilde{\mathcal{D}})=4\), we obtain simple expressions for the LSE estimators and rejection regions of the causal effects of interest:_
\[\hat{a}=\frac{m_{2}}{x_{2}},\quad\hat{b}=\frac{y_{3}}{m_{3}},\quad\hat{c}=\frac{y_{2}}{x_{2}},\quad\hat{d}=\frac{m_{3}y_{2}-m_{2}y_{3}}{x_{2}m_{3}}; \tag{3.19}\]
\[\mathcal{R}_{a}(\alpha) = \{r>r_{n,\alpha}\}, \tag{3.20}\]
\[\mathcal{R}_{b}(\alpha) = \{p>p_{n,\alpha}\}, \tag{3.21}\]
\[\mathcal{R}_{c}(\alpha) = \left\{q>r_{n,\alpha}(p^{2}+1)^{1/2}\right\}, \tag{3.22}\]
\[\mathcal{R}_{d}(\alpha) = \begin{cases}\left\{\left|q-rp\right|>p_{n,\alpha}(r^{2}+1)^{1/2}\right\}&\text{if }\hat{a}\hat{b}\hat{c}\geq 0,\\ \left\{\left|q+rp\right|>p_{n,\alpha}(r^{2}+1)^{1/2}\right\}&\text{if }\hat{a}\hat{b}\hat{c}<0,\end{cases} \tag{3.23}\]
\[\mathcal{R}_{a\times b}(\alpha) = \left\{\frac{1}{(n-2)r^{2}}+\frac{1}{(n-3)p^{2}}<\frac{1}{z_{\alpha/2}^{2}}\right\}; \tag{3.24}\]
_where \(r=|m_{2}|/m_{3}\), \(p=|y_{3}|/y_{4}\), \(q=|y_{2}|/y_{4}\), \(r_{n,\alpha}=\left[\lambda_{1,n-2}(\alpha)/(n-2)\right]^{1/2}\), and \(p_{n,\alpha}=\left[\lambda_{1,n-3}(\alpha)/(n-3)\right]^{1/2}\), with \(\lambda_{t,s}(\alpha)\) and \(z_{\alpha/2}\) defined previously._

### Total-Effect Test is Superfluous for Complementary Mediation

Using the simplified formulas in Lemma 2, Jiang et al. (2021) showed that \(\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\subseteq\mathcal{R}_{c}(\alpha)\) whenever \(\hat{a}\times\hat{b}\times\hat{d}>0\) under mild conditions. This implies that the total-effect test is superfluous for establishing complementary mediation under the LSE-\(F\) framework. Additionally, they showed that the total-effect test is also unnecessary asymptotically under the LSE-Sobel framework.
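As a numerical spot-check of this containment (our sketch, not part of the original proof), one can sample points from \(\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\) in the complementary case using the simplified coordinates \((r,p,q)\) of Lemma 2 and confirm that each point also lies in \(\mathcal{R}_{c}(\alpha)\); the values of \(\alpha\) and \(n\) below are arbitrary test choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n = 0.05, 50
# Critical values from Lemma 2 (upper-alpha F quantiles)
r_na = np.sqrt(stats.f.ppf(1 - alpha, 1, n - 2) / (n - 2))   # r_{n,alpha}
p_na = np.sqrt(stats.f.ppf(1 - alpha, 1, n - 3) / (n - 3))   # p_{n,alpha}

for _ in range(100_000):
    r = r_na + rng.exponential()                  # in R_a: r > r_{n,alpha}
    p = p_na + rng.exponential()                  # in R_b: p > p_{n,alpha}
    # in R_d (complementary case): q > r*p + p_{n,alpha}*sqrt(r^2 + 1)
    q = r * p + p_na * np.hypot(r, 1) + rng.exponential()
    assert q > r_na * np.hypot(p, 1)              # hence in R_c
print("no counterexample to the containment found")
```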
Their analysis within the LSE-Sobel framework, however, lacks a geometric perspective and can be enhanced by the following theorem, whose proof is detailed in Section 1 (S1) of the Supplementary Material.

**Theorem 1**.: _Suppose that we rely on the LSE-Sobel framework to establish mediation. If \(\text{rank}(\mathcal{D})=4\), then as the sample size \(n\to\infty\), for all \(\alpha\in(0,1)\),_
\[P\left(\mathcal{D}\in\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\right)\to 0\text{ for }\hat{a}\times\hat{b}\times\hat{d}>0.\]

Theorem 1 implies that as the sample size \(n\to\infty\), \(\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\subseteq\mathcal{R}_{c}(\alpha)\) holds asymptotically. This provides an alternative perspective supporting the argument that the total-effect test is superfluous for testing complementary mediation under the LSE-Sobel framework.

## 4 Total-Effect Test Can Erroneously Reject Indirect-Only Mediation

Jiang et al. (2021) mentioned that the total-effect test can be misleading when testing indirect-only mediation under the LSE-\(F\) framework. However, no technical proof was provided to support the observation, and the observation does not cover the LSE-Sobel framework. In this section, we fill these gaps through explicit theoretical analysis that shows the potentially misleading nature of the total-effect test for establishing indirect-only mediation.

To show that the total-effect test may erroneously reject indirect-only mediation under the LSE-\(F\) framework, we need to verify condition (3.11):
\[\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset,\text{ for all }\alpha\in(0,1).\]
This condition can be equivalently expressed as:
\[\mathcal{R}_{a}(\alpha\mid r)\cap\mathcal{R}_{b}(\alpha\mid r)\cap\bar{\mathcal{R}}_{d}(\alpha\mid r)\cap\bar{\mathcal{R}}_{c}(\alpha\mid r)\neq\emptyset\text{ for some }r>0, \tag{4.25}\]
where \(\mathcal{R}_{\beta}(\alpha\mid r)\) represents the intersection of \(\mathcal{R}_{\beta}(\alpha)\) and the \(p\)-\(q\) plane \(\mathcal{P}_{r}\) on which the statistic \(r\) is held fixed, for \(\beta\in\{a,b,d,c\}\). Since \(\mathcal{R}_{a}(\alpha\mid r)=\mathcal{P}_{r}\cap\{r>r_{n,\alpha}\}=\emptyset\) for \(0<r\leq r_{n,\alpha}\), we focus on \(r>r_{n,\alpha}\).

We verify the argument by considering two sub-types of indirect-only mediation separately: the case where \(\hat{a}\hat{b}\hat{d}>0\), representing _directionally complementary mediation_, and the case where \(\hat{a}\hat{b}\hat{d}<0\), representing _directionally competitive mediation_.

In the case where \(\hat{a}\hat{b}\hat{d}>0\), we can observe the following relationships. Corollary 1 in Jiang et al. (2021) implies that \(\hat{a}\hat{b}\hat{c}>0\) and \(q>rp\). Thus, we have \(\bar{\mathcal{R}}_{d}(\alpha\mid r)=\left\{rp<q\leq rp+p_{n,\alpha}(r^{2}+1)^{1/2}\right\}\). Additionally, \(\mathcal{R}_{b}(\alpha\mid r)=\left\{p>p_{n,\alpha}\right\}\) and \(\bar{\mathcal{R}}_{c}(\alpha\mid r)=\left\{0\leq q\leq r_{n,\alpha}(p^{2}+1)^{1/2},p\geq 0\right\}\). The geometry of these regions is depicted in Figure 2. According to Theorem 1 in Jiang et al. (2021), we have \(r_{n,\alpha}(p^{2}+1)^{1/2}<rp+p_{n,\alpha}(r^{2}+1)^{1/2}\) for \(r>r_{n,\alpha}\).
Therefore, the intersection \(\mathcal{R}_{a}(\alpha\mid r)\cap\mathcal{R}_{b}(\alpha\mid r)\cap\bar{\mathcal{R}}_{d}(\alpha\mid r)\cap\bar{\mathcal{R}}_{c}(\alpha\mid r)\) is
\[\left\{rp<q\leq r_{n,\alpha}(p^{2}+1)^{1/2},p_{n,\alpha}<p<r_{n,\alpha}/(r^{2}-r_{n,\alpha}^{2})^{1/2}\right\},\]
which can be verified to be non-empty for \(r_{n,\alpha}<r<r_{n,\alpha}(1+1/p_{n,\alpha}^{2})^{1/2}\). Figure 2 (D) provides a graphical demonstration of this intersection.

In the case where \(\hat{a}\hat{b}\hat{d}<0\), the sign of \(\hat{a}\hat{b}\hat{c}\) is indeterminate. The regions \(\mathcal{R}_{a}(\alpha\mid r)\), \(\mathcal{R}_{b}(\alpha\mid r)\), and \(\bar{\mathcal{R}}_{c}(\alpha\mid r)\) remain the same as in the case where \(\hat{a}\hat{b}\hat{d}>0\). The only difference lies in the region \(\bar{\mathcal{R}}_{d}(\alpha\mid r)\). The following lemma helps define \(\mathcal{R}_{d}(\alpha\mid r)\) when \(\hat{a}\hat{b}\hat{c}\geq 0\).

**Lemma 3**.: _If \(\hat{a}\hat{b}\hat{d}<0\) and \(\hat{a}\hat{b}\hat{c}\geq 0\), we have \(q<rp\), and thus, \(\mathcal{R}_{d}(\alpha)=\{q<rp-p_{n,\alpha}(r^{2}+1)^{1/2},p\geq 0\}\)._

Using Lemma 3, it can be verified that
\[\bar{\mathcal{R}}_{d}(\alpha\mid r)=\begin{cases}\left\{rp-p_{n,\alpha}(r^{2}+1)^{1/2}\leq q<rp,p\geq 0\right\},&\hat{a}\hat{b}\hat{c}\geq 0,\\ \left\{0\leq q\leq-rp+p_{n,\alpha}(r^{2}+1)^{1/2},p\geq 0\right\},&\hat{a}\hat{b}\hat{c}<0,\end{cases}\]
and the intersection of \(\mathcal{R}_{b}(\alpha\mid r)\), \(\bar{\mathcal{R}}_{c}(\alpha\mid r)\), and \(\bar{\mathcal{R}}_{d}(\alpha\mid r)\) is non-empty for any \(r>r_{n,\alpha}\). The geometry of \(\bar{\mathcal{R}}_{d}(\alpha\mid r)\) and the intersection of interest under directionally competitive mediation are shown in Figure 3. Altogether, this validates argument (4.25).

Figures 2 and 3 imply that \(\mathcal{R}_{a}(\alpha\mid r)\cap\mathcal{R}_{b}(\alpha\mid r)\cap\bar{\mathcal{R}}_{d}(\alpha\mid r)\cap\bar{\mathcal{R}}_{c}(\alpha\mid r)\) has larger support under directionally competitive mediation, indicating a higher probability of observing an insignificant total effect when \(\hat{a}\hat{b}\) and \(\hat{d}\) have opposite signs and \(\hat{d}\) is not statistically significant. The following theorem summarizes the above geometric analysis under a mild condition.

**Theorem 2**.: _Suppose that we rely on the LSE-\(F\) framework to establish mediation. If \(\text{rank}(\mathcal{D})=4\), we have_
\[\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset,\text{ for all }\alpha\in(0,1).\]

Similarly, the following theorem shows that the total-effect test can be erroneous for establishing indirect-only mediation under the LSE-Sobel framework with a large sample size. The detailed proof can be found in Section 1 (S1) of the Supplementary Material.

**Theorem 3**.: _Suppose that we rely on the LSE-Sobel framework to establish mediation. If \(\text{rank}(\mathcal{D})=4\), there exists \(N>0\) such that when the sample size \(n>N\), we have_
\[\mathcal{R}_{a\times b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset,\text{ for all }\alpha\in(0,1).\]

## 5 Total-Effect Test Can Erroneously Reject Competitive Mediation

Similar analyses can be applied to competitive mediation.
The following theorems demonstrate that the total-effect test can lead to erroneous rejection of competitive mediation under the LSE-\(F\) and LSE-Sobel frameworks, respectively, when used for statistical partitioning. While previous studies have shown the possibility of erroneous rejection using bootstrap tests (MacKinnon et al., 2000, 2007; Hayes, 2009; Zhao et al., 2010) through derivations and examples, the theorems below provide rigorous proofs assuming LSE-\(F\) and LSE-Sobel tests.

**Theorem 4**.: _Suppose that we rely on the LSE-\(F\) framework to establish mediation. If \(\text{rank}(\mathcal{D})=4\), the condition \(\hat{a}\times\hat{b}\times\hat{d}<0\) implies_
\[\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset,\text{ for all }\alpha\in(0,1).\]

**Theorem 5**.: _Suppose that we rely on the LSE-Sobel framework to establish mediation. If \(\text{rank}(\mathcal{D})=4\), the condition \(\hat{a}\times\hat{b}\times\hat{d}<0\) implies_
\[\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset,\text{ for all }\alpha\in(0,1).\]

The procedure used for proving the indirect-only case applies to the two theorems above; the details are documented in Section 1 (S1) of the Supplementary Material.

## 6 Simulations

Jiang et al. (2021) conducted simulations to show that the total-effect test is unnecessary for establishing complementary mediation under the LSE-\(F\) and LSE-Sobel frameworks and can be misleading with the LAD-\(Z\) test. The present study focuses on indirect-only and competitive mediation, presenting the results for indirect-only mediation in the main text and those for competitive mediation in Section 2 (S2) of the Supplementary Material.

### Numerical validation of Theorem 2

To validate Theorem 2, we generate the simulated data from models (2.1) and (2.2) as follows:
\[n\sim\text{Unif}(\{10,\ldots,100\}),\quad(i_{M},i_{Y},a,b,d)\sim\text{Unif}[-1,1]^{5},\]
\[X\sim N(0,1),\quad\sigma_{M}^{2}\text{ and }\sigma_{Y}^{2}\sim\text{Inv-Gamma}(1,1).\]
A total of \(10,000\) independent datasets of different sample sizes were simulated. For each dataset, we calculated the LSEs \((\hat{a},\hat{b},\hat{c},\hat{d})\) and the \(p\)-values \((p_{a},p_{b},p_{c},p_{d})\) under the LSE-\(F\) framework. We then checked whether, for any fixed \(\alpha\in(0,1)\), the set of datasets satisfying \(\max(p_{a},p_{b})<\alpha\), \(p_{d}\geq\alpha\), and \(p_{c}\geq\alpha\) is non-empty.

Figure 4 (A) checks the \(p\)-value condition when \(\alpha=0.1\). Each simulated dataset is represented by a point in a 3-dimensional space with \(\max(p_{a},p_{b})\), \(p_{d}\), and \(p_{c}\) as the \(X\), \(Y\), and \(Z\) axes, respectively. The solid circles represent datasets satisfying \(\max(p_{a},p_{b})<\alpha\) and \(p_{d}\geq\alpha\), gray crossings represent datasets with \(\max(p_{a},p_{b})\geq\alpha\) or \(p_{d}<\alpha\), and the dark gray dashed plane represents \(p_{c}=\alpha\). The solid circles above the plane \(p_{c}=\alpha\) constitute the empirical set \(\{p_{c}\geq\alpha\}\). We observe that when \(\max(p_{a},p_{b})<\alpha=0.1\) and \(p_{d}\geq\alpha\), the set \(\{p_{c}\geq\alpha\}\) is not empty. Figure 4 (B) presents the proportion of datasets satisfying \(p_{c}\geq\alpha\) for 1000 evenly spaced values of \(\alpha\) in the range \((0.01,0.99)\) when \(\max(p_{a},p_{b})<\alpha\) and \(p_{d}\geq\alpha\) under the LSE-\(F\) framework.
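For concreteness, the following is a minimal sketch of the data-generating and testing procedure just described, under the stated design; the helper functions are ours, and the single-coefficient \(F\)-tests are computed through the equivalent two-sided \(t\)-tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ols_pvalues(X, y):
    """p-values of the two-sided t-tests for each OLS coefficient of y on
    the columns of X; equivalent to F-tests with one numerator df."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, 2 * stats.t.sf(np.abs(beta / se), df=n - k)

def simulate_once(rng):
    # Design from the text: n ~ Unif{10..100}, coefficients ~ Unif[-1,1],
    # X ~ N(0,1), noise variances ~ Inv-Gamma(1,1)
    n = rng.integers(10, 101)
    i_M, i_Y, a, b, d = rng.uniform(-1, 1, 5)
    s2_M, s2_Y = 1 / rng.gamma(1.0, 1.0, 2)      # Inv-Gamma(1, 1) draws
    X = rng.standard_normal(n)
    M = i_M + a * X + rng.standard_normal(n) * np.sqrt(s2_M)
    Y = i_Y + b * M + d * X + rng.standard_normal(n) * np.sqrt(s2_Y)
    one = np.ones(n)
    _, pM = ols_pvalues(np.column_stack([one, X]), M)       # M ~ 1 + X
    _, pY = ols_pvalues(np.column_stack([one, M, X]), Y)    # Y ~ 1 + M + X
    _, pC = ols_pvalues(np.column_stack([one, X]), Y)       # Y ~ 1 + X
    return dict(p_a=pM[1], p_b=pY[1], p_d=pY[2], p_c=pC[1])

alpha = 0.1
sims = [simulate_once(rng) for _ in range(10_000)]
hits = [s for s in sims if max(s["p_a"], s["p_b"]) < alpha and s["p_d"] >= alpha]
frac = np.mean([s["p_c"] >= alpha for s in hits]) if hits else float("nan")
print(f"{len(hits)} indirect-only candidates; "
      f"share with inconclusive total effect: {frac:.2%}")
```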
Figure 4 (B) shows that for significance levels \(\alpha\) smaller than \(0.1\), which are commonly used in practice, the proportion of cases where \(p_{c}\geq\alpha\) is greater than \(40\%\). This indicates a high probability of erroneous rejection of indirect-only mediation by the total-effect test.

Figure 4: Numerical exploration under the LSE-\(F\) framework: (A) black solid circles for \(\max(p_{a},p_{b})<0.1\) and \(p_{d}\geq 0.1\), grey crossings for \(\max(p_{a},p_{b})\geq 0.1\) or \(p_{d}<0.1\), and dark gray dashed plane for \(p_{c}=0.1\); (B) proportion of datasets satisfying \(p_{c}\geq\alpha\) when \(\max(p_{a},p_{b})<\alpha\) and \(p_{d}\geq\alpha\).

The proportion of erroneous total-effect test results for both directionally complementary and directionally competitive indirect-only mediation cases is depicted in Figure 5. The plot demonstrates that the total-effect test can lead to incorrect conclusions regarding the presence of indirect-only mediation in both cases. Interestingly, the erroneous judgments are more frequent when the signs of \(\hat{a}\hat{b}\) and \(\hat{d}\) are opposite, which is in line with expectations established in the analysis of Theorem 2.

### Exploratory analysis for other frameworks

To investigate whether a similar result holds for other frameworks in establishing indirect-only mediation, we conducted a similar analysis using the LSE-Sobel and LAD-\(Z\) frameworks with the same set of simulated datasets. Under the LSE-Sobel framework, we additionally calculated the \(p\)-value \(p_{ab}\) for the Sobel test of \(a\times b\). If a similar result holds, we would expect datasets with \(p_{c}\geq\alpha\) for any fixed \(\alpha\in(0,1)\) when \(p_{ab}<\alpha\) and \(p_{d}\geq\alpha\); this is supported by Figures 6 and 7. Moreover, graphical verification of the results under the LAD-\(Z\) framework is shown in Figures 8 and 9, supporting similar conclusions.

Figure 6: Numerical exploration under the LSE-Sobel framework: (A) black solid circles for \(p_{ab}<0.1\) and \(p_{d}\geq 0.1\), grey crossings for \(p_{ab}\geq 0.1\) or \(p_{d}<0.1\), and dark gray dashed plane for \(p_{c}=0.1\); (B) proportion of datasets satisfying \(p_{c}\geq\alpha\) when \(p_{ab}<\alpha\) and \(p_{d}\geq\alpha\).

## 7 Real Data Illustrations

We illustrate the conclusions of the mathematical derivations and simulations presented above with two real-data examples. The example data came from the Health Information National Trends Survey (HINTS, http://hints.cancer.gov/), which is conducted regularly by the National Cancer Institute on representative samples of United States adults to track changes in health behavior and communication (Jiang and Liu, 2020; Liu et al., 2023; Finney Rutten et al., 2020). This study used the 2020 version of the postal-mail survey (HINTS 5 Cycle 4) with 3,865 participants.

Two models are presented below. Model 1 is a directionally competitive indirect-only (d-petitive IO) mediation depicting an effect of _caregiving_ (CG) on _smoking_ (SM) through _psychological distress_ (PD) (Figure 10). Model 2 is a directionally complementary indirect-only (d-plementary IO) mediation describing the effect of _employment_ (EM) on _physical activity_ (PA) through _psychological distress_ (PD) (Figure 11). In each model, the mediated effect passed the statistical threshold (\(p<.05\)) while the total effect failed (\(p\geq.05\)).
In both examples, the total-effect (\(c\)) test would have concluded that there was no "effect to be mediated", which equals "no mediated effect" in the causal-steps doctrine and requires a full stop of all further analysis, even though the mediated effect (\(ab\)) passed the statistical test. Thus, each model is a real-data example of the total-effect test erroneously rejecting indirect-only mediation.

The two example models fit the definition of "full mediation" aka "complete mediation" under the quasi-typology of full, partial, and no mediation (Baron and Kenny, 1986). The terms were meant to connote the strongest form of mediation (Hayes, 2022, p. 126). That the total-effect test erroneously rejects the perceived strongest form of mediation demonstrates the pitfalls of the total-effect test and of the causal-steps approach.

Figure 10: Directionally Competitive Indirect-only (D-petitive IO) Mediation (\(n\) = 3,267)

Figure 11: Directionally Complementary Indirect-only (D-plementary IO) Mediation (\(n\) = 3,594)

When presenting the examples, we employ process-and-product analysis (PAPA), which is emerging in several disciplines (Jiang et al., 2021; Liu et al., 2023; Peng et al., 2020; Zhao and Zhang, 2014; Zhao et al., 1994, 2010). The PAPA approach sees mediation as a process and the total effect (\(c\)) as the product of the process. While the causal-steps approach is focused on one task, which is to "establish mediation", PAPA is given three tasks: 1) testing effect hypotheses, 2) classifying effect types, and 3) analyzing effect sizes, all for the ultimate mission of better understanding the relationship between parts, process, and product.

To estimate effect sizes, PAPA employs the percentage coefficient (\(b_{p}\)), the regression coefficient with the dependent variable (DV) and independent variable (IV) both on a \(0\sim 1\) percentage scale (\(p_{s}\)) (Jiang et al., 2021; Liu et al., 2022, 2023; Zhao et al., 2016). Scaled thus, \(b_{p}\) indicates the percentage change in DV associated with a 100% whole-scale increase in IV or, equivalently, the percentage-point change in DV associated with a one-percentage-point increase in IV. Thus, \(b_{p}\) is interpretable and comparable assuming scale equivalence (Zhao and Zhang, 2014; Jiang et al., 2021). The two features make \(b_{p}\) a generic indicator of effect sizes that is easy to interpret and efficient to compare. Table 1 provides scale details and univariate descriptions of the variables, and Eq. 1 of Table 2 is the formula for percentizing the scales.

To help dissect the product, discern the parts, and divine the process, PAPA calculates the _percent contribution_ (\(c_{p}\)), the contribution of each elemental part, i.e., the \(a\), \(b\), \(ab\), \(d\), or \(c\) path, to the \(IV\to DV\) total effect, \(c\), as detailed in Table 2.

To reduce overuse and misuse of \(p\)-values, we strive to practice what we consider best practice: 1) refraining from the terms "statistical significance" and "statistical non-significance"; 2) referring to \(p<.05\) as "statistical acknowledgment" (Benjamini et al., 2021; Editorial, 2019; Siegfried, 2015; Wilkinson, 1999); and 3) referring to \(p\geq.05\) as "statistical inconclusiveness".
Such practices indicate that \(p<.05\) is merely a pretest yardstick or partition threshold under the principles of functionalism, passing which allows for classifying effect types and analyzing effect sizes (Liu et al., 2023; Zhao et al., 2022). It is hoped that such practices, including the application of effect-size indicators such as \(b_{p}\) and \(c_{p}\), benefit from and contribute to the "effect size movement" (Kelley and Preacher, 2012; Preacher and Kelley, 2011; Wilkinson, 1999; Robinson et al., 2002; Jiang et al., 2021; Schmidt, 1996).

### Model 1 for D-petitive IO Mediation

Competitive mediation, aka suppression or inconsistent mediation (MacKinnon et al., 2000, 2007), defined as a model with statistically acknowledged \(ab\) and \(d\) paths in opposite directions, is widely known as the type of mediation that can be erroneously rejected by the total-effect test (Xiao et al., 2018; Busse et al., 2016; Gopalakrishnan and Zhang, 2019). The mathematical derivation above has provided proof that the total-effect test can also erroneously reject indirect-only (IO) mediation, which includes two subtypes, directionally competitive (d-petitive) and directionally complementary (d-plementary). The following is a real-data example of the first subtype, d-petitive IO mediation, defined as a model with a statistically acknowledged \(ab\) path and a statistically inconclusive \(d\) path in the opposite direction. The example shows that this model of mediation is erroneously rejected by the total-effect test.

Table 1: Descriptive statistics of variables in the HINTS data: the sample size \(N\), the natural scale as the data were collected, the conceptual range, and the 0-1 percentage scale after the variables have been linearly transformed to the interval \([0,1]\).

| | Variable | \(N\) | Nat. Min | Nat. Max | Nat. Mean | Nat. SD | Conc. Min | Conc. Max | Pct. Min | Pct. Max | Pct. Mean | Pct. SD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Model 1 | \(Y\): Smoking Frequency | 3821 | 0 | 3 | 0.37 | 0.58 | 0 | 3 | 0 | 1 | 0.12 | 0.19 |
| | \(M\): Psychological Distress | 3810 | 0 | 4 | 0.67 | 0.97 | 0 | 4 | 0 | 1 | 0.17 | 0.24 |
| | \(X\): Caregiving | 3738 | 0 | 5 | 0.18 | 0.46 | 0 | 5 | 0 | 1 | 0.04 | 0.09 |
| | Control 1: Age | 3738 | 18 | 104 | 57.01 | 16.99 | 0 | 100 | 0.18 | 1.04 | 0.57 | 0.17 |
| | Control 2: Income | 3448 | 1 | 9 | 5.59 | 2.26 | 1 | 9 | 0 | 1 | 0.57 | 0.28 |
| | Control 3: Education | 3722 | 1 | 7 | 4.94 | 1.62 | 1 | 7 | 0 | 1 | 0.66 | 0.27 |
| Model 2 | \(Y\): Physical Activity | 3739 | 0 | 4620 | 160.09 | 273.43 | 0 | 500 | 0 | 9.24 | 0.03 | 0.06 |
| | \(M\): Psychological Distress | 3810 | 0 | 4 | 0.67 | 0.97 | 0 | 4 | 0 | 1 | 0.17 | 0.24 |
| | \(X\): Employment | 3778 | 0 | 1 | 0.50 | 0.50 | 0 | 1 | 0 | 1 | 0.50 | 0.50 |
| | Control 1: Age | 3738 | 18 | 104 | 57.01 | 16.99 | 0 | 100 | 0.18 | 1.04 | 0.57 | 0.17 |
| | Control 2: Gender | 3765 | 1 | 2 | 1.59 | 0.49 | 1 | 2 | 0 | 1 | 0.59 | 0.49 |
| | Control 3: Education | 3722 | 1 | 7 | 4.94 | 1.62 | 1 | 7 | 0 | 1 | 0.66 | 0.27 |

Note: Nat. = natural (original) scale; Conc. = conceptual range; Pct. = 0-1 percentage scale.
Table 2: Percentage coefficient (\(b_{p}\)) and percent contribution (\(c_{p}\)) to total effect (\(c\)).

| Indicator | Equation | Range | Eq. |
|---|---|---|---|
| Percentage score (\(p_{s}\)) | \(p_{s}=\frac{o_{s}-c_{min}}{c_{max}-c_{min}}\) | Observable \(-\infty<p_{s}<\infty\); conceptual \(0<p_{s}<1\) | 1 |
| Percent contribution of total effect (\(c\)) | \(c_{p}(c)=\frac{b_{p}(c)}{|b_{p}(c)|}\) | \(c_{p}(c)=1\) or \(-1\) | 2 |
| Percent contribution of indirect effect (\(ab\)) to total effect (\(c\)) | \(c_{p}(ab)=\frac{b_{p}(ab)}{|b_{p}(c)|}\) | \(-\infty<c_{p}(ab)<\infty\) | 3 |
| Percent contribution of direct & remainder effect (\(d\)) to total effect (\(c\)) | \(c_{p}(d)=\frac{b_{p}(d)}{|b_{p}(c)|}\) | \(-\infty<c_{p}(d)<\infty\) | 4 |
| Percent contribution of 1st-leg effect (\(a\)) | \(c_{p}(a)=\frac{|b_{p}(a)|}{|b_{p}(a)|+|b_{p}(b)|}\times c_{p}(ab)\) | \(-\infty<c_{p}(a)<\infty\); \(|c_{p}(a)|\leq|c_{p}(ab)|\) | 5 |
| Percent contribution of 2nd-leg effect (\(b\)) | \(c_{p}(b)=\frac{|b_{p}(b)|}{|b_{p}(a)|+|b_{p}(b)|}\times c_{p}(ab)\) | \(-\infty<c_{p}(b)<\infty\); \(|c_{p}(b)|\leq|c_{p}(ab)|\) | 6 |

Notation: \(c_{max}\) and \(c_{min}\) are the conceptual maximum and minimum; \(o_{s}\) is the original score and \(p_{s}\) the percentage score; \(a\) and \(b\) are the first and second legs of the indirect path, \(ab\) the indirect path, \(d\) the direct & remainder path, and \(c\) the total effect; \(b_{p}\) is the percentage coefficient, i.e., the regression coefficient when DV and IV are each on a percentage scale (\(p_{s}\)); \(c_{p}\) is the percent contribution to the total effect, \(c\). Note: concepts and equations adopted and adapted from Jiang et al. (2021), Liu et al. (2023), Zhao and Zhang (2014), and Zhao et al. (2016).

**Key Variables.**

**Dependent variable:** _Smoking frequency_ (SF) was measured by four items that asked respondents how often they smoke cigarettes and e-cigarettes. The composite variable ranged from 0 to 1, where 0 represents not smoking and 1 represents smoking every day.

**Mediating variable:** _Psychological distress_ (PD) was the sum of four items (Cronbach's \(\alpha=.871\)) measuring the frequency with which the respondents experienced four symptoms of psychological distress in the past two weeks: feeling little interest in doing things, being emotionally down, hopeless, and anxious. Again, it was transformed onto a \(0\sim 1\) scale, where 0 means never feeling any symptom and 1 means feeling all four symptoms every day.

**Independent variable:** _Caregiving_ (CG) was the sum of five items measuring whether the respondents currently undertook five types of caregiving: caring for children, partners, parents, relatives, and friends. The composite variable CG ranges \(0\sim 1\), where 0 indicates no caregiving responsibilities and 1 indicates undertaking all five types of caregiving.

**Control variables:** _Age_, _income_, and _education_ were reported in Table 1 and included as control variables for the mediation analysis. To simplify the presentation, the control variables were omitted from Table 3 and Figures 10 & 11, which report the outcomes.

**Mediation analysis.** Table 3 and Figure 10 summarize Model 1 findings.
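As a small worked illustration of the Table 2 equations (our sketch, using the Model 1 estimates reported in Table 3), the following reproduces the percent contributions discussed next.

```python
# Applying the percent-contribution equations of Table 2 to the Model 1
# estimates from Table 3; all variables are on 0-1 percentage scales, so
# the regression coefficients are percentage coefficients (b_p).
a, b, d = 0.1631, 0.1012, -0.0167   # b_p(a), b_p(b), b_p(d) from Table 3
ab = a * b                          # indirect path, b_p(ab)
c = 0.000014                        # total effect, b_p(c), as reported

cp_ab = ab / abs(c)                 # Eq. 3: contribution of ab to c
cp_d = d / abs(c)                   # Eq. 4: contribution of d to c
cp_a = abs(a) / (abs(a) + abs(b)) * cp_ab   # Eq. 5: 1st-leg share
cp_b = abs(b) / (abs(a) + abs(b)) * cp_ab   # Eq. 6: 2nd-leg share
print(f"c_p(ab) = {cp_ab:,.0%}, c_p(d) = {cp_d:,.0%}")
```

The printed contributions (roughly 118K% and -119K%) match the figures reported below up to rounding of the inputs.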
Table 3 and Figure 10 show a positive and statistically acknowledged indirect effect and a negative but statistically inconclusive direct-and-remainder (di-remainder) effect, producing a directionally competitive indirect-only (d-petitive IO) mediation. Figure 10 reports the contribution of each path. The \(ab\) path (\(b_{p}=.0165\), \(c_{p}\approx 117,857\%\)) and the \(d\) path (\(b_{p}=-.0167\), \(c_{p}\approx-119,286\%\)) contributed roughly equal percentages in opposite directions, leading to the nearly-zero total effect (\(c\) path, \(b_{p}=.000014\)). Because the competition between the \(ab\) and \(d\) paths was about equal (118K% vs. -119K%), they offset each other to produce the near-zero total effect. Consequently, the contribution of each part appeared huge percentage-wise. In such cases of a small total effect due to even competition, the comparative sizes may be more important than the sizes themselves.

The indirect path (\(ab\)) passed the statistical threshold (\(p<.001\)) while the direct-and-remainder (\(d\)) path failed (\(p=.6411\)), making it a directionally competitive indirect-only (d-petitive IO) mediation by our standards (Zhao et al., 2011, 2010; Hayes, 2009; Rucker et al., 2011). Nevertheless, the total effect (\(c\)) failed (\(p=.9997\)). If passing the total-effect test remains a necessary condition for establishing mediation, as some experts continue to prescribe, this model would be disqualified as mediation (Wen et al., 2004; Wen and Ye, 2014; Rose et al., 2004).

### Model 2 for D-plementary IO Mediation

In the above example, the competition between the indirect (\(ab\)) and the direct-and-remainder (\(d\)) paths was clearly a main contributor to the near-zero total effect (\(c\)) and its statistical inconclusiveness. The following is a real-data example in which the second subtype, a directionally complementary indirect-only (d-plementary IO) mediation, is erroneously rejected by the total-effect test, thereby showing that the total-effect test can erroneously reject mediation even without competition.

**Key Variables.**

**Dependent variable:** _Physical activity_ (PA) was measured by two items that asked respondents how many minutes per day and how many days per week they usually did physical activity (Xie et al., 2020; Kontos et al., 2014). The two items were multiplied to compute the weekly physical activity the respondents conducted. The composite variable was then linearly transformed to a 0-1 percentage scale, where 1 represents the highest weekly physical activity and 0 represents not conducting weekly physical activity at all.

**Mediating variable:** _Psychological distress_ (PD) in Model 2 was the same mediating variable as in Model 1.

**Independent variable:** _Employment_ (EM) measured whether the respondents were employed, recoded 1 for employed and 0 for not employed (e.g., unemployed, retired, or students).

**Control variables:** _Age_, _gender_, and _education_ were controlled in the mediation analysis but omitted from Table 3 and Figures 10 & 11, which report the outcomes.

**Mediation analysis.** Model 2 of Table 3 shows that the indirect (\(ab\)) path was positive and statistically acknowledged, while the di-remainder (\(d\)) path was also positive but failed to pass the statistical threshold (\(p=.2704\)), making it a directionally complementary indirect-only (d-plementary IO) mediation. The total effect (\(c\)) also failed the statistical test (\(p=.1169\)), demonstrating that the total-effect test can also erroneously reject this subtype of mediation.
This could be among the first documented real-data examples in which the total-effect test erroneously rejects mediation without competition, aka suppression. A question arises: given that the indirect path was statistically acknowledged, the direct-and-remainder path complemented, and the total effect was the sum of the two paths, what caused the statistical inconclusiveness of the total effect? The mathematical derivations provided above would point to the estimated variance (SE) of the \(d\) path. That is, for this subtype, the large variance of the \(d\) path relative to the other parameters should be considered the largest factor contributing to the larger-than-threshold \(p\)-value of the total effect (\(c\)).

The process-and-product analysis (PAPA) of the real-data example provides a non-mathematical illustration of the point. As shown in Figure 11, even though the \(d\) path was statistically inconclusive (\(p=.2704\)), the estimated effect size of the \(d\) path (\(b_{p}=.0243\)) more than doubled that of the statistically acknowledged \(ab\) path (\(b_{p}=.0102\), \(p<.001\)). The contribution of the \(d\) path accounted for 71% of the total effect (\(c_{p}=71\%\)). The effect size of the \(d\) path was not small relative to the other main parameters. Rather, it was the relatively large variance of the \(d\) path (SE = .022) that was a main contributor to the variance of the \(c\) path (SE = .0218), which was, in turn, a main factor contributing to the statistical inconclusiveness of the total effect (\(p=.1169\)).

### Section Summary and Discussion

The previous sections proved a mathematical theorem and provided Monte Carlo simulations showing that the total-effect test can erroneously reject indirect-only (IO) mediation. This section added two real-data examples documenting that the test did erroneously reject this type of mediation. Two models were shown, recording erroneous rejection for each of the two major subtypes of IO mediation. Model 1 is a case where the total-effect test erroneously rejected directionally competitive indirect-only (d-petitive IO) mediation. Model 2 is a case where the total-effect test erroneously rejected directionally complementary (d-plementary IO) mediation.

As the d-plementary IO model involves no competition, Model 2 shows that the erroneous rejection can occur without competition, statistical or directional. The finding has implications for data analysts who struggle to interpret d-plementary IO models with statistically inconclusive total effects. It also illustrates an implication of Theorem 2: the estimation variance, i.e., standard error, of the direct-and-remainder (\(d\)) path tends to be large relative to the other parameters of this subtype; this variance, more than the effect sizes or other parameters, may be the main factor contributing to the statistical inconclusiveness of the total effect (\(c\)).

An implication is that practicing researchers may need to focus more on effect sizes than on \(p\)-values or confidence intervals. For example, in a d-plementary IO model, the statistically inconclusive \(p\)-value for the \(c\) path is attributed more to the large variance in the \(d\) path. While the \(d\) portion of the \(c\) path cannot be acknowledged, the direction of the \(c\) path still can, due mainly to the direction and strength of the \(ab\) path.
The direction and strength of the effect are often theoretically and practically as important as, if not more important than, the variance of the effect.

Table 3: Mediation analysis results of the HINTS data. Columns 2-5: LSEs of the parameters.

| Model | \(\hat{a}\) | \(\hat{b}\) | \(\hat{d}\) | \(\hat{c}\) | \(\hat{a}\hat{b}\) and \(\hat{d}\) same sign | \(p_{a}\) | \(p_{b}\) | \(p_{d}\) | \(p_{c}\) | Mediation Type |
|---|---|---|---|---|---|---|---|---|---|---|
| Model 1 | .1631 | .1012 | -.0167 | .000014 | No | <.001 | <.001 | .6411 | .9997 | Directionally Competitive Indirect-only Mediation |
| Model 2 | -.0656 | -.1552 | .0243 | .0342 | Yes | <.001 | <.001 | .2704 | .1169 | Directionally Complementary Indirect-only Mediation |

Note: See Table S1 in the Supplementary Material for the variables in Models 1 and 2. As all variables are on 0-1 percentage scales, all regression coefficients, namely \(\hat{a}\), \(\hat{b}\), \(\hat{d}\), and \(\hat{c}\), become percentage coefficients (\(b_{p}\)).

## 8 Conclusions and Discussions

### Main Findings

This study provided mathematical theorems, Monte Carlo simulations, and two real-data examples to demonstrate that the total-effect test can and did erroneously reject indirect-only mediation. There are three and only three types of mediation: competitive, complementary, and indirect-only (Busse et al., 2016; Jiang et al., 2021; Zhao et al., 2011, 2010). While prior studies have shown that the total-effect test is superfluous for establishing complementary mediation, this study shows that the test can erroneously reject indirect-only and competitive mediation. Thus, this study completes the argument that the total-effect test should be recanted, and not just relaxed or suspended, for establishing any type of mediation, to the extent that the traditional least squares (LSE-\(F\) or LSE-Sobel) procedures are applied to calculate \(p\)-values or confidence intervals for statistical tests. A similar conclusion can be reached under the LAD-\(Z\) framework, which is verified with simulation studies.

Table 4 displays the repercussions of imposing the total-effect test for establishing mediation. Altogether, the three types of mediation and the three types of tests make up the nine cells. The table shows two possible types of outcomes, i.e., repercussions: erroneous or superfluous. The total-effect test is erroneous in seven of the nine situations and superfluous in the other two. The table also lists the studies that have contributed to the knowledge and influenced this study the most directly. Of all the nine cells, "this study" is the only entry in three (C2, B3, C3), indicating it is the only contributor. "This study" also appears in two other cells (B1 and C1). If we count the nine cells as nine pieces of knowledge, this study emerges as the sole or main contributor to five of the nine pieces regarding the repercussions of the test.

Now, this study has provided proof that the total-effect test can produce erroneous outcomes, not only for competitive mediation but also for indirect-only mediation, both directionally competitive and complementary. Every cell of Table 4 has been filled. The total-effect test harms or fails to help, whether the LSE-\(F\), LSE-Sobel, or LAD-\(Z\) test is used.
The burden of proof is now on the causal-steps procedure to show that the total-effect test is harmless and helpful for establishing any type of mediation. In the meantime, it might be appropriate to consider recanting the total-effect test for establishing mediation of all types, rather than imposing the test and then suspending or relaxing it for one special type. This means completely removing the test from the regular procedure instead of allowing exceptions in the cases of anticipated suppression or inconsistency, aka competitive mediation (Baron and Kenny, 1986; Kenny et al., 1998; Kenny, 2008, 2021; Rose et al., 2004; Wen et al., 2004; Wen and Ye, 2014).

Table 4: Repercussions of the total-effect test and contributors to knowledge.

| Type of Statistical Test | A: Complementary / Partial Mediation | B: Competitive / Inconsistent / Suppressive Mediation | C: Indirect-Only / Full Mediation |
|---|---|---|---|
| 1. LSE-\(F\) | _Superfluous_: Zhao et al. (2010); Jiang et al. (2021) | _Erroneous_: Kenny et al. (1998); Jiang et al. (2021); this study | _Erroneous_: Jiang et al. (2021); this study |
| 2. LSE-Sobel | _Superfluous_: Zhao et al. (2010); Jiang et al. (2021) | _Erroneous_: Hayes (2009); Zhao et al. (2010) | _Erroneous_: this study |
| 3. LAD-\(Z\) | _Erroneous_: Jiang et al. (2021) | _Erroneous_: this study | _Erroneous_: this study |

Notes: 1. Each cell shows one of the two possible outcomes of the total-effect test, i.e., erroneous or superfluous. Cell A1, for example, shows that, with complementary mediation using the LSE-\(F\) test, the total-effect test is superfluous, because it would always pass the statistical threshold to "establish" this type of mediation. For another example, Cell B1 shows that the total-effect test would erroneously reject competitive mediation using the LSE-\(F\) test, with "mediation" defined by the near-consensus that the \(ab\) path passes the \(p<\alpha\) statistical test (\(\alpha=0.05\) for this study). 2. This table is not meant to provide a comprehensive survey of past studies or a ranking of contributors. We instead selected a maximum of three studies per cell that have had the most direct impact on this study. 3. For Cells C2, B3, and C3, this study is the main contributor to the conclusion that the total-effect test can be erroneous for establishing mediation.

Admittedly, proofs assuming bootstrap tests are not yet available. But the available evidence, especially with the addition of this study, is overwhelmingly against the test. Given the striking imbalance between pro and con, it is time to set aside the total-effect test for establishing mediation unless and until new evidence emerges to show a benefit.

### Implications

Why and how did the total-effect test dominate so many disciplines for so long (Baron and Kenny, 1986)? Over-dichotomization of the _effect_ concept and oversimplification of the _mediation_ concept may be among the root causes. In light of these revelations, it may be time to revisit and possibly revamp the objectives, concepts, theories, and techniques of mediation analysis.
It may be time also to consider more broadly _causal dissection models_, which include moderation and curvilinearity in addition to mediation (Zhao, 2017). Accordingly, Section 7 above showcases two applications of _process-and-product analysis_ (PAPA), whose missions are 1) testing the presence of mediation, 2) identifying types of mediation, and 3) analyzing sizes of the mediation and non-mediation effects (Jiang et al., 2021; Liu et al., 2022, 2023). Establishing mediation is a part, and only a part, of the first mission of PAPA.

### Future Research

While this study completes the argument for recanting the total-effect test for establishing mediation under the LSE-\(F\), LSE-Sobel, and LAD-\(Z\) frameworks, future research may falsify or qualify the argument by testing the main theses assuming other tests such as bootstraps (Zhao et al., 2011). The more challenging task is to overcome the spiral of inertia, resist over-dichotomization and oversimplification of fundamental concepts, and foster an understanding of mediation that is more analytical and comprehensive. It will take time and luck, but above all persistence (cf. Zhao et al. (2022); Zhao et al. (2018)).

## Supplementary Material

The _Supplementary Material_ is attached at the end of this document to provide the detailed proofs of Lemmas 1-3 and Theorems 1, 3, 4, and 5, as well as the simulation results for competitive mediation under the LSE-\(F\), LSE-Sobel, and LAD-\(Z\) frameworks.

## Acknowledgments

This research was supported by the Beijing Natural Science Foundation [Z190021]; the National Natural Science Foundation of China [Grant 11931001]; grants of the University of Macau, including CRG2021-00002-ICI, ICI-RTO-0010-2021, CPG2021-00028-FSS and SRG2018-00143-FSS, ZXS PI; and the Macau Higher Education Fund, HSS-UMAC-2020-02, ZXS PI. Xinshu Zhao and Ke Deng are co-corresponding authors.

Tingxuan Han, Center for Statistical Science & Department of Industrial Engineering, Tsinghua University, Haidian, Beijing, China. E-mail: [email protected]

Luxi Zhang, Department of Communication, University of Macau, Macau, China. E-mail: [email protected]

Xinshu Zhao, Department of Communication, University of Macau, Macau, China. E-mail: [email protected]

Ke Deng, Center for Statistical Science & Department of Industrial Engineering, Tsinghua University, Haidian, Beijing, China. E-mail: [email protected]

**Supplementary Material**

This supplementary material contains two sections. Section 1 (S1) presents the detailed proofs of Lemmas 1-3 and Theorems 1, 3, 4, and 5. Section 2 (S2) shows the simulation results for competitive mediation under the LSE-\(F\), LSE-Sobel, and LAD-\(Z\) frameworks.

## S1 Technical Proofs

### Proof of Lemma 1

The invariance of the \(F\)-test was proved in Jiang et al. (2021), so we focus on the invariance of the Sobel test here. To simplify the problem, consider the following general multivariate linear regression problem:
\[Y=\beta_{0}X_{0}+\beta_{1}X_{1}+\cdots+\beta_{p}X_{p}+\varepsilon\]
with \(n\) observed data points \(\{(X_{i0},X_{i1},\ldots,X_{ip},Y_{i})\}\). Define the vector of coefficients \(\boldsymbol{\beta}=(\beta_{0},\beta_{1},\ldots,\beta_{p})^{\prime}\), the design matrix \(\boldsymbol{X}=(\boldsymbol{X}_{0},\boldsymbol{X}_{1},\ldots,\boldsymbol{X}_{p})\), where \(\boldsymbol{X}_{j}=(X_{1j},\ldots,X_{nj})^{\prime}\), and the response vector \(\boldsymbol{Y}=(Y_{1},\ldots,Y_{n})^{\prime}\). Denote the squared 2-norm of a vector as \(\|\cdot\|_{2}^{2}\).
Then the least squares estimator of \(\boldsymbol{\beta}\) takes the form
\[\hat{\boldsymbol{\beta}}=(\boldsymbol{X}^{\prime}\boldsymbol{X})^{-1}\boldsymbol{X}^{\prime}\boldsymbol{Y},\]
and the estimated covariance is
\[\text{cov}(\hat{\boldsymbol{\beta}})=s^{2}(\boldsymbol{X}^{\prime}\boldsymbol{X})^{-1},\]
where \(s^{2}=\|\boldsymbol{Y}-\boldsymbol{Y}_{\boldsymbol{X}}\|_{2}^{2}/(n-p-1)\) and \(\boldsymbol{Y}_{\boldsymbol{X}}\) represents the projection of \(\boldsymbol{Y}\) onto the space spanned by \(\boldsymbol{X}\). As the Sobel test is based on \(\hat{\boldsymbol{\beta}}\) and \(\text{cov}(\hat{\boldsymbol{\beta}})\), it suffices to show that for an orthogonal matrix \(\Gamma\) and a constant \(\gamma\), the LSE and the estimated covariance of the coefficients in the regression problem with the transformed data matrix \((\tilde{\boldsymbol{X}}_{0},\ldots,\tilde{\boldsymbol{X}}_{p},\tilde{\boldsymbol{Y}})=\gamma\Gamma(\boldsymbol{X}_{0},\ldots,\boldsymbol{X}_{p},\boldsymbol{Y})\) remain unchanged.

First, the invariance of the LSE is easy to see, as
\[\tilde{\boldsymbol{\beta}}=(\tilde{\boldsymbol{X}}^{\prime}\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^{\prime}\tilde{\boldsymbol{Y}}=(\gamma^{2}\boldsymbol{X}^{\prime}\Gamma^{\prime}\Gamma\boldsymbol{X})^{-1}\gamma^{2}\boldsymbol{X}^{\prime}\Gamma^{\prime}\Gamma\boldsymbol{Y}=(\boldsymbol{X}^{\prime}\boldsymbol{X})^{-1}\boldsymbol{X}^{\prime}\boldsymbol{Y}=\hat{\boldsymbol{\beta}}.\]
Moreover, since
\[\|\tilde{\boldsymbol{Y}}-\tilde{\boldsymbol{Y}}_{\tilde{\boldsymbol{X}}}\|_{2}^{2}=\|\gamma\Gamma(\boldsymbol{Y}-\boldsymbol{Y}_{\boldsymbol{X}})\|_{2}^{2}=\gamma^{2}\|\boldsymbol{Y}-\boldsymbol{Y}_{\boldsymbol{X}}\|_{2}^{2}\]
and
\[(\tilde{\boldsymbol{X}}^{\prime}\tilde{\boldsymbol{X}})^{-1}=(\gamma^{2}\boldsymbol{X}^{\prime}\Gamma^{\prime}\Gamma\boldsymbol{X})^{-1}=\gamma^{-2}(\boldsymbol{X}^{\prime}\boldsymbol{X})^{-1},\]
we have
\[\text{cov}(\tilde{\boldsymbol{\beta}})=\frac{\|\tilde{\boldsymbol{Y}}-\tilde{\boldsymbol{Y}}_{\tilde{\boldsymbol{X}}}\|_{2}^{2}}{n-p-1}\cdot(\tilde{\boldsymbol{X}}^{\prime}\tilde{\boldsymbol{X}})^{-1}=s^{2}(\boldsymbol{X}^{\prime}\boldsymbol{X})^{-1}=\text{cov}(\hat{\boldsymbol{\beta}}).\]
In summary, the LSE and the estimated covariance are invariant under scale and orthogonal transformations. Hence, the invariance of the LSE-Sobel test holds as well.

### Proof of Lemma 2

By definition, the test statistic in the Sobel test is
\[S=\frac{\hat{a}\hat{b}}{\left(\hat{a}^{2}\text{Var}(\hat{b})+\hat{b}^{2}\text{Var}(\hat{a})\right)^{1/2}}=\frac{T_{a}T_{b}}{\sqrt{T_{a}^{2}+T_{b}^{2}}},\]
where \(T_{a}^{2}=\hat{a}^{2}/\text{Var}(\hat{a})\) and \(T_{b}^{2}=\hat{b}^{2}/\text{Var}(\hat{b})\). Based on the transformed data matrix \(\tilde{\mathcal{D}}\), we have
\[\hat{a}=\frac{m_{2}}{x_{2}},\ \ \hat{b}=\frac{y_{3}}{m_{3}},\ \ \text{Var}(\hat{a})=\frac{m_{3}^{2}}{(n-2)x_{2}^{2}},\ \ \text{Var}(\hat{b})=\frac{y_{4}^{2}}{(n-3)m_{3}^{2}}.\]
Hence, \(T_{a}=\sqrt{n-2}\cdot m_{2}/m_{3}\), \(T_{b}=\sqrt{n-3}\cdot y_{3}/y_{4}\), and
\[|S|=\frac{1}{(1/T_{a}^{2}+1/T_{b}^{2})^{1/2}}=\frac{1}{\{1/[(n-2)r^{2}]+1/[(n-3)p^{2}]\}^{1/2}}.\]
As the sample size \(n\to\infty\), \(S\) follows the standard normal distribution asymptotically under the null hypothesis. Hence, the rejection region for \(a\times b\) is
\[\mathcal{R}_{a\times b}(\alpha)=\left\{|S|>z_{\alpha/2}\right\}=\left\{\frac{1}{(n-2)r^{2}}+\frac{1}{(n-3)p^{2}}<\frac{1}{z_{\alpha/2}^{2}}\right\}.\]

### Proof of Theorem 1

As the geometric plots of the rejection regions have been shown in Jiang et al. (2021), we only give the mathematical analysis here. Let \(\mathcal{R}_{\beta}(\alpha\mid r)\) be the intersection of \(\mathcal{R}_{\beta}(\alpha)\) and the \(p\)-\(q\) plane \(\mathcal{P}_{r}\) for all \(\beta\in\{a,b,d,c,a\times b\}\).
It is straightforward to see that \(\mathcal{R}_{\beta}(\alpha)=\bigcup_{r}\mathcal{R}_{\beta}(\alpha\mid r)\) and
\[\bar{\mathcal{R}}_{c}(\alpha)\cap\mathcal{R}_{a\times b}(\alpha)\cap\mathcal{R}_{d}(\alpha)=\bigcup_{r>0}\left\{\bar{\mathcal{R}}_{c}(\alpha\mid r)\cap\mathcal{R}_{a\times b}(\alpha\mid r)\cap\mathcal{R}_{d}(\alpha\mid r)\right\}.\]
As implied by Lemma 2, \(\mathcal{R}_{a\times b}(\alpha\mid r)=\{p>p_{0}(r)\}\), where
\[\frac{1}{(n-2)r^{2}}+\frac{1}{(n-3)p_{0}^{2}(r)}=\frac{1}{z_{\alpha/2}^{2}},\ \ (n-2)r^{2}>z_{\alpha/2}^{2}. \tag{S1.1}\]
By Jiang et al. (2021), \(\hat{a}\hat{b}\hat{d}>0\) implies \(\hat{a}\hat{b}\hat{c}>0\) and \(\mathcal{R}_{d}(\alpha\mid r)=\{q>rp+p_{n,\alpha}(1+r^{2})^{1/2}\}\). Then for \(r^{2}>z_{\alpha/2}^{2}/(n-2)\), the intersection of \(\mathcal{R}_{a\times b}(\alpha\mid r)\), \(\mathcal{R}_{d}(\alpha\mid r)\), and \(\bar{\mathcal{R}}_{c}(\alpha\mid r)\) is
\[\left\{p>p_{0}(r),rp+p_{n,\alpha}(1+r^{2})^{1/2}<q\leq r_{n,\alpha}(p^{2}+1)^{1/2}\right\}.\]
Hence, it suffices to show that as \(n\to\infty\),
\[P\left(\mathcal{D}\in\bigcup_{r>r_{0}(n)}\left\{p>p_{0}(r),rp+p_{n,\alpha}(1+r^{2})^{1/2}<q\leq r_{n,\alpha}(p^{2}+1)^{1/2}\right\}\right)\to 0, \tag{S1.2}\]
where \(r_{0}(n)=z_{\alpha/2}/\sqrt{n-2}\).

Since the probability density function of the \(F\)-distribution with degrees of freedom \((1,n)\) satisfies
\[f_{F_{1,n}}(x)=\frac{\Gamma(n/2+1/2)}{\sqrt{\pi}\Gamma(n/2)}\cdot\frac{1}{\sqrt{nx}}\left(1+\frac{x}{n}\right)^{-\frac{n+1}{2}}\to\frac{1}{\sqrt{2\pi x}}e^{-x/2}\]
as \(n\to\infty\), we have \(\lambda_{1,n-2}(\alpha)\to\chi_{1}^{2}(\alpha)\), where \(\chi_{1}^{2}(\alpha)\) is the upper \(\alpha\)-quantile of the \(\chi_{1}^{2}\) distribution. Hence, \(\sqrt{\lambda_{1,n-2}(\alpha)}\to\sqrt{\chi_{1}^{2}(\alpha)}\equiv z_{\alpha/2}\), i.e., \(\sqrt{\lambda_{1,n-2}(\alpha)}\to z_{\alpha/2}=\sqrt{n-2}\cdot r_{0}(n)\). Therefore, \(\{r:r_{0}(n)<r<r_{n,\alpha}\}\to\emptyset\). According to Jiang et al. (2021),
\[\bigcup_{r\geq r_{n,\alpha}}\left\{p>0:rp+p_{n,\alpha}(1+r^{2})^{1/2}<r_{n,\alpha}(p^{2}+1)^{1/2}\right\}=\emptyset.\]
Hence,
\[P\left(\mathcal{D}\in\bigcup_{r>r_{0}(n)}\left\{p>p_{0}(r),rp+p_{n,\alpha}(1+r^{2})^{1/2}<r_{n,\alpha}(p^{2}+1)^{1/2}\right\}\right)=P\left(\mathcal{D}\in\bigcup_{r_{0}(n)<r<r_{n,\alpha}}\left\{p>p_{0}(r),rp+p_{n,\alpha}(1+r^{2})^{1/2}<r_{n,\alpha}(p^{2}+1)^{1/2}\right\}\right)\to 0,\]
and the argument (S1.2) holds.

### Proof of Lemma 3

By Lemma 2,
\[\hat{a}\hat{b}\hat{d}=\frac{m_{2}y_{3}(m_{3}y_{2}-m_{2}y_{3})}{x_{2}^{2}m_{3}^{2}},\ \ \hat{a}\hat{b}\hat{c}=\frac{m_{2}y_{2}y_{3}}{x_{2}^{2}m_{3}}.\]
Since \(x_{2}>0\) and \(m_{3}>0\), the conditions \(\hat{a}\hat{b}\hat{d}<0\) and \(\hat{a}\hat{b}\hat{c}\geq 0\) imply that
\[m_{2}m_{3}y_{2}y_{3}<m_{2}^{2}y_{3}^{2}\text{ and }m_{2}y_{2}y_{3}\geq 0.\]
Hence, \(m_{2}m_{3}y_{2}y_{3}\geq 0\) and \(|m_{3}y_{2}|<|m_{2}y_{3}|\), which is equivalent to \(q<rp\) as \(y_{4}>0\).

### Proof of Theorem 3

We show that there exists \(N>0\) such that for any \(n>N\), we can find some \(r_{n}>0\) such that \(\bar{\mathcal{R}}_{c}(\alpha|r_{n})\cap\mathcal{R}_{a\times b}(\alpha|r_{n})\cap\bar{\mathcal{R}}_{d}(\alpha|r_{n})\neq\emptyset\).
When \(\hat{a}\hat{b}\hat{c}\geq 0\), \(\mathcal{R}_{a\times b}(\alpha\mid r)\cap\bar{\mathcal{R}}_{c}(\alpha\mid r)\cap\bar{\mathcal{R}}_{d}(\alpha\mid r)\) takes the form \[\left\{\max\left\{rp-p_{n,\alpha}(r^{2}+1)^{1/2},0\right\}\leq q\leq\min\{r_{n,\alpha}(p^{2}+1)^{1/2},rp+p_{n,\alpha}(r^{2}+1)^{1/2}\},\ p>p_{0}(r)\right\},\] and when \(\hat{a}\hat{b}\hat{c}<0\), the above intersection is \[\left\{0\leq q\leq\min\left\{r_{n,\alpha}(p^{2}+1)^{1/2},-rp+p_{n,\alpha}(r^{2}+1)^{1/2}\right\},\ p>p_{0}(r)\right\}.\] It is easy to see that \(\bar{\mathcal{R}}_{c}(\alpha|r)\cap\mathcal{R}_{a\times b}(\alpha|r)\cap\bar{\mathcal{R}}_{d}(\alpha|r)\neq\emptyset\) holds if \(p_{0}(r)<p_{n,\alpha}(r^{2}+1)^{1/2}/r.\) As the definition of \(p_{0}(r)\) implies \(r>z_{\alpha/2}/\sqrt{n-2}\), it suffices to show that there exists \(N>0\) such that for any \(n>N\), we can find some \(r_{n}>z_{\alpha/2}/\sqrt{n-2}\) such that \[p_{0}(r_{n})<p_{n,\alpha}(r_{n}^{2}+1)^{1/2}/r_{n}.\] (S1.3) Let \(r_{n}^{2}=2z_{\alpha/2}^{2}/(n-2)\); then Eq. (S1.1) implies \(p_{0}^{2}(r_{n})=2z_{\alpha/2}^{2}/(n-3)\). Since \(\sqrt{n-3}\cdot p_{n,\alpha}\to z_{\alpha/2}\) and \(r_{n}^{2}\to 0\), there exists \(N>0\) such that for \(n>N\), we have \(\sqrt{n-3}\cdot p_{n,\alpha}\geq z_{\alpha/2}/2\) and \(r_{n}<1/3\). Therefore, when \(n>N\), we have \[\sqrt{n-3}\cdot p_{n,\alpha}(r_{n}^{2}+1)^{1/2}/r_{n}\geq\sqrt{10}\cdot z_{\alpha/2}/2>\sqrt{2}z_{\alpha/2}=\sqrt{n-3}\cdot p_{0}(r_{n}).\] Above all, for \(n>N\) and \(r_{n}^{2}=2z_{\alpha/2}^{2}/(n-2)\), we have \(\bar{\mathcal{R}}_{c}(\alpha|r_{n})\cap\mathcal{R}_{a\times b}(\alpha|r_{n})\cap\bar{\mathcal{R}}_{d}(\alpha|r_{n})\neq\emptyset\), and thus \(\bar{\mathcal{R}}_{c}(\alpha)\cap\mathcal{R}_{a\times b}(\alpha)\cap\bar{\mathcal{R}}_{d}(\alpha)\neq\emptyset\).

### Proof of Theorem 4

\(\mathcal{R}_{a}(\alpha)\cap\mathcal{R}_{b}(\alpha)\cap\mathcal{R}_{d}(\alpha)\cap\bar{\mathcal{R}}_{c}(\alpha)\neq\emptyset\) can be equivalently expressed as \(\mathcal{R}_{a}(\alpha\mid r)\cap\mathcal{R}_{b}(\alpha\mid r)\cap\mathcal{R}_{d}(\alpha\mid r)\cap\bar{\mathcal{R}}_{c}(\alpha\mid r)\neq\emptyset\) for some \(r>r_{n,\alpha}\), since \(\mathcal{R}_{a}(\alpha\mid r)=\mathcal{P}_{r}\) when \(r>r_{n,\alpha}\) and \(\mathcal{R}_{a}(\alpha\mid r)=\emptyset\) when \(r\leq r_{n,\alpha}\). By definition, \(\mathcal{R}_{b}(\alpha\mid r)=\{p>p_{n,\alpha}\}\), and \(\bar{\mathcal{R}}_{c}(\alpha\mid r)=\left\{p\geq 0,0\leq q\leq r_{n,\alpha}(p^{2}+1)^{1/2}\right\}\). When \(\hat{a}\hat{b}\hat{c}<0\), \[\mathcal{R}_{d}(\alpha\mid r)=\left\{p\geq 0,q>-rp+p_{n,\alpha}(r^{2}+1)^{1/2}\right\},\] and for \(\hat{a}\hat{b}\hat{c}\geq 0\), Lemma 3 implies that \[\mathcal{R}_{d}(\alpha\mid r)=\left\{p\geq 0,0\leq q<rp-p_{n,\alpha}(r^{2}+1)^{1/2}\right\}.\] The geometry of \(\mathcal{R}_{a}(\alpha\mid r)\cap\mathcal{R}_{b}(\alpha\mid r)\cap\mathcal{R}_{d}(\alpha\mid r)\cap\bar{\mathcal{R}}_{c}(\alpha\mid r)\) is demonstrated in Figure S1.1. When \(\hat{a}\hat{b}\hat{c}\geq 0\), it suffices to show that \(r_{n,\alpha}(p^{2}+1)^{1/2}>\max\{0,rp-p_{n,\alpha}(r^{2}+1)^{1/2}\}\) for some \(p>p_{n,\alpha}\), which always holds for \(p\in(p_{n,\alpha},p_{n,\alpha}(r^{2}+1)^{1/2}/r)\). When \(\hat{a}\hat{b}\hat{c}<0\), it suffices to show that \(r_{n,\alpha}(p^{2}+1)^{1/2}>0\) for some \(p>p_{n,\alpha}\), which always holds. We thereby conclude the proof of Theorem 4.
As shown in Figure S1.1, the intersection of \(\mathcal{R}_{a\times b}(\alpha)\), \(\mathcal{R}_{d}(\alpha)\) and \(\bar{\mathcal{R}}_{c}(\alpha)\) under \(\hat{a}\times\hat{b}\times\hat{c}\geq 0\) is a subset of that under \(\hat{a}\times\hat{b}\times\hat{c}<0\), which implies the total-effect test is more likely to be erroneous for establishing competitive mediation when \(\hat{a}\times\hat{b}\times\hat{c}<0\).

### Proof of Theorem 5

We show that \(\bar{\mathcal{R}}_{c}(\alpha|r)\cap\mathcal{R}_{a\times b}(\alpha|r)\cap\mathcal{R}_{d}(\alpha|r)\neq\emptyset\) for some \(r\) when \(\hat{a}\hat{b}\hat{d}<0\). We only consider the case where \(r>z_{\alpha/2}/\sqrt{n-2}\), since \(\mathcal{R}_{a\times b}(\alpha|r)=\emptyset\) when this condition does not hold. The expression of \(\mathcal{R}_{d}(\alpha|r)\) is the same as that under the LSE-\(F\) framework. As \(\bar{\mathcal{R}}_{c}(\alpha|r)=\{p\geq 0,q\leq r_{n,\alpha}(p^{2}+1)^{1/2}\}\) and \(\mathcal{R}_{a\times b}(\alpha|r)=\{p>p_{0}(r)\}\), the observation that \(\bar{\mathcal{R}}_{c}(\alpha|r)\cap\mathcal{R}_{a\times b}(\alpha|r)\cap\mathcal{R}_{d}(\alpha|r)\neq\emptyset\) is trivial.
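The inequality (S1.3) can also be checked numerically. The sketch below is our own and is not part of the supplement; it assumes \(p_{n,\alpha}=\sqrt{\lambda_{n-3}(\alpha)/(n-3)}\), where \(\lambda_{m}(\alpha)\) denotes the upper-\(\alpha\) critical value of the \(F_{1,m}\) distribution, consistent with the limit \(\sqrt{n-3}\cdot p_{n,\alpha}\to z_{\alpha/2}\) used in the proof of Theorem 3.

```python
# Numerical sanity check of (S1.3) -- a sketch, not part of the formal proof.
# Assumption: p_{n,alpha} = sqrt(lambda_{n-3}(alpha)/(n-3)), with lambda_m(alpha)
# the upper-alpha critical value of the F(1, m) distribution.
import numpy as np
from scipy.stats import f, norm

def satisfies_S13(n, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)                      # z_{alpha/2}
    p_na = np.sqrt(f.ppf(1 - alpha, 1, n - 3) / (n - 3))
    r_n = np.sqrt(2 * z**2 / (n - 2))                # choice made in the proof
    p0_rn = np.sqrt(2 * z**2 / (n - 3))              # implied by Eq. (S1.1)
    return p0_rn < p_na * np.sqrt(r_n**2 + 1) / r_n  # condition (S1.3)

# Under this assumption the condition holds throughout a wide range of n:
print(all(satisfies_S13(n) for n in range(10, 2001)))  # True
```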
### Simulations for Competitive Mediation

This section presents the results of Monte Carlo simulations for competitive mediation under the LSE-\(F\), LSE-Sobel and LAD-\(Z\) frameworks. To validate Theorem 4, we generate the simulated data from models (2.1) and (2.2) as follows: \[n\sim\text{Unif}(\{10,\ldots,100\}),\quad(i_{M},i_{Y},a,b,d)\sim\text{Unif}[-1,1]^{5},\] \[X\sim N(0,1),\quad\sigma_{M}^{2}\text{ and }\sigma_{Y}^{2}\sim\text{Inv-Gamma}(1,1).\] A total of \(10{,}000\) independent datasets of different sample sizes were simulated. For each simulated dataset, the LSEs \((\hat{a},\hat{b},\hat{c},\hat{d})\) and their \(p\)-values \((p_{a},p_{b},p_{c},p_{d})\) under the LSE-\(F\) framework are calculated. If Theorem 4 holds, then for any fixed \(\alpha\in(0,1)\), we should observe \(\{p_{c}\geq\alpha\}\cap\{\hat{a}\hat{b}\hat{d}<0\}\neq\emptyset\) among datasets with \(\max(p_{a},p_{b},p_{d})<\alpha\). Figure S2.2 (A) checks the \(p\)-value condition when \(\alpha=0.1\) by representing each simulated dataset as one point in a two-dimensional space, with \(\max(p_{a},p_{b},p_{d})\) and \(p_{c}\) as the \(X\)- and \(Y\)-axes, respectively. The solid circles stand for datasets satisfying \(\max(p_{a},p_{b},p_{d})<\alpha\) and \(\hat{a}\hat{b}\hat{d}<0\), gray crossings represent datasets such that \(\max(p_{a},p_{b},p_{d})\geq\alpha\) or \(\hat{a}\hat{b}\hat{d}\geq 0\), and the dark gray dashed line represents \(p_{c}=\alpha\). Solid circles above the line \(p_{c}=\alpha\) form an empirical version of the set \(\{p_{c}\geq\alpha\}\). We can see that when \(\max(p_{a},p_{b},p_{d})<\alpha=0.1\), \(\{p_{c}\geq\alpha\}\cap\{\hat{a}\hat{b}\hat{d}<0\}\) is not empty. To check the theoretical result for different values of \(\alpha\), the proportion of datasets satisfying \(p_{c}\geq\alpha\), among those with \(\max(p_{a},p_{b},p_{d})<\alpha\) and \(\hat{a}\hat{b}\hat{d}<0\) under the LSE-\(F\) framework, is shown in Figure S2.2 (B) for 1000 evenly spaced values of \(\alpha\) in \((0.01,0.99)\). It implies that the total-effect test will erroneously reject competitive mediation with quite large probability. A similar analysis can be conducted under the LSE-Sobel and LAD-\(Z\) frameworks to test whether an analogous result holds for these frameworks for establishing competitive mediation. For the LSE-Sobel framework, we calculated the LSEs \((\hat{a},\hat{b},\hat{c},\hat{d})\) for the same group of simulated datasets; the \(p\)-values \((p_{c},p_{d})\) of the \(F\)-test as well as the \(p\)-value \(p_{ab}\) of the Sobel test for \(a\times b\) are calculated. If a similar result holds for the LSE-Sobel framework, we would expect to see, for any fixed \(\alpha\in(0,1)\), that when \(\max(p_{ab},p_{d})<\alpha\) we have \(\{p_{c}\geq\alpha\}\cap\{\hat{a}\hat{b}\hat{d}<0\}\neq\emptyset\), which is supported by the results in Figure S2.3. For the LAD-\(Z\) framework, the LAD estimators \(\hat{a},\hat{b},\hat{d},\hat{c}\) as well as their corresponding \(p\)-values under the \(Z\)-test are calculated. Similarly, if \(\{p_{c}\geq\alpha\}\cap\{\hat{a}\hat{b}\hat{d}<0\}\neq\emptyset\) for any fixed \(\alpha\in(0,1)\) when \(\max(p_{a},p_{b},p_{d})<\alpha\), the same conclusion can be reached under the LAD-\(Z\) framework. Results are shown in Figure S2.4.
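For reference, the LSE-\(F\) part of the simulation above can be reproduced with a short script. The sketch below assumes that models (2.1) and (2.2) take the standard mediation form \(M=i_{M}+aX+\varepsilon_{M}\) and \(Y=i_{Y}+bM+dX+\varepsilon_{Y}\), with the total-effect regression \(Y=i+cX+\varepsilon\); these forms are our reading of the main text and are stated here as assumptions.

```python
# Minimal sketch of the Monte Carlo check of Theorem 4 under the LSE-F
# framework; model forms are assumed as stated in the lead-in above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ols(X, y):
    """OLS fit; returns coefficients and two-sided p-values (t^2 = F tests)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, 2 * stats.t.sf(np.abs(beta / se), n - k)

alpha, n_sig, n_pc_ge = 0.1, 0, 0
for _ in range(10_000):
    n = int(rng.integers(10, 101))
    iM, iY, a, b, d = rng.uniform(-1, 1, 5)
    sM, sY = np.sqrt(1 / rng.gamma(1.0, 1.0, 2))   # Inv-Gamma(1,1) variances
    X = rng.standard_normal(n)
    M = iM + a * X + sM * rng.standard_normal(n)
    Y = iY + b * M + d * X + sY * rng.standard_normal(n)
    one = np.ones(n)
    bM, pM = ols(np.column_stack([one, X]), M)      # a-hat = bM[1], p_a = pM[1]
    bY, pY = ols(np.column_stack([one, M, X]), Y)   # b-hat, d-hat and p_b, p_d
    bC, pC = ols(np.column_stack([one, X]), Y)      # c-hat = bC[1], p_c = pC[1]
    if max(pM[1], pY[1], pY[2]) < alpha and bM[1] * bY[1] * bY[2] < 0:
        n_sig += 1
        n_pc_ge += pC[1] >= alpha
print(f"{n_pc_ge} of {n_sig} qualifying datasets have p_c >= alpha")
```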
2308.16807
Universality of Loop Corrected Soft Theorems in 4d
In \cite{1808.03288}, logarithmic correction to subleading soft photon and soft graviton theorems have been derived in four spacetime dimensions from the ratio of IR-finite S-matrices. This has been achieved after factoring out IR-divergent components from the traditional electromagnetic and gravitational S-matrices using Grammer-Yennie prescription. Although the loop corrected subleading soft theorems are derived from one-loop scattering amplitudes involving scalar particles in a minimally coupled theory with scalar contact interaction, it has been conjectured that the soft factors are universal (theory independent) and one-loop exact (don't receive corrections from higher loops). This paper extends the analysis conducted in \cite{1808.03288} to encompass general spinning particle scattering with non-minimal couplings permitted by gauge invariance and general coordinate invariance. By re-deriving the $\ln\omega$ soft factors in this generic setup, we establish their universal nature. Furthermore, we summarize the results of loop corrected soft photon and graviton theorems up to sub-subleading order, which follows from the analysis of one and two loop QED and quantum gravity S-matrices. While the classical versions of these soft factors have already been derived in the literature, we put forth conjectures regarding the quantum soft factors and outline potential strategies for their derivation.
Hare Krishna, Biswajit Sahoo
2023-08-31T15:30:45Z
http://arxiv.org/abs/2308.16807v2
# Universality of Loop Corrected Soft Theorems in 4d

###### Abstract

In [1], logarithmic corrections to the subleading soft photon and soft graviton theorems have been derived in four spacetime dimensions from the ratio of IR-finite S-matrices. This has been achieved after factoring out IR-divergent components from the traditional electromagnetic and gravitational S-matrices using the Grammer-Yennie prescription. Although the loop corrected subleading soft theorems are derived from one-loop scattering amplitudes involving scalar particles in a minimally coupled theory with scalar contact interaction, it has been conjectured that the soft factors are universal (theory independent) and one-loop exact (don't receive corrections from higher loops). This paper extends the analysis conducted in [1] to encompass general spinning particle scattering with non-minimal couplings permitted by gauge invariance and general coordinate invariance. By re-deriving the \(\ln\omega\) soft factors in this generic setup, we establish their universal nature. Furthermore, we summarize the results of loop corrected soft photon and graviton theorems up to sub-subleading order, which follow from the analysis of one and two loop QED and quantum gravity S-matrices. While the classical versions of these soft factors have already been derived in the literature, we put forth conjectures regarding the quantum soft factors and outline potential strategies for their derivation.

## 1 Introduction and Result

The soft theorem examines the infrared properties of a scattering amplitude involving a low (soft) momentum photon or graviton, in addition to other asymptotic particles. It establishes a relationship between this amplitude and the one without the low momentum photon or graviton. In a series of papers [2; 3; 4; 5; 6], it has been established that tree level soft photon and soft graviton theorems in four spacetime dimensions (\(D=4\)) are just the manifestation of gauge invariance and general coordinate invariance at the scattering amplitude level. Soft factorisation alone does not provide profound insights into the ultraviolet completion of QED or quantum gravity theory, nor does it impose additional constraints on the quantum theory beyond what has already been achieved by gauge invariance and general coordinate invariance.1 Instead, given an effective field theory (EFT) action with potential non-minimal interactions permitted by gauge invariance or general coordinate invariance, one can systematically compute the non-universal soft factors up to a certain order in the soft momentum expansion [3; 5; 8]. In the past, there were challenges in obtaining loop corrections to the subleading soft photon and graviton theorems in \(D=4\) due to the presence of infrared divergences in traditional scattering amplitudes [9; 10]. However, this issue has been successfully resolved in [1] by directly working in \(D=4\) and carefully analyzing the possible non-analytic structures around \(\omega=0\). In that work, the subleading soft photon and soft graviton theorems have been derived at the one-loop level, taking into account both electromagnetic and gravitational interactions. Interestingly, it has been observed that the subleading soft factors emerge at order \(\ln\omega\), where \(\omega\) represents the energy of the soft photon or graviton. The loop corrected subleading soft factor dominates the tree level subleading soft factor, which is of order \(\omega^{0}\), as the energy approaches zero (\(\omega\to 0\)).
The existence of the \(\ln\omega\) soft graviton theorem has been confirmed in [11; 12], in perfect agreement with the result of [1] in the massless limit.

Footnote 1: We would like to emphasize that the Ward identities relating these soft theorems to asymptotic symmetries also do not provide any additional constraints on the quantum theory of electromagnetism or gravity beyond what has already been achieved by gauge invariance and general coordinate invariance [7]. Instead, the Ward identities are simply the manifestation of the equations of motion for low-frequency photon or graviton fields at the level of scattering amplitudes. Here the soft charge is represented as an integral over the radiation mode of the gauge or graviton field, and the hard charge as an integral over the inverse propagator operating on the current or stress tensor associated with the finite energy scattered particles. The asymptotic symmetry parameters (large gauge transformations and asymptotic radial modes of bulk diffeomorphisms) on the celestial sphere are just smearing functions that appear in both the integrands of the soft and hard charge expressions.

The soft graviton theorem results offer an intriguing application in deriving low-frequency gravitational waveforms and gravitational memory for astrophysical scattering events from their classical limit [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. In a typical classical gravitational scattering scenario, one provides initial scattering data such as masses, velocities, sizes, intrinsic angular momenta, and impact parameters of the scattered objects, along with the specified interaction among them. The goal is to determine the gravitational waveform as an output. However, the classical limit of the universal soft graviton theorem directly provides the low-frequency gravitational waveform in terms of both the initial and final scattering data, regardless of the details of the interaction involved in the scattering process. This suggests a novel approach for deriving low-frequency and late-time gravitational waveforms by directly studying classical gravitational scattering processes with both initial and final scattering data; this statement is known as the classical soft graviton theorem. This approach has been successfully pursued in [15; 16; 21; 23]. The derivation of the classical soft graviton theorem readily extends to higher orders in the low-frequency expansion of the gravitational waveform, and numerous higher-order terms have been derived. However, deriving their quantum counterparts from the analysis of scattering amplitudes proves to be challenging in general, as discussed in the main body of the paper. Consider a gravitational scattering amplitude involving \(N\) finite energy particles (hard particles) with momenta, spins and polarizations \(\{p_{i},\Sigma_{i},\epsilon_{i}\}\) for \(i=1,2,\cdots,N\) (\(N\geq 4\)) and one low-energy (soft) graviton2 with momentum and polarization \(k,\varepsilon\), and denote this scattering amplitude by \(\mathcal{A}^{(N+1)}\big{(}\{\epsilon_{i},p_{i},\Sigma_{i}\},\varepsilon,k\big{)}\). We follow the convention that all the particles are ingoing, so if some of the particles are outgoing then we have to flip the sign of the four momenta for those particles. We treat the soft graviton as outgoing with energy \(\omega\), so according to our convention \(k^{\mu}=-\omega\mathbf{n}^{\mu}\), with \(\mathbf{n}^{\mu}\) being the null vector whose spatial part denotes the direction of soft graviton emission.
Now the soft expansion of this (\(N+1\)) particle amplitude takes the following form

Footnote 2: The graviton is the particle created by operating the metric fluctuation field \(h_{\mu\nu}(x)\) on the vacuum, where the classical metric fluctuation is defined by \(h_{\mu\nu}(x)=\frac{1}{2\sqrt{8\pi G}}\left(g_{\mu\nu}(x)-\eta_{\mu\nu}\right)\).

\[\mathcal{A}^{(N+1)}\big{(}\{\epsilon_{i},p_{i},\Sigma_{i}\},\varepsilon,k\big{)}=\sqrt{8\pi G}\sum_{i=1}^{N}\epsilon_{i,\alpha}\left[\left(\mathbb{S}_{\text{tree}}^{gr}\right)_{\beta}^{\ \alpha}+\left(\mathbb{S}_{\text{1-loop}}^{gr}\right)_{\beta}^{\ \alpha}+\left(\mathbb{S}_{\text{2-loop}}^{gr}\right)_{\beta}^{\ \alpha}+\cdots\right]\times\mathcal{A}^{(N)\beta}_{(i)}\left(p_{i}\right)\, \tag{1}\]

where \(\mathcal{A}^{(N)\beta}_{(i)}(p_{i})\) represents the \(i\)'th particle polarisation (\(\epsilon_{i\beta}\)) stripped \(N\)-particle amplitude \(\mathcal{A}^{(N)}\big{(}\{\epsilon_{i},p_{i},\Sigma_{i}\}\big{)}\), which is defined by the following relation \[\mathcal{A}^{(N)}\big{(}\{\epsilon_{i},p_{i},\Sigma_{i}\}\big{)}\equiv\epsilon_{i\beta}\ \mathcal{A}^{(N)\beta}_{(i)}\left(p_{i}\right). \tag{2}\] In (1), the expression of the tree level "soft factor"3 for single soft graviton emission reads [2; 3; 8; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]

Footnote 3: The actual tree level soft factor should be thought of as the expression (3) with a sum over hard particles \(i=1,2,\cdots,N\). Throughout the whole paper, we follow the same terminology "soft factor", referring to the soft factor expression without the hard particle sum.

\[\left(\mathbb{S}_{\text{tree}}^{gr}\right)_{\beta}^{\ \alpha}=\frac{\varepsilon_{\mu\nu}p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot k}\delta_{\beta}^{\alpha}+\frac{\varepsilon_{\mu\nu}p_{i}^{\mu}k_{\rho}}{p_{i}\cdot k}\Bigg{(}\Big{\{}p_{i}^{\nu}\frac{\partial}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial}{\partial p_{i\nu}}\Big{\}}\delta_{\beta}^{\alpha}+\left(\Sigma_{i}^{\rho\nu}\right)_{\beta}^{\ \alpha}\Bigg{)}+\mathcal{O}(\omega^{n},n\geq 1)\. \tag{3}\]

In \(D=4\), analyzing tree level scattering amplitudes in effective field theories, the non-universal sub-subleading soft graviton factor at order \(\omega\) has also been derived in [3; 8]. Reference [3] explicitly evaluated the non-universal contribution to the sub-subleading soft factor in terms of the non-minimal coupling of two finite energy fields to a soft graviton field through the Riemann tensor, and the general structure of the three-point 1PI (one-particle irreducible) vertex involving two hard particles and a soft graviton. By extending the analysis of [3], it becomes evident that a complete soft factorization is not achievable at order \(\omega^{n}\) for \(n\geq 2\) in a generic theory of quantum gravity involving all possible higher derivative corrections allowed by general covariance. However, a partial soft factorization has been accomplished by enforcing linearized gauge invariance of the (\(N+1\))-particle amplitude in [6; 34]. The generalization of the tree-level soft factor \(\mathbb{S}_{\text{tree}}^{gr}\) for multiple soft graviton emissions up to subleading order can be found in [4; 5; 35; 36; 37; 38; 39; 40; 41; 42; 43].
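Since the leading term of (3) is a purely kinematic multiplicative factor, it is straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, not code from the paper) that computes the leading Weinberg factor for scalar hard particles and verifies its gauge invariance: for a pure-gauge polarization \(\varepsilon_{\mu\nu}=k_{(\mu}\xi_{\nu)}\), the factor reduces to \(\xi\cdot\sum_{i}p_{i}\), which vanishes by momentum conservation.

```python
# Sketch of the leading (Weinberg) term of (3) for scalar hard particles,
# S_0 = sum_i eps_{mu nu} p_i^mu p_i^nu / (p_i . k), in the mostly-plus
# metric eta = diag(-1,+1,+1,+1) adopted later in this paper.  Momenta are
# illustrative; only total momentum conservation matters for the gauge check.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda p, q: p @ eta @ q

def leading_soft_factor(momenta, eps, k):
    # eps carries lower indices; momenta carry upper indices.
    return sum(np.einsum("ab,a,b->", eps, p, p) / dot(p, k) for p in momenta)

rng = np.random.default_rng(7)
ps = [rng.normal(size=4) for _ in range(3)]
ps.append(-sum(ps))                    # sum_i p_i = 0 (all-ingoing convention)
n = np.array([1.0, 0.0, 0.0, 1.0])     # null direction of soft emission
k = -0.01 * n                          # k = -omega * n with omega = 0.01

# Gauge check: for eps_{mu nu} = k_{(mu} xi_{nu)}, S_0 = xi . (sum_i p_i) = 0.
xi = rng.normal(size=4)
eps_gauge = 0.5 * (np.outer(eta @ k, eta @ xi) + np.outer(eta @ xi, eta @ k))
print(leading_soft_factor(ps, eps_gauge, k))  # ~ 0 up to floating-point error
```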
The one-loop contribution to the "soft factor" for single soft graviton emission in (1) reads \[\left(\mathbb{S}_{\text{1-loop}}^{gr}\right)_{\beta}^{\ \alpha}=K_{phase}^{reg}\ \frac{\varepsilon_{\mu\nu}p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot k}\delta_{\beta}^{\alpha}+\frac{\varepsilon_{\mu\nu}p_{i}^{\mu}k_{\rho}}{p_{i}\cdot k}\left\{p_{i}^{\nu}\frac{\partial K_{gr}^{reg}}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial K_{gr}^{reg}}{\partial p_{i\nu}}\right\}\delta_{\beta}^{\alpha} \tag{4}\] \[+\ K_{phase}^{reg}\ \frac{\varepsilon_{\mu\nu}p_{i}^{\mu}k_{\rho}}{p_{i}\cdot k}\left(\left\{p_{i}^{\nu}\frac{\partial}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial}{\partial p_{i\nu}}\right\}\delta_{\beta}^{\alpha}+\left(\Sigma_{i}^{\rho\nu}\right)_{\beta}^{\ \alpha}\right)\] \[+\frac{\varepsilon_{\mu\nu}k_{\rho}k_{\sigma}}{p_{i}\cdot k}\left\{p_{i}^{\mu}\frac{\partial K_{gr}^{reg}}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial K_{gr}^{reg}}{\partial p_{i\mu}}\right\}\left(\left\{p_{i}^{\nu}\frac{\partial}{\partial p_{i\sigma}}-p_{i}^{\sigma}\frac{\partial}{\partial p_{i\nu}}\right\}\delta_{\beta}^{\alpha}+\left(\Sigma_{i}^{\sigma\nu}\right)_{\beta}^{\ \alpha}\right)\] \[+\ {\cal O}(\omega^{n},n\geq 0)\ +\ {\cal O}(\omega^{n}\ln\omega,n\geq 2)\,\] where \[K_{gr}^{reg}\ =\ \frac{i}{2}\ (8\pi G)\sum_{\ell=1}^{N}\ \sum_{\begin{subarray}{c}j=1\\ j\neq\ell\end{subarray}}^{N}\left\{(p_{\ell}.p_{j})^{2}-\frac{1}{2}p_{\ell}^{2}p_{j}^{2}\right\}\ \int_{\omega}^{\Lambda}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{(p_{\ell}\cdot\ell+i\epsilon)\ (p_{j}\cdot\ell-i\epsilon)} \tag{5}\] \[\simeq-iG\ (\ln\omega)\sum_{\ell=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq\ell\end{subarray}}^{N}\ \frac{\left\{(p_{\ell}.p_{j})^{2}-\frac{1}{2}p_{\ell}^{2}p_{j}^{2}\right\}}{\sqrt{(p_{\ell}.p_{j})^{2}-p_{\ell}^{2}p_{j}^{2}}}\left\{\delta_{\eta_{\ell}\eta_{j},1}-\frac{i}{2\pi}\ln\left(\frac{p_{\ell}.p_{j}+\sqrt{(p_{\ell}.p_{j})^{2}-p_{\ell}^{2}p_{j}^{2}}}{p_{\ell}.p_{j}-\sqrt{(p_{\ell}.p_{j})^{2}-p_{\ell}^{2}p_{j}^{2}}}\right)\right\}\,\] and \[K_{phase}^{reg}=i\ (8\pi G)\ \sum_{j=1}^{N}(p_{j}.k)^{2}\int_{R^{-1}}^{\omega}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{k.\ell+i\epsilon}\frac{1}{p_{j}.\ell-i\epsilon} \tag{6}\] \[\simeq-2iG\ (\ln\omega)\ \left[\ \sum_{\begin{subarray}{c}j=1\\ \eta_{j}=-1\end{subarray}}^{N}p_{j}.k\ -\ \frac{i}{2\pi}\sum_{j=1}^{N}p_{j}.k\ \ln\left(\frac{p_{j}^{2}}{(p_{j}.\mathbf{n})^{2}}\right)\right]\,.\] In the expressions (5) and (6), \(\eta_{j}=+1\) if the \(j\)'th particle is ingoing and \(\eta_{j}=-1\) if the \(j\)'th particle is outgoing. Under the sign \(\simeq\) we only keep the logarithmic contributions after performing the integrations following [1]. The upper limit of integration \(\Lambda\) in (5) represents the order of the energy of the hard particles, and the lower limit of integration \(R^{-1}\) in (6) represents the order of the energy resolution of the detector. The first line of (4) contains the \({\cal O}(\ln\omega)\) soft factor, which has been derived in [1] as a one-loop exact result by analyzing one-loop gravitational S-matrices in the theory of scalars coupled to gravity. In [1] a correction to the \({\cal O}(\ln\omega)\) soft graviton factor due to electromagnetic interaction has also been derived for the case when the scattering particles carry electric charges as well.
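Because the \(\simeq\) expression in (5) is closed-form in the hard momenta, it can be evaluated directly. Below is a minimal numerical transcription (our own sketch, with \(G=1\) and illustrative elastic \(2\to 2\) kinematics; function names are ours, not the paper's):

```python
# Coefficient of ln(omega) in K_gr^reg, i.e. the expression after "~" in (5),
# with G = 1.  signs[j] = +1 for ingoing, -1 for outgoing hard particles;
# outgoing momenta enter with flipped sign (all-ingoing convention).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda p, q: p @ eta @ q

def K_gr_log_coeff(momenta, signs, G=1.0):
    total = 0.0 + 0.0j
    for a, pa in enumerate(momenta):
        for b, pb in enumerate(momenta):
            if a == b:
                continue
            pp = dot(pa, pb)
            root = np.sqrt(pp**2 - dot(pa, pa) * dot(pb, pb))
            bracket = float(signs[a] * signs[b] == 1) \
                - 1j / (2 * np.pi) * np.log((pp + root) / (pp - root))
            total += (pp**2 - 0.5 * dot(pa, pa) * dot(pb, pb)) / root * bracket
    return -1j * G * total

# Illustrative elastic 2 -> 2 scattering at 90 degrees, unit masses:
pz = 0.4
E = np.hypot(1.0, pz)
p1 = np.array([E, 0.0, 0.0,  pz]); p2 = np.array([E, 0.0, 0.0, -pz])
p3 = -np.array([E,  pz, 0.0, 0.0]); p4 = -np.array([E, -pz, 0.0, 0.0])
print(K_gr_log_coeff([p1, p2, p3, p4], [+1, +1, -1, -1]))
```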
In this article we re-derive the \({\cal O}(\ln\omega)\) soft factor for single soft graviton emission in a generic theory of quantum gravity for the scattering of particles with arbitrary mass and spin. This investigation demonstrates the universal (theory independent) nature of the \({\cal O}(\ln\omega)\) soft factor, while also extending the infrared divergence factorization prescription proposed in [1] to encompass a broad range of quantum gravity theories. In this paper we also conjecture the order \(\omega\ln\omega\) soft factor given in the second and third lines of (4), which should be derivable from the analysis of one-loop scattering amplitudes for the scattering of generic spinning particles in a generic theory of quantum gravity, extending the analysis of this article. The classical limit of this \(\mathcal{O}(\omega\ln\omega)\) soft factor has already been derived in [23] under the name of the spin-dependent classical soft graviton theorem, which provides evidence for the correctness of the conjecture. Note that the tree level subleading soft theorem result at order \(\omega^{0}\) in (3) is not universal, as it receives a correction at one-loop order which is expected to depend on the theory as well as on the value of the detector resolution (IR regulator) [9; 10]. The two-loop contribution to the "soft factor" for single soft graviton emission in (1) reads \[\left(\mathbb{S}^{gr}_{\text{2-loop}}\right)^{\alpha}_{\beta}=\frac{1}{2}\Big{\{}K^{reg}_{phase}\Big{\}}^{2}\ \frac{\varepsilon_{\mu\nu}p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot k}\delta^{\alpha}_{\beta}+\ K^{reg}_{phase}\frac{\varepsilon_{\mu\nu}p_{i}^{\mu}k_{\rho}}{p_{i}\cdot k}\Bigg{(}p_{i}^{\nu}\frac{\partial K^{reg}_{gr}}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial K^{reg}_{gr}}{\partial p_{i\nu}}\Bigg{)}\delta^{\alpha}_{\beta} \tag{7}\] \[+\frac{1}{2}\frac{\varepsilon_{\mu\nu}k_{\rho}k_{\sigma}}{p_{i}\cdot k}\Bigg{(}p_{i}^{\mu}\frac{\partial K^{reg}_{gr}}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial K^{reg}_{gr}}{\partial p_{i\mu}}\Bigg{)}\Bigg{(}p_{i}^{\nu}\frac{\partial K^{reg}_{gr}}{\partial p_{i\sigma}}-p_{i}^{\sigma}\frac{\partial K^{reg}_{gr}}{\partial p_{i\nu}}\Bigg{)}\ \delta^{\alpha}_{\beta}\] \[+\ \mathcal{O}(\omega^{n}(\ln\omega)^{2},n\geq 2)\ +\ \mathcal{O}(\omega^{n}\ln\omega,n\geq 1)\ +\mathcal{O}(\omega^{n},n\geq 1)\.\] The above result was conjectured in [21] as a two-loop exact result, and can be obtained by analyzing two-loop amplitudes using the same methodology developed in this paper. The classical limit of this order, denoted as \(\mathcal{O}\big{(}\omega(\ln\omega)^{2}\big{)}\), was derived under the name of the sub-subleading classical soft graviton theorem in [21]. This classical derivation offers substantial evidence supporting the validity of the above two-loop soft factor. Note that the \(\mathcal{O}(\omega\ln\omega)\) soft factor in the second and third lines of (4) at one loop receives a correction at two-loop order. This correction is expected to depend on the theory of quantum gravity and on the value of the detector resolution (IR regulator), and is hence non-universal. From the analysis of the \(n\)-loop gravitational S-matrix, it is expected that the new leading non-analytic soft graviton factor, as the frequency \(\omega\) approaches zero, behaves like \(\omega^{n-1}(\ln\omega)^{n}\), relating it to the tree level \(N\)-particle amplitude. The general structure of the order \(\omega^{n-1}(\ln\omega)^{n}\) soft graviton theorem is provided in [21]. Note that the "tree", "1-loop",...
subscripts in the soft factors in the expression (1) only specify the first appearance of the soft factors in the perturbative analysis of the \((N+1)\)-particle amplitudes at that order (tree or \(n\)-loop), where they multiply the corresponding tree-level \(N\)-particle amplitudes. But they also appear in the analysis of higher-loop amplitudes. For example, \(\mathbb{S}^{gr}_{\text{tree}}\) also appears as a soft factor in the analysis of the \((N+1)\)-particle \(n\)-loop amplitude relating it to the \(N\)-particle \(n\)-loop amplitude for all \(n\geq 1\). Similarly, \(\mathbb{S}^{gr}_{\text{1-loop}}\) also appears as a soft factor in the analysis of the \((N+1)\)-particle \(n\)-loop amplitude relating it to the \(N\)-particle \((n-1)\)-loop amplitude for all \(n\geq 2\), and \(\mathbb{S}^{gr}_{\text{2-loop}}\) also appears as a soft factor in the analysis of the \((N+1)\)-particle \(n\)-loop amplitude relating it to the \(N\)-particle \((n-2)\)-loop amplitude for all \(n\geq 3\). These observations also apply to the soft photon theorem results provided below. Now let us consider the same setup of scattering but turn off the gravitational interaction and turn on the electromagnetic interaction between charged particles. We consider that the finite energy scattered particles carry electric charges \(\{e_{i}\}\), and study the scattering amplitude involving one soft photon emission with polarization and momentum \((\varepsilon,k)\). In this case the soft expansion of the \((N+1)\) particle amplitude takes the following form \[\mathcal{A}^{(N+1)}\big{(}\{\epsilon_{i},p_{i},e_{i},\Sigma_{i}\},\varepsilon,k\big{)}=\sum_{i=1}^{N}\epsilon_{i,\alpha}\left[(\mathbb{S}^{em}_{\text{tree}})_{\beta}^{\ \alpha}+(\mathbb{S}^{em}_{\text{1-loop}})_{\beta}^{\ \alpha}+(\mathbb{S}^{em}_{\text{2-loop}})_{\beta}^{\ \alpha}+\cdots\right]\times\mathcal{A}^{(N)\beta}_{(i)}\left(p_{i}\right)\, \tag{8}\] where \(\mathcal{A}^{(N)\beta}_{(i)}\left(p_{i}\right)\) is defined through the relation (2). In (8), the expression of the tree level "soft factor" for single soft photon emission reads [5; 8; 24; 25; 26; 27; 44; 45; 46; 47; 48; 49; 50] \[(\mathbb{S}^{em}_{\text{tree}})_{\beta}^{\ \alpha}=e_{i}\frac{\varepsilon_{\mu}p_{i}^{\mu}}{p_{i}\cdot k}\delta_{\beta}^{\alpha}+e_{i}\frac{\varepsilon_{\mu}k_{\rho}}{p_{i}\cdot k}\left(\left\{p_{i}^{\mu}\frac{\partial}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial}{\partial p_{i\mu}}\right\}\delta_{\beta}^{\alpha}+\left(\mathcal{N}^{\rho\mu}_{i}(-p_{i})\right)_{\beta}^{\ \alpha}\right)+\mathcal{O}\left(\omega^{n},n\geq 1\right), \tag{9}\] where the generic expression for the non-universal term \(\mathcal{N}^{\rho\sigma}_{i}(-p_{i})\) contributing to the subleading soft photon theorem has been derived in [5], and its explicit form is provided in (11). \(\mathcal{N}^{\rho\mu}_{i}(-p_{i})\) depends on the non-minimal coupling of two finite energy fields to a soft photon field through the field strength, and on the general structure of the three-point 1PI vertex involving two hard particles and a soft photon. Extending the analysis of [5], it can be argued that a complete soft factorization of order \(\omega^{n}\) for \(n\geq 1\) is not possible in a generic theory of QED with all possible non-minimal couplings allowed by \(U(1)\) gauge invariance, though a partial soft factorization is achievable by enforcing gauge invariance of the \((N+1)\)-particle amplitude [6; 34].
The generalization of the tree-level soft factor \(\mathbb{S}^{em}_{\text{tree}}\) for multiple soft photon emissions up to subleading order can be found in [5]. The one-loop contribution to the "soft factor" for single soft photon emission in (8) reads \[(\mathbb{S}^{em}_{\text{1-loop}})_{\beta}^{\ \alpha}=e_{i}\frac{\varepsilon_{\mu}k_{\rho}}{p_{i}\cdot k}\left\{p_{i}^{\mu}\frac{\partial K^{reg}_{em}}{\partial p_{i\rho}}-p_{i}^{\rho}\frac{\partial K^{reg}_{em}}{\partial p_{i\mu}}\right\}\delta_{\beta}^{\alpha}+\mathcal{O}(\omega^{n}\ln\omega,n\geq 1)\ +\mathcal{O}(\omega^{n},n\geq 0)\, \tag{10}\] where \[K^{reg}_{em}=\frac{i}{2}\sum_{\ell=1}^{N}\ \sum_{\begin{subarray}{c}j=1\\ j\neq\ell\end{subarray}}^{N}e_{\ell}e_{j}(p_{\ell}\cdot p_{j})\ \int_{\omega}^{\Lambda}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{(p_{\ell}\cdot\ell+i\epsilon)\ (p_{j}\cdot\ell-i\epsilon)}\] \[\simeq-\frac{i}{2}\sum_{\ell=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq\ell\end{subarray}}^{N}\frac{e_{\ell}e_{j}}{4\pi}(\ln\omega)\frac{p_{\ell}\cdot p_{j}}{\sqrt{(p_{\ell}.p_{j})^{2}-p_{\ell}^{2}p_{j}^{2}}}\Bigg{\{}\delta_{\eta_{\ell}\eta_{j},1}-\frac{i}{2\pi}\ln\left(\frac{p_{\ell}.p_{j}+\sqrt{(p_{\ell}.p_{j})^{2}-p_{\ell}^{2}p_{j}^{2}}}{p_{\ell}.p_{j}-\sqrt{(p_{\ell}.p_{j})^{2}-p_{\ell}^{2}p_{j}^{2}}}\right)\Bigg{\}}.\] The \(\mathcal{O}(\ln\omega)\) soft factor for single soft photon emission in (10) has been derived in [1] as a one-loop exact result, by analyzing one-loop S-matrices in the theory of minimally coupled scalar QED. There, a correction to the \(\mathcal{O}(\ln\omega)\) soft photon factor due to gravitational interaction has also been derived. In this article we re-derive the \(\mathcal{O}(\ln\omega)\) soft factor in a generic theory of quantum gravity for the scattering of particles with arbitrary mass, charge and spin, in the presence of non-minimal couplings. This investigation demonstrates the universal (theory independent) nature of the \(\mathcal{O}(\ln\omega)\) soft factor, while also extending the infrared divergence factorization prescription proposed in [1; 51] to encompass a broad range of effective field theories for charged objects. The generalization of the one-loop soft factor \(\mathbb{S}^{em}_{\text{1-loop}}\) for multiple soft photon emissions up to subleading order can be found in section 3.5 of [52], with the final result provided in (3.58) there.
The two-loop contribution to the "soft factor" for single soft photon emission in (1.8) reads \[(\mathbb{S}^{em}_{\text{2-loop}})_{\beta}^{\ \alpha}=\frac{e_{i}}{2}\frac{\varepsilon_{\mu}k_{\rho}}{p_{i}\cdot k}\,\left(p_{i}^{\mu}\frac{\partial K^{reg}_{em}}{\partial p_{i\rho}}\ -\ p_{i}^{\rho}\frac{\partial K^{reg}_{em}}{\partial p_{i\mu}}\right)\,\left(k_{\sigma}\frac{\partial K^{reg}_{em}}{\partial p_{i\sigma}}\right)\delta_{\beta}^{\alpha} \tag{1.12}\] \[+\,(\ln\omega)^{2}\ e_{i}\big{(}\varepsilon^{\mu}k^{\rho}-\varepsilon^{\rho}k^{\mu}\big{)}p_{i\mu}\mathcal{C}^{reg}_{\rho}(q_{i},p_{i};\{e_{j}\},\{p_{j}\})\delta_{\beta}^{\alpha}\] \[+\ \mathcal{O}(\omega^{n}(\ln\omega)^{2},n\geq 2)+\ \mathcal{O}(\omega^{n}\ln\omega,n\geq 1)\ +\mathcal{O}(\omega^{n},n\geq 1)\,\] where \[\mathcal{C}^{reg}_{\rho}\big{(}q_{i},p_{i};\{e_{j}\},\{p_{j}\}\big{)} \tag{1.13}\] \[=-\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq i\end{subarray}}^{N}\frac{e_{i}^{2}e_{j}e_{\ell}}{4}\ \{p_{i}.p_{j}\delta_{\rho}^{\kappa}-p_{i}^{\kappa}p_{j\rho}\}\frac{\partial}{\partial p_{i\sigma}}\Big{\{}\mathcal{I}(p_{i},p_{\ell})\times p_{i}.p_{\ell}\Big{\}}\frac{\partial^{2}\mathcal{I}(p_{i},p_{j})}{\partial p_{i}^{\sigma}\partial p_{i}^{\kappa}}\] \[+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{N}\frac{e_{i}e_{j}^{2}e_{\ell}}{4}\ \{p_{i}.p_{j}\delta_{\rho}^{\kappa}-p_{i}^{\kappa}p_{j\rho}\}\frac{\partial}{\partial p_{j\sigma}}\Big{\{}\mathcal{I}(p_{j},p_{\ell})\times p_{j}.p_{\ell}\Big{\}}\frac{\partial^{2}\mathcal{I}(p_{i},p_{j})}{\partial p_{i}^{\sigma}\partial p_{i}^{\kappa}}\,\] with \[\mathcal{I}(p_{i},p_{j})=-\frac{1}{4\pi}\frac{1}{\sqrt{(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}}}\left\{\delta_{\eta_{i}\eta_{j},1}-\frac{i}{2\pi}\ln\left(\frac{p_{i}.p_{j}+\sqrt{(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}}}{p_{i}.p_{j}-\sqrt{(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}}}\right)\right\}. \tag{1.14}\] The \(\mathcal{O}\left(\omega(\ln\omega)^{2}\right)\) soft factor for single soft photon emission in (1.12) has been derived in section 4 of [21] as a two-loop exact result, by analyzing two-loop S-matrices in the theory of minimally coupled scalar QED. From the analysis of the \(n\)-loop QED S-matrix, it is expected that the new leading non-analytic soft factor for single photon emission, as the frequency \(\omega\) approaches zero, behaves like \(\omega^{n-1}(\ln\omega)^{n}\), relating it to the tree level \(N\)-particle amplitude. The general structure of the order \(\omega^{n-1}(\ln\omega)^{n}\) soft photon theorem is provided in [21]. The rest of the paper is organized as follows. In section 2, we establish our conventions and describe the general definition of IR-finite scattering amplitudes. We also discuss the EFT action involving massive spinning particles which transform in a generic reducible representation of the Lorentz group. In section 3, we review the covariantization prescription and define the one-loop IR-finite QED S-matrices involved in the derivation of the soft photon theorem. Starting from the IR-finite S-matrices, we derive the soft photon theorem up to subleading order. In section 4, after reviewing Sen's covariantization prescription, we define the one-loop IR-finite quantum gravity S-matrices that are involved in the derivation of the soft graviton theorem. Starting from the IR-finite S-matrices, we derive the soft graviton theorem up to subleading order.
At the end of both sections 3 and 4, we discuss possible generalizations of our derivations to higher orders. In section 5, we provide some open directions to explore in the future after reviewing what has been achieved in this article.

## 2 Setup and Strategy

Index convention: We utilize the first few Latin alphabets \(a,b,c,d,\ldots\) as Lorentz indices for the tangent space, ranging from \(0\) to \(3\). The Latin alphabets starting from \(i,j,k,\ell,\ldots\) are employed as indices identifying individual hard particles, ranging from \(1\) to \(N\). The first few Greek alphabets \(\alpha,\beta,\gamma,\delta,\ldots\) are used as polarization indices for spinning particles on the tangent space, while the Greek alphabets beginning with \(\lambda,\mu,\nu,\rho,\sigma,\tau,\ldots\) serve as curved space indices, ranging from \(0\) to \(3\). In section 3, where we derive the soft photon theorem solely under electromagnetic interaction in a flat background, we employ both \(a,b,c,d,\ldots\) and \(\lambda,\mu,\nu,\rho,\sigma,\tau,\ldots\) as flat space Lorentz indices.

Metric and unit conventions: In our convention the four dimensional Minkowski metric is \(\eta_{ab}=\mathrm{diag}(-1,+1,+1,+1)\). We work in units where the speed of light \(c=1\) and the Planck constant \(\hbar=1\), but keep the gravitational constant \(G\) explicit. We define \(\kappa\equiv\sqrt{8\pi G}\).

Setup of scattering event: Let us consider a scattering amplitude involving \(N\) finite energy massive particles (hard particles) with charges, momenta, spins and polarizations \(\{e_{i},p_{i},\Sigma_{i},\epsilon_{i}\}\) for \(i=1,2,\cdots,N\) and one low-energy (soft) outgoing photon/graviton with momentum and polarization \(k,\varepsilon\), and denote this scattering amplitude by \(\mathcal{A}^{(N+1)}\). In our convention all the particles are treated as incoming, so if some particles are outgoing we need to flip the sign of the four momenta and electric charges for those particles. The energy of the outgoing soft photon/graviton is denoted by \(\omega\), so that \(k^{\mu}=-\omega\mathbf{n}^{\mu}\), where \(\mathbf{n}^{\mu}\) is the null vector whose spatial part denotes the direction of soft photon/graviton emission. Here we are only interested in evaluating \(\mathcal{A}^{(N+1)}\) at one-loop order, which involves Feynman diagrams with one virtual photon/graviton running in the loop. Then we perform the soft expansion (\(\omega\ll|p_{i}|\)) of \(\mathcal{A}^{(N+1)}\) to relate it to the \(N\)-point amplitude that carries all the hard particles in the asymptotic state but no soft graviton, denoted by \(\mathcal{A}^{(N)}\).4 Note that both the scattering amplitudes \(\mathcal{A}^{(N)}\) and \(\mathcal{A}^{(N+1)}\) are distributions in momenta, as \(\mathcal{A}^{(N)}\) contains the momentum conserving delta function \(\delta^{(4)}\big{(}p_{1}+p_{2}+\cdots+p_{N}\big{)}\) and \(\mathcal{A}^{(N+1)}\) contains the momentum conserving delta function \(\delta^{(4)}\big{(}p_{1}+p_{2}+\cdots+p_{N}+k\big{)}\). In four spacetime dimensions (\(D=4\)), both scattering amplitudes exhibit infrared (IR) divergences. Therefore, our first step is to separate out the IR divergent contributions from both scattering amplitudes in an unambiguous manner. Then we can obtain the soft factor by examining the ratio of \(\mathcal{A}^{(N+1)}\) and \(\mathcal{A}^{(N)}\) after full/partial cancellation of the IR divergent contributions, as we explain in later sections.
Footnote 4: Note that the soft limit can also be defined covariantly by demanding \(\left|\frac{p_{i}\cdot k}{p_{i}\cdot p_{j}}\right|\ll 1\) for all \(i,j=1,\cdots,N\).

Feynman diagram conventions: In all the Feynman diagrams describing scattering amplitudes, time flows from right to left and the particles involved in the scattering are always treated as ingoing. Solid lines in any diagram correspond to massive spinning particles and dashed lines represent photons/gravitons. If multiple Feynman diagrams appear in a figure, they are counted from left to right and from top to bottom. A Feynman diagram will be called an \(n\)-loop diagram only if the diagram contains \(n\) loops with at least one virtual photon/graviton propagating in each loop. The loops involving only massive virtual particles are taken care of inside the massive EFT 1PI vertices and renormalized propagators of the massive EFT. To determine the Feynman rules for vertices involving photons/gravitons and hard particles, we follow the covariantization technique developed in the photon/graviton background in the references [2; 3; 4; 5].

Handling IR divergences in the derivation of soft theorems: The traditional S-matrix in quantum electrodynamics and quantum gravity, in four spacetime dimensions, exhibits IR divergences. This is due to the long-range nature of the interactions involved. Previous attempts to construct IR-finite S-matrices, beginning with the Kulish-Faddeev construction [53], demonstrated explicit cancellation of IR divergences. However, a systematic method for extracting the unambiguous IR finite part remained absent until Grammer and Yennie provided one in [51]. A generalization of the Grammer-Yennie prescription to perturbative QCD and quantum gravity can be found in [54] and [1] respectively. In the derivation of the soft photon theorem, the Grammer-Yennie prescription helps to factor out IR divergences from both the amplitudes \(\mathcal{A}^{(N+1)}\) and \(\mathcal{A}^{(N)}\) in the following way: \[\mathcal{A}^{(N)}=\exp\{K_{em}\}\ \mathcal{A}^{(N)}_{\text{IR-finite}}\ \ \ \,\ \ \ \ \mathcal{A}^{(N+1)}=\exp\{K_{em}\}\ \mathcal{A}^{(N+1)}_{\text{IR-finite}}. \tag{1}\] Here the exponential factor containing \(K_{em}\) takes care of the full IR divergent contribution, and the IR divergent contributions are exactly the same for both amplitudes. An explicit expression for \(K_{em}\) is provided in (3.28). The Grammer-Yennie prescription thus provides a systematic procedure to compute the IR-finite parts perturbatively for both amplitudes. When the soft factor \(\mathbb{S}^{em}\) is a multiplicative function instead of a differential operator, we get \[\mathcal{A}^{(N+1)}=\mathbb{S}^{em}\ \mathcal{A}^{(N)}\ \Longrightarrow\ \mathcal{A}^{(N+1)}_{\text{IR-finite}}=\mathbb{S}^{em}\ \mathcal{A}^{(N)}_{\text{IR-finite}}. \tag{2}\] Hence, to derive the \(\mathcal{O}(\ln\omega)\) and \(\mathcal{O}\left(\omega(\ln\omega)^{2}\right)\) soft factors in (10) and (12), we can directly start from the one and two-loop contributions to the IR finite amplitude \(\mathcal{A}^{(N+1)}_{\text{IR-finite}}\) and perform the soft expansion. In the derivation of the soft graviton theorem, \(\mathcal{A}^{(N+1)}\) contains some extra divergent factors relative to \(\mathcal{A}^{(N)}\) due to Feynman diagrams involving three-graviton self-interaction vertices.
An optimistic expectation for the factorization of IR divergences using the Grammer-Yennie decomposition proposed in [1] would be \[\mathcal{A}^{(N)}=\exp\{K_{gr}\}\ \mathcal{A}^{(N)}_{\text{IR-finite}}\ \ \ \,\ \ \ \ \mathcal{A}^{(N+1)}=\exp\{K_{gr}+K_{phase}\}\ \mathcal{A}^{(N+1)}_{\text{IR-finite}}\, \tag{3}\] where the IR-divergent expressions of \(K_{gr}\) and \(K_{phase}\) are given in (4.28) and (4.29). The result mentioned above has only been verified rigorously up to one-loop order. Verifying it at all loop orders is a computationally challenging task that remains open for future investigation. Now, when the soft factor \(\mathbb{S}^{gr}\) is a multiplicative function instead of a differential operator, we get \[\mathcal{A}^{(N+1)}=\mathbb{S}^{gr}\ \mathcal{A}^{(N)}\ \Longrightarrow\ \mathcal{A}^{(N+1)}_{\text{IR-finite}}=\exp\{-K_{phase}\}\ \mathbb{S}^{gr}\ \mathcal{A}^{(N)}_{\text{IR-finite}}. \tag{2.4}\] Hence, to derive an unambiguous soft factor by analyzing \(\mathcal{A}^{(N+1)}_{\text{IR-finite}}\), we need to regulate the IR divergence of \(K_{phase}\) using a cutoff given by the detector resolution. This procedure can be followed to derive the \(\mathcal{O}(\ln\omega)\) and \(\mathcal{O}\left(\omega(\ln\omega)^{2}\right)\) soft factors in (1.4) and (1.7) respectively, by analyzing one and two loop IR finite amplitudes. But if we want to derive the \(\mathcal{O}(\omega\ln\omega)\) soft factor in the second and third lines of (1.4), we need to deal with the following additional subtleties:

1. Since the order \(\omega\ln\omega\) soft factor in (1.4) is a differential operator, we cannot simply commute the soft factor with the infrared divergent exponential to get the second relation in (2.4). Hence, to derive the \(\mathcal{O}(\omega\ln\omega)\) soft factor we have to start with the full divergent scattering amplitude \(\mathcal{A}^{(N+1)}\) instead of its IR finite part, and at the end of the analysis we may be able to cancel the common IR divergent factor that appears in both amplitudes in the soft theorem relation.

2. Note that the momentum conserving delta function associated with \(\mathcal{A}^{(N+1)}\) is \(\delta^{(4)}\left(\sum\limits_{i=1}^{N}p_{i}+k\right)\). On the other hand, the momentum conserving delta function associated with \(\mathcal{A}^{(N)}\) is \(\delta^{(4)}\left(\sum\limits_{i=1}^{N}p_{i}\right)\). Now, Taylor series expansion of the first delta function around small \(\omega\) produces a correction of order \(\omega\). This correction, when multiplied with the \(\mathcal{O}(\ln\omega)\) soft factor, yields an additional factor of order \(\omega\ln\omega\) at one-loop order. Therefore, this additional contribution needs to be accounted for, if it contributes something non-vanishing at this order.

In light of these additional intricacies, we have decided to postpone the derivation of the order \(\omega\ln\omega\) soft graviton factor in (1.4) for future study and focus on deriving the order \(\ln\omega\) soft graviton theorem here.

EFT involving massive particles with arbitrary spin: We begin with an effective field theory (EFT) that describes the dynamics of massive spinning particles. The one-particle irreducible (1PI) effective action for this EFT is obtained by integrating out all massive loops. The tree level amplitudes computed using this massive EFT action contain information about all loop orders in the original un-integrated massive quantum field theory (QFT).
However, if the un-integrated QFT includes massless fields, our initial approach using the 1PI effective action becomes invalid. Nevertheless, our prescription for covariantization and computation of loop amplitudes, as described below, remains valid. In such cases, the 1PI effective action should be regarded as the tree level action for the EFT. Let \(\Phi_{\alpha}(x)\) denote the set of all massive fields in the real representation present in the 1PI effective action5, which transforms in a reducible representation of the Lorentz group \(SO(1,3)\) in the following way, \[SO(1,3):\quad x^{a}\to x^{\prime a}=\Lambda^{a}\ _{b}\ x^{b}=(\delta^{a}_{b}+\lambda^{a}\ _{b})x^{b}+{\cal O}(\lambda^{2}) \tag{5}\] \[\Phi_{\alpha}(x)\to\Phi^{\prime}_{\alpha}(x)=\ \left[\exp\Big{\{}-\frac{i}{2}\lambda_{ab}\Sigma^{ab}\Big{\}}\right]_{\alpha}^{\ \beta}\ \Phi_{\beta}(\Lambda^{-1}x)\,\] where \(\lambda_{ab}=-\lambda_{ba}\) is the infinitesimal Lorentz transformation parameter and \(\Sigma^{ab}\) is the spin angular momentum generator of the \(SO(1,3)\) transformation in the real reducible representation. The subscript index \(\alpha\) is used as a combined notation for denoting the different fields in the theory as well as the spin/polarization indices of each of the fields. Under global \(U(1)_{EM}\) the field \(\Phi(x)\) transforms in the following way, \[U(1)_{EM}:\ \Phi_{\alpha}(x)\to\Phi^{\prime}_{\alpha}(x)=\ \Big{[}\exp\big{\{}i{\cal Q}\theta\big{\}}\Big{]}_{\alpha}^{\ \beta}\ \Phi_{\beta}(x)\, \tag{6}\] where \(\theta\) is the parameter of the global \(U(1)_{EM}\) transformation and \({\cal Q}\) is the generator of the global \(U(1)_{EM}\) transformation in the real representation of \(\Phi(x)\). Usually we associate a \(U(1)_{EM}\) global charge to complex fields, but since we want to covariantize the theory simultaneously in the backgrounds of gravity and gauge theory following [5], working in terms of real field components is convenient. For example, instead of a complex scalar field we work with two real scalar fields, arranged in a two-component vector that rotates under \(SO(2)\), and \({\cal Q}\) is the generator of the \(SO(2)\) transformation. In the set of fields denoted by \(\Phi_{\alpha}(x)\), there may be some elementary fields in irreducible representations of the Lorentz group which do not transform under global \(U(1)_{EM}\); for those fields the corresponding elements of the charge matrix \({\cal Q}\) are zero. Let us start with the general form of the quadratic part of the massive particle 1PI effective action6

Footnote 6: If the original theory contains some massless fields, then this action should be thought of as the quadratic part of the tree level gauge fixed action. Because in the presence of massless fields, the 1PI effective action of the theory may be non-local and the kinetic operator \({\cal K}(q)\) may not be polynomially expandable around \(q^{\mu}=0\), which is the key assumption for the validity of the covariantization prescription discussed below.
\[S^{(2)}=\frac{1}{2}\int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2})\ \Phi_{\alpha}(q_{1}){\cal K}^{\alpha\beta}(q_{2})\Phi_{\beta}(q_{2})=\frac{1}{2}\int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2})\ \Phi^{T}(q_{1}){\cal K}(q_{2})\Phi(q_{2})\, \tag{7}\] where \({\cal K}(q)\) is the renormalized momentum space kinetic operator, which satisfies the following condition: \[{\cal K}^{\alpha\beta}(q)=\pm\ {\cal K}^{\beta\alpha}(-q)\,\qquad{\cal K}(q)=\pm\ \Big{[}{\cal K}(-q)\Big{]}^{T}. \tag{8}\] In the second expressions of (7) and (8) we introduced the index-free notation, which we follow throughout the article. On the RHS of the above equation, the \(+\) sign is for bosonic fields and the \(-\) sign is for fermionic fields. For simplicity we work with the \(+\) sign, considering \(\Phi(x)\) to be Grassmann even, but the final result of the soft theorem computation will be the same for both bosonic and fermionic fields. The Feynman propagator for the \(i\)-th particle with renormalized mass \(m_{i}\) from (7) becomes \[\Delta^{i}_{\alpha\beta}(q)\ =\ i\big{[}{\cal K}_{i}^{-1}(q)\big{]}_{\alpha\beta}\equiv(q^{2}+m_{i}^{2}-i\epsilon)^{-1}\ \Xi_{i\alpha\beta}(q)\, \tag{9}\] where \({\cal K}_{i}(q)\) is the kinetic term for the set of fields representing the \(i\)-th particle after proper diagonalization of the quadratic part of the action \(S^{(2)}\). The above equation also defines \(\Xi_{i}(q)\) as the residue at the pole of the propagator for the \(i\)-th particle. \({\cal K}_{i}\), \(\Xi_{i}\) and their momentum derivatives satisfy the following relations in the index-free notation, which will be useful for later computations [2]: \[{\cal K}_{i}(q)\Xi_{i}(q)=i(q^{2}+m_{i}^{2}-i\epsilon)\, \tag{10}\] \[\frac{\partial{\cal K}_{i}(q)}{\partial q^{a}}\Xi_{i}(q)=-{\cal K}_{i}(q)\frac{\partial\Xi_{i}(q)}{\partial q^{a}}+2iq_{a}\, \tag{11}\] \[\frac{\partial^{2}{\cal K}_{i}(q)}{\partial q^{a}\partial q^{b}}\Xi_{i}(q)=-\frac{\partial{\cal K}_{i}(q)}{\partial q^{a}}\frac{\partial\Xi_{i}(q)}{\partial q^{b}}-\frac{\partial{\cal K}_{i}(q)}{\partial q^{b}}\frac{\partial\Xi_{i}(q)}{\partial q^{a}}-{\cal K}_{i}(q)\frac{\partial^{2}\Xi_{i}(q)}{\partial q^{a}\partial q^{b}}+2i\eta_{ab}\, \tag{12}\] \[\Xi_{i}(q){\cal K}_{i}(q)=i(q^{2}+m_{i}^{2}-i\epsilon)\, \tag{13}\] \[\frac{\partial\Xi_{i}(q)}{\partial q^{a}}{\cal K}_{i}(q)=-\Xi_{i}(q)\frac{\partial{\cal K}_{i}(q)}{\partial q^{a}}+2iq_{a}\, \tag{14}\] \[\frac{\partial^{2}\Xi_{i}(q)}{\partial q^{a}\partial q^{b}}{\cal K}_{i}(q)=-\frac{\partial\Xi_{i}(q)}{\partial q^{a}}\frac{\partial{\cal K}_{i}(q)}{\partial q^{b}}-\frac{\partial\Xi_{i}(q)}{\partial q^{b}}\frac{\partial{\cal K}_{i}(q)}{\partial q^{a}}-\Xi_{i}(q)\frac{\partial^{2}{\cal K}_{i}(q)}{\partial q^{a}\partial q^{b}}+2i\eta_{ab}. \tag{15}\] The Lorentz covariance of \({\cal K}_{i}\) and \(\Xi_{i}\) implies the following two relations: \[\left(\Sigma_{i}^{ab}\right)^{T}\!{\cal K}_{i}(q)=-{\cal K}_{i}(q)\Sigma_{i}^{ab}+q^{a}\frac{\partial{\cal K}_{i}(q)}{\partial q_{b}}-q^{b}\frac{\partial{\cal K}_{i}(q)}{\partial q_{a}}\, \tag{16}\] \[\Sigma_{i}^{ab}\ \Xi_{i}(q)=-\Xi_{i}(q)\ \big{(}\Sigma_{i}^{ab}\big{)}^{T}-q^{a}\frac{\partial\Xi_{i}(q)}{\partial q_{b}}+q^{b}\frac{\partial\Xi_{i}(q)}{\partial q_{a}}\, \tag{17}\] where \(\Sigma_{i}\) is the spin angular momentum generator for the \(i\)-th component field inside \(\Phi(x)\).
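As a quick consistency check of (9)–(12) (our own illustrative example, not taken from the paper), consider a single real scalar of mass \(m\), for which \(\mathcal{K}(q)=i(q^{2}+m^{2}-i\epsilon)\) and \(\Xi(q)=1\). Then \[\mathcal{K}(q)\,\Xi(q)=i(q^{2}+m^{2}-i\epsilon)\,,\qquad\frac{\partial\mathcal{K}(q)}{\partial q^{a}}\,\Xi(q)=2iq_{a}=-\mathcal{K}(q)\,\underbrace{\frac{\partial\Xi(q)}{\partial q^{a}}}_{=0}+2iq_{a}\,,\qquad\frac{\partial^{2}\mathcal{K}(q)}{\partial q^{a}\partial q^{b}}\,\Xi(q)=2i\eta_{ab}\,,\] in agreement with (10)–(12), while the propagator (9) reduces to the familiar \(\Delta(q)=(q^{2}+m^{2}-i\epsilon)^{-1}\) and the Lorentz covariance relations (16)–(17) hold trivially with \(\Sigma^{ab}=0\).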
Taking derivatives of the above expressions with respect to momenta, we obtain \[\left(\Sigma_{i}^{ab}\right)^{T}\!\frac{\partial{\cal K}_{i}(q)}{ \partial q_{c}}=-\frac{\partial{\cal K}_{i}(q)}{\partial q_{c}}\Sigma_{i}^{ab} +q^{a}\frac{\partial^{2}{\cal K}_{i}(q)}{\partial q_{b}\partial q_{c}}-q^{b} \frac{\partial^{2}{\cal K}_{i}(q)}{\partial q_{a}\partial q_{c}}+\eta^{ac} \frac{\partial{\cal K}_{i}(q)}{\partial q_{b}}-\eta^{bc}\frac{\partial{\cal K }_{i}(q)}{\partial q_{a}},\] \[\Sigma_{i}^{ab}\ \frac{\partial\Xi_{i}(q)}{\partial q_{c}}=-\frac{ \partial\Xi_{i}(q)}{\partial q_{c}}\ \big{(}\Sigma_{i}^{ab}\big{)}^{T}-q^{a}\frac{ \partial^{2}\Xi_{i}(q)}{\partial q_{b}\partial q_{c}}+q^{b}\frac{\partial^{2} \Xi_{i}(q)}{\partial q_{a}\partial q_{c}}-\eta^{ac}\frac{\partial\Xi_{i}(q)}{ \partial q_{b}}+\eta^{bc}\frac{\partial\Xi_{i}(q)}{\partial q_{a}}. \tag{2.18}\] Invariance of (2.7) under the global \(U(1)_{EM}\) transformation implies \[{\cal Q}_{\gamma}\ ^{\alpha}{\cal K}^{\gamma\beta}+{\cal K}^{\alpha\gamma}{ \cal Q}_{\gamma}\ ^{\beta}\ =0\ \Rightarrow\ {\cal Q}^{T}{\cal K}+{\cal K}{\cal Q}=0. \tag{2.19}\] This also imposes a constraint on the numerator of the propagator, which reads \[{\cal Q}\Xi+\Xi{\cal Q}^{T}=0. \tag{2.20}\] The above two equations are also valid for the component fields in the real representation, and one can take momentum derivatives of these two relations to obtain further useful expressions. When the \(i\)-th spinning particle is on-shell with momentum \(q_{i}\) and polarization tensor \(\epsilon_{i}(q_{i})\), it satisfies \[\mathcal{K}_{i}^{\alpha\beta}(q_{i})\epsilon_{i\beta}(q_{i})=0\ \Rightarrow\ \mathcal{K}_{i}(q_{i})\epsilon_{i}(q_{i})=0\quad\text{ and }\quad\epsilon_{i}^{T}(q_{i})\mathcal{K}_{i}^{T}(q_{i})=0. \tag{2.21}\]

## 3 Soft photon theorem at one-loop

In this section we derive the subleading soft photon theorem by analyzing one-loop amplitudes for a quantum mechanical scattering process involving \(N\) massive charged particles of arbitrary spin. In [1], the order \(\ln\omega\) soft factor was derived by analyzing one-loop amplitudes in minimally coupled scalar QED in the presence of a scalar contact interaction, and the soft photon factor was determined in terms of the charges and asymptotic momenta of the scattered particles and the direction cosines of the soft photon emission. Here we show that even for the scattering of arbitrary spinning particles in a generic theory of QED with non-minimal interactions, the order \(\ln\omega\) soft factor derived in [1] is universal (theory independent). This section should be thought of as a warm-up for the next section, where we derive the one-loop soft graviton theorem for spinning particle scattering in a generic theory of quantum gravity.

### Covariantization and Feynman rules

In [5], the quadratic action \(S^{(2)}\) in (2.7) was covariantized simultaneously in photon and graviton backgrounds to determine the 1PI vertices involving two hard spinning particles and one or two photons/gravitons, up to subleading order in the expansion in the momenta of the photons/gravitons. Without going into too much detail, here we summarise the outcome of the covariantization prescription in a photon background and write down the Feynman rules for the vertices involving one and two photons. We derive the vertices for off-shell photons with a Feynman gauge-fixing term, such that the Feynman propagator for a virtual photon reads \[\Delta_{\mu\nu}^{F}(\ell)=\frac{-i}{\ell^{2}-i\epsilon}\ \eta_{\mu\nu}\,. \tag{3.1}\]
In position space the kinetic operator in (2.7) contains derivatives acting on the field \(\Phi_{\beta}(x)\), which have to be replaced by covariant derivatives under covariantization in the presence of the photon field \(A_{\mu}(x)\). For example, in the case of one and two derivatives the covariantization rules in position space become \[\partial_{\mu}\Phi_{\beta}\rightarrow\ D_{\mu}\Phi_{\beta} = (\partial_{\mu}\delta_{\beta}^{\gamma}-i\mathcal{Q}_{\beta}\ ^{\gamma}A_{\mu})\Phi_{\gamma}\, \tag{3.2}\] \[\partial_{\mu}\partial_{\nu}\Phi_{\beta}\rightarrow\ D_{(\mu}D_{ \nu)}\Phi_{\beta} = \big{[}\partial_{(\mu}\delta_{\beta}^{\gamma}-i\mathcal{Q}_{\beta} \ ^{\gamma}A_{(\mu}\big{]}\big{[}\partial_{\nu)}\delta_{\gamma}^{\delta}-i \mathcal{Q}_{\gamma}\ ^{\delta}A_{\nu)}\big{]}\Phi_{\delta}\] (3.3) \[= \partial_{\mu}\partial_{\nu}\Phi_{\beta}-i\mathcal{Q}_{\beta}\ ^{\gamma}(A_{\mu}\partial_{\nu}+A_{\nu}\partial_{\mu})\Phi_{\gamma}\] \[-\frac{i}{2}\mathcal{Q}_{\beta}\ ^{\gamma}(\partial_{\mu}A_{\nu}+ \partial_{\nu}A_{\mu})\Phi_{\gamma}-A_{\mu}A_{\nu}\mathcal{Q}_{\beta}\ ^{\gamma}\ \mathcal{Q}_{\gamma}\ ^{\delta}\Phi_{\delta}\.\] Above we use the symmetrization convention \(E_{(\mu}F_{\nu)}=\frac{1}{2}\big{(}E_{\mu}F_{\nu}+E_{\nu}F_{\mu}\big{)}\) for two vectors \(E\) and \(F\). For determining the minimal interaction vertices \(\Gamma^{(3)}\), involving two massive spinning particles and one photon with up to one derivative on the photon field, and \(\Gamma^{(4)}\), involving two massive spinning particles and two photons with no derivative on any of the photon fields, the covariantization of single and double derivatives carried out above is sufficient. In momentum space these covariantization rules generate the following minimal interaction actions starting from (2.7): \[S^{(3)} = \frac{1}{2}\int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2 \pi)^{4}}\frac{d^{4}\ell}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2}+\ell)\Phi_{\alpha}(q_{1}) \Big{[}-A^{\mu}(\ell)\frac{\partial{\cal K}^{\alpha\gamma}(q_{2})}{\partial q_ {2}^{\mu}}{\cal Q}_{\gamma}\ ^{\beta} \tag{3.4}\] \[-\frac{1}{4}\big{(}\ell^{\mu}A^{\nu}(\ell)+\ell^{\nu}A^{\mu}( \ell)\big{)}\frac{\partial^{2}{\cal K}^{\alpha\gamma}(q_{2})}{\partial q_{2}^ {\mu}\partial q_{2}^{\nu}}{\cal Q}_{\gamma}\ ^{\beta}\ +\ {\cal O}(\ell^{2})\Big{]}\Phi_{ \beta}(q_{2})\,\] and \[S^{(4)} = \frac{1}{2}\int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2 \pi)^{4}}\frac{d^{4}\ell_{1}}{(2\pi)^{4}}\frac{d^{4}\ell_{2}}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2}+\ell_{1}+\ell_{2}) \tag{3.5}\] \[\Phi_{\alpha}(q_{1})\Big{[}\frac{1}{2}A^{\mu}(\ell_{1})A^{\nu}( \ell_{2})\frac{\partial^{2}{\cal K}^{\alpha\delta}(q_{2})}{\partial q_{2}^{ \mu}\partial q_{2}^{\nu}}{\cal Q}_{\delta}\ ^{\gamma}{\cal Q}_{\gamma}\ ^{\beta}\ +\ {\cal O}(\ell_{1},\ell_{2}) \Big{]}\Phi_{\beta}(q_{2})\.\] Above, \(A_{\mu}(\ell)\) is the Fourier transform of the gauge field, defined through the relation \(A_{\mu}(x)\equiv\int\frac{d^{4}\ell}{(2\pi)^{4}}\ e^{i\ell\cdot x}A_{\mu}(\ell)\). At the order of one derivative on the gauge field, we can also have non-minimal couplings of the photon to the matter fields through the field strength.
In momentum space, the general form of the non-minimal interaction is \[\bar{S}^{(3)}=\frac{1}{2} \int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}} \frac{d^{4}\ell}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2}+\ell) \tag{3.6}\] \[\Phi_{\alpha}(q_{1})\ \Big{[}F_{\mu\nu}(\ell)\ {\cal B}^{\alpha\beta,\mu\nu}(q_{2}) \ +{\cal O}(\ell^{2})\Big{]}\Phi_{\beta}(q_{2})\,\] where \[F_{\mu\nu}(\ell)=i\big{[}\ell_{\mu}A_{\nu}(\ell)-\ell_{\nu}A_{\mu}(\ell)\big{]}\, \tag{3.7}\] and \({\cal B}(q_{2})\) satisfies the following relations: \[{\cal Q}_{\gamma}\ ^{\alpha}{\cal B}^{\gamma\beta,\mu\nu}(q_{2})+{ \cal B}^{\alpha\gamma,\mu\nu}(q_{2}){\cal Q}_{\gamma}\ ^{\beta}=0 \Rightarrow {\cal Q}^{T}{\cal B}^{\mu\nu}(q_{2})+{\cal B}^{\mu\nu}(q_{2}){\cal Q}=0\, \tag{3.8}\] \[{\cal B}^{\alpha\beta,\mu\nu}(q_{2})\ =\ -{\cal B}^{\alpha\beta,\nu\mu}(q_{2}) \Rightarrow {\cal B}^{\mu\nu}(q_{2})=-{\cal B}^{\nu\mu}(q_{2})\,\] (3.9) \[{\cal B}^{\alpha\beta,\mu\nu}(q_{2})\ =\ \pm\ {\cal B}^{\beta\alpha,\mu\nu}(-q_{1}-\ell) \Rightarrow {\cal B}^{\mu\nu}(q_{2})\ =\ \pm\ \big{(}{\cal B}^{\mu\nu}(-q_{1}-\ell)\big{)}^{T}. \tag{3.10}\] In the last equation above, the \(+\) sign is for Grassmann even fields and the \(-\) sign for Grassmann odd fields. Again, during the derivation we take the components of \(\Phi\) to be Grassmann even, but the final result is valid for both Grassmann even and odd fields. We have introduced index-free notation in all the equations above. Starting from (3.4) and (3.6), the interaction vertex describing two ingoing spinning particles with momenta \(q\) and \(-(q+\ell)\), polarization indices \(\alpha\) and \(\beta\), and one ingoing photon with momentum \(\ell\) becomes \[\Gamma^{(3)\alpha\beta}_{\mu}\big{(}q,-q-\ell,\ell\big{)} \tag{3.11}\] \[= \frac{i}{2}\Bigg{[}\frac{\partial{\cal K}^{\alpha\gamma}(-q-\ell)}{ \partial q^{\mu}}{\cal Q}_{\gamma}\ ^{\beta}-\frac{1}{2}\ell^{\nu}\frac{\partial^{2}{\cal K}^{\alpha\gamma}(-q- \ell)}{\partial q^{\mu}\partial q^{\nu}}{\cal Q}_{\gamma}\ ^{\beta}-2i\ell^{\nu}{\cal B}^{\alpha\beta}{}_{\mu\nu}(-q-\ell)\] \[-\frac{\partial{\cal K}^{\beta\gamma}(q)}{\partial q^{\mu}}{\cal Q }_{\gamma}\ ^{\alpha}-\frac{1}{2}\ell^{\nu}\frac{\partial^{2}{\cal K}^{\beta\gamma}(q)}{ \partial q^{\mu}\partial q^{\nu}}{\cal Q}_{\gamma}\ ^{\alpha}-2i\ell^{\nu}{\cal B}^{\beta\alpha}{}_{\mu\nu}(q)+{\cal O}(\ell^{2}) \Bigg{]}\.\] Expanding in the small-\(\ell\) limit and using (2.8), (2.19) and (3.10), the above vertex reduces to the following polarization-index-suppressed form: \[\Gamma^{(3)}_{\mu}\big{(}q,-q-\ell,\ell\big{)} = i\Bigg{[}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}{\cal Q}+ \frac{1}{2}\ell^{\nu}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\mu}\partial q ^{\nu}}{\cal Q}-2i\ell^{\nu}{\cal B}_{\mu\nu}(-q)\ +\ {\cal O}(\ell^{2}) \Bigg{]}. \tag{3.12}\] Similarly, starting from (3.5), the polarization-index-suppressed four-point interaction vertex involving two incoming spinning particles with momenta \(q\) and \(-(q+\ell_{1}+\ell_{2})\) and two incoming photons with momenta \(\ell_{1}\) and \(\ell_{2}\) becomes \[\Gamma^{(4)}_{\mu\nu}\big{(}q,-q-\ell_{1}-\ell_{2},\ell_{1},\ell_{2}\big{)} = i\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\mu}\partial q^{\nu}}{\cal Q} {\cal Q}+\ {\cal O}(\ell_{1},\ell_{2})\,. \tag{3.13}\]
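As a quick sanity check of (3.12) and (3.13) (a sketch for purely minimal coupling, \({\cal B}=0\), using the real scalar doublet of section 2 with \({\cal K}(q)=-(q^{2}+m^{2})\)): \[\Gamma^{(3)}_{\mu}\big{(}q,-q-\ell,\ell\big{)}=i\big{[}-2q_{\mu}-\ell_{\mu}\big{]}{\cal Q}=-i(2q+\ell)_{\mu}\,{\cal Q}\,\qquad\Gamma^{(4)}_{\mu\nu}=-2i\,\eta_{\mu\nu}\,{\cal Q}^{2}\,\] which, on charge eigenstates where \({\cal Q}\to e\), reproduces the familiar scalar-QED three-point vertex and seagull up to overall phase conventions.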
We denote by \(\Gamma^{(N)}\) the scattering amplitude describing the scattering of \(N\) spinning hard particles in the massive EFT, which can be expressed in polarization-tensor-contracted form in the following way: \[\Gamma^{(N)} = \Big{\{}\prod_{i=1}^{N}\epsilon_{\alpha_{i}}(p_{i})\Big{\}}\ \Gamma^{(N)\alpha_{1}\alpha_{2}\cdots\alpha_{N}} \tag{3.14}\] \[= \epsilon_{\alpha_{i}}(p_{i})\epsilon_{\alpha_{j}}(p_{j})\ \Gamma^{(N)\alpha_{i}\alpha_{j}}_{(ij)}(p_{i},p_{j})\] \[= \epsilon_{\alpha_{i}}(p_{i})\Gamma^{(N)\alpha_{i}}_{(i)}(p_{i})\,\] where \[\Gamma^{(N)\alpha_{i}\alpha_{j}}_{(ij)}(p_{i},p_{j}) \equiv \Big{\{}\prod_{\begin{subarray}{c}k=1\\ k\neq i,j\end{subarray}}^{N}\epsilon_{\alpha_{k}}(p_{k})\Big{\}}\ \Gamma^{(N)\alpha_{1}\alpha_{2}\cdots\alpha_{N}}\, \tag{3.15}\] \[\Gamma^{(N)\alpha_{i}}_{(i)}(p_{i}) \equiv \Big{\{}\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{\alpha_{j}}(p_{j})\Big{\}}\ \Gamma^{(N)\alpha_{1}\alpha_{2}\cdots\alpha_{N}}. \tag{3.16}\] Above, \(\epsilon_{\alpha_{i}}(p_{i})\) denotes the polarization tensor of the \(i\)-th spinning particle with momentum \(p_{i}\). We should also remember that \(\Gamma^{(N)}\) is a distribution, as it contains a momentum-conserving delta function \(\delta^{(4)}\big{(}p_{1}+p_{2}+\cdots+p_{N}\big{)}\). We denote by \(\widetilde{\Gamma}^{(N+1)}_{\mu}(\ell)\) the part of the amputated Green's function involving \(N\) spinning hard particles and one photon with momentum \(\ell\) that sums the contributions of the Feynman diagrams in which the photon is not connected to any external leg. Since \(\widetilde{\Gamma}^{(N+1)}_{\mu}(\ell)\) does not contain any loop involving massless particles, we can write down the following relation between \(\widetilde{\Gamma}^{(N+1)}_{\mu}(\ell)\) and \(\Gamma^{(N)}\) using the same covariantization prescription described above: \[\widetilde{\Gamma}^{(N+1)\alpha_{1}\cdots\alpha_{N}}_{\mu}(\ell)=-\sum_{i=1}^{N }\mathcal{Q}_{\beta_{i}}\ ^{\alpha_{i}}\ \frac{\partial}{\partial p_{i}^{\mu}}\ \Gamma^{(N)\alpha_{1}\cdots\alpha_{i-1}\beta_{i} \alpha_{i+1}\cdots\alpha_{N}}\ +\ \mathcal{O}(\ell). \tag{3.17}\] In the above expression \(\Gamma^{(N)}\) contains the momentum-conserving delta function \(\delta^{(4)}\big{(}p_{1}+p_{2}+\cdots+p_{N}\big{)}\), while \(\widetilde{\Gamma}^{(N+1)}\) contains the momentum-conserving delta function \(\delta^{(4)}\big{(}p_{1}+p_{2}+\cdots+p_{N}+\ell\big{)}\).

### Grammer-Yennie decomposition and IR-finite amplitudes

In this section we discuss the Grammer-Yennie decomposition introduced in [1; 51] and show how it factorizes the IR-divergent exponential from the IR-finite part of the amplitudes, as proposed in (2.1). In Feynman gauge we decompose the photon propagator with momentum \(\ell\) flowing from leg \(i\) to leg \(j\), for \(i\neq j\), as \[\Delta^{\mu\nu}_{F}(\ell)=-i\frac{\eta^{\mu\nu}}{\ell^{2}-i\epsilon}\,=\,- \frac{i}{\ell^{2}-i\epsilon}\Big{\{}K^{\mu\nu}_{(ij)}+G^{\mu\nu}_{(ij)}\Big{\}}\, \tag{3.18}\] where \[K^{\mu\nu}_{(ij)}=\ell^{\mu}\ell^{\nu}\frac{(2p_{i}-\ell)\cdot(2p_{j}+\ell)}{ (2p_{i}\cdot\ell-\ell^{2}+i\epsilon)(2p_{j}\cdot\ell+\ell^{2}-i\epsilon)}\,\quad G^{\mu\nu}_{(ij)}=\eta^{\mu\nu}-K^{\mu\nu}_{(ij)}\,. \tag{3.19}\]
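Since \(K^{\mu\nu}_{(ij)}\propto\ell^{\mu}\ell^{\nu}\), the K-photon is pure gauge. Moreover, in the region \(|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\) the K-photon part of the propagator reduces to the eikonal form (a schematic limit, shown here for orientation) \[-\frac{i}{\ell^{2}-i\epsilon}\,K^{\mu\nu}_{(ij)}\ \simeq\ -\frac{i}{\ell^{2}-i\epsilon}\ \ell^{\mu}\ell^{\nu}\ \frac{p_{i}\cdot p_{j}}{(p_{i}\cdot\ell+i\epsilon)(p_{j}\cdot\ell-i\epsilon)}\,\] whose loop integral is precisely the exponent \(K_{em}\) that appears below in (3.28).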
\tag{3.19}\] Note that \(p_{i}\) and \(p_{j}\) refer to the external momenta flowing into the legs \(i\) and \(j\), and not necessarily the momenta of the lines to which the photon propagator attaches (which may have additional contribution from external soft photon momentum or internal virtual photon momentum). For \(i=j\) we do not carry out any decomposition i.e. for the virtual photon involved in the self energy loop we do not carry out the KG-decomposition of (3.18). Also if one or both ends of the virtual photon propagator are attached to any internal massive particle propagator carrying sum of two or more external massive particle's momenta, or vertices of the massive EFT involving more than two massive particles in a loop diagram, then we do not need to perform KG-decomposition as those loop diagrams are IR-finite. In (3.18) the propagator part containing \(K_{(ij)}\) will be denoted by K-photon propagator and the propagator part containing \(G_{(ij)}\) will be denoted by G-photon propagator throughout this section. Ward identities involving K-photon:Since K-photon propagator is proportional to \(\ell^{\mu}\ell^{\nu}\) i.e. pure gauge, we can study the Ward identity for an off-shell un-amputated three particle Green's function with one leg being the K-photon as drawn in Fig.1. The LHS of Fig.1 after contracting with \(\ell^{\mu}\) representing K-photon can be expressed as \[\frac{1}{q_{i}^{2}+m_{i}^{2}-i\epsilon}\ \Xi_{i}(-q_{i})\ \ell^{ \mu}\Gamma^{(3)}_{\mu}(q_{i},-q_{i}-\ell,\ell)\ \Xi_{i}(-q_{i}-\ell)\ \frac{1}{(q_{i}+\ell)^{2}+m_{i}^{2}-i\epsilon}\] \[= i\frac{1}{q_{i}^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(q_{i}+\ell)^{2} +m_{i}^{2}-i\epsilon}\Bigg{[}-i(2q_{i}.\ell+\ell^{2})\Xi_{i}(-q_{i})\mathcal{ Q}_{i}^{T}\] \[+\Xi_{i}(-q_{i}){\cal K}_{i}(-q_{i})\Bigg{\{}\ell^{\mu}\frac{\partial\Xi_{i}(-q_{i })}{\partial q_{i}^{\mu}}+\frac{1}{2}\ell^{\mu}\ell^{\nu}\frac{\partial^{2}\Xi_{ i}(-q_{i})}{\partial q_{i}^{\mu}\partial q_{i}^{\nu}}\ +\ {\cal O}(\ell^{3})\Bigg{\}}{\cal Q}_{i}^{T}\Bigg{]}. \tag{3.20}\] To write down the Feynman rule in the first line above for the diagram in Fig.1, we follow the convention that our time arrow runs from right to left. We shall follow the same time arrow convention for all the Feynman diagrams in this article. To evaluate the first line, with the expression in (3.12) we use the properties (2.11),(2.12),(2.19) and (2.20). Now replacing \((2q_{i}.\ell+\ell^{2})=\{(q_{i}+\ell)^{2}+m_{i}^{2}\}-\{q_{i}^{2}+m_{i}^{2}\}\) within the square bracket above and using the property (2.13), the RHS of the above expression translates to \[\frac{\Xi_{i}(-q_{i}){\cal Q}_{i}^{T}}{q_{i}^{2}+m_{i}^{2}-i \epsilon}-\frac{\Xi_{i}(-q_{i}){\cal Q}_{i}^{T}}{(q_{i}+\ell)^{2}+m_{i}^{2}-i\epsilon}\] \[-\frac{1}{(q_{i}+\ell)^{2}+m_{i}^{2}-i\epsilon}\Bigg{\{}\ell^{\mu }\frac{\partial\Xi_{i}(-q_{i})}{\partial q_{i}^{\mu}}+\frac{1}{2}\ell^{\mu} \ell^{\nu}\frac{\partial^{2}\Xi_{i}(-q_{i})}{\partial q_{i}^{\mu}\partial q_{ i}^{\nu}}\ +\ {\cal O}(\ell^{3})\Bigg{\}}{\cal Q}_{i}^{T}. \tag{3.21}\] If we un-do the small \(\ell\) expansion in the second line of the above expression7 and use the relation (2.20) the Ward identity turns out to be the following expression which has been diagrammatically represented in Fig.1. Footnote 7: Instead of (3.12), if we use the unexpanded expression (3.11) for \(\Gamma^{(3)}\) vertex, we do not need to un-do the small \(\ell\) expansion to derive the result below. 
\[\frac{1}{q_{i}^{2}+m_{i}^{2}-i\epsilon}\ \Xi_{i}(-q_{i})\ \ell^{\mu} \Gamma^{(3)}_{\mu}(q_{i},-q_{i}-\ell,\ell)\ \Xi_{i}(-q_{i}-\ell)\ \frac{1}{(q_{i}+\ell)^{2}+m_{i}^{2}-i\epsilon}\] \[= {\cal Q}_{i}\frac{\Xi_{i}(-q_{i}-\ell)}{(q_{i}+\ell)^{2}+m_{i}^{ 2}-i\epsilon}\ +\ \frac{\Xi_{i}(-q_{i})}{q_{i}^{2}+m_{i}^{2}-i\epsilon}{\cal Q}_{i}^{T}. \tag{3.22}\] It is important to note that the solid-blob vertices in Fig.1 carry only the information of the charge of the particle with which the K-photon interacts: the Feynman rules for the blob vertices are independent of the momenta and of any other data of the theory.

Figure 1: Feynman diagrammatic representation of the expression in (3.22). Solid lines represent the massive spinning particles, the dashed line represents the ingoing virtual photon with momentum \(\ell\), and the arrow on the photon line indicates that it is a K-photon (pure-gauge part contracted). The solid blobs on the RHS represent a new kind of vertex, and the Feynman rules for those vertices are just \({\cal Q}_{i}\) and \({\cal Q}_{i}^{T}\), as written next to the vertices.

We also need to study the consequence of the Ward identity for the insertion of a K-photon in the presence of an external off-shell photon with momentum \(k\) and Lorentz index \(\nu\). The set of Feynman diagrams describing the four-point un-amputated Green's function with one photon and one K-photon is drawn in the first line of Fig.2. Using the Ward identity of Fig.1 for the first and third diagrams in the first line of Fig.2, we obtain the diagrams drawn after the equality in Fig.2. Now, if we can show that the sum of the contributions of the three diagrams in the last line of Fig.2 vanishes, then the Ward identity of Fig.1 is also valid in the presence of an external photon line. The three diagrams in the last line of Fig.2 are drawn again in Fig.3, and their contribution is \[\frac{1}{q_{i}^{2}+m_{i}^{2}-i\epsilon}\ \Xi_{i}(-q_{i})\Big{[} \mathcal{Q}_{i}^{T}\ \Gamma_{\nu}^{(3)}(q_{i}+\ell,-q_{i}-\ell-k,k)+\ \ell^{\mu}\ \Gamma_{\mu\nu}^{(4)}(q_{i},-q_{i}-\ell-k,\ell,k)\] \[+\ \Gamma_{\nu}^{(3)}(q_{i},-q_{i}-k,k)\ \mathcal{Q}_{i}\Big{]}\ \Xi_{i}(-q_{i}-\ell-k) \frac{1}{(q_{i}+\ell+k)^{2}+m_{i}^{2}-i\epsilon}. \tag{3.23}\] Substituting the expressions for the vertices from (3.12) and (3.13), the expression inside the square bracket of (3.23) turns out to be \[i\mathcal{Q}_{i}^{T}\Bigg{[}\frac{\partial\mathcal{K}_{i}(-q_{i} -\ell)}{\partial q_{i}^{\nu}}\mathcal{Q}_{i}+\frac{1}{2}k^{\mu}\frac{\partial ^{2}\mathcal{K}_{i}(-q_{i}-\ell)}{\partial q_{i}^{\mu}\partial q_{i}^{\nu}} \mathcal{Q}_{i}-2ik^{\mu}\mathcal{B}_{\nu\mu}^{i}(-q_{i}-\ell)\Bigg{]}\] \[+\ i\ell^{\mu}\ \frac{\partial^{2}\mathcal{K}_{i}(-q_{i})}{ \partial q_{i}^{\mu}\partial q_{i}^{\nu}}\ \mathcal{Q}_{i}\mathcal{Q}_{i}\] \[+i\Bigg{[}\frac{\partial\mathcal{K}_{i}(-q_{i})}{\partial q_{i}^ {\nu}}\mathcal{Q}_{i}+\frac{1}{2}k^{\mu}\frac{\partial^{2}\mathcal{K}_{i}(-q_ {i})}{\partial q_{i}^{\mu}\partial q_{i}^{\nu}}\mathcal{Q}_{i}-2ik^{\mu} \mathcal{B}_{\nu\mu}^{i}(-q_{i})\Bigg{]}\mathcal{Q}_{i}. \tag{3.24}\]

Figure 2: Diagrams in the first line represent the contribution to the four-point un-amputated Green's function with one photon and one K-photon. Using the Ward identity of Fig.1, one obtains the diagrams in the second and third lines.
By Taylor expanding the first line above for small \(\ell\) and keeping only terms up to linear order in \(\ell\) or \(k\), we can use the identities (2.19) and (3.8) to show that the sum of the contributions in the three lines above vanishes up to linear order in \(\ell\) or \(k\). This proves the diagrammatic identity in Fig.3. Since the expressions for the vertices in (3.12) and (3.13) are given only up to linear and zeroth order in \(\ell\) or \(k\) respectively, it may initially appear that the validity of the result in Fig.3 is limited to linear order in \(\ell\) or \(k\). However, the results depicted in Fig.3 and Fig.1 hold true to all orders in the expansion in small \(\ell\) and \(k\). These results play a crucial role in establishing the gauge invariance of any amplitude involving external photons in quantum electrodynamics, as they are connected to the Ward-Takahashi identity of QED. For further details on the spinor-QED case, see [51] and section-(7.4) of [55]. For an un-amputated Green's function with two massive spinning particles and an arbitrary number of external photon legs, one insertion of a K-photon in all possible ways finally reduces to a sum over sets of diagrams where the K-photon is attached at the ends of the spinning-particle legs with the solid-blob vertices discussed above. This strong statement can be proved using the identity in Fig.1 and the generalized identity in Fig.4. The identity in Fig.4 is a straightforward generalization of the example discussed in Fig.3, and it has been tested with the covariantized vertices up to linear order in the photon-momentum expansion for the \(\Gamma^{(n)}\) and \(\Gamma^{(n+1)}\) vertices with \(n=3\).

Figure 3: Identity involving the last three Feynman diagrams in Fig.2, whose mathematical expression is written in (3.23).

Figure 4: Generalization of the identity in Fig.3 in the presence of \(n\) photons.

**IR-finite amplitudes:** As defined earlier, \({\cal A}^{(N)}\) represents the all-loop scattering amplitude with \(N\) external massive spinning particles, and \({\cal A}^{(N+1)}\) the all-loop scattering amplitude with \(N\) external massive spinning particles and one external photon. If the massive spinning particles carry definite charges \(\{e_{i}\}\), then the following identity holds: \[{\cal Q}_{\alpha_{i}}\ ^{\beta_{i}}\ \epsilon_{i\beta_{i}}(p_{i})=\ e_{i}\ \epsilon_{i\alpha_{i}}(p_{i})\ \Rightarrow\ {\cal Q}\epsilon_{i}(p_{i})=e_{i}\epsilon_{i}(p_{i})\,\ \epsilon_{i}^{T}(p_{i}){\cal Q}^{T}=e_{i} \epsilon_{i}^{T}(p_{i}). \tag{3.25}\] The K-photon insertion Ward identities of Fig.1 and Fig.4 imply the exponentiation of the one-loop K-photon contribution \(K_{em}\) in (2.1), as proven in [51]. However, the proof of this exponentiation in [51] is valid only when the tree-level amplitude of the massive EFT, \(\Gamma^{(N)}\), is independent of the momenta of the scattering particles, i.e. when it is described by a momentum-independent contact interaction between \(N\) massive fields, as considered in [1].
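For orientation, a hypothetical example of such a vertex (our illustration, not the specific interaction used in [1]): for even \(N\), a contact term \[S_{int}=\frac{\lambda}{N!}\int d^{4}x\ \big{(}\Phi^{T}(x)\Phi(x)\big{)}^{N/2}\] produces a tree-level \(\Gamma^{(N)}\propto\lambda\,(2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N})\) times polarization contractions, with no dependence on the individual momenta; this is exactly the situation in which the all-order exponentiation proof of [51] applies.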
The validity of the exponentiation of \(K_{em}\) in a generic theory of QED, incorporating all possible interactions and without invoking any kind of approximation (such as assuming that the virtual photon momenta are significantly smaller than the external massive particle momenta), would be too good to be true. (We are thankful to P.V. Athira for extensive discussions on this topic.) However, in the limit of small virtual photon momenta the IR-divergent piece of \(K_{em}\) does exponentiate, which is known as the leading eikonal exponentiation. The final outcome of the Grammer-Yennie decomposition (3.18) of the virtual photon propagator is \[{\cal A}^{(N)}\equiv\exp\{K_{em}\}\ {\cal A}^{(N)}_{\text{IR-finite}}\, \tag{3.26}\] \[{\cal A}^{(N+1)}\equiv\exp\{K_{em}\}\ {\cal A}^{(N+1)}_{\text{IR- finite}}\, \tag{3.27}\] where \[K_{em}=\ \frac{i}{2}\sum_{i=1}^{N}\ \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}e_{j}\ \int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{(2p_{i}-\ell)\cdot(2p_{j}+\ell)}{(2p_{i} \cdot\ell-\ell^{2}+i\epsilon)(2p_{j}\cdot\ell+\ell^{2}-i\epsilon)}. \tag{3.28}\] In equations (3.26) and (3.27), \({\cal A}^{(N)}_{\text{IR-finite}}\) and \({\cal A}^{(N+1)}_{\text{IR-finite}}\) represent the infrared-finite components of the \(N\)-particle and \(N\)-particle-1-photon amplitudes, respectively. These components are obtained by removing the exponentiated IR-divergent parts from the original divergent amplitudes, as defined through the relations (3.26) and (3.27). Both \({\cal A}^{(N)}_{\text{IR-finite}}\) and \({\cal A}^{(N+1)}_{\text{IR-finite}}\) comprise contributions from the corresponding tree-level amplitudes and from loop amplitudes to all orders in perturbation theory. However, there is a condition: if both ends of a virtual photon propagator are connected to external massive spinning-particle lines (which may already contain additional real or virtual photon lines), then this photon propagator should be replaced by a G-photon propagator when we evaluate the IR-finite parts. Additionally, the same set of diagrams has to be evaluated with the K-photon propagator as well, and then \(K_{em}\) times the IR-finite amplitude at one lower loop order has to be subtracted. In our convention the tree-level amplitudes are given by \[{\cal A}^{(N)}_{\text{IR-finite},0}=\Gamma^{(N)}\,\ {\cal A}^{(N+1)}_{ \text{IR-finite},0}=\Gamma^{(N+1)}\, \tag{3.29}\] where the subscript '0' corresponds to 0-loop, i.e. tree level. At one-loop order, \({\cal A}^{(N)}_{\text{IR-finite}}\) and \({\cal A}^{(N+1)}_{\text{IR-finite}}\) are given by: \[{\cal A}^{(N)}_{\text{IR-finite},1} \equiv \left[{\cal A}^{(N)}_{G,1}+{\cal A}^{(N)}_{K-\text{finite},1}+{ \cal A}^{(N)}_{\text{self},1}+{\cal A}^{(N)}_{\text{non-div},1}\right]\,, \tag{3.30}\] \[{\cal A}^{(N+1)}_{\text{IR-finite},1} \equiv \left[{\cal A}^{(N+1)}_{G,1}+{\cal A}^{(N+1)}_{K-\text{finite},1} +{\cal A}^{(N+1)}_{\text{self},1}+{\cal A}^{(N+1)}_{\text{non-div},1}\right]\,, \tag{3.31}\] where the subscript '1' corresponds to 1-loop. Above: 1. \({\cal A}^{(N)}_{G,1}\) corresponds to the diagram in Fig.5, evaluated with the G-photon propagator. 2. \({\cal A}^{(N)}_{K-\text{finite},1}\) corresponds to the contribution of the diagram in Fig.5, evaluated with the K-photon propagator and with the contribution of \(K_{em}\times\Gamma^{(N)}\) subtracted at the integrand level. 3. \({\cal A}^{(N)}_{\text{self},1}\) represents the set of Feynman diagrams in Fig.6, evaluated with the full photon propagator.
4. \({\cal A}^{(N)}_{\text{non-div},1}\) represents the set of diagrams in Fig.7, evaluated with the full photon propagator. 5. \({\cal A}^{(N+1)}_{G,1}\) corresponds to the set of diagrams in Fig.8, evaluated with the G-photon propagator. 6. \({\cal A}^{(N+1)}_{K-\text{finite},1}\) corresponds to the contribution of the diagrams in Fig.8, evaluated with the K-photon propagator and with the contribution \(K_{em}\times\Gamma^{(N+1)}\) subtracted at the integrand level. 7. \({\cal A}^{(N+1)}_{\text{self},1}\) represents the set of Feynman diagrams in Fig.9, evaluated with the full photon propagator. 8. \({\cal A}^{(N+1)}_{\text{non-div},1}\) represents the set of diagrams in Fig.11, evaluated with the full photon propagator.

### Derivation of soft photon theorem

The goal here is to derive the order \(\omega^{-1}\) and \(\ln\omega\) soft factors from the ratio of \({\cal A}^{(N+1)}_{\text{IR-finite}}\) and \({\cal A}^{(N)}_{\text{IR-finite}}\) when the external photon energy is small, i.e. \(\omega<<|p_{i}^{\mu}|\).

#### 3.3.1 IR-finite one loop \(N\)-particle amplitude

First we analyze all the Feynman diagrams contributing to (3.30) and show that the result is free of IR divergences; we also provide an explicit integral expression for the IR-finite 1-loop amplitude \({\cal A}^{(N)}_{\text{IR-finite},1}\). Using the Feynman rules derived in section-3.1, the diagram in Fig.5 with the G-photon propagator contributes \[{\cal A}^{(N)}_{G,1} = \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\] \[\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu}(p_{i},-p_{i}+\ell,- \ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\ G^{\mu\nu}_{(ij)}(\ell) \tag{3.32}\] \[\times\Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\nu}(p_{j},-p_ {j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\ \Gamma^{(N)}_{(ij)}(p_{i}-\ell,p_{j}+\ell)\] \[= \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}8e_{i}e_{j}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^ {4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\] \[\times\Big{\{}\Big{[}p_{i}^{\mu}\ell^{\nu}{\cal N}^{j}_{\mu \nu}(-p_{j})\ -\ p_{j}^{\mu}\ell^{\nu}{\cal N}^{i}_{\mu\nu}(-p_{i})\Big{]}\ \Gamma^{(N)}_{(ij)}(p_{i},p_{j})+{\cal O}(\ell\ell) \Big{\}}\,\] where to get the last two lines we used the identity (A.9) for both the \(i\)-th and \(j\)-th particles and Taylor expanded the numerator in the limit \(|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\). The expression for \({\cal N}^{i}(-p_{i})\) is given in (A.3).
On the other hand, the contribution of \({\cal A}^{(N)}_{K-{\rm finite},1}\) from the diagram in Fig.5 is \[{\cal A}^{(N)}_{K-{\rm finite},1} = \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon} \tag{3.33}\] \[\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu}(p_{i},-p_{i}+ \ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\ K^{\mu\nu}_{(ij)}(\ell)\] \[\times\Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\nu}(p_{j},-p_ {j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\ \Gamma^{(N)}_{(ij)}(p_{i}-\ell,p_{j}+\ell)\] \[-\ K_{em}\times\Gamma^{(N)}\.\] Evaluating this expression using the identity (A.9) and Taylor expanding \(\Gamma^{(N)}_{(ij)}(p_{i}-\ell,p_{j}+\ell)\) in the limit \(|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\), we get \[{\cal A}^{(N)}_{K-{\rm finite},1} = i\ \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}e_{i}e_{j}\ \epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^ {4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{(2p_{i}-\ell)\cdot(2p_{j}+\ell)}{(2p_{i} \cdot\ell-\ell^{2}+i\epsilon)(2p_{j}\cdot\ell+\ell^{2}-i\epsilon)} \tag{3.34}\] \[\times\Big{[}-\ell^{\rho}\frac{\partial}{\partial p_{i}^{\rho}} \Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ell^{\rho}\frac{\partial}{\partial p_{j}^{ \rho}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+{\cal O}(\ell\ell)\Big{]}\.\]

Figure 5: Diagram contributing to \({\cal A}^{(N)}_{G,1}\), where the virtual photon propagator is the G-photon propagator connected between two external hard-particle lines. The same diagram also contributes to \({\cal A}^{(N)}_{K-{\rm finite},1}\) when evaluated with the K-photon propagator, with \(K_{em}\Gamma^{(N)}\) subtracted from its contribution.

The diagrams in Fig.6 renormalize the massive spinning-particle propagators in the presence of the electromagnetic interaction, and all the loops are IR-finite. Say the three diagrams in Fig.6 contribute \[\epsilon_{i}^{T}F_{1}\frac{\Xi_{i}(-p_{i})}{p_{i}^{2}+m_{i}^{2}} \Gamma^{(N)}_{(i)}(p_{i})\,\ \epsilon_{i}^{T}F_{2}\frac{\Xi_{i}(-p_{i})}{p_{i}^{2}+m_{i}^{2}} \Gamma^{(N)}_{(i)}(p_{i})\,\ \epsilon_{i}^{T}C\frac{\Xi_{i}(-p_{i})}{p_{i}^{2}+m_{i}^{2}} \Gamma^{(N)}_{(i)}(p_{i}) \tag{3.35}\] respectively, where \(F_{1},F_{2},C\) are unknown constant matrices, related by the on-shell renormalization condition \[C=-F_{1}-F_{2}. \tag{3.36}\] Hence the on-shell renormalization condition implies \[\mathcal{A}^{(N)}_{\text{self},1}=0. \tag{3.37}\]

Figure 6: Diagrams contributing to \(\mathcal{A}^{(N)}_{\text{self},1}\), where the virtual photon propagator is the full photon propagator. In the last diagram the cross corresponds to a counter term, which cancels the UV divergences.

In Fig.7 we draw the sets of diagrams where one or both ends of the photon loop are attached to an internal massive virtual line or to a massive EFT vertex inside \(\Gamma^{(N)}\). These diagrams are also IR-finite. For example, the first diagram in Fig.7 with the full photon propagator is \[\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\ \eta^{\mu\nu}\] \[\times\epsilon_{i}^{T}(-p_{i})\Big{[}\Gamma^{(3)}_{\mu}(p_{i},-p _{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\widetilde{\Gamma}^{(N+1)}_{(i)\nu}(p_{i} -\ell;\ell)\Big{]}. \tag{3.38}\] Using the identity (A.9) for the \(i\)-th particle and the Feynman rule (3.17), it is evident that in the limit of vanishing loop momentum \(\ell^{\mu}\to 0\) the integration behaves as \(\int\frac{d^{4}\ell}{|\ell|^{3}}\), hence is IR-finite.
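The underlying power counting is elementary (a schematic estimate; \(\lambda\) is an IR cutoff introduced here only for illustration): \[\int_{|\ell|>\lambda}\frac{d^{4}\ell}{|\ell|^{3}}\ \sim\ \int_{\lambda}d|\ell|\ \frac{|\ell|^{3}}{|\ell|^{3}}\ =\ {\cal O}(\lambda^{0})\,\qquad\text{while}\qquad\int_{|\ell|>\lambda}\frac{d^{4}\ell}{|\ell|^{4}}\ \sim\ \ln\frac{1}{\lambda}\,\] so an integrand diverging only as \(|\ell|^{-3}\) near \(\ell^{\mu}=0\) is IR-finite, whereas \(|\ell|^{-4}\) behaviour is the source of the logarithms encountered below.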
Figure 7: Diagrams contributing to \(\mathcal{A}^{(N)}_{\text{non-div},1}\), where the virtual photon propagator is the full photon propagator with one or both ends connected to \(\Gamma^{(N)}\). In the last diagram the cross corresponds to a counter term, which cancels the UV divergences.

This is the reason we call the set of diagrams in Fig.7 \({\cal A}^{(N)}_{\rm non-div,1}\): they are free of IR divergences. We do not need to evaluate the contribution \({\cal A}^{(N)}_{\rm non-div,1}\) explicitly to derive the soft photon theorem. Now, summing the contributions of (3.32), (3.34), (3.37) and \({\cal A}^{(N)}_{\rm non-div,1}\), we get \[{\cal A}^{(N)}_{\rm IR-finite,1} = \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}8e_{i}e_{j}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{ 4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon} \tag{3.39}\] \[\times\Big{\{}\Big{[}p_{i}^{\mu}\ell^{\nu}{\cal N}^{j}_{\mu\nu}( -p_{j})\ -\ p_{j}^{\mu}\ell^{\nu}{\cal N}^{i}_{\mu\nu}(-p_{i})\Big{]}\ \Gamma^{(N)}_{(ij)}(p_{i},p_{j})+{\cal O}(\ell \ell)\Big{\}}\] \[+i\ \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}e_{i}e_{j}\ \epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{ 4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{(2p_{i}-\ell) \cdot(2p_{j}+\ell)}{(2p_{i}\cdot\ell-\ell^{2}+i\epsilon)(2p_{j}\cdot\ell+\ell^ {2}-i\epsilon)}\] \[\times\Big{[}-\ell^{\rho}\frac{\partial}{\partial p_{i}^{\rho}} \Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ell^{\rho}\frac{\partial}{\partial p_{j}^{ \rho}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+{\cal O}(\ell\ell)\Big{]}\] \[+\ {\cal A}^{(N)}_{\rm non-div,1}\.\] From the above expression it is clear that the loop integrals are infrared finite in the limit as \(\ell\) approaches zero. Therefore, the Grammer-Yennie prescription offers a clean definition of an IR-finite S-matrix; the above expression provides it explicitly at one-loop order in a general theory of quantum electrodynamics (QED).

#### 3.3.2 IR-finite one loop \((N+1)\)-particle amplitude in the soft limit

Here we analyze all the Feynman diagrams contributing to (3.31) in the soft limit, i.e. \(\omega\to 0\).
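Throughout the soft expansion it is useful to keep in mind the elementary kinematic fact (for the on-shell soft photon momentum, \(k^{2}=0\), and on-shell \(p_{i}\), \(p_{i}^{2}=-m_{i}^{2}\)): \[(p_{i}+k)^{2}+m_{i}^{2}\ =\ 2p_{i}\cdot k\ =\ {\cal O}(\omega)\,\] so each attachment of the soft photon to an external leg supplies one inverse power of \(\omega\), which is the origin of the leading \(\omega^{-1}\) behaviour of the diagrams analyzed below.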
Let us start by analyzing the first diagram in Fig.8 with the G-photon propagator, which after using the Feynman rules has the expression \[A_{I} \equiv \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}+k)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell )^{2}+m_{j}^{2}-i\epsilon} \tag{3.40}\] \[\Big{[}\epsilon_{i}^{T}(-p_{i})\varepsilon^{\rho}(k)\Gamma^{(3)} _{\rho}(p_{i},-p_{i}-k,k)\Xi_{i}(-p_{i}-k)\Gamma^{(3)}_{\mu}(p_{i}+k,-p_{i}-k +\ell,-\ell)\Xi_{i}(-p_{i}-k+\ell)\Big{]}\] \[\times\ G^{\mu\nu}_{(ij)}(\ell)\ \Big{[}\epsilon_{j}^{T}(-p_{j}) \Gamma^{(3)}_{\nu}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\ \Gamma^{(N)}_{(ij)}(p_{i}+k-\ell,p_{j}+\ell)\.\] Substituting the results (A.4) and (A.9) in the above expression, after some manipulation we get \[A_{I} = \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}^{2}e_{j}\epsilon_{i}^{T}\epsilon_{j}^{T}\frac{ 1}{(p_{i}+k)^{2}+m_{i}^{2}-i\epsilon}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\] \[\Bigg{[}\ \cdots\ +4ip_{i}.k\varepsilon^{\rho}p_{j}^{\sigma}\frac{\partial{\cal K}_{i}(- p_{i})}{\partial p_{i}^{\rho}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{ \sigma}}-\frac{4p_{i}.p_{j}}{2p_{i}.\ell-\ell^{2}+i\epsilon}\Big{\{}4\varepsilon.p_{i}\ell.k+4p_{i}.\ell(\varepsilon^{\rho}k^{\sigma}-\varepsilon^{\sigma}k^{ \rho}){\cal N}_{\rho\sigma}^{i}(-p_{i})\] \[+2ip_{i}.k\varepsilon^{\rho}\ell^{\sigma}\frac{\partial{\cal K}_{ i}(-p_{i})}{\partial p_{i}^{\rho}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{ \sigma}}\Big{\}}\ +{\cal O}(\ell\ell,k\ell,kk)\Bigg{]}\Big{\{}\Gamma^{(N)}_{(ij)}(p_{i},p_{j}) +{\cal O}(\ell,k)\Big{\}}. \tag{3.41}\] Inside the square bracket of the numerator in the above expression we keep only the terms up to linear order in \(\ell\) or \(k\), since our vertices have been derived only up to that order. Note that the above expression is IR-finite in the limit \(\ell^{\mu}\to 0\), and in the region of integration \(|\ell^{\mu}|<<\omega<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\) it contributes at order \({\cal O}(\omega^{0})\). To extract the \(\ln\omega\) contribution we approximate the integrand in the integration range \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\); in this range we may use \[\frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i\epsilon}\simeq\frac{1}{(p_{i}-\ell)^{ 2}+m_{i}^{2}-i\epsilon}\Big{[}1+\frac{p_{i}.k}{p_{i}.\ell+i\epsilon}\Big{]}. \tag{3.42}\]

Figure 8: Set of 1-loop diagrams contributing to \({\cal A}^{(N+1)}_{G,1}\), where the virtual photon propagator is the G-photon propagator connected between two external particle lines. We have omitted the diagrams involving counter terms that remove UV divergences. One needs to sum over all possible external legs while evaluating the contributions of these diagrams. When these diagrams are evaluated with the K-photon propagator, with \(K_{em}\Gamma^{(N+1)}\) subtracted from their contribution, they also contribute to \({\cal A}^{(N+1)}_{K-{\rm finite},1}\).
With this approximation, the order \(\omega^{-1}\) and \(\ln\omega\) contributions turn out to be \[A_{I} = \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}\frac{\varepsilon.p_{i}}{p_{i}.k}\ 8e_{i}e_{j}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4 }}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon} \tag{3.43}\] \[\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\Big{[}p_{i}^{ \rho}\ell^{\sigma}\mathcal{N}_{\rho\sigma}^{j}(-p_{j})-p_{j}^{\rho}\ell^{ \sigma}\mathcal{N}_{\rho\sigma}^{i}(-p_{i})\Big{]}\Gamma_{(ij)}^{(N)}(p_{i},p_ {j})\] \[+i\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}^{2}e_{j}\frac{1}{p_{i}.k}\epsilon_{i}^{T} \epsilon_{j}^{T}\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{p_{i}.\ell+i \epsilon}\frac{1}{p_{j}.\ell-i\epsilon}\] \[\Bigg{[}\varepsilon.p_{i}p_{j}.k+\frac{p_{i}.k}{p_{i}.\ell+i \epsilon}\Big{\{}2\varepsilon.p_{i}p_{i}^{\rho}\ell^{\sigma}\mathcal{N}_{ \rho\sigma}^{j}(-p_{j})-2\varepsilon.p_{i}p_{j}^{\rho}\ell^{\sigma}\mathcal{N }_{\rho\sigma}^{i}(-p_{i})\Big{\}}\] \[+\frac{i}{2}p_{i}.k\varepsilon^{\rho}p_{j}^{\sigma}\frac{ \partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}}\frac{\partial\Xi_{i} (-p_{i})}{\partial p_{i}^{\sigma}}-\frac{p_{i}.p_{j}}{p_{i}.\ell+i\epsilon} \Big{\{}\varepsilon.p_{i}\ell.k\] \[+\frac{i}{2}p_{i}.k\varepsilon^{\rho}\ell^{\sigma}\frac{\partial \mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}}\frac{\partial\Xi_{i}(-p_{i})} {\partial p_{i}^{\sigma}}\Big{\}}\ \Bigg{]}\ \Gamma_{(ij)}^{(N)}(p_{i},p_{j})\ +\ \mathcal{O}(\omega^{0}).\] Above, the subscript "\(reg\)" on the loop integral denotes the restricted loop-momentum range \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\).
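The \(\ln\omega\) terms in (3.43) arise from the scale-invariant part of the loop integrand (a schematic estimate, with numerical factors and the angular integration suppressed): \[\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}}\ \frac{1}{p_{i}\cdot\ell}\ \frac{1}{p_{j}\cdot\ell}\ \sim\ \int_{\omega}^{|p|}\frac{d|\ell|}{|\ell|}\ =\ \ln\frac{|p|}{\omega}\,\] so any numerator term carrying one extra power of \(\ell\) (or of \(k\)) pushes the corresponding contribution to order \(\omega^{0}\).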
Using the Feynman rules, the second diagram in Fig.8 with the G-photon propagator becomes \[A_{II} \equiv \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i \epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon} \tag{3.44}\] \[\Big{[}\epsilon_{i}^{T}(-p_{i})\varepsilon^{\rho}(k)\Gamma_{\rho \mu}^{(4)}(p_{i},-p_{i}-k+\ell,k,-\ell)\Xi_{i}(-p_{i}-k+\ell)\Big{]}\ G_{(ij)}^{ \mu\nu}(\ell)\] \[\times\ \Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma_{\nu}^{(3)}(p_{j},-p_{ j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\ \Gamma_{(ij)}^{(N)}(p_{i}+k-\ell,p_{j}+\ell).\] After substituting the results (A.6) and (A.9) and keeping the terms that can contribute up to order \(\ln\omega\) in the integration range \(reg\equiv(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|)\), we get \[A_{II} = -i\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}^{2}e_{j}\ \epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}} \frac{1}{\ell^{2}-i\epsilon}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{p_{j}.\ell-i\epsilon} \tag{3.45}\] \[\Bigg{[}\varepsilon.p_{j}+\frac{i}{2}\varepsilon^{\rho}p_{j}^{ \sigma}\Big{\{}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}} \frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}+\frac{\partial \mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\frac{\partial\Xi_{i}(-p_{i}) }{\partial p_{i}^{\rho}}\Big{\}}-\frac{p_{i}.p_{j}}{p_{i}.\ell+i\epsilon}\] \[\Bigg{(}\varepsilon.\ell+\frac{i}{2}\varepsilon^{\rho}\ell^{ \sigma}\Big{\{}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}} \frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}+\frac{\partial \mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\frac{\partial\Xi_{i}(-p_{i})} {\partial p_{i}^{\rho}}\Big{\}}\Bigg{)}\Bigg{]}\ \Gamma_{(ij)}^{(N)}(p_{i},p_{j})\] \[+\ \mathcal{O}(\omega^{0})\.\] Using the Feynman rules, the third diagram in Fig.8 with the G-photon propagator becomes \[A_{III} \equiv \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\,\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon} \frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j }^{2}-i\epsilon} \tag{3.46}\] \[\left[\epsilon_{i}^{T}\,(-p_{i})\varepsilon^{\rho}(k)\Gamma_{\mu} ^{(3)}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Gamma_{\rho}^{(3)}(p_{i}- \ell,-p_{i}-k+\ell,k)\Xi_{i}(-p_{i}-k+\ell)\right]\] \[\times\ G_{(ij)}^{\mu\nu}(\ell)\,\left[\epsilon_{j}^{T}(-p_{j}) \Gamma_{\nu}^{(3)}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\right]\,\Gamma _{(ij)}^{(N)}(p_{i}+k-\ell,p_{j}+\ell)\.\] After substituting the results (A.7) and (A.9) and keeping the terms that can contribute up to order \(\ln\omega\) in the integration range \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\), we get \[A_{III} = -i\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}^{2}e_{j}\ \epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg} \frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}.\ell+ i\epsilon)^{2}}\frac{1}{p_{j}.\ell-i\epsilon}\]
\[\Bigg{[}2\varepsilon.p_{i}p_{i}^{\rho}\ell^{\sigma}{\cal N}^{j}_{ \rho\sigma}(-p_{j})-2\varepsilon.p_{i}p_{j}^{\rho}\ell^{\sigma}{\cal N}^{i}_{ \rho\sigma}(-p_{i})-\frac{i}{2}p_{i}.\ell p_{j}^{\rho}\varepsilon^{\sigma} \frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\rho}}\,\frac{\partial \Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\] \[+\frac{i}{2}p_{i}.p_{j}\ell^{\rho}\varepsilon^{\sigma}\frac{ \partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\rho}}\frac{\partial\Xi_{i}(-p_ {i})}{\partial p_{i}^{\sigma}}\Bigg{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ {\cal O}( \omega^{0}). \tag{3.47}\]

Figure 9: Set of 1-loop diagrams contributing to \(\mathcal{A}_{\text{self},1}^{(N+1)}\), where the virtual photon propagator is the full photon propagator connecting two different points on the same massive spinning-particle leg. The cross appearing in some of the diagrams corresponds to a counter term, which cancels the UV divergences in the renormalization prescription.

The fourth diagram in Fig.8 with the G-photon propagator contributes at order \(\omega^{-1}\) but does not contribute at order \(\ln\omega\). The order \(\omega^{-1}\) contribution turns out to be \[A_{IV} = \sum_{k=1}^{N}e_{k}\frac{\varepsilon.p_{k}}{p_{k}.k}\ \sum_{ \begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq k\\ j>i\end{subarray}}^{N}8e_{i}e_{j}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg}\frac {d^{4}\ell}{(2\pi)^{4}}\ \frac{-i}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon} \tag{3.48}\] \[\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\Big{[}p_{i}^{\rho }\ell^{\sigma}{\cal N}^{j}_{\rho\sigma}(-p_{j})-p_{j}^{\rho}\ell^{\sigma}{ \cal N}^{i}_{\rho\sigma}(-p_{i})\Big{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\ +\ {\cal O}( \omega^{0})\.\] The fifth diagram in Fig.8, evaluated with the G-photon propagator, starts contributing at order \(\omega^{0}\) in the soft expansion, i.e. \(A_{V}=0+{\cal O}(\omega^{0})\). We do not write down the non-vanishing contribution of \(A_{V}\) at order \(\omega^{0}\) explicitly, as it is not needed for deriving the order \(\omega^{-1}\) and \(\ln\omega\) soft factors. Now, summing over the external particle legs, the total contribution of \({\cal A}^{(N+1)}_{G,1}\) at orders \(\omega^{-1}\) and \(\ln\omega\) turns out to be \[{\cal A}^{(N+1)}_{G,1} \tag{3.49}\] \[= A_{I}+A_{II}+A_{III}+A_{IV}+A_{V}\] \[= \sum_{k=1}^{N}e_{k}\frac{\varepsilon.p_{k}}{p_{k}.k}\sum_{i=1}^{ N}\ \sum_{\begin{subarray}{c}j=1\\ j>i\end{subarray}}^{N}8e_{i}e_{j}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg} \frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\] \[\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\Big{[}p_{i}^{\rho }\ell^{\sigma}{\cal N}^{j}_{\rho\sigma}(-p_{j})-p_{j}^{\rho}\ell^{\sigma}{ \cal N}^{i}_{\rho\sigma}(-p_{i})\Big{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\] \[+ i\ \sum_{i=1}^{N}\ \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}^{2}e_{j}\ \epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg} \frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{p_{i}.\ell+i \epsilon}\ \frac{1}{p_{j}.\ell-i\epsilon}\] \[\times\Bigg{[}\frac{\varepsilon.p_{i}}{p_{i}.k}p_{j}.k-\frac{ \varepsilon.p_{i}}{p_{i}.k}\ p_{i}.p_{j}\frac{\ell.k}{p_{i}.\ell+i\epsilon}- \varepsilon.p_{j}+\varepsilon.\ell\frac{p_{i}.p_{j}}{p_{i}.\ell+i\epsilon} \Bigg{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\ +{\cal O}(\omega^{0})\.\] Interestingly, in the above expression all the theory-dependent pieces involving \({\cal K}_{i}\), \(\Xi_{i}\), \({\cal B}^{i}\) at order \(\ln\omega\) cancel at the integrand level when we sum the contributions (3.43), (3.45) and (3.47). This confirms that the subleading soft photon theorem at order \(\ln\omega\) is universal.
The above result can also be rewritten in the following compact form: \[{\cal A}^{(N+1)}_{G,1} = \sum_{k=1}^{N}e_{k}\frac{\varepsilon.p_{k}}{p_{k}.k}\ {\cal A}^{(N)}_{G,1} \tag{3.50}\] \[+ \sum_{i=1}^{N}e_{i}\frac{\varepsilon_{\mu}k_{\nu}}{p_{i}.k}\Big{\{} \Big{(}p_{i}^{\mu}\frac{\partial}{\partial p_{i\nu}}-p_{i}^{\nu}\frac{ \partial}{\partial p_{i\mu}}\Big{)}K^{reg}_{em}\Big{\}}\ \Gamma^{(N)}\ +{\cal O}(\omega^{0})\,\] where the expression for \({\cal A}^{(N)}_{G,1}\) is given in (3.32), and \(K^{reg}_{em}\) is the approximation of the integral \(K_{em}\) in (3.28) restricted to the integration range \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\). The integral has been explicitly evaluated in [1], and the result reads \[K^{reg}_{em}\equiv\frac{i}{2}\sum_{i=1}^{N}\ \sum_{ \begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}e_{i}e_{j}\ \int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{(p_{i}\cdot p_{j})}{(p_{i} \cdot\ell+i\epsilon)\ (p_{j}\cdot\ell-i\epsilon)}\] \[\simeq -\frac{i}{2}\sum_{i=1}^{N}\ \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{e_{i}e_{j}}{4\pi}\ (\ln\omega)\ \frac{p_{i} \cdot p_{j}}{\sqrt{(p_{i}\cdot p_{j})^{2}-p_{i}^{2}p_{j}^{2}}}\Bigg{\{} \delta_{\eta_{i}\eta_{j},1}-\frac{i}{2\pi}\ln\Bigg{(}\frac{p_{i}\cdot p _{j}+\sqrt{(p_{i}\cdot p_{j})^{2}-p_{i}^{2}p_{j}^{2}}}{p_{i}\cdot p_{ j}-\sqrt{(p_{i}\cdot p_{j})^{2}-p_{i}^{2}p_{j}^{2}}}\Bigg{)}\Bigg{\}}. \tag{3.51}\] Above, \(\eta_{j}=+1\) if the \(j\)-th particle is ingoing and \(\eta_{j}=-1\) if it is outgoing, and under the \(\simeq\) sign we have kept only the order \(\ln\omega\) contribution of the integral. To evaluate the contribution of \({\cal A}^{(N+1)}_{K-{\rm finite},1}\), we first compute the diagrams in Fig.8 with the virtual photon propagator replaced by the K-photon propagator, and afterwards subtract the contribution of \(K_{em}\Gamma^{(N+1)}\) from the computed result. The final contribution of \({\cal A}^{(N+1)}_{K-{\rm finite},1}\) within the regulated range of integration \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|,|p_{j}^{\mu}|\) becomes \[{\cal A}^{(N+1)}_{K-{\rm finite},1} = \sum_{k=1}^{N}e_{k}\frac{\varepsilon.p_{k}}{p_{k}.k}{\cal A}^{(N )}_{K-{\rm finite},1}+{\cal O}(\omega^{0})\, \tag{3.52}\] where the expression for \({\cal A}^{(N)}_{K-{\rm finite},1}\) is given in (3.34). Note that the above result contributes at order \(\omega^{-1}\) and does not contribute to the order \(\ln\omega\) soft theorem. The self-energy-type diagrams contributing to \({\cal A}^{(N+1)}_{\rm self,1}\) in Fig.9 need not be computed explicitly, as they sum to zero by the on-shell renormalization condition (3.36), as we now discuss. The contribution from the sum of the first four diagrams in Fig.9 can be described by the general structure \(\epsilon_{i}^{T}\varepsilon.p_{i}f_{1}(p_{i}.k)\Gamma^{(N)}_{(i)}(p_{i}+k)\), and the contribution from the sum of the fifth and sixth diagrams in Fig.9 by the structure \(\epsilon_{i}^{T}\varepsilon.p_{i}f_{2}(p_{i}.k)\Gamma^{(N)}_{(i)}(p_{i}+k)\), where \(f_{1}(p_{i}.k)\) and \(f_{2}(p_{i}.k)\) are two unknown functions carrying specific polarization/spin indices, which we determine below by replacing \(\varepsilon^{\mu}\to k^{\mu}\) and using the Ward identity. Using the diagrammatic identities of Fig.1 and Fig.3, the first four diagrams in Fig.9, after the replacement \(\varepsilon^{\mu}\to k^{\mu}\), reduce to the first diagram in Fig.10.
Similarly, using the diagrammatic identities of Fig.1 and Fig.4, the sum of the fifth and sixth diagrams in Fig.9, after the replacement \(\varepsilon^{\mu}\to k^{\mu}\), reduces to the second diagram in Fig.10. Now, using the constant matrices \(F_{1},F_{2}\) introduced in (3.35) for the diagrams in Fig.6 and comparing with the general structures above, we get \[f_{1}(p_{i}.k)=-\frac{1}{2(p_{i}.k)^{2}}F_{1}{\cal Q}_{i}\Xi_{i}(-p_{i}-k)\,\ f_{2}(p_{i}.k)=-\frac{1}{2(p_{i}.k)^{2}}F_{2}{\cal Q}_{i}\Xi_{i}(-p_{i}-k). \tag{3.53}\] We now substitute these results into the general structures for the sums of diagrams mentioned above. Finally, summing the contributions of the first seven diagrams in Fig.9, we get \[\epsilon_{i}^{T}\frac{\varepsilon.p_{i}}{2(p_{i}.k)^{2}}\Big{[}-F_{1}{\cal Q} _{i}-F_{2}{\cal Q}_{i}+{\cal Q}_{i}^{T}C\Big{]}\Xi_{i}(-p_{i}-k)\Gamma^{(N)}_ {(i)}(p_{i}+k). \tag{3.54}\] Note that \(F_{1}\) and \(F_{2}\) satisfy the same property under the action of the charge matrix \({\cal Q}\) as \(\Xi_{i}\) does in (2.20). Using this relation, the terms inside the square bracket of the above expression reduce to \({\cal Q}_{i}^{T}(F_{1}+F_{2}+C)\), which vanishes by the on-shell renormalization condition (3.36). Note that in the above expression we neglected a possible order \(\omega^{0}\) contribution in the soft expansion of the counter-term diagram. Using the same on-shell renormalization condition, the sum of the rest of the diagrams in Fig.9 also vanishes up to a possible order \(\omega^{0}\) contribution. Hence, in the soft limit, the sum over all the Feynman diagrams in Fig.9 contributes \[{\cal A}^{(N+1)}_{\rm self,1}\ =\ 0\ +{\cal O}(\omega^{0}). \tag{3.55}\] The diagrams in Fig.11 contributing to \({\cal A}^{(N+1)}_{\rm non-div,1}\) are free of IR divergences: in the limit \(\ell^{\mu}\to 0\) with finite \(k^{\mu}\), the third, sixth and seventh diagrams behave like \(\int\frac{d^{4}\ell}{|\ell|^{3}}\), while the first, second, fourth and fifth diagrams behave like \(\int\frac{d^{4}\ell}{|\ell|^{2}}\). On the other hand, the sum of the first, fourth and sixth diagrams in Fig.11 contributes to the leading soft factor at order \(\omega^{-1}\). In the integration region \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|\), the first and third diagrams in Fig.11 individually behave like \(\int_{reg}\frac{d^{4}\ell}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}.\ell+i\epsilon) ^{2}}\) after the expansion of the propagators, hence they could in principle contribute at order \(\ln\omega\). But when we sum the contributions of the first, second and third diagrams, the coefficients of \(\int_{reg}\frac{d^{4}\ell}{\ell^{2}-i\epsilon}\frac{1}{(p_{i}.\ell+i\epsilon) ^{2}}\) that would contribute at order \(\ln\omega\) cancel among themselves, and the leftover part starts contributing at order \(\omega^{0}\). Hence, apart from the first, fourth and sixth diagrams, all the other diagrams start contributing at order \(\omega^{0}\) in the soft expansion. After summing all the contributions, we get \[{\cal A}^{(N+1)}_{\rm non-div,1}=\sum_{i=1}^{N}e_{i}\frac{\varepsilon.p_{i}}{p _{i}.k}\ {\cal A}^{(N)}_{\rm non-div,1}\ +{\cal O}(\omega^{0})\,. \tag{3.56}\]
**Soft photon theorem result at one-loop:** Summing the contributions of (3.50), (3.52), (3.55) and (3.56) in the soft limit, we get the following soft theorem expression: \[\mathcal{A}^{(N+1)}_{\text{IR-finite,1}} = \Big{[}\mathcal{A}^{(N+1)}_{G,1}+\mathcal{A}^{(N+1)}_{K-\text{ finite,1}}+\mathcal{A}^{(N+1)}_{\text{self,1}}+\mathcal{A}^{(N+1)}_{\text{non-div,1}} \Big{]} \tag{3.57}\] \[\stackrel{{\omega\to 0}}{{=}} \sum_{i=1}^{N}e_{i}\frac{\varepsilon\cdot p_{i}}{p_{i}\cdot k} \ \mathcal{A}^{(N)}_{\text{IR-finite,1}}\] \[+ \sum_{i=1}^{N}e_{i}\frac{\varepsilon_{\mu}k_{\nu}}{p_{i}\cdot k} \Big{\{}\Big{(}p_{i}^{\mu}\frac{\partial}{\partial p_{i\nu}}-p_{i}^{\nu}\frac{ \partial}{\partial p_{i\mu}}\Big{)}K^{reg}_{em}\Big{\}}\ \mathcal{A}^{(N)}_{\text{IR- finite,0}}+\mathcal{O}(\omega^{0})\.\] This result agrees with the loop-corrected subleading soft photon theorem originally derived in [1] for minimally coupled scalar QED. The present derivation further confirms the universality of the \(\ln\omega\) soft factor in scattering events involving particles of arbitrary spin in a generic theory of quantum electrodynamics, allowing for arbitrary non-minimal couplings. Moreover, the result also verifies the well-known fact that Weinberg's leading soft photon theorem remains unaltered by loop corrections: even in the analysis presented above, Weinberg's soft theorem still holds, now relating two one-loop IR-finite amplitudes.

### Discussion on generalization

In order to obtain the order \(\omega\ln\omega\) soft factor from the aforementioned one-loop amplitudes, we would require the vertices \(\Gamma^{(3)}\) in (3.12) up to order \(\mathcal{O}(\ell^{2})\), \(\Gamma^{(4)}\) in (3.13) up to order \(\mathcal{O}(\ell_{1},\ell_{2})\), and \(\widetilde{\Gamma}^{(N+1)}\) up to order \(\mathcal{O}(\ell)\). However, when dealing with non-minimal couplings of the form (3.6), it is currently unknown how to derive \(\widetilde{\Gamma}^{(N+1)}\) in terms of \(\Gamma^{(N)}\) at order \(\mathcal{O}(\ell)\). Additionally, at order \(\mathcal{O}(\ell^{2})\), new sets of non-minimal couplings contribute to \(\Gamma^{(3)}\). For these reasons we are unable to derive the order \(\omega\ln\omega\) soft photon theorem in this article, and it is not clear whether a soft factorization at order \(\omega\ln\omega\) is possible at all.

Figure 11: Set of 1-loop diagrams contributing to \(\mathcal{A}^{(N+1)}_{\text{non-div,1}}\), consisting of diagrams where the virtual photon propagator is the full photon propagator with at least one end connected to an internal massive particle propagator or to massive EFT vertices. Diagrams involving counter terms that remove UV divergences have been omitted.

In the theory of scalar QED minimally coupled to gravity, the order \(\ln\omega\) correction to (3.57) due to gravitational interactions has also been derived in [1] by analyzing one-loop amplitudes. In scalar QED, the soft photon theorem at order \(\omega(\ln\omega)^{2}\) has also been derived in [21] by analyzing two-loop amplitudes, and the soft factor is provided in (1.12).
The multiple soft photon theorem up to subleading order in the soft expansion has also been derived in section-(3.5) of [52] by analyzing one-loop amplitudes, and the result reads: \[\mathcal{A}^{(N+M)}_{\text{IR-finite,1}}\ \stackrel{{\omega\to 0}}{{=}}\ \Bigg{[}\prod_{m=1}^{M}\Bigg{\{}\sum_{j=1}^{N}e_{j}\frac{\varepsilon_{m}\cdot p_{j}}{p_{j}\cdot k_{m}}\Bigg{\}}\Bigg{]}\ \mathcal{A}^{(N)}_{\text{IR-finite,1}}\] \[+\sum_{\ell=1}^{M}\Bigg{[}\prod_{\begin{subarray}{c}m=1\\ m\neq\ell\end{subarray}}^{M}\Bigg{\{}\sum_{j=1}^{N}e_{j}\frac{\varepsilon_{m}\cdot p_{j}}{p_{j}\cdot k_{m}}\Bigg{\}}\Bigg{]}\sum_{i=1}^{N}e_{i}\frac{\varepsilon_{\ell\mu}k_{\ell\nu}}{p_{i}\cdot k_{\ell}}\Big{\{}\Big{(}p_{i}^{\mu}\frac{\partial}{\partial p_{i\nu}}-p_{i}^{\nu}\frac{\partial}{\partial p_{i\mu}}\Big{)}K_{em}^{reg}\Big{\}}~{}\mathcal{A}^{(N)}_{\text{IR-finite,0}}\] \[+~{}\mathcal{O}\big{(}\omega^{-M+1}\big{)}~{}. \tag{3.58}\] Above, \(\mathcal{A}^{(N+M)}_{\text{IR-finite,1}}\) represents the IR-finite part of the one-loop scattering amplitude involving \(N\) hard particles and \(M\) soft photons with polarizations and momenta \(\{\varepsilon_{\ell},k_{\ell}=-\omega\mathbf{n}_{\ell}\}\) for \(\ell=1,\cdots,M\) as external states.

## 4 Soft graviton theorem at one-loop

In this section we derive the subleading soft graviton theorem by analyzing one-loop amplitudes for a quantum mechanical scattering process involving \(N\) massive particles with arbitrary spin and one graviton, extending the analysis of [1] to a generic theory of quantum gravity. This derivation will establish the universal (theory independent) nature of the \(\ln\omega\) soft graviton factor.

### Sen's Covariantization prescription and Feynman rules

The covariantization of the quadratic part of the massive EFT action (2.7) in the soft gravitational background has been carried out in [2; 3; 4]. This development has been utilized to derive vertices involving two massive spinning particles and one or two on-shell soft gravitons. However, when performing loop computations, we require the same vertices involving off-shell gravitons. Due to the off-shell nature of the gravitons, it is not possible to independently impose the traceless and transverse conditions on the gravitational fluctuation consistently with diffeomorphism invariance. To address this issue, we make slight modifications to the covariantization prescription proposed in [2; 3; 4], as described below. In this work, we use a different parametrization of the gravitational fluctuation, which is closely related to the one presented in [56]. This alternative parametrization allows us to derive the vertices within the covariantization prescription while ensuring compatibility with the _de Donder_ gauge choice. Let us define the deviation of the background metric from the flat Minkowski metric as \[h_{\mu\nu}(x)~{}\equiv~{}\frac{1}{2\kappa}\big{(}g_{\mu\nu}(x)-\eta_{\mu\nu}\big{)}~{}, \tag{4.1}\] where \(\kappa=\sqrt{8\pi G}\) with \(G\) being the four dimensional Newton's constant. In _de Donder_ gauge \(\partial^{\mu}h_{\mu\nu}=\frac{1}{2}\partial_{\nu}h^{\rho}_{\rho}\) the graviton propagator becomes \[\Delta^{\mu\nu,\rho\sigma}_{F}(\ell)=\frac{-i}{\ell^{2}-i\epsilon}\ \frac{1}{2}\big{(}\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma}\eta^{\nu\rho}-\eta^{\mu\nu}\eta^{\rho\sigma}\big{)}\, \tag{4.2}\] where \((\mu\nu)\) and \((\rho\sigma)\) are Lorentz indices of the two ends of the graviton propagator. The polarization tensor for an on-shell graviton with momentum \(k\) will be denoted by \(\varepsilon_{\mu\nu}(k)\), which satisfies the traceless and transverse conditions \[\eta^{\mu\nu}\varepsilon_{\mu\nu}(k)=0\qquad\quad,\qquad\ k^{\mu}\varepsilon_{\mu\nu}(k)=0. \tag{4.3}\]
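As a quick algebraic check of (4.2) (using only \(\eta^{\mu\nu}\eta_{\mu\nu}=4\)), note that the tensor appearing in its numerator, \(P^{\mu\nu,\rho\sigma}\equiv\frac{1}{2}\big{(}\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma}\eta^{\nu\rho}-\eta^{\mu\nu}\eta^{\rho\sigma}\big{)}\), is the trace reverser on symmetric rank-two tensors, and in four dimensions it squares to the identity,
\[P^{\mu\nu,\rho\sigma}P_{\rho\sigma}{}^{\alpha\beta}=\frac{1}{2}\big{(}\eta^{\mu\alpha}\eta^{\nu\beta}+\eta^{\mu\beta}\eta^{\nu\alpha}\big{)}\,\]
consistent with (4.2) being the graviton propagator in _de Donder_ gauge.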
With the definition of the gravitational fluctuation (4.1), we express the following quantities as power series expansions in \(\kappa\): \[\sqrt{-{\rm det}\ g} = 1+\kappa h+\frac{\kappa^{2}}{2}(h^{2}-2h^{\mu\nu}h_{\mu\nu})+\frac{\kappa^{3}}{6}(h^{3}+8h^{\mu\rho}h_{\rho\nu}h^{\nu}_{\ \mu}-6hh^{\mu\nu}h_{\mu\nu})+{\cal O}(\kappa^{4})\,\] \[g^{\mu\nu} = \eta^{\mu\nu}-2\kappa h^{\mu\nu}+4\kappa^{2}h^{\mu\rho}h_{\rho}^{\ \nu}-8\kappa^{3}h^{\mu\tau}h_{\tau\rho}h^{\rho\nu}+{\cal O}(\kappa^{4})\,\] \[e_{\mu}^{\ a} = \delta^{a}_{\mu}+\kappa h_{\mu}^{\ \ a}-\frac{\kappa^{2}}{2}h_{\rho}^{\ \ a}h_{\mu}^{\ \rho}+\frac{\kappa^{3}}{2}h_{\rho}^{\ \ a}h_{\nu}^{\ \rho}h_{\mu}^{\ \nu}+{\cal O}(\kappa^{4})\,\] \[E_{a}^{\ \mu} = \delta^{\mu}_{a}-\kappa h_{a}^{\ \mu}+\frac{3}{2}\kappa^{2}h_{a}^{\ \rho}h_{\rho}^{\ \mu}-\frac{5}{2}\kappa^{3}h_{a}^{\ \rho}h_{\rho\nu}h^{\nu\mu}+{\cal O}(\kappa^{4})\,\] \[\Gamma^{\lambda}_{\mu\nu} = \kappa(\eta^{\lambda\sigma}-2\kappa h^{\lambda\sigma}+4\kappa^{2}h^{\lambda\rho}h_{\rho}^{\ \ \sigma})\big{(}\partial_{\mu}h_{\nu\sigma}+\partial_{\nu}h_{\sigma\mu}-\partial_{\sigma}h_{\mu\nu}\big{)}+{\cal O}(\kappa^{4})\,\] \[\omega^{ab}_{\mu} \equiv \eta^{bc}\ e_{\nu}^{\ \ a}D_{\mu}E_{c}^{\ \nu}=\eta^{bc}\ e_{\nu}^{\ \ a}\partial_{\mu}E_{c}^{\ \nu}+\eta^{bc}\ e_{\nu}^{\ \ a}\Gamma^{\nu}_{\mu\rho}E_{c}^{\ \rho}\] \[= \kappa(\partial^{b}h_{\mu}^{\ \ a}-\partial^{a}h_{\mu}^{\ \ b})+\kappa^{2}\Big{(}\frac{1}{2}h^{b\nu}\partial_{\mu}h_{\nu}^{\ \ a}-\frac{1}{2}h^{a\nu}\partial_{\mu}h_{\nu}^{\ \ b}+h^{b\nu}\partial^{a}h_{\mu\nu}-h^{a\nu}\partial^{b}h_{\mu\nu}\] \[+h^{a\nu}\partial_{\nu}h_{\mu}^{\ \ b}-h^{b\nu}\partial_{\nu}h_{\mu}^{\ \ a}\Big{)}+{\cal O}(\kappa^{3})\,\] \[R_{\mu\nu\rho\sigma} = \kappa(\partial_{\rho}\partial_{\nu}h_{\mu\sigma}+\partial_{\mu}\partial_{\sigma}h_{\nu\rho}-\partial_{\mu}\partial_{\rho}h_{\nu\sigma}-\partial_{\sigma}\partial_{\nu}h_{\mu\rho})+{\cal O}(\kappa^{2}). \tag{4.4}\] In the RHS of all the above expressions both curved space indices \((\mu,\nu,\rho,\sigma,\ldots)\) and tangent space indices \((a,b,c,\ldots)\) are raised and lowered using the Minkowski metric \(\eta\). The trace of the metric fluctuation is defined as \(h\equiv h_{\mu\nu}\eta^{\mu\nu}\). Additionally, the symbols \(e_{\mu}^{\ a}\) represent the vierbein, \(E_{a}^{\ \mu}\) the inverse vierbein, \(\omega^{ab}_{\mu}\) the spin-connection, \(\Gamma^{\lambda}_{\mu\nu}\) the Christoffel connection, and \(R_{\mu\nu\rho\sigma}\) the Riemann tensor.

Covariantization: In the covariantization prescription, we derive the action describing the interaction between two spinning particles and one graviton up to second order in derivatives acting on the gravitational fluctuation. Additionally, we obtain the interaction between two spinning particles and two gravitons up to first order in derivatives on the gravitational fluctuation. At second derivative order, the interacting action of two spinning particles and one graviton also includes a generic non-minimal coupling through the curvature tensor, which is inherently general coordinate invariant.
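As a quick consistency check of the expansions in (4.4), one may verify that the series for \(g_{\mu\nu}=\eta_{\mu\nu}+2\kappa h_{\mu\nu}\) and \(g^{\mu\nu}\) are mutually inverse order by order in \(\kappa\):
\[g^{\mu\rho}g_{\rho\nu}=\big{(}\eta^{\mu\rho}-2\kappa h^{\mu\rho}+4\kappa^{2}h^{\mu\sigma}h_{\sigma}^{\ \rho}\big{)}\big{(}\eta_{\rho\nu}+2\kappa h_{\rho\nu}\big{)}=\delta^{\mu}_{\nu}+{\cal O}(\kappa^{3})\,\]
since the order \(\kappa\) terms cancel between \(+2\kappa h^{\mu}_{\ \nu}\) and \(-2\kappa h^{\mu}_{\ \nu}\), and at order \(\kappa^{2}\) the cross term \(-4\kappa^{2}h^{\mu\rho}h_{\rho\nu}\) cancels against \(+4\kappa^{2}h^{\mu\sigma}h_{\sigma\nu}\). The vierbein and inverse vierbein expansions satisfy \(e_{\mu}^{\ a}E_{a}^{\ \nu}=\delta_{\mu}^{\nu}\) in the same manner.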
Under the covariantization prescription, tangent space derivatives on \(\Phi_{\beta}\) should be replaced by covariant derivatives in curved space after multiplication by inverse vierbeins in the following way \[\partial_{a_{1}}\partial_{a_{2}}\cdots\partial_{a_{n}}\Phi_{\beta}\ \to\ E_{a_{1}}^{\ \ \mu_{1}}E_{a_{2}}^{\ \mu_{2}}\cdots E_{a_{n}}^{\ \mu_{n}}\ D_{(\mu_{1}}D_{\mu_{2}}\cdots D_{\mu_{n})}\Phi_{\beta}\, \tag{4.5}\] where the expressions of one, two and three covariant derivatives on \(\Phi_{\beta}\) are given by \[D_{\mu}\Phi_{\beta} = \partial_{\mu}\Phi_{\beta}+\left[\frac{1}{2}\omega_{\mu}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\Phi_{\gamma}\right]\, \tag{4.6}\] \[D_{(\mu}D_{\nu)}\Phi_{\beta} = \partial_{\mu}\partial_{\nu}\Phi_{\beta}+\frac{1}{2}\Big{(}\omega_{\mu}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\partial_{\nu}\Phi_{\gamma}+\omega_{\nu}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\partial_{\mu}\Phi_{\gamma}\Big{)}+\left[\frac{1}{2}\partial_{(\mu}\omega_{\nu)}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\Phi_{\gamma}\right]\] (4.7) \[+\frac{1}{4}\omega_{(\mu}^{ab}\omega_{\nu)}^{cd}(\Sigma_{ab})_{\beta}^{\ \gamma}(\Sigma_{cd})_{\gamma}^{\ \delta}\Phi_{\delta}-\left[\Gamma_{\mu\nu}^{\rho}\partial_{\rho}\Phi_{\beta}\right]-\frac{1}{2}\Gamma_{\mu\nu}^{\rho}\omega_{\rho}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\Phi_{\gamma}\,\] \[D_{\mu}D_{\nu}D_{\rho}\Phi_{\beta} = \partial_{\mu}\partial_{\nu}\partial_{\rho}\Phi_{\beta}+\frac{1}{2}\partial_{\mu}\omega_{\nu}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}D_{\rho}\Phi_{\gamma}+\frac{1}{2}\omega_{\nu}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\partial_{\mu}D_{\rho}\Phi_{\gamma}\] (4.8) \[-\left[\partial_{\mu}\Gamma_{\nu\rho}^{\sigma}\partial_{\sigma}\Phi_{\beta}\right]-\frac{1}{2}\partial_{\mu}\Gamma_{\ \nu\rho}^{\sigma}\ \omega_{\sigma}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}\Phi_{\gamma}-\Gamma_{\nu\rho}^{\sigma}\partial_{\mu}D_{\sigma}\Phi_{\beta}\] \[+\frac{1}{2}\omega_{\mu}^{ab}(\Sigma_{ab})_{\beta}^{\ \gamma}D_{\nu}D_{\rho}\Phi_{\gamma}-\Gamma_{\mu\nu}^{\sigma}D_{\sigma}D_{\rho}\Phi_{\beta}-\Gamma_{\mu\rho}^{\sigma}D_{\nu}D_{\sigma}\Phi_{\beta}\.\] The last equation above needs to be symmetrized in the \(\mu,\nu,\rho\) indices before substituting in (4.5). In the above expressions the terms within square brackets are new at each derivative order in the covariantization prescription; they are important in deriving the interaction between two spinning particles and one graviton up to second derivative order on the gravitational fluctuation, and the interaction between two spinning particles and two gravitons up to first derivative order on the gravitational fluctuation. To the derivative order at which we are interested in the interacting parts of the action, we do not need to know the new terms coming from the covariantization of more than three derivatives on \(\Phi\). Also, while covariantizing the action (2.7), we need to include \(\sqrt{-{\rm det}\ g}\) as the covariant volume measure. The Fourier transform of the gravitational fluctuation \(h_{\mu\nu}(x)\) will be denoted by \(h_{\mu\nu}(\ell)\), and the relation between them is given by \[h_{\mu\nu}(x)\equiv\int\frac{d^{4}\ell}{(2\pi)^{4}}\ e^{i\ell.x}\ h_{\mu\nu}(\ell). \tag{4.9}\]
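For orientation, substituting the spin-connection expansion from (4.4) into (4.6) gives, at linear order in \(\kappa\),
\[D_{\mu}\Phi_{\beta}=\partial_{\mu}\Phi_{\beta}+\frac{\kappa}{2}\big{(}\partial^{b}h_{\mu}^{\ \ a}-\partial^{a}h_{\mu}^{\ \ b}\big{)}(\Sigma_{ab})_{\beta}^{\ \gamma}\Phi_{\gamma}+{\cal O}(\kappa^{2})\,\]
which, after Fourier transforming via (4.9), is the origin of the \(\Sigma_{ab}\)-dependent terms in the interacting action derived below.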
Interaction part of the action: Under the above covariantization procedure, the interacting part of the action describing the interaction between two massive spinning particles and one graviton, up to quadratic order in the graviton momentum, becomes \[S^{(3)} = \frac{\kappa}{2}\int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\frac{d^{4}\ell}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2}+\ell)\Phi_{\alpha}(q_{1})\Bigg{[}h(\ell){\cal K}^{\alpha\beta}(q_{2})-h_{a}^{\mu}(\ell)q_{2\mu}\frac{\partial{\cal K}^{\alpha\beta}(q_{2})}{\partial q_{2a}} \tag{4.10}\] \[+\frac{1}{2}\big{\{}\ell^{b}h_{\mu}^{a}(\ell)-\ell^{a}h_{\mu}^{b}(\ell)\big{\}}\frac{\partial{\cal K}^{\alpha\gamma}(q_{2})}{\partial q_{2\mu}}(\Sigma_{ab})_{\gamma}^{\ \beta}+\frac{1}{4}\ell_{\mu}\big{\{}\ell^{b}h_{\nu}^{a}(\ell)-\ell^{a}h_{\nu}^{b}(\ell)\big{\}}\ \frac{\partial^{2}{\cal K}^{\alpha\gamma}(q_{2})}{\partial q_{2\mu}\partial q_{2\nu}}(\Sigma_{ab})_{\gamma}^{\ \beta}\] \[-\frac{1}{2}\big{\{}\ell_{\mu}h_{\nu}^{\rho}(\ell)+\ell_{\nu}h_{\mu}^{\rho}(\ell)-\ell^{\rho}h_{\mu\nu}(\ell)\big{\}}q_{2\rho}\frac{\partial^{2}{\cal K}^{\alpha\beta}(q_{2})}{\partial q_{2\mu}\partial q_{2\nu}}\] \[-\frac{1}{6}\ell_{\mu}\big{\{}\ell_{\nu}h_{\rho}^{\sigma}(\ell)+\ell_{\rho}h_{\nu}^{\sigma}(\ell)-\ell^{\sigma}h_{\nu\rho}(\ell)\big{\}}q_{2\sigma}\frac{\partial^{3}{\cal K}^{\alpha\beta}(q_{2})}{\partial q_{2\mu}\partial q_{2\nu}\partial q_{2\rho}}+{\cal O}(\ell^{3})\Bigg{]}\Phi_{\beta}(q_{2})\.\] By the covariantization procedure we only obtain the interacting action representing the minimal coupling of the graviton to the matter field. On top of it, at quadratic order in the graviton momentum we also need to add a generic non-minimal interaction term, which describes the interaction between two spinning particles and one graviton through the linearized Riemann tensor. A generic form of the action describing such a non-minimal interaction is given by \[\overline{S}^{(3)}=\frac{1}{2} \int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\frac{d^{4}\ell}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2}+\ell) \tag{4.11}\] \[\Phi_{\alpha}(q_{1})\ \Big{[}R_{\mu\nu\rho\sigma}(\ell)\ \mathcal{G}^{\alpha\beta,\mu\nu\rho\sigma}(q_{2})\ +\mathcal{O}(\ell^{3})\Big{]}\Phi_{\beta}(q_{2})\,\] where \(R_{\mu\nu\rho\sigma}(\ell)\) is the Fourier transform of the linearized Riemann tensor in (4.4), which reads \[R_{\mu\nu\rho\sigma}(\ell)=-\kappa(\ell_{\rho}\ell_{\nu}h_{\mu\sigma}+\ell_{\mu}\ell_{\sigma}h_{\nu\rho}-\ell_{\mu}\ell_{\rho}h_{\nu\sigma}-\ell_{\sigma}\ell_{\nu}h_{\mu\rho}). \tag{4.12}\] In the non-minimal action (4.11), \(\mathcal{G}\) satisfies the following property \[\mathcal{G}^{\alpha\beta,\mu\nu\rho\sigma}(q_{2}) = \pm\ \mathcal{G}^{\beta\alpha,\mu\nu\rho\sigma}(-q_{1}-\ell). \tag{4.13}\] In the above equation the \(+\) sign is for Grassmannian even fields and the \(-\) sign is for Grassmannian odd fields. During the derivation we again take the \(\Phi\) field components to be Grassmannian even, but the final result will be valid for both Grassmannian even and odd fields.
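Since the non-minimal coupling enters only through the linearized Riemann tensor, it is automatically invariant under linearized diffeomorphisms. In momentum space this can be checked directly from (4.12): under \(h_{\mu\nu}(\ell)\to h_{\mu\nu}(\ell)+i\big{(}\ell_{\mu}\xi_{\nu}(\ell)+\ell_{\nu}\xi_{\mu}(\ell)\big{)}\), the variation
\[\delta R_{\mu\nu\rho\sigma}(\ell)=-i\kappa\big{(}\ell_{\rho}\ell_{\nu}\ell_{\mu}\xi_{\sigma}+\ell_{\rho}\ell_{\nu}\ell_{\sigma}\xi_{\mu}+\ell_{\mu}\ell_{\sigma}\ell_{\nu}\xi_{\rho}+\ell_{\mu}\ell_{\sigma}\ell_{\rho}\xi_{\nu}-\ell_{\mu}\ell_{\rho}\ell_{\nu}\xi_{\sigma}-\ell_{\mu}\ell_{\rho}\ell_{\sigma}\xi_{\nu}-\ell_{\sigma}\ell_{\nu}\ell_{\mu}\xi_{\rho}-\ell_{\sigma}\ell_{\nu}\ell_{\rho}\xi_{\mu}\big{)}\]
vanishes term by term, every positive term cancelling against a matching negative one.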
Following the covariantization procedure, the part of the action describing the interaction between two massive spinning particles and two gravitons, up to linear order in the graviton momenta, becomes \[S^{(4)} = \frac{\kappa^{2}}{2}\int\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\frac{d^{4}\ell_{1}}{(2\pi)^{4}}\frac{d^{4}\ell_{2}}{(2\pi)^{4}}\ (2\pi)^{4}\delta^{(4)}(q_{1}+q_{2}+\ell_{1}+\ell_{2}) \tag{4.14}\] \[\Phi_{\alpha}(q_{1})\Bigg{[}\frac{1}{2}\big{\{}h(\ell_{1})h(\ell_{2})-2h^{\mu\nu}(\ell_{1})h_{\mu\nu}(\ell_{2})\big{\}}\mathcal{K}^{\alpha\beta}(q_{2})+h(\ell_{1})\Bigg{(}-h_{a}^{\mu}(\ell_{2})q_{2\mu}\frac{\partial\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2a}}\] \[+\frac{1}{2}\big{\{}\ell_{2}^{b}h_{\mu}^{a}(\ell_{2})-\ell_{2}^{a}h_{\mu}^{b}(\ell_{2})\big{\}}\frac{\partial\mathcal{K}^{\alpha\gamma}(q_{2})}{\partial q_{2\mu}}(\Sigma_{ab})_{\gamma}^{\ \beta}-\frac{1}{2}\big{\{}\ell_{2\mu}h_{\nu}^{\rho}(\ell_{2})+\ell_{2\nu}h_{\mu}^{\rho}(\ell_{2})-\ell_{2}^{\rho}h_{\mu\nu}(\ell_{2})\big{\}}\] \[\times q_{2\rho}\frac{\partial^{2}\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2\mu}\partial q_{2\nu}}\Bigg{)}+\frac{3}{2}h_{a}^{\rho}(\ell_{1})h_{\rho}^{\mu}(\ell_{2})q_{2\mu}\frac{\partial\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2a}}+\frac{1}{2}h_{a}^{\mu}(\ell_{1})h_{b}^{\nu}(\ell_{2})q_{2\mu}q_{2\nu}\frac{\partial^{2}\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2a}\partial q_{2b}}\] \[-\frac{1}{2}h_{c}^{\mu}(\ell_{1})\big{\{}\ell_{2}^{b}h_{\mu}^{a}(\ell_{2})-\ell_{2}^{a}h_{\mu}^{b}(\ell_{2})\big{\}}\frac{\partial\mathcal{K}^{\alpha\gamma}(q_{2})}{\partial q_{2c}}(\Sigma_{ab})_{\gamma}^{\ \beta}-\frac{1}{4}h_{c}^{\mu}(\ell_{1})q_{2\mu}\big{\{}\ell_{2}^{b}h_{\nu}^{a}(\ell_{2})-\ell_{2}^{a}h_{\nu}^{b}(\ell_{2})\big{\}}\] \[\times\frac{\partial^{2}\mathcal{K}^{\alpha\gamma}(q_{2})}{\partial q_{2c}\partial q_{2\nu}}(\Sigma_{ab})_{\gamma}^{\ \beta}+\frac{1}{2}h_{a}^{\mu}(\ell_{1})\big{\{}\ell_{2\mu}h_{\nu}^{\rho}(\ell_{2})+\ell_{2\nu}h_{\mu}^{\rho}(\ell_{2})-\ell_{2}^{\rho}h_{\mu\nu}(\ell_{2})\big{\}}q_{2\rho}\frac{\partial^{2}\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2a}\partial q_{2\nu}}\] \[+\frac{1}{6}h_{a}^{\mu}(\ell_{1})q_{2\mu}\big{\{}\ell_{2\nu}h_{\sigma}^{\rho}(\ell_{2})+\ell_{2\sigma}h_{\nu}^{\rho}(\ell_{2})-\ell_{2}^{\rho}h_{\nu\sigma}(\ell_{2})\big{\}}q_{2\rho}\frac{\partial^{3}\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2a}\partial q_{2\nu}\partial q_{2\sigma}}\] \[+\frac{1}{2}\Big{\{}\frac{1}{2}h^{b\sigma}(\ell_{1})\ell_{2\mu}h_{\sigma}^{a}(\ell_{2})-\frac{1}{2}h^{a\sigma}(\ell_{1})\ell_{2\mu}h_{\sigma}^{b}(\ell_{2})+h^{b\sigma}(\ell_{1})\ell_{2}^{a}h_{\mu\sigma}(\ell_{2})-h^{a\sigma}(\ell_{1})\ell_{2}^{b}h_{\mu\sigma}(\ell_{2})\] \[+h^{a\sigma}(\ell_{1})\ell_{2\sigma}h_{\mu}^{b}(\ell_{2})-h^{b\sigma}(\ell_{1})\ell_{2\sigma}h_{\mu}^{a}(\ell_{2})\Big{\}}\frac{\partial\mathcal{K}^{\alpha\gamma}(q_{2})}{\partial q_{2\mu}}(\Sigma_{ab})_{\gamma}^{\ \beta}\ +h^{\rho\sigma}(\ell_{1})\big{\{}\ell_{2\mu}h_{\nu\sigma}(\ell_{2})\] \[+\ell_{2\nu}h_{\mu\sigma}(\ell_{2})-\ell_{2\sigma}h_{\mu\nu}(\ell_{2})\big{\}}q_{2\rho}\frac{\partial^{2}\mathcal{K}^{\alpha\beta}(q_{2})}{\partial q_{2\mu}\partial q_{2\nu}}+\mathcal{O}(\ell_{1}^{2},\ell_{1}\ell_{2},\ell_{2}^{2})\Bigg{]}\Phi_{\beta}(q_{2})\.\] We also need to provide a purely gravitational effective action, constructed out of curvature tensors in a derivative expansion. This action describes the self-interaction of the graviton field and provides dynamics to the graviton.
For the analysis we are conducting here, it suffices to consider the leading term of the EFT action, which corresponds to the Einstein-Hilbert action and is expressed as follows: \[S_{\text{EFT}} = \frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-\text{det}(g_{\mu\nu})}\ \left(R+\mathcal{O}(R^{2})\right). \tag{4.15}\] Feynman rules for the vertices: Starting from the interacting parts of the action given in (4.10) and (4.11), the Feynman rule for the vertex describing the interaction between two spinning particles with momenta \(q\) and \(-(q+\ell)\), and one graviton with Lorentz indices \(\mu\nu\) and momentum \(\ell\), turns out to be \[\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell) \tag{4.16}\] \[= i\kappa\Bigg{[}\eta_{\mu\nu}\mathcal{K}(-q)+\frac{1}{2}\eta_{\mu\nu}\ell^{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}+\frac{1}{4}\eta_{\mu\nu}\ell^{\rho}\ell^{\sigma}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\sigma}}-q_{(\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu)}}-\frac{1}{2}\ell_{(\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu)}}\] \[-\frac{1}{2}q_{(\mu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\nu)}\partial q^{\rho}}-\frac{1}{4}q_{(\mu}\ell^{\rho}\ell^{\sigma}\frac{\partial^{3}\mathcal{K}(-q)}{\partial q^{\nu)}\partial q^{\rho}\partial q^{\sigma}}-\frac{1}{2}\ell_{(\mu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\nu)}\partial q^{\rho}}-\frac{1}{2}\ell^{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{(\mu}}\Sigma_{\nu)b}\] \[+\frac{1}{2}\ell^{b}\ \Sigma^{T}_{(\nu b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu)}}-\frac{1}{4}\ell^{b}\ell^{\rho}\ \frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{(\mu}\partial q^{\rho}}\Sigma_{\nu)b}+\frac{1}{4}\ell^{b}\ell^{\rho}\Sigma^{T}_{(\nu b}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\mu)}\partial q^{\rho}}\] \[+\frac{1}{4}\Bigg{\{}\ell_{\mu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\nu}}+\ell_{\nu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\mu}}-\ell^{2}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\mu}\partial q^{\nu}}\Bigg{\}}\] \[+\frac{1}{12}\ell^{\sigma}\Bigg{\{}q_{\mu}\ell^{\rho}\frac{\partial^{3}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\nu}\partial q^{\sigma}}+q_{\nu}\ell^{\rho}\frac{\partial^{3}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\mu}\partial q^{\sigma}}-q.\ell\frac{\partial^{3}\mathcal{K}(-q)}{\partial q^{\mu}\partial q^{\nu}\partial q^{\sigma}}\Bigg{\}}\] \[-\ell^{\rho}\ell^{\sigma}\Big{\{}\mathcal{G}_{(\mu\rho\sigma\nu)}(-q)+\mathcal{G}_{\sigma(\nu\mu)\rho}(-q)-\mathcal{G}_{(\mu\rho\nu)\sigma}(-q)-\mathcal{G}_{\sigma(\nu\rho\mu)}(-q)\Big{\}}+\mathcal{O}(\ell^{3})\Bigg{]}\,\] where we have suppressed the massive particle spin/polarization indices. The above expression is symmetrized under the \(\mu\leftrightarrow\nu\) exchange, and in our convention the momenta of the particles always flow towards the interaction vertex, i.e. they are ingoing.
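As an illustrative reduction (under conventions we choose only for this example: a single free scalar with kinetic operator \({\cal K}(-q)=q^{2}+m^{2}\), vanishing spin generators \(\Sigma_{ab}=0\), no non-minimal coupling \({\cal G}=0\), and the symmetrization convention \(A_{(\mu}B_{\nu)}\equiv A_{\mu}B_{\nu}+A_{\nu}B_{\mu}\)), the vertex (4.16) collapses to
\[\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell)\ =\ i\kappa\Big{[}\eta_{\mu\nu}\big{(}q^{2}+m^{2}+q\cdot\ell\big{)}-(2q+\ell)_{\mu}(2q+\ell)_{\nu}\Big{]}\,\]
which has the structure of the familiar minimally coupled scalar–graviton vertex: it is built from the momentum difference \(2q+\ell=q-(-q-\ell)\) of the two ingoing scalars and is manifestly symmetric under their exchange.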
Analogously, starting from the interacting part of the action given in (4.14), the Feynman rule for the vertex describing the interaction between two spinning particles with momenta \(q\) and \(-(q+\ell_{1}+\ell_{2})\), and two gravitons with Lorentz indices \((\mu\nu),(\rho\sigma)\) and momenta \(\ell_{1},\ell_{2}\) respectively, turns out to be \[\Gamma^{(4)}_{\mu\nu,\rho\sigma}(q,-q-\ell_{1}-\ell_{2},\ell_{1},\ell_{2})\] \[= i\kappa^{2}\Bigg{[}(\eta_{\mu\nu}\eta_{\rho\sigma}-2\eta_{\mu\rho}\eta_{\nu\sigma})\Big{\{}\mathcal{K}(-q)+\frac{1}{2}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial\mathcal{K}(-q)}{\partial q^{\kappa}}\Big{\}}-\eta_{\mu\nu}\Bigg{\{}q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\sigma}}\] \[+\frac{1}{2}(\ell_{1}+\ell_{2})_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\sigma}}+\frac{1}{2}q_{\rho}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\sigma}\partial q^{\kappa}}+\frac{1}{2}\ell_{2}^{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\Sigma_{\sigma b}-\frac{1}{2}\ell_{2}^{b}\Sigma^{T}_{\sigma b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\Bigg{\}}\] \[-\eta_{\rho\sigma}\Bigg{\{}q_{\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}+\frac{1}{2}(\ell_{1}+\ell_{2})_{\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}+\frac{1}{2}q_{\mu}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\nu}\partial q^{\kappa}}+\frac{1}{2}\ell_{1}^{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\Sigma_{\nu b}\] \[-\frac{1}{2}\ell_{1}^{b}\Sigma^{T}_{\nu b}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}\Bigg{\}}+\frac{3}{2}\eta_{\mu\rho}\Bigg{\{}q_{\sigma}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}+\frac{1}{2}(\ell_{1}+\ell_{2})_{\sigma}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}+\frac{1}{2}q_{\sigma}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\kappa}\partial q^{\nu}}\] \[+q_{\nu}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}+\frac{1}{2}(\ell_{1}+\ell_{2})_{\nu}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}+\frac{1}{2}q_{\nu}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\kappa}\partial q^{\sigma}}\Bigg{\}}+q_{\mu}q_{\rho}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\sigma}}\] \[+\frac{1}{2}\Big{\{}q_{\mu}(\ell_{1}+\ell_{2})_{\rho}+q_{\rho}(\ell_{1}+\ell_{2})_{\mu}\Big{\}}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\sigma}}+\frac{1}{2}q_{\mu}q_{\rho}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial^{3}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\sigma}\partial q^{\kappa}}\] \[+\frac{1}{2}\eta_{\mu\rho}\Bigg{\{}\ell_{2}^{b}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\Sigma_{\sigma b}-\ell_{2}^{b}\Sigma^{T}_{\sigma b}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}+\ell_{1}^{b}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\Sigma_{\nu b}-\ell_{1}^{b}\Sigma^{T}_{\nu b}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\Bigg{\}}\] \[+\frac{1}{4}q_{\mu}\ell_{2}^{b}\Bigg{\{}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\rho}}\Sigma_{\sigma b}-\Sigma^{T}_{\sigma b}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\rho}}\Bigg{\}}+\frac{1}{4}q_{\rho}\ell_{1}^{b}\Bigg{\{}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\sigma}\partial q^{\mu}}\Sigma_{\nu b}-\Sigma^{T}_{\nu b}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\sigma}\partial q^{\mu}}\Bigg{\}}\] \[-\frac{1}{4}\eta_{\nu\sigma}(\ell_{2}-\ell_{1})^{\kappa}\Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\kappa}}\Sigma_{\rho\mu}-\Sigma^{T}_{\rho\mu}\frac{\partial{\cal K}(-q)}{\partial q^{\kappa}}\Bigg{\}}\]
\[-\frac{1}{2}\eta_{\nu\sigma}\ell_{2}^{a}\Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\rho}}\Sigma_{a\mu}-\Sigma^{T}_{a\mu}\frac{\partial{\cal K}(-q)}{\partial q^{\rho}}\Bigg{\}}\] \[-\frac{1}{2}\eta_{\nu\sigma}\ell_{1}^{a}\Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}\Sigma_{a\rho}-\Sigma^{T}_{a\rho}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}\Bigg{\}}-\frac{1}{2}\ell_{2\nu}\Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\Sigma_{\mu\rho}-\Sigma^{T}_{\mu\rho}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\Bigg{\}}\] \[-\frac{1}{2}\ell_{1\sigma}\Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\Sigma_{\rho\mu}-\Sigma^{T}_{\rho\mu}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\Bigg{\}}+{\cal O}(\ell_{1}^{2},\ell_{1}\ell_{2},\ell_{2}^{2})\Bigg{]}\, \tag{4.17}\] where we have suppressed the massive particle spin/polarization indices. We could have symmetrized the above expression under the \(\mu\leftrightarrow\nu\) and \(\rho\leftrightarrow\sigma\) exchanges. However, it is unnecessary, because any contraction involving this vertex in the computation of loop diagrams will already exhibit symmetry under these exchanges. Now we want to evaluate the amputated Green's function involving \(N\) massive spinning particles and one graviton with momentum \(\ell\) and Lorentz indices \(\mu\nu\), where the graviton is not connected to any external particle leg. Using the covariantization prescription it has been evaluated in appendix-B, where the additional complication arising from the presence of different momentum conserving delta functions between \(\widetilde{\Gamma}^{(N+1)}\) and \(\Gamma^{(N)}\) has been taken care of. The final result, read off from (B.11), turns out to be \[\widetilde{\Gamma}^{(N+1)\alpha_{1}\cdots\alpha_{N}}_{\mu\nu}(\ell) = \kappa\sum_{i=1}^{N}\Bigg{[}-\delta^{\alpha_{i}}_{\beta_{i}}\;p_{i(\mu}\frac{\partial}{\partial p_{i}^{\nu)}}+(\Sigma_{i(\nu b})^{\alpha_{i}}_{\ \beta_{i}}\ell^{b}\;\frac{\partial}{\partial p_{i}^{\mu)}}\;-\frac{1}{2}\delta^{\alpha_{i}}_{\beta_{i}}\Bigg{\{}\ell^{\rho}p_{i\mu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p_{i}^{\nu}} \tag{4.18}\] \[+\ell^{\rho}p_{i\nu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p_{i}^{\mu}}-p_{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu}\partial p_{i}^{\nu}}\Bigg{\}}\Bigg{]}\Gamma^{(N)\alpha_{1}\cdots\alpha_{i-1}\beta_{i}\alpha_{i+1}\cdots\alpha_{N}}+{\cal O}(\ell^{2}).\] The three graviton vertex follows from (4.15) with the ingoing graviton momenta \(k\), \(\ell\) and \(-(\ell+k)\), and the Lorentz indices carried by them \((\mu\nu)\), \((\rho\sigma)\) and \((\tau\kappa)\) respectively, and takes the following form: \[V^{(3)}_{\mu\nu,\rho\sigma,\tau\kappa}(k,\ell,-\ell-k) \tag{4.19}\] \[= i\,\kappa\,\,\Big{[}-(k.\ell+\ell^{2}+k^{2})\left(\eta_{\mu\nu}\eta_{\rho\tau}\eta_{\sigma\kappa}+\eta_{\rho\sigma}\eta_{\mu\tau}\eta_{\nu\kappa}+\eta_{\tau\kappa}\eta_{\mu\rho}\eta_{\nu\sigma}\right)\] \[+\,4\big{(}k.\ell+\ell^{2}+k^{2}\big{)}\eta_{\rho\nu}\eta_{\sigma\tau}\eta_{\kappa\mu}+\frac{1}{2}\big{(}k.\ell+\ell^{2}+k^{2}\big{)}\eta_{\mu\nu}\eta_{\rho\sigma}\eta_{\tau\kappa}\] \[-\,2\,\big{(}k_{\tau}\ell_{\kappa}\eta_{\mu\rho}\eta_{\nu\sigma}-\ell_{\mu}(k+\ell)_{\nu}\eta_{\rho\tau}\eta_{\sigma\kappa}-(k+\ell)_{\rho}k_{\sigma}\eta_{\mu\tau}\eta_{\nu\kappa}\big{)}\] \[-2\big{(}(k_{\mu}\ell_{\tau}+\ell_{\mu}\ell_{\tau}-k_{\tau}\ell_{\mu})\eta_{\nu\rho}\eta_{\sigma\kappa}+(2\ell_{\mu}k_{\rho}+\ell_{\mu}\ell_{\rho}+k_{\mu}k_{\rho})\eta_{\tau\nu}\eta_{\kappa\sigma}\]
\[+(k_{\rho}k_{\tau}+\ell_{\rho}k_{\tau}-\ell_{\tau}k_{\rho})\eta_{\mu\sigma}\eta_{\nu\kappa}\big{)}\Big{]}\ +{\cal O}(k^{4},k^{3}\ell,k^{2}\ell^{2},k\ell^{3},\ell^{4})\.\]

### KG-decomposition and IR-finite amplitudes

In the Grammer-Yennie prescription we decompose the internal graviton propagator (4.2) with momentum \(\ell\) flowing from massive spinning particle \(i\) to \(j(\neq i)\) in the following way [1] (see also [57]) \[\Delta^{\mu\nu,\rho\sigma}_{F}(\ell)\ =\ \frac{-i}{\ell^{2}-i\epsilon}\ \frac{1}{2}\left[K^{\mu\nu,\rho\sigma}_{(ij)}(\ell,p_{i},p_{j})+G^{\mu\nu,\rho\sigma}_{(ij)}(\ell,p_{i},p_{j})\right], \tag{4.20}\] where \[K^{\mu\nu,\rho\sigma}_{(ij)}(\ell,p_{i},p_{j}) = {\cal C}(\ell,p_{i},p_{j})\,\left[(p_{i}-\ell)^{\mu}\ell^{\nu}+(p_{i}-\ell)^{\nu}\ell^{\mu}\right]\,\big{[}(p_{j}+\ell)^{\rho}\ell^{\sigma}+(p_{j}+\ell)^{\sigma}\ell^{\rho}\big{]},\] \[G^{\mu\nu,\rho\sigma}_{(ij)}(\ell,p_{i},p_{j}) = \left(\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma}\eta^{\nu\rho}-\eta^{\mu\nu}\eta^{\rho\sigma}\right)-K^{\mu\nu,\rho\sigma}_{(ij)}(\ell,p_{i},p_{j})\, \tag{4.21}\] with \[{\cal C}(\ell,p_{i},p_{j}) = \frac{(-1)}{\{p_{i}.(p_{i}-\ell)-i\epsilon\}\ \{p_{j}.(p_{j}+\ell)-i\epsilon\}\{\ \ell.(\ell-2p_{i})-i\epsilon\}\ \{\ell.(\ell+2p_{j})-i\epsilon\}} \tag{4.22}\] \[\times\Big{[}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}-\ell^{2}(p_{i}.p_{j})+2(p_{i}.p_{j})(p_{i}.\ell)-2(p_{i}.p_{j})(p_{j}.\ell)\Big{]}\,.\] Note that \(p_{i}\) and \(p_{j}\) above refer to the external momenta flowing into the legs \(i\) and \(j\), and not necessarily the momenta of the lines to which the graviton propagator attaches (which may have additional contributions from the external soft graviton momentum or internal virtual graviton momenta). For virtual gravitons with one or both ends attached to a 3-graviton vertex instead of a massive particle, or to some internal massive particle line or vertex inside \(\widetilde{\Gamma}^{(N+1)}\), we do not carry out any Grammer-Yennie decomposition, as those do not contribute to IR divergences. In (4.20), the propagator part containing \(K_{(ij)}\) will be referred to as the K-graviton propagator and the part containing \(G_{(ij)}\) as the G-graviton propagator throughout this section.

Ward identities involving the K-graviton: From the definition of the K-graviton propagator in (4.21), it is clear that the K-graviton is proportional to a pure gauge of the structure \(\zeta^{\mu}\ell^{\nu}+\zeta^{\nu}\ell^{\mu}\), with \(\zeta=p_{i}-\ell\) when it flows from the \(i\)-th leg. Let us study the Ward identity for an off-shell un-amputated three particle Green's function involving two massive fields and one K-graviton. It is diagrammatically represented in Fig.12. The LHS of Fig.12 takes the following form: \[\frac{1}{q^{2}+m^{2}-i\epsilon}\frac{1}{(q+\ell)^{2}+m^{2}-i\epsilon}\big{(}\zeta^{\mu}\ell^{\nu}+\zeta^{\nu}\ell^{\mu}\big{)}\Big{[}\Xi(-q)\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell)\Xi(-q-\ell)\Big{]} \tag{4.23}\] To compute the above expression we use the result of (C.4) and then simplify using the identities in (2.10), (2.17) and derivatives of (2.17). After all the simplifications we get \[\frac{i\kappa}{(q+\ell)^{2}+m^{2}-i\epsilon}\Bigg{\{}2i\zeta.\ell\Xi(-q)+2i\zeta.q\ell^{\rho}\frac{\partial\Xi(-q)}{\partial q^{\rho}}+i\zeta.\ell\ell^{\rho}\ell^{\sigma}\frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}+2i\zeta.\ell\ell^{\rho}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\]
\[+\frac{i}{2}\ell^{b}\ell^{\rho}\zeta^{\nu}\ell^{\mu}\Sigma_{\nu b}\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\rho}}+\frac{i}{3}\zeta.q\ell^{\rho}\ell^{\sigma}\ell^{\nu}\frac{\partial^{3}\Xi(-q)}{\partial q^{\nu}\partial q^{\rho}\partial q^{\sigma}}+2i\zeta.q\Xi(-q)+i\zeta^{\nu}\ell^{b}\Sigma_{\nu b}\Xi(-q)\] \[+i\zeta^{\nu}\ell^{\mu}\ell^{b}\Sigma_{\nu b}\frac{\partial\Xi(-q)}{\partial q^{\mu}}+i\zeta.q\ell^{\mu}\ell^{b}\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{b}}+\mathcal{O}(\ell^{4})\Bigg{\}}\] \[+\frac{i\kappa}{q^{2}+m^{2}-i\epsilon}\Big{\{}-2i\zeta.q\Xi(-q)+i\zeta^{\nu}\ell^{b}\Xi(-q)\Sigma_{\nu b}^{T}\Big{\}}. \tag{4.24}\] Now, undoing the \(\ell\) expansion for \(\Xi(-q-\ell)\), the above expression can be re-written as \[\frac{1}{q^{2}+m^{2}-i\epsilon}\frac{1}{(q+\ell)^{2}+m^{2}-i\epsilon}\big{(}\zeta^{\mu}\ell^{\nu}+\zeta^{\nu}\ell^{\mu}\big{)}\Big{[}\Xi(-q)\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell)\Xi(-q-\ell)\Big{]}\] \[= \kappa\Big{\{}-2\zeta.(q+\ell)-\zeta^{\nu}\ell^{b}\Sigma_{\nu b}\Big{\}}\frac{\Xi(-q-\ell)}{(q+\ell)^{2}+m^{2}-i\epsilon}+\frac{\Xi(-q)}{q^{2}+m^{2}-i\epsilon}\ \kappa\Big{\{}2\zeta.q-\zeta^{\nu}\ell^{b}\Sigma_{\nu b}^{T}\Big{\}}. \tag{4.25}\] The above identity is diagrammatically represented in Fig.12. It is worth noting that the blob vertices in this representation depend on the momenta of the massive particle and the attached K-graviton, as well as on the spin angular momenta of the massive particle, as indicated in Fig.12.

Figure 12: This figure is a Feynman diagrammatic representation of the expression in (4.25). Solid lines represent the massive spinning particles, dashed lines represent the ingoing virtual graviton with momentum \(\ell\) and the arrow in the graviton line represents that it is a K-graviton proportional to \(\zeta_{\mu}\ell_{\nu}+\zeta_{\nu}\ell_{\mu}\). The solid blobs in the RHS represent a new kind of vertices; the Feynman rule for the left blob vertex is \(-\kappa\{2\zeta.(q+\ell)+\zeta^{\nu}\ell^{b}\Sigma_{\nu b}\}\) and for the right blob vertex is \(\kappa\{2\zeta.q-\zeta^{\nu}\ell^{b}\Sigma_{\nu b}^{T}\}\).

In comparison with the QED Ward identity in (3.22), the momentum dependence of the blob vertices weakens the power of the KG-decomposition. For instance, if we wish to study the Ward identity for the four-point un-amputated Green's function associated with two massive particles, one graviton, and one K-graviton, analogous to Fig.2, it will result in new Feynman rules for the right blob vertex drawn in the second diagram after the equality of Fig.2. Specifically, the right blob vertex Feynman rule reads \(\kappa\{2\zeta.(q+k)-\zeta^{\nu}\ell^{b}\Sigma^{T}_{\nu b}\}\), which contains an additional term \(2\kappa\zeta\cdot k\) relative to the three-point un-amputated Green's function right blob Feynman rule \(\kappa\{2\zeta.q-\zeta^{\nu}\ell^{b}\Sigma^{T}_{\nu b}\}\) in Fig.12. In turn, this implies that \({\cal A}^{(N+1)}\) contains an extra exponentiation term compared to the IR-exponentiation factor of \({\cal A}^{(N)}\). However, this extra contribution arising from the momentum dependent right blob vertex rule contributes to the IR-finite part of the loop integral, allowing us to follow the same strategy used in deriving the soft photon theorem.
In [1] it is also observed that the right-hand side of Fig.3, in the presence of one external graviton, does not vanish under a K-graviton insertion, but instead leaves some finite residual contribution. Fortunately, these residual terms also contribute to the IR-finite part of the loop integral when we evaluate \({\cal A}^{(N+1)}\), and we can systematically account for them as well. However, as mentioned in section-2, we still need to regulate some additional IR divergence in \({\cal A}^{(N+1)}\) that arises at the one-loop level from the Feynman diagrams involving three graviton interaction vertices, with one of the gravitons being the external graviton.

IR-finite amplitudes: The IR-finite amplitudes associated with the scattering of \(N\) spinning massive particles, and with \(N\) spinning massive particles plus one outgoing graviton, are defined by \[{\cal A}^{(N)}\equiv\exp\{K_{gr}\}\ {\cal A}^{(N)}_{\text{IR-finite}}\, \tag{4.26}\] \[{\cal A}^{(N+1)}\stackrel{{\text{reg}}}{{=}}\exp\{K_{gr}\}\ {\cal A}^{(N+1)}_{\text{IR-finite}}\, \tag{4.27}\] where \[K_{gr} = -i\kappa^{2}\sum_{i=1}^{N}\ \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{\ell.(\ell-2p_{i})-i\epsilon}\ \frac{1}{\ell.(\ell+2p_{j})-i\epsilon} \tag{4.28}\] \[\times\Big{[}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}-\ell^{2}(p_{i}.p_{j})+2(p_{i}.p_{j})(p_{i}.\ell)-2(p_{i}.p_{j})(p_{j}.\ell)\Big{]}\] is the eikonal IR-divergent exponentiated factor. In equations (4.26) and (4.27), \({\cal A}^{(N)}_{\text{IR-finite}}\) and \({\cal A}^{(N+1)}_{\text{IR-finite}}\) represent the infrared finite components of the \(N\)-particle and \(N\)-particle-1-graviton amplitudes, respectively.9

Footnote 9: To determine \({\cal A}^{(N+1)}_{\text{IR-finite}}\), we need to use an explicit IR cut-off for the diagrams involving graviton self interaction vertices. The “reg” over the equality sign in (4.27) corresponds to this particular IR regularization scheme. Effectively this IR-regularization scheme removes a factor of \(\exp\{K_{phase}\}\) from \({\cal A}^{(N+1)}\), with an explicit expression for \(K_{phase}\) being \[K_{phase}=i\kappa^{2}\ \sum_{j=1}^{N}(p_{j}.k)^{2}\int_{0}^{R^{-1}}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{k.\ell+i\epsilon}\frac{1}{p_{j}.\ell-i\epsilon}. \tag{4.29}\]

These components are obtained by removing the exponentiated IR-divergent parts from the original divergent amplitudes through the relations (4.26) and (4.27). Both \({\cal A}^{(N)}_{\text{IR-finite}}\) and \({\cal A}^{(N+1)}_{\text{IR-finite}}\) comprise contributions from the corresponding tree-level amplitudes and loop amplitudes up to all orders in perturbation theory. However, there is a condition: if both ends of a virtual graviton propagator are attached to different external massive spinning particle lines (which may already contain additional real or virtual graviton lines), then this graviton propagator should be replaced by a G-graviton propagator when we evaluate the IR-finite parts. Additionally, the same set of diagrams needs to be evaluated with the K-graviton propagator as well, and then one has to subtract a factor of \(K_{gr}\) times the IR-finite amplitude at one lower loop order. On top of this, to evaluate the full IR-finite part of \({\cal A}^{(N+1)}\), we need to IR regulate the diagrams containing at least one graviton self-interaction vertex involving the external graviton.
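To see explicitly the divergence that the exponentiation in (4.26) and (4.27) strips off, note that in the \(\ell^{\mu}\to 0\) region the integrand of (4.28) reduces to
\[-i\kappa^{2}\ \big{[}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\big{]}\ \frac{1}{(\ell^{2}-i\epsilon)\,\{-2p_{i}.\ell-i\epsilon\}\,\{2p_{j}.\ell-i\epsilon\}}\ \sim\ \frac{1}{|\ell|^{4}}\,\]
so that \(K_{gr}\) behaves like \(\int d^{4}\ell/|\ell|^{4}\), i.e. it is logarithmically IR divergent, and \({\cal A}^{(N)}_{\text{IR-finite}}\) and \({\cal A}^{(N+1)}_{\text{IR-finite}}\) are free of this divergence by construction.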
Rigorous definitions of the one-loop IR-finite amplitudes are provided below. Now, analogous to (3.29), (3.30) and (3.31), here we also split the IR-finite parts of the tree and one-loop amplitudes in the following way \[{\cal A}^{(N)}_{\text{IR-finite}}\ =\ {\cal A}^{(N)}_{\text{IR-finite},0}+{\cal A}^{(N)}_{\text{IR-finite},1}+\cdots\, \tag{4.30}\] \[{\cal A}^{(N+1)}_{\text{IR-finite}}\ =\ {\cal A}^{(N+1)}_{\text{IR-finite},0}+{\cal A}^{(N+1)}_{\text{IR-finite},1}+\cdots\, \tag{4.31}\] \[{\cal A}^{(N)}_{\text{IR-finite},1}\ =\ {\cal A}^{(N)}_{G,1}+{\cal A}^{(N)}_{K\text{-finite},1}+{\cal A}^{(N)}_{\text{self},1}+{\cal A}^{(N)}_{\text{non-div},1}\, \tag{4.32}\] \[{\cal A}^{(N+1)}_{\text{IR-finite},1}\ =\ {\cal A}^{(N+1)}_{G,1}+{\cal A}^{(N+1)}_{K\text{-finite},1}+{\cal A}^{(N+1)}_{\text{3-graviton-reg},1}+{\cal A}^{(N+1)}_{\text{self},1}+{\cal A}^{(N+1)}_{\text{non-div},1}\. \tag{4.33}\] Above, the different components of the IR-finite one-loop amplitudes are defined as

1. \({\cal A}^{(N)}_{G,1}\) corresponds to the diagram in Fig.5 with the dashed line being a G-graviton propagator.
2. \({\cal A}^{(N)}_{K\text{-finite},1}\) corresponds to the contribution from the diagram in Fig.5, evaluated with the K-graviton propagator representing the dashed line there, with the contribution of \(K_{gr}\times\Gamma^{(N)}\) then subtracted at the integrand level.
3. \({\cal A}^{(N)}_{\text{self},1}\) represents the set of Feynman diagrams in Fig.6, evaluated with the full graviton propagator representing the dashed lines.
4. \({\cal A}^{(N)}_{\text{non-div},1}\) represents the set of diagrams in Fig.7, evaluated with the full graviton propagator representing the dashed lines.
5. \({\cal A}^{(N+1)}_{G,1}\) corresponds to the set of diagrams in Fig.8, where we need to evaluate the diagrams with the G-graviton propagator representing the dashed virtual lines. The dashed external lines represent the on-shell graviton with momentum \(k\).
6. \({\cal A}^{(N+1)}_{K\text{-finite},1}\) corresponds to the contribution from the diagrams in Fig.8, evaluated with the K-graviton propagator representing the dashed virtual line, with the contribution of \(K_{gr}\times\Gamma^{(N+1)}\) then subtracted at the integrand level. The dashed external lines represent the on-shell graviton with momentum \(k\).
7. \({\cal A}^{(N+1)}_{\text{3-graviton-reg},1}\) represents the sum of the contributions of the Feynman diagrams in Fig.13, when we evaluate them using the full graviton propagator and regulate the IR divergence considering the detector resolution as the explicit IR cut-off.
8. \({\cal A}^{(N+1)}_{\rm self,1}\) represents the set of Feynman diagrams in Fig.14, evaluated with the full graviton propagator representing the dashed lines. The dashed external lines represent the on-shell graviton with momentum \(k\).
9. \({\cal A}^{(N+1)}_{\rm non-div,1}\) represents the set of diagrams in Fig.11, evaluated with the full graviton propagator representing the dashed lines. The dashed external lines represent the on-shell graviton with momentum \(k\).

### Derivation of soft graviton theorem

The goal here will be to derive the order \(\omega^{-1}\) and \(\ln\omega\) soft factors from the ratio of \({\cal A}^{(N+1)}_{\rm IR\mbox{-}finite}\) and \({\cal A}^{(N)}_{\rm IR\mbox{-}finite}\) when the external graviton energy is small, i.e. \(\omega\ll|p_{i}^{\mu}|\).

#### 4.3.1 IR-finite one-loop N-particle amplitude

Let us evaluate together the sum of the contributions from Fig.5 with the insertion of a G-graviton propagator and the finite part with the insertion of a K-graviton propagator. We can do this by evaluating the diagram in Fig.5 with the dashed line representing a full graviton propagator, and then subtracting \(K_{gr}\times\Gamma^{(N)}\) from the result. The mathematical expression reads \[{\cal A}^{(N)}_{G,1}+{\cal A}^{(N)}_{K-{\rm finite},1} = \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon} \tag{4.34}\] \[\times\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu\nu}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\Delta^{\mu\nu,\rho\sigma}_{F}(\ell)\] \[\Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\rho\sigma}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\Gamma^{(N)}_{(ij)}(p_{i}-\ell,p_{j}+\ell)\] \[-\ K_{gr}\times\Gamma^{(N)}\.\] We evaluate the above expression by using the identity derived in (C.4).
The result, in the small \(\ell\) expansion of the integrand, turns out to be \[{\cal A}^{(N)}_{G,1}+{\cal A}^{(N)}_{K-{\rm finite},1} \tag{4.35}\] \[= -i\kappa^{2}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{\ell\cdot(\ell-2p_{i})-i\epsilon}\ \frac{1}{\ell\cdot(\ell+2p_{j})-i\epsilon}\] \[\Bigg{[}\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}\left(-\ell^{\rho}\frac{\partial}{\partial p_{i}^{\rho}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ell^{\rho}\frac{\partial}{\partial p_{j}^{\rho}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\right)\] \[+\Big{\{}2(p_{i}.p_{j})\left(p_{j}^{\rho}\Sigma^{T}_{i\rho\sigma}\ell^{\sigma}-p_{i}^{\rho}\Sigma^{T}_{j\rho\sigma}\ell^{\sigma}\right)+p_{i}^{2}p_{j}^{\rho}\Sigma^{T}_{j\rho\sigma}\ell^{\sigma}-p_{j}^{2}p_{i}^{\rho}\Sigma^{T}_{i\rho\sigma}\ell^{\sigma}\Big{\}}\,\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\] \[+{\cal O}(\ell\ell)\Bigg{]}\] Note that in the limit \(\ell^{\mu}\to 0\) the integrand of the above expression behaves like \(\int\frac{d^{4}\ell}{|\ell|^{3}}\) at leading order; hence the contribution is IR-finite, as promised. The diagrams in Fig.6 renormalize the massive spinning particle propagators in the presence of gravitational interaction, when the dashed lines represent graviton propagators. The contribution from the loop diagrams there is IR-finite. Hence, following the analogue of the wave function renormalization condition (3.36), the sum of the contributions vanishes, i.e. \[\mathcal{A}^{(N)}_{\text{self,1}}=0. \tag{4.36}\] Now let us analyze the diagrams in Fig.7, where the dashed lines, representing full graviton propagators, are connected to some internal massive virtual lines or massive EFT vertices inside \(\Gamma^{(N)}\). In the limit \(\ell^{\mu}\to 0\) the integrand of the first diagram behaves like \(\int\frac{d^{4}\ell}{|\ell|^{3}}\) at leading order, and that of the second diagram behaves like \(\int\frac{d^{4}\ell}{|\ell|^{2}}\) at leading order; hence both are IR-finite. We do not need to evaluate them explicitly. Let the total contribution, after removing the UV divergences by adding counter term diagrams, read \[\mathcal{A}^{(N)}_{\text{non-div,1}}. \tag{4.37}\] Hence the total IR-finite contribution to the \(N\)-particle amplitude, following from the definition (4.32), becomes \[\mathcal{A}^{(N)}_{\text{IR-finite,1}} = -i\kappa^{2}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{\ell\cdot(\ell-2p_{i})-i\epsilon}\ \frac{1}{\ell\cdot(\ell+2p_{j})-i\epsilon} \tag{4.38}\] \[\left[\{2(p_{i}\!\cdot\!p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}\left(-\ell^{\rho}\frac{\partial}{\partial p_{i}^{\rho}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ell^{\rho}\frac{\partial}{\partial p_{j}^{\rho}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\right)\right.\] \[\left.+\left\{2(p_{i}\!\cdot\!p_{j})\left(p_{j}^{\rho}\Sigma_{i\rho\sigma}^{T}\ell^{\sigma}-p_{i}^{\rho}\Sigma_{j\rho\sigma}^{T}\ell^{\sigma}\right)+p_{i}^{2}p_{j}^{\rho}\Sigma_{j\rho\sigma}^{T}\ell^{\sigma}-p_{j}^{2}p_{i}^{\rho}\Sigma_{i\rho\sigma}^{T}\ell^{\sigma}\right\}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\right.\] \[\left.+\mathcal{O}(\ell\ell)\right]\ +\ \mathcal{A}^{(N)}_{\text{non-div,1}}.\]

#### 4.3.2 IR-finite one-loop (N+1)-particle amplitude in the soft limit

Here we analyze all the Feynman diagrams contributing to (4.33) in the soft limit, i.e. \(\omega\to 0\).
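Throughout the analysis below, the massive propagator denominators adjacent to the external graviton are rewritten using the on-shell conditions \(p_{i}^{2}+m_{i}^{2}=0\) and \(k^{2}=0\); for instance
\[(p_{i}+k-\ell)^{2}+m_{i}^{2}\ =\ \ell\cdot(\ell-2p_{i}-2k)+2p_{i}\cdot k\,\]
which is how the denominators appearing in (4.40)-(4.44) arise, and the soft expansion then corresponds to expanding in the small quantities \(p_{i}\cdot k\) and \(k\cdot\ell\).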
We start by analyzing the diagrams in Fig.8 with the dashed lines being full graviton propagators, which evaluates the contribution of \(\mathcal{A}^{(N+1)}_{G,1}+\mathcal{A}^{(N+1)}_{K-\text{finite},1}+K_{gr}\Gamma^{(N+1)}\). By evaluating this sum with the full graviton propagator, we avoid all the computational complications of the KG-decomposition in the presence of an external graviton, as pointed out in the paragraph below (4.25). Finally, we subtract the contribution of \(K_{gr}\Gamma^{(N+1)}\) from the sum to extract the \(\omega^{-1}\) and \(\ln\omega\) soft factors from \(\mathcal{A}^{(N+1)}_{G,1}+\mathcal{A}^{(N+1)}_{K-\text{finite},1}\) in the limit \(\omega\ll|p_{i}^{\mu}|\). The first diagram in Fig.8 with the full graviton propagator representing the internal dashed line takes the following form \[B_{I} \equiv \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{\varepsilon^{\lambda\tau}(k)}{2p_{i}.k}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\] \[\left[\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\lambda\tau}(p_{i},-p_{i}-k,k)\Xi_{i}(-p_{i}-k)\Gamma^{(3)}_{\rho\sigma}(p_{i}+k,-p_{i}-k+\ell,-\ell)\Xi_{i}(-p_{i}-k+\ell)\right]\Delta_{F}^{\rho\sigma,cd}(\ell)\] \[\times\left[\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{cd}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\right]\,\Gamma^{(N)}_{(ij)}(p_{i}+k-\ell,p_{j}+\ell). \tag{4.39}\] After using the identities in (C.4) and (C.6) and simplifying, the above expression reduces to \[B_{I} = i\kappa^{3}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\,\frac{1}{p_{i}.k}\int\frac{d^{4}\ell}{(2\pi)^{4}}\,\,\frac{1}{\ell^{2}-i\epsilon}\frac{1}{\ell.(\ell-2p_{i}-2k)+2p_{i}.k-i\epsilon}\,\frac{1}{\ell.(\ell+2p_{j})-i\epsilon} \tag{4.40}\] \[\left[\Bigg{(}-2(p_{i}.\varepsilon.p_{i})\left\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}+4p_{i}.p_{j}p_{j}.k-2p_{i}.p_{j}p_{j}.\ell+2p_{i}.p_{j}p_{i}.\ell\right\}\right.\] \[\left.+4(p_{i}.\varepsilon.p_{i})p_{i}.p_{j}p_{i}^{\mu}\Sigma_{j\mu\nu}^{T}\ell^{\nu}-2(p_{i}.\varepsilon.p_{i})p_{i}^{2}p_{j}^{\mu}\Sigma_{j\mu\nu}^{T}\ell^{\nu}+2\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}p_{i\mu}\varepsilon^{\mu\rho}\Sigma_{i\rho\nu}^{T}k^{\nu}\right.\] \[\left.-4(p_{i}.\varepsilon.p_{i})p_{i}.p_{j}p_{i}^{\mu}\Sigma_{i\mu\nu}^{T}\ell^{\nu}+2(p_{i}.\varepsilon.p_{i})p_{j}^{2}p_{i}^{\mu}\Sigma_{j\mu\nu}^{T}\ell^{\nu}-2ip_{i}.kp_{i}.p_{j}p_{i\mu}\varepsilon^{\mu\nu}\frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\nu}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}p_{j}^{\sigma}\right.\] \[\left.+ip_{i}.kp_{j}^{2}p_{i\mu}\varepsilon^{\mu\nu}\frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\nu}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}p_{i}^{\sigma}\right)\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\ -2(p_{i}.\varepsilon.p_{i})\left\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\right\}\] \[\times\left((k-\ell)^{\mu}\frac{\partial}{\partial p_{i}^{\mu}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ell^{\mu}\frac{\partial}{\partial p_{j}^{\mu}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\right)\ +\ {\cal O}(\ell\ell,\ell k,kk)\right]\,.\] The second diagram in Fig.8 with the full graviton propagator representing the dashed internal line takes the following form \[B_{II} \equiv \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\,\,\frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\Delta_{F}^{\rho\sigma,\lambda\tau}(\ell)
\tag{4.41}\] \[\times\Big{[}\epsilon_{i}^{T}(-p_{i})\varepsilon^{\mu\nu}(k)\Gamma^{(4)}_{\mu\nu,\rho\sigma}(p_{i},-p_{i}-k+\ell,k,-\ell)\Xi_{i}(-p_{i}-k+\ell)\Big{]}\] \[\times\,\left[\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\lambda\tau}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\right]\,\Gamma^{(N)}_{(ij)}(p_{i}+k-\ell,p_{j}+\ell)\.\] After using the identities in (C.4) and (C.8) and simplifying, the above expression reduces to \[B_{II} = i\kappa^{3}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{4}\ell}{(2\pi)^{4}}\,\,\frac{1}{\ell^{2}-i\epsilon}\frac{1}{\ell.(\ell-2p_{i}-2k)+2p_{i}.k-i\epsilon}\frac{1}{\ell.(\ell+2p_{j})-i\epsilon} \tag{4.42}\] \[\Bigg{[}\Bigg{\{}-4(p_{i}.\varepsilon.p_{i})p_{j}^{2}+16(p_{i}.\varepsilon.p_{j})(p_{i}.p_{j})+2ip_{i}.p_{j}p_{i\mu}\varepsilon^{\mu\nu}\frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\nu}}p_{j}^{\sigma}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\] \[+2ip_{i}.p_{j}p_{j}^{\sigma}\frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\sigma}}p_{i\mu}\varepsilon^{\mu\nu}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\nu}}-ip_{j}^{2}p_{i\mu}\varepsilon^{\mu\nu}\frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\nu}}p_{i}^{\sigma}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\] \[-ip_{j}^{2}p_{i}^{\sigma}\frac{\partial{\cal K}_{i}(-p_{i})}{\partial p_{i}^{\sigma}}p_{i\mu}\varepsilon^{\mu\nu}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\nu}}\Bigg{\}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+{\cal O}(\ell,k)\Bigg{]}\.\] The third diagram in Fig.8 with the full graviton propagator representing the dashed internal line takes the following form \[B_{III} \equiv \sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{i}+k-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon} \tag{4.43}\] \[\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu\nu}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\varepsilon^{\rho\sigma}(k)\Gamma^{(3)}_{\rho\sigma}(p_{i}-\ell,-p_{i}-k+\ell,k)\Xi_{i}(-p_{i}-k+\ell)\Big{]}\] \[\times\Delta^{\mu\nu,\lambda\tau}_{F}(\ell)\ \Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\lambda\tau}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\ \Gamma^{(N)}_{(ij)}(p_{i}+k-\ell,p_{j}+\ell)\.\] After using the identities of appendix-C and simplifying, the above expression reduces to \[B_{III} = i\kappa^{3}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i\epsilon}\frac{1}{\ell.(\ell-2p_{i})-i\epsilon}\frac{1}{\ell.(\ell-2p_{i}-2k)+2p_{i}.k-i\epsilon} \tag{4.44}\] \[\times\frac{1}{\ell.(\ell+2p_{j})-i\epsilon}\Bigg{[}\Bigg{(}-4(p_{i}.\varepsilon.p_{i})\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}-8(p_{i}.
\varepsilon.p_{i})p_{i}.p_{j}p_{i}.\ell\] \[+8(p_{i}.\varepsilon.\ell)\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}+8(p_{i}.\varepsilon.p_{i})p_{i}.p_{j}p_{j}.\ell+4\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}p_{i\rho}\varepsilon^{\rho\sigma}\Sigma^{T}_{i\sigma\mu}k^{\mu}\] \[-8p_{i}.p_{j}(p_{i}.\varepsilon.p_{i})p_{j}^{\mu}\Sigma^{T}_{i\mu\nu}\ell^{\nu}+4p_{j}^{2}(p_{i}.\varepsilon.p_{i})p_{i}^{\mu}\Sigma^{T}_{i\mu\nu}\ell^{\nu}+8(p_{i}.\varepsilon.p_{i})p_{i}.p_{j}p_{i}^{\mu}\Sigma^{T}_{j\mu\nu}\ell^{\nu}\] \[-4p_{i}^{2}(p_{i}.\varepsilon.p_{i})p_{j}^{\mu}\Sigma^{T}_{j\mu\nu}\ell^{\nu}+4ip_{i}.\ell p_{i}.p_{j}p_{j}^{\mu}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\mu}}p_{i\rho}\varepsilon^{\rho\sigma}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\] \[-2ip_{i}.\ell p_{j}^{2}p_{i}^{\mu}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\mu}}p_{i\rho}\varepsilon^{\rho\sigma}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\Bigg{)}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\ -4(p_{i}.\varepsilon.p_{i})\left\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\right\}\] \[\times\left((k-\ell)^{\mu}\frac{\partial}{\partial p_{i}^{\mu}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\ell^{\mu}\frac{\partial}{\partial p_{j}^{\mu}}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\right)\ +\ \mathcal{O}(\ell\ell,\ell k,kk)\Bigg{]}\.\] The fourth diagram in Fig.8 with the full graviton propagator representing the dashed internal line takes the following form \[B_{IV} \equiv \sum_{k=1}^{N}\frac{1}{2p_{k}\cdot k}\epsilon_{k}^{T}(-p_{k})\varepsilon^{\lambda\tau}(k)\Gamma^{(3)}_{\lambda\tau}(p_{k},-p_{k}-k,k)\Xi_{k}(-p_{k}-k) \tag{4.45}\] \[\times\frac{1}{2}\sum_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i,k\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\] \[\times\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu\nu}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\Delta^{\mu\nu,\rho\sigma}_{F}(\ell)\] \[\times\Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\rho\sigma}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\Gamma^{(N)}_{(ijk)}(p_{i}-\ell,p_{j}+\ell,p_{k}+k)\.\] The fifth diagram in Fig.8 with the full graviton propagator representing the dashed internal line takes the following form \[B_{V} \equiv \frac{1}{2}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\] \[\times\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu\nu}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\Delta_{F}^{\mu\nu,\rho\sigma}(\ell)\ \varepsilon^{\lambda\tau}(k)\] \[\times\Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\rho\sigma}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\widetilde{\Gamma}^{(N+1)}_{(ij),\lambda\tau}(p_{i}-\ell,p_{j}+\ell,k). \tag{4.46}\]
In both the expressions of \(B_{IV}\) and \(B_{V}\) the integrands can be simplified using (C.4), and the result of the following common loop integral reads \[\int\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell)^{2}+m_{j}^{2}-i\epsilon}\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{\mu\nu}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\] \[\times\Delta_{F}^{\mu\nu,\rho\sigma}(\ell)\ \Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\rho\sigma}(p_{j},-p_{j}-\ell,\ell)\Xi_{j}(-p_{j}-\ell)\Big{]}\] \[= -2i\kappa^{2}\epsilon_{i}^{T}\epsilon_{j}^{T}\int\frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i\epsilon}\ \frac{1}{\ell\cdot(\ell-2p_{i})-i\epsilon}\ \frac{1}{\ell\cdot(\ell+2p_{j})-i\epsilon}\Big{[}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\] \[+2(p_{i}.p_{j})(p_{i}.\ell)-2(p_{i}.p_{j})(p_{j}.\ell)+2(p_{i}.p_{j})\left(p_{j}^{\rho}\Sigma_{i\rho\sigma}^{T}\ell^{\sigma}-p_{i}^{\rho}\Sigma_{j\rho\sigma}^{T}\ell^{\sigma}\right)+p_{i}^{2}p_{j}^{\rho}\Sigma_{j\rho\sigma}^{T}\ell^{\sigma}\] \[-p_{j}^{2}p_{i}^{\rho}\Sigma_{i\rho\sigma}^{T}\ell^{\sigma}\ +{\cal O}(\ell\ell)\Big{]}. \tag{4.47}\] Hence the total IR-finite contribution from the sets of diagrams in Fig.8 reads \[{\cal A}^{(N+1)}_{G,1}+{\cal A}^{(N+1)}_{K-{\rm finite},1}\] \[= B_{I}+B_{II}+B_{III}+B_{IV}+B_{V}-K_{gr}\times\Gamma^{(N+1)}\, \tag{4.48}\] where the \((N+1)\)-particle tree level amplitude is given by \[\Gamma^{(N+1)} = \sum_{k=1}^{N}\frac{1}{2p_{k}\cdot k}\epsilon_{k}^{T}(-p_{k})\varepsilon^{\lambda\tau}(k)\Gamma^{(3)}_{\lambda\tau}(p_{k},-p_{k}-k,k)\Xi_{k}(-p_{k}-k)\Gamma^{(N)}_{(k)}(p_{k}+k) \tag{4.49}\] \[+\ \varepsilon^{\lambda\tau}(k)\ \widetilde{\Gamma}^{(N+1)}_{\lambda\tau}(k)\.\] In the soft limit the above tree level \((N+1)\)-particle amplitude reproduces the tree level soft graviton theorem, with the soft factor quoted in the introduction. Substituting the results of (4.40), (4.42), (4.44), (4.45), (4.46) and (4.47) in (4.48), it is easy to see that the final expression is IR-finite in the limit \(\ell^{\mu}\to 0\). In the explicit computation, the IR divergent contribution in the sum of \((B_{I}+B_{III})+B_{IV}+B_{V}\) cancels against the IR divergent contribution of \(K_{gr}\times\Gamma^{(N+1)}\), and \(B_{II}\) is IR-finite at finite values of \(k\). Now let us analyze the expression (4.48) in the soft limit, i.e. \(\omega\ll|p_{i}^{\mu}|\). In this limit a part of the final expression (4.48) contributes at order \(\omega^{-1}\) for the full integration range of the virtual graviton momentum, which reproduces Weinberg's soft graviton factor relating two IR-finite one-loop amplitudes. For evaluating the rest of the IR-finite part of (4.48) we divide the virtual momentum integration range into three regions: \(|\ell^{\mu}|\in[0,\omega]\), \([\omega,|p_{i}^{\mu}|]\) and \([|p_{i}^{\mu}|,\infty)\). It turns out that in the soft limit the integrand starts contributing at order \(\omega^{0}\) in the regions of integration \(|\ell^{\mu}|\in[0,\omega]\) and \([|p_{i}^{\mu}|,\infty)\). Finally, in the region of integration \(|\ell^{\mu}|\in[\omega,|p_{i}^{\mu}|]\) the integrand contributes at order \(\ln\omega\), which is dominant compared to order \(\omega^{0}\) in the soft expansion.
Hence the order \(\omega^{-1}\) and \(\ln\omega\) contribution from (4.48) turns out to be \[\mathcal{A}_{G,1}^{(N+1)}+\mathcal{A}_{K-{\rm finite},1}^{(N+1)} \tag{4.50}\] \[= \kappa\sum_{k=1}^{N}\frac{p_{k}.\varepsilon.p_{k}}{p_{k}.k}\times\Big{(}\mathcal{A}_{G,1}^{(N)}+\mathcal{A}_{K-{\rm finite},1}^{(N)}\Big{)}\] \[-\frac{i\kappa^{3}}{4}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\epsilon_{i}^{T}\,\epsilon_{j}^{T}\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{\ell.p_{i}+i\epsilon}\frac{1}{\ell.p_{j}-i\epsilon}\] \[\times\Bigg{[}-8(p_{i}.\varepsilon.p_{i})p_{i}.p_{j}\frac{p_{j}.k}{p_{i}.k}+\frac{2\ell.k}{p_{i}.k}\frac{1}{\ell.p_{i}+i\epsilon}(p_{i}.\varepsilon.p_{i})\,\big{\{}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\big{\}}\] \[-\frac{4(p_{i}.\varepsilon.\ell)}{\ell.p_{i}+i\epsilon}\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}-4(p_{i}.\varepsilon.p_{i})p_{j}^{2}+16(p_{i}.\varepsilon.p_{j})(p_{i}.p_{j})\Bigg{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\] \[+\mathcal{O}(\omega^{0})\.\] Above, the expression of \(\mathcal{A}_{G,1}^{(N)}+\mathcal{A}_{K-{\rm finite},1}^{(N)}\) is given in (4.35). The "\(reg\)" in the subscript of the loop momentum integration above refers to the integration range \(\omega<|\ell^{\mu}|<|p_{i}^{\mu}|,|p_{j}^{\mu}|\). It is intriguing to observe that the theory dependent components, such as \(\mathcal{K}_{i}\), \(\Xi_{i}\), and \(\frac{\partial\Gamma^{(N)}}{\partial p_{i}}\), along with the dependence on the spin angular momenta of the massive particles, vanish when the individual diagram contributions are summed in the integrand contributing to the \(\ln\omega\) order soft factor. This theory- and spin-independence feature does not hold for the order \(\omega^{0}\) contribution from (4.48), which we have ignored here. Let us proceed to calculate the contribution of \(\mathcal{A}_{\rm 3-graviton-reg,1}^{(N+1)}\). This term represents the sum of contributions from the Feynman diagrams shown in Fig.13. We evaluate them using the full graviton propagator and regulate the IR divergence of the virtual loop momentum integration by introducing an explicit IR cut-off \(R^{-1}\), which serves as the resolution of the detector.
The expression of the first diagram in Fig.13 reads \[C_{I} \equiv \sum_{\begin{subarray}{c}i,j=1\\ i>j\end{subarray}}^{N}\int_{R^{-1}}^{\infty}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell+k)^{2}+m_{j} ^{2}-i\epsilon}\] \[\times\Big{[}\epsilon_{i}^{T}(-p_{i})\Gamma^{(3)}_{ab}(p_{i},-p_{ i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Big{]}\] \[\times\Delta_{F}^{ab,\rho\sigma}(\ell)\varepsilon^{\mu\nu}(k)V^{( 3)}_{\mu\nu,\rho\sigma,\tau\kappa}(k,\ell,-\ell-k)\Delta_{F}^{\tau\kappa,cd}(\ell)\] \[\Big{[}\epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{cd}(p_{j},-p_{j}- \ell-k,\ell+k)\Xi_{j}(-p_{j}-\ell-k)\Big{]}\Gamma^{(N)}_{(ij)}(p_{i}-\ell,p_{j }+\ell+k)\] Now using the identity in (C.4) and substituting the 3-graviton vertex from (4.19) with on-shell and transverse-traceless condition for the external graviton the above expression simplifies to \[C_{I} = -\frac{i\kappa^{3}}{8}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{R^{-1}}^{\infty} \frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\frac{1}{(p_{j}+\ell+k)^{2}+m_{j }^{2}-i\epsilon}\frac{1}{\ell^{2}-i\epsilon} \tag{112}\] \[\times\frac{1}{(\ell+k)^{2}-i\epsilon}\Big{[}4p_{i}^{\rho}p_{i}^ {\sigma}-2p_{i}^{2}\eta^{\rho\sigma}-2p_{i}^{\rho}\ell^{\sigma}-2p_{i}^{\sigma }\ell^{\rho}+2p_{i}^{\rho}\Sigma_{i}^{T\sigma a}\ell_{a}+2p_{i}^{\sigma}\Sigma _{i}^{T\rho a}\ell_{a}\] \[-2\eta^{\rho\sigma}p_{i}^{a}\Sigma_{iac}^{T}\ell^{c}+\mathcal{O} (\ell\ell)\Big{]}\times\mathcal{C}(\ell\ell)\Big{]}\times\mathcal{C}[-2 \eta^{\rho\sigma}p_{i}^{a}\Sigma_{iac}^{T}\ell^{c}+\mathcal{O}(\ell\ell) \Big{]}\times\mathcal{C}[-(k.\ell+\ell^{2})\left(\eta_{\rho\sigma}\eta_{ \mu\tau}\eta_{\nu\kappa}+\eta_{\tau\kappa}\eta_{\mu\rho}\eta_{\nu\sigma} \right)\] \[+\ 4\left(k.\ell+\ell^{2}\right)\eta_{\rho\nu}\eta_{\sigma\tau} \eta_{\kappa\mu}-\ 2\left(k_{\tau}\ell_{\kappa}\eta_{\mu\rho}\eta_{\nu\sigma}- \ell_{\mu}\ell_{\nu}\eta_{\rho\tau}\eta_{\sigma\kappa}-(k+\ell)_{\rho}k_{ \sigma}\eta_{\mu\tau}\eta_{\nu\kappa}\right)\] \[-2\left\{(\ell_{\mu}\ell_{\tau}-k_{\tau}\ell_{\mu})\eta_{\nu\rho }\eta_{\sigma\kappa}+(2\ell_{\mu}k_{\rho}+\ell_{\mu}\ell_{\rho})\eta_{\tau\nu }\eta_{\kappa\sigma}+(k_{\rho}k_{\tau}+\ell_{\rho}k_{\tau}-\ell_{\tau}k_{\rho })\eta_{\mu\sigma}\eta_{\nu\kappa}\right\}\] \[+\mathcal{O}(k^{4},k^{3}\ell,k^{2}\ell^{2},k\ell^{3},\ell^{4}) \Big{]}\times\left[4p_{j}^{\tau}p_{i}^{\kappa}-2p_{j}^{2}\eta^{\tau\kappa}+2 p_{j}^{\tau}(\ell+k)^{\kappa}+2p_{j}^{\kappa}(\ell+k)^{\tau}\right.\] \[\left.-2p_{j}^{\tau}\Sigma_{j}^{T\kappa b}(\ell+k)_{b}-2p_{j}^{ \kappa}\Sigma_{j}^{T\tau b}(\ell+k)_{b}+2\eta^{\tau\kappa}p_{j}^{b}\Sigma_{jbd }^{T}(\ell+k)^{d}+\mathcal{O}(\ell\ell,\ell k,kk)\right]\] \[\times\Gamma_{(ij)}^{(N)}(p_{i}-\ell,p_{j}+\ell+k)\.\] The above expression contributes at order \(\ln\omega\) in three regions of integration \(|\ell^{\mu}|\in[R^{-1},\omega]\), \(|(\ell+k)^{\mu}|\in[R^{-1},\omega]\), and "\(reg\)"\(\equiv|\ell^{\mu}|\in[\omega,|p_{i}^{\mu}|]\) in the soft limit. The contribution from the regions \(|\ell^{\mu}|\in[R^{-1},\omega]\) and \(|(\ell+k)^{\mu}|\in[R^{-1},\omega]\) turn out to be same as \(C_{I}\) is symmetric under the simultaneous exchange of \(\ell\leftrightarrow(\ell+k)\) and \(i\leftrightarrow j\). 
Hence we only evaluate the Figure 13: Diagrams containing 3-graviton vertex contributing to \(\mathcal{A}_{\text{3-graviton-reg,1}}^{(N+1)}\) after regulating the IR-divergence considering detector resolution as the explicit IR cut-off. The solid lines represent massive spinning particles and the dashed lines represent gravitons. contribution in region \(|\ell^{\mu}|\in[R^{-1},\omega]\) and multiply by a factor of 2 to incorporate the contribution of other region. In the region \(|\ell^{\mu}|\in[R^{-1},\omega]\) we approximate the propagator denominators at leading order as \[\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\simeq-\frac{1}{2p_{ i}.\ell+i\epsilon}\,,\,\frac{1}{(p_{j}+\ell+k)^{2}+m_{j}^{2}-i\epsilon}\simeq \frac{1}{2p_{j}.k-i\epsilon}\,,\] \[\frac{1}{(\ell+k)^{2}-i\epsilon}\simeq\frac{1}{2\ell.k-i\epsilon }\,. \tag{112}\] Then only when we choose the order \({\cal O}(kk)\) terms from the \(V^{(3)}\) vertex, after the loop momentum integration we can have \({\cal O}(\ln\omega)\) contribution. On the other hand in the region "\(reg\)"\(\equiv|\ell^{\mu}|\in[\omega,|p_{i}^{\mu}|]\) we approximate the propagator denominators at leading order as \[\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i\epsilon}\simeq-\frac{1}{2p_ {i}.\ell+i\epsilon}\,,\,\frac{1}{(p_{j}+\ell+k)^{2}+m_{j}^{2}-i\epsilon}\simeq \frac{1}{2p_{j}.\ell-i\epsilon}\,,\] \[\frac{1}{(\ell+k)^{2}-i\epsilon}\simeq\frac{1}{\ell^{2}-i \epsilon}\,. \tag{113}\] Then only when we choose the order \({\cal O}(\ell\ell)\) terms from the \(V^{(3)}\) vertex, after the loop momentum integration we can have \({\cal O}(\ln\omega)\) contribution. Hence the full contribution at order \(\ln\omega\) from all the three regions of integration becomes \[C_{I} = \frac{i\kappa^{3}}{4}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\frac{1}{p_{j}.k} \int_{R^{-1}}^{\omega}\,\frac{d^{4}\ell}{(2\pi)^{4}}\,\,\frac{1}{\ell^{2}-i \epsilon}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{\ell.k-i\epsilon}\Big{[}4(p_ {i}.k)^{2}(p_{j}.\varepsilon.p_{j}) \tag{114}\] \[-4(p_{i}.k)(p_{j}.k)(p_{i}.\varepsilon.p_{j})\Big{]}\,\,\Gamma^{( N)}_{(ij)}(p_{i},p_{j})\] \[+\frac{i\kappa^{3}}{2}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg}\frac{d^{ 4}\ell}{(2\pi)^{4}}\,\,\frac{1}{\{\ell^{2}-i\epsilon\}^{2}}\frac{1}{p_{i}. \ell+i\epsilon}\frac{1}{p_{j}.\ell-i\epsilon}\Big{[}-\ell^{2}(p_{i}. \varepsilon.p_{i})p_{j}^{2}\] \[-\ell^{2}(p_{j}.\varepsilon.p_{j})p_{i}^{2}+4\ell^{2}(p_{i}.p_{j} )(p_{i}.\varepsilon.p_{j})+2(p_{i}.p_{j})^{2}(\ell.\varepsilon.\ell)-2(p_{i}. \varepsilon.\ell)(p_{i}.p_{j})(p_{j}.\ell)\] \[+2p_{j}^{2}(p_{i}.\varepsilon.\ell)(p_{i}.\ell)-p_{i}^{2}p_{j}^{2 }(\ell.\varepsilon.\ell)-2(p_{j}.\varepsilon.\ell)(p_{i}.p_{j})(p_{i}.\ell)+2 p_{i}^{2}(p_{j}.\varepsilon.\ell)(p_{j}.\ell)\Big{]}\] \[\,\,\times\,\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\,+\,{\cal O}(\omega^ {0})\,\] Note that the integrands above are independent of the spin angular momenta of external massive particles as well as do not depend on the theory dependent terms such as \({\cal K}_{i},\Xi_{i}\) or non-minimal couplings. 
The first integrand above can be evaluated using the result of the integral (derived in [1]) \[\int_{R^{-1}}^{\omega}\frac{d^{4}\ell}{(2\pi)^{4}}\,\,\frac{1}{ \ell^{2}-i\epsilon}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{\ell.k-i\epsilon} \tag{115}\] \[= -\frac{1}{4\pi}\frac{1}{p_{i}.k}\ln(\omega R)\left[\delta_{\eta_{ i},-1}\,\,-\frac{i}{2\pi}\,\,\ln\left(\frac{p_{i}^{2}}{(p_{i}.{\bf n})^{2}} \right)\right]+{\cal O}(\omega^{-1})\.\] and momentum conservation relation \[\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}p_{j}^{\mu}=-p_{i}^{\mu}. \tag{110}\] Above \(\eta_{i}\) convention is the same as described below (6). We won't evaluate the second integrand in (101) explicitly at this moment, but will simplify some of the terms which contains \((p_{i}.\ell)\) or \((p_{j}.\ell)\) in the numerator using the momentum conservation relation in (110). Also the term containing \((\ell.\varepsilon.\ell)\) can be simplified by the following using integration by parts relation \[\int\frac{d^{4}\ell}{(2\pi)^{4}}\frac{\ell^{\mu}\ell^{\nu}}{( \ell^{2}-i\epsilon)^{2}}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{p_{j}.\ell-i\epsilon} \tag{111}\] \[= \frac{1}{2}\int\frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{\ell^{2}-i \epsilon}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{p_{j}.\ell-i\epsilon}\left( \eta^{\mu\nu}-\frac{p_{i}^{\mu}\ell^{\nu}}{p_{i}.\ell+i\epsilon}-\frac{p_{j}^ {\mu}\ell^{\nu}}{p_{j}.\ell-i\epsilon}\right)\.\] Following all the steps outlined above and using exchange symmetry \((p_{i},\ell)\leftrightarrow(p_{j},-\ell)\) in the second integrand of (101), the simplified expression of \(C_{I}\) becomes \[C_{I} = -\frac{i\kappa^{3}}{4\pi}\ln\omega\ \sum_{j=1}^{N}\epsilon_{j}^{T} \frac{p_{j}.\varepsilon.p_{j}}{p_{j}.k}\times\sum_{i=1}^{N}\epsilon_{i}^{T}(p_ {i}.k)\left[\delta_{\eta_{i},-1}\ -\frac{i}{2\pi}\ \ln\left(\frac{p_{i}^{2}}{(p_{i}. \mathbf{n})^{2}}\right)\right]\Gamma^{(N)}_{(ij)}(p_{i},p_{j}) \tag{112}\] \[+\frac{i\kappa^{3}}{2}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg}\frac{d^{ 4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{p_{j}. \ell-i\epsilon}\Big{[}-(p_{i}.\varepsilon.p_{i})p_{j}^{2}-(p_{j}.\varepsilon. p_{j})p_{i}^{2}\] \[+4(p_{i}.p_{j})(p_{i}.\varepsilon.p_{j})-\frac{1}{p_{i}.\ell+i \epsilon}\big{\{}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\big{\}}(p_{i}. \varepsilon.\ell)\Big{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})+\mathcal{O}(\omega^{ 0})\.\] Note that in the first line above \(j=i\) sum is included while it was not present in the expression (101). The inclusion of this term originates from the second term in the numerator of the first integral in (101) after using the momentum conservation relation (110). For completeness here we also briefly analyze the second diagram in Fig.13 which reads \[C_{II} \equiv \sum_{i=1}^{N}\frac{1}{2p_{i}.k}\epsilon_{i}^{T}\int_{R^{-1}}^{ \infty}\frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{(p_{i}-\ell)^{2}+m_{i}^{2}-i \epsilon}\ \Gamma^{(3)}_{ab}(p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell) \tag{113}\] \[\times\Gamma^{(3)}_{cd}(p_{i}-\ell,-p_{i}-k,\ell+k)\Xi_{i}(-p_{i }-k)\Delta^{ab,\rho\sigma}_{F}(\ell)\varepsilon^{\mu\nu}V^{(3)}_{\mu\nu,\rho \sigma,\tau\kappa}(k,\ell,-k-\ell)\] \[\times\Delta^{\tau\kappa,cd}_{F}(\ell+k)\Gamma^{(N)}_{(i)}(p_{i}+ k)\.\] In the integration region \(|\ell^{\mu}|\in[R^{-1},\omega]\) or \(|(\ell+k)^{\mu}|\in[R^{-1},\omega]\) the numerator of the potentially contributing \(\ln\omega\) terms vanishes in the integrand. 
On the other hand, in the integration region "\(reg\)" using a set of integration by parts to cancel \((p_{i}.k)^{-1}\) factor, the terms potentially contributing at order \(\ln\omega\) becomes \[C_{II} = -\frac{i\kappa^{3}}{2}\sum_{i=1}^{N}\epsilon_{i}^{T}\int_{reg} \frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{(p_{i}.\ell+i\epsilon)^{2}}\frac{1}{ \ell^{2}-i\epsilon}\Big{[}-2p_{i}^{2}(p_{i}.\varepsilon.p_{i})+\frac{(p_{i}^{2 })^{2}}{p_{i}.\ell+i\epsilon}(p_{i}.\varepsilon.\ell)\Big{]} \tag{114}\] \[\times\Gamma^{(N)}_{(i)}(p_{i})\ +\ \mathcal{O}(\omega^{0})\.\] The above expression of \(C_{II}\) can be simplified using the following identity \[\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}}\frac{\ell^{\mu}}{(p_{i}.\ell+i\epsilon)^{3 }}\frac{1}{\ell^{2}-i\epsilon}=\frac{1}{p_{i}^{2}}\int_{reg}\frac{d^{4}\ell}{(2 \pi)^{4}}\frac{1}{\left(p_{i}.\ell+i\epsilon\right)^{2}}\frac{1}{\ell^{2}-i \epsilon}\, \tag{111}\] and the simplified expression reads \[C_{II} = \frac{i\kappa^{3}}{2}\sum_{i=1}^{N}\epsilon_{i}^{T}\int_{reg}\frac {d^{4}\ell}{(2\pi)^{4}}\frac{1}{(p_{i}.\ell+i\epsilon)^{2}}\frac{1}{\ell^{2}-i \epsilon}p_{i}^{2}(p_{i}.\varepsilon.p_{i})\times\Gamma^{(N)}_{(i)}(p_{i})+ \mathcal{O}(\omega^{0}). \tag{112}\] The third, fourth and fifth diagrams in Fig.13 do not contribute to order \(\ln\omega\) from any integration region. Hence the total order \(\ln\omega\) contribution to \(\mathcal{A}^{(N+1)}_{\text{3-graviton-reg},1}\) after summing over (110) and (112) turns out to be \[\mathcal{A}^{(N+1)}_{\text{3-graviton-reg},1} \tag{113}\] \[= -\frac{i\kappa^{3}}{4\pi}\ln\omega\ \sum_{j=1}^{N}\epsilon_{j}^{T}\frac{p_{j}. \varepsilon.p_{j}}{p_{j}.k}\times\sum_{i=1}^{N}\epsilon_{i}^{T}(p_{i}.k)\left[ \delta_{\eta_{i},-1}\ -\frac{i}{2\pi}\ \ln\left(\frac{p_{i}^{2}}{(p_{i}.\mathbf{n})^{2}} \right)\right]\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\] \[+\frac{i\kappa^{3}}{2}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\epsilon_{i}^{T}\epsilon_{j}^{T}\int_{reg}\frac{d^{4} \ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{p_{i}.\ell+i\epsilon}\frac{1}{p_{j}. \ell-i\epsilon}\Bigg{[}-(p_{i}.\varepsilon.p_{i})p_{j}^{2}-(p_{j}.\varepsilon.p_{j})p_{i}^{2}\] \[+4(p_{i}.p_{j})(p_{i}.\varepsilon.p_{j})-\frac{1}{p_{i}.\ell+i \epsilon}\big{\{}2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\big{\}}(p_{i}. \varepsilon.\ell)\Bigg{]}\Gamma^{(N)}_{(ij)}(p_{i},p_{j})\] \[+\frac{i\kappa^{3}}{2}\sum_{i=1}^{N}\epsilon_{i}^{T}\int_{reg} \frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{(p_{i}.\ell+i\epsilon)^{2}}\frac{1}{ \ell^{2}-i\epsilon}p_{i}^{2}(p_{i}.\varepsilon.p_{i})\times\Gamma^{(N)}_{(i)}( p_{i})+\mathcal{O}(\omega^{0})\.\] Let us now analyze the set of Feynman diagrams in Fig.14 which contributes to \(\mathcal{A}^{(N+1)}_{\text{self},1}\). All the diagrams are IR-finite for finite \(k\) and the UV divergences in the sum of contributions cancel by using on-shell renormalization condition with proper choice of counter terms. After renormalization a finite contribution remains, and it contributes at order \(\ln\omega\) in the region "\(reg\)". In specific, the sum of the diagrams in Set 1, Set 2, Set 4 and Set 5 individually vanishes using the gravitational analogue of the wave functional renormalization condition (112). On the other hand the counter term choice of the last diagram in Set 3 cancels the sum of UV divergences appearing in the the first four diagrams in Set 3 and the second and third diagrams of Fig.13. 
After the cancellation of these UV divergences, only in the integration region "\(reg\)" we get the following expression, which can potentially contribute at order \(\ln\omega\) in the soft limit \[\mathcal{A}^{(N+1)}_{\text{self},1} = -\frac{i\kappa^{3}}{2}\sum_{i=1}^{N}\epsilon_{i}^{T}\int_{reg} \frac{d^{4}\ell}{(2\pi)^{4}}\frac{1}{(p_{i}.\ell+i\epsilon)^{2}}\frac{1}{\ell ^{2}-i\epsilon}\ p_{i}^{2}(p_{i}.\varepsilon.p_{i})\times\Gamma^{(N)}_{(i)}(p_ {i}) \tag{114}\] \[+\mathcal{O}(\omega^{0})\.\] Before even evaluating \(\mathcal{A}^{(N+1)}_{\text{non-div},1}\) from the set of diagrams in Fig.11, it becomes evident that these diagrams are IR-finite for finite \(k\) when \(\ell^{\mu}\to 0\). The sum of first, fourth and sixth diagrams in Fig.11 contribute to leading soft graviton factor at order \(\omega^{-1}\) multiplying \({\cal A}^{(N)}_{\text{non-div},1}\). On the other hand in the integration region \(\omega<<|\ell^{\mu}|<<|p_{i}^{\mu}|\), though individually the first and third diagrams in Fig.11 have the potential to contribute at order \(\ln\omega\) as those behave like \(\int_{\omega}\frac{d^{4}\ell}{|\ell|^{4}}\), when we sum them up, in the numerator we get an extra factors of \(\ell\) and/or \(k\). Hence they can only contribute from order \(\omega^{0}\) or \(\omega\ln\omega\). Hence, summing over all the contribution we get \[{\cal A}^{(N+1)}_{\text{non-div},1}=\kappa\sum_{i=1}^{N}\frac{p_{i}.\varepsilon.p_{i}}{p_{i}.k}\ {\cal A}^{(N)}_{\text{non-div},1}\ +{\cal O}(\omega^{0}). \tag{111}\] Summing over the contribution of (110), (111), (112) and (111) we get \[{\cal A}^{(N+1)}_{\text{IR-finite},1} = \kappa\sum_{i=1}^{N}\frac{p_{i}.\varepsilon.p_{i}}{p_{i}.k}\ \times{\cal A}^{(N)}_{\text{IR-finite},1}\] \[-\frac{i\kappa^{3}}{4\pi}\ln\omega\ \sum_{j=1}^{N}\frac{p_{j}. \varepsilon.p_{j}}{p_{j}.k}\times\sum_{i=1}^{N}(p_{i}.k)\left[\delta_{\eta_{i},-1}\ -\frac{i}{2\pi}\ \ln\left(\frac{p_{i}^{2}}{(p_{i}.{\bf n})^{2}}\right)\right] \times\Gamma^{(N)}\] Figure 14: Set of 1-loop diagrams contributing to \({\cal A}^{(N+1)}_{\text{self},1}\), where the virtual dashed lines represent the full graviton propagator connecting two different points on the same massive spinning particle leg. The cross appears in some diagrams above corresponds to counter term, which cancels the UV divergences in the renormalization prescription. \[-\frac{i\kappa^{3}}{2}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\int_{reg}\frac{d^{4}\ell}{(2\pi)^{4}}\ \frac{1}{\ell^{2}-i\epsilon}\frac{1}{\ell.p_{i}+i\epsilon}\frac{1}{\ell.p_{j}- i\epsilon}\] \[\times\Bigg{[}-4(p_{i}.\varepsilon.p_{i})p_{i}.p_{j}\frac{p_{j}. k}{p_{i}.k}+\frac{\ell.k}{p_{i}.k}\frac{1}{\ell.p_{i}+i\epsilon}(p_{i}. \varepsilon.p_{i})\left\{2(p_{i}.p_{j})^{2}-p_{i}^{2}p_{j}^{2}\right\}\] \[-\frac{(p_{i}.\varepsilon.\ell)}{\ell.p_{i}+i\epsilon}\{2(p_{i}. p_{j})^{2}-p_{i}^{2}p_{j}^{2}\}+4(p_{i}.\varepsilon.p_{j})(p_{i}.p_{j}) \Bigg{]}\times\Gamma^{(N)}+\mathcal{O}(\omega^{0})\, \tag{4.67}\] where the expression of \(\mathcal{A}^{(N)}_{\text{IR-finite},1}\) is given in (4.38). Soft graviton theorem result at one-loop:The expression for one-loop amplitude in the soft expansion derived in (4.67) can be re-written in the following compact way \[\mathcal{A}^{(N+1)}_{\text{IR-finite},1} = \kappa\sum_{i=1}^{N}\frac{p_{i}.\varepsilon.p_{i}}{p_{i}.k}\ \times\mathcal{A}^{(N)}_{\text{IR-finite},1} \tag{4.68}\] \[+K^{reg}_{phase}\times\kappa\sum_{i=1}^{N}\frac{p_{i}. 
\varepsilon.p_{i}}{p_{i}.k}\ \times\mathcal{A}^{(N)}_{\text{IR-finite},0}\] \[+\kappa\sum_{i=1}^{N}\frac{\varepsilon_{\mu\nu}p_{i}^{\mu}k_{\rho }}{p_{i}.k}\left\{p_{i}^{\nu}\frac{\partial K^{reg}_{gr}}{\partial p_{i\rho} }-p_{i}^{\rho}\frac{\partial K^{reg}_{gr}}{\partial p_{i\nu}}\right\}\times \mathcal{A}^{(N)}_{\text{IR-finite},0}+\mathcal{O}(\omega^{0})\,\] where the expressions of \(K^{reg}_{phase}\) and \(K^{reg}_{gr}\) are given in (1.6) and (1.5) respectively. This result agrees with the loop corrected subleading soft graviton theorem, originally derived in [1] for minimally coupled scalar-gravity with scalar contact interaction. Here the re-derivation of this result from a scattering amplitude involving particles with arbitrary spins in a generic theory of quantum gravity confirms the universality of the \(\ln\omega\) soft factor. Moreover, the obtained result also verifies the well-known fact that Weinberg's leading soft graviton theorem remains unaltered by loop corrections. Therefore, even from the analysis presented above, we observe that Weinberg's soft theorem still holds, relating two one-loop IR-finite amplitudes. ### Discussion on generalization When some of the massive spinning particles carry electric charge, the order \(\ln\omega\) soft graviton factor in (4.68) undergoes correction due to electromagnetic interactions. The correction term has been derived in [1] in a minimally coupled charged scalar theory. A straightforward generalization of our derivation, combined with the covariantization prescription outlined in [5], will be useful for establishing the universal nature of this correction. At one-loop order, it is also possible to derive the spin-dependent order \(\omega\ln\omega\) soft graviton factor as provided in (1.4), once we understand how to account for the additional subtleties highlighted in the discussion below (2.4). Furthermore, through an analysis of two-loop amplitudes in the soft limit, it is possible to derive the order \(\omega(\ln\omega)^{2}\) soft graviton theorem as conjectured in (1.7), which is also expected to be universal. We intend to pursue these analyses in the future. Using the Feynman rules derived in section-4.1, in combination with the identities in appendix-C, it is possible to derive the tree-level simultaneous and consecutive double soft-graviton theorems up to sub-subleading orders in a generic theory of quantum gravity. This result does not exist in literature and will be interesting to explore in future. ## 5 Summary and outlook In this article, we have proved that the order \(\ln\omega\) soft photon and graviton theorems are universal (theory independent) by working within a generic setup. Along the way, we have also provided the definitions of IR-finite amplitudes in the generic theory of QED and quantum gravity for scattering involving spinning particles. We used the Ward Identities along with the Grammer-Yennie decomposition to perform Eikonal exponentialization and extract the IR-finite amplitudes. At one-loop order, we provide integral expressions for the IR-finite amplitudes, which have been used to derive soft factors. 
While it would have been desirable to explicitly evaluate these one-loop IR-finite amplitudes and explore their crossing and unitarity properties in a specific theory, this remains an avenue for future investigation.11 We have also discussed that while the KG-decomposition is powerful for extracting IR-finite amplitudes in a generic theory of QED, it loses efficacy in the generic theory of quantum gravity. Furthermore, we have provided a set of soft photon and graviton theorems up to two-loop orders in section-1, with some of these theorems conjectured based on classical analysis. Footnote 11: BS acknowledges Wäßl Aoun for pursuing his master thesis project on this topic using the IR-finite S-matrix formalism proposed in [58] and making substantial progress. In [13, 14], a relation between low-frequency electromagnetic/gravitational waveforms and the classical limit of soft photon/graviton factors has been derived. In frequency space, the low-frequency gravitational waveform in \(D=4\) is given by 12 Footnote 12: The explicit expression is given in equation (2.5) of [23], where we have adopted the convention that the energy of the outgoing particle is positive, which is contrary to the convention used in this paper. After Fourier transformation in frequency variable, it provides gravitational memory along with multiple tail contributions at late and early time for generic gravitation scattering event. \[\varepsilon^{\mu\nu}(k)\widetilde{e}_{\mu\nu}(\omega,R\mathbf{\widehat{n}})\,= \,-i\,\,\frac{2G}{R}\,\,\exp\left\{i\omega\left(R+2G\ln R\sum_{ \begin{subarray}{c}j=1\\ \eta_{j}=-1\end{subarray}}^{N}p_{j}\cdot\mathbf{n}\right)\right\}\times\mathbb{ S}_{\text{classical}}^{gr}\, \tag{5.1}\] where \(\mathbb{S}_{\text{classical}}^{gr}\) represents the classical limit of quantum soft factor for single soft graviton emission with momentum \(k^{\mu}=-\omega\mathbf{n}^{\mu}\). In the above expression \(\widetilde{e}_{\mu\nu}\) is defined by \[\widetilde{e}_{\mu\nu}(\omega,\vec{x})\,\equiv\,\int_{-\infty}^{\infty}dt\,\, e^{i\omega t}\left(h_{\mu\nu}(t,\vec{x})-\frac{1}{2}\eta_{\mu\nu}h_{\rho}^{\rho}(t, \vec{x})\right). \tag{5.2}\] Note that to derive the classical limit of the quantum "soft factor" from (1.1), first we need to incorporate the sum over particles indexed by \(i\). Subsequently, in their respective expressions, when the orbital momentum operator operates on \(\mathcal{A}^{(N)}\), it must be substituted with the classical orbital momenta of the scattered objects. Furthermore, based on explicit classical computations of gravitational waveforms presented in [21; 23; 16], it has been conjectured that the validity of the relation (5.1) holds true solely when, in the classical limit, the soft graviton factor is calculated using the retarded propagator for the graviton field instead of Feynman propagator. In practice this prescription suggests that in the integral representations of (1.5) and (1.6), the term \((\ell^{2}-i\epsilon)^{-1}\) should be substituted with \(-\left((\ell^{0}+i\epsilon)^{2}-\vec{\ell}^{2}\right)^{-1}\) in order to extract \(\mathbb{S}_{\rm classical}^{gr}\). With this substitution, \(\mathbb{S}_{\rm classical}^{gr}\) only receives contributions from the part of the loop integrals in which the virtual hard particle propagator goes on-shell (referred to as the potential region) and the graviton propagator with principal value. 
However, it does not receive contributions from the part of the loop integrals in which the virtual graviton goes on-shell (known as the radiation-reaction region). Unfortunately, we lack a fundamental understanding of why the classical limit of the quantum soft factor does not include contributions from radiation reactions. For a \(2\to 2\) scattering process with a large impact parameter or low momentum transfer, the contribution of radiation reaction to the quantum soft factor turns out to be suppressed compared to the contribution from the potential region (classical), as discussed in [1]. This has also been established in [59] using the KMOC formalism [60; 61] after substituting the final momenta of the scattered particles in terms of initial momenta and the perturbatively computed momentum impulse in terms of initial scattering data and specified interaction. However, for hard scattering (small impact parameter scattering), the reason why the radiation reaction contribution to the quantum soft factor does not affect the classical waveform in the classical limit has not yet been resolved. We believe that a generalization of recent investigations into deriving classical gravitational waveforms from Eikonal exponentiation in [62; 63; 64; 65; 66; 67] could potentially resolve this puzzle even for hard scattering. It would be interesting to explore whether there are any observable consequences resulting from the contribution of radiation reaction in the quantum soft factor. There have been many applications of soft theorems both in the context of scattering amplitudes and in relation to gravitational memory. For instance, the universal characteristic of Weinberg's soft theorem imposes an infinite hierarchy of constraints on the linear momentum impulse within the KMOC formalism, as derived in [68]. It is also anticipated that the universality of the \(\ln\omega\) soft theorem should impose an infinite hierarchy of constraints on the angular momentum impulse within the KMOC formalism.13 As discussed in Section 1, while the soft theorems alone cannot impose non-trivial constraints on the quantum theory in the UV, combining the results of the soft theorems with certain physical assumptions about scattering amplitudes, such as analyticity, unitarity and crossing grants them the ability to constrain the UV quantum theory. As an illustration of this concept, the article [69] derived non-perturbative bounds on the \(a\)-anomaly coefficient of the UV conformal field theory (CFT), the deformation of which leads to a massive QFT along the renormalization group (RG) flow. These bounds were established by incorporating the constraints from the double soft dilaton theorem within the framework of non-perturbative S-matrix bootstrap. For numerous astrophysical scattering events, the order of magnitude of gravitational tail memory follows from the \(\ln\omega\) soft theorem has been estimated in [16; 20]. In certain classical scattering scenarios, the gravitational waveform resulting from the \(\ln\omega\) soft theorem has also been derived in [70; 71; 72], carrying observable consequences in the present era of gravitational wave physics. We would like to thank Wael Aoun, P.V. Athira, Hofie Hannesdottir, Alok Laddha, Joao Penedones and Ashoke Sen for useful discussion. The work of HK is supported by NSF grant PHY 2210533. 
The work of BS is supported by the Simons Foundation grant 488649 (Simons Collaboration on the Nonperturbative Bootstrap) and by the Swiss National Science Foundation through the National Centre of Competence in Research SwissMAP. ## Appendix A Intermediate steps in deriving soft photon theorem Let us evaluate the following expression appears in (3.40) after substituting the vertex (3.12) and using the identities (2.19), (2.20) and (3.25), \[\zeta_{1\mu} \equiv \epsilon_{i}^{T}(-p_{i})\varepsilon^{\rho}(k)\Gamma_{\rho}^{(3)} (p_{i},-p_{i}-k,k)\Xi_{i}(-p_{i}-k)\Gamma_{\mu}^{(3)}(p_{i}+k,-p_{i}-k+\ell,- \ell)\Xi_{i}(-p_{i}-k+\ell)\] (A.1) \[= \epsilon_{i}^{2}\epsilon_{i}^{T}\Bigg{[}\varepsilon^{\rho}\frac {\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}}\Xi_{i}(-p_{i})+ \varepsilon^{\rho}k^{\sigma}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p _{i}^{\rho}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}+\frac{1}{ 2}\varepsilon^{\rho}k^{\sigma}\frac{\partial^{2}\mathcal{K}_{i}(-p_{i})}{ \partial p_{i}^{\rho}\partial p_{i}^{\sigma}}\Xi_{i}(-p_{i})\] \[+\frac{2i}{e_{i}}\varepsilon^{\rho}k^{\sigma}\mathcal{B}_{\rho \sigma}^{i}(-p_{i})\Xi_{i}(-p_{i})+\mathcal{O}(kk)\Bigg{]}\times\Bigg{[}- \frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\mu}}\Xi_{i}(-p_{i})- (k-\ell)^{\kappa}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{ \mu}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\kappa}}\] \[-\frac{1}{2}(2k-\ell)^{\kappa}\frac{\partial^{2}\mathcal{K}_{i}( -p_{i})}{\partial p_{i}^{\mu}\partial p_{i}^{\kappa}}\Xi_{i}(-p_{i})+\frac{2 i}{e_{i}}\ell^{\kappa}\mathcal{B}_{\mu\kappa}^{i}(-p_{i})\Xi_{i}(-p_{i})+ \mathcal{O}(kk,k\ell,\ell\ell)\Bigg{]}\.\] Now using the identities (2.11),(2.12) and (2.21) we simplify the above expression and get \[\zeta_{1\mu} = \epsilon_{i}^{2}\epsilon_{i}^{T}\Bigg{[}2i\varepsilon\cdot p_{i} +\frac{1}{2}(\varepsilon^{\rho}k^{\sigma}-\varepsilon^{\sigma}k^{\rho}) \frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}}\frac{\partial \Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}+\frac{i}{e_{i}}(\varepsilon^{\rho}k^ {\sigma}-\varepsilon^{\sigma}k^{\rho})\mathcal{B}_{\rho\sigma}^{i}(-p_{i})\Xi_ {i}(-p_{i})\] (A.2) \[+\mathcal{O}(kk)\Bigg{]}\times\Bigg{[}-i(2p_{i}+2k-\ell)_{\mu}+ \mathcal{K}_{i}(-p_{i})\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\mu}}+ \frac{1}{2}(2k-\ell)^{\kappa}\mathcal{K}_{i}(-p_{i})\frac{\partial^{2}\Xi_{i} (-p_{i})}{\partial p_{i}^{\mu}\partial p_{i}^{\kappa}}\] \[+\frac{1}{2}\ell^{\kappa}\frac{\partial\mathcal{K}_{i}(-p_{i})}{ \partial p_{i}^{\mu}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\kappa}}+ \frac{1}{2}(2k-\ell)^{\kappa}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p _{i}^{\kappa}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\mu}}+\frac{2i}{ e_{i}}\ell^{\kappa}\mathcal{B}_{\mu\kappa}^{i}(-p_{i})\Xi_{i}(-p_{i})\] \[+\mathcal{O}(kk,k\ell,\ell\ell)\Bigg{]}\] \[= \epsilon_{i}^{2}\epsilon_{i}^{T}\Bigg{[}2\varepsilon\cdot p_{i}(2 p_{i}+2k-\ell)_{\mu}+i\varepsilon\cdot p_{i}\ell^{\rho}\Big{\{}\frac{\partial \mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\mu}}\frac{\partial\Xi_{i}(-p_{i})}{ \partial p_{i}^{\rho}}-\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{ \rho}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\mu}}\] \[+\frac{4i}{e_{i}}\mathcal{B}_{\mu\rho}^{i}(-p_{i})\Xi_{i}(-p_{i}) \Big{\}}-\ \frac{i}{2}p_{i\mu}(\varepsilon^{\rho}k^{\sigma}-\varepsilon^{\sigma}k^{ \rho})\Big{\{}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\rho}} \frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\sigma}}-\frac{\partial 
\mathcal{K}_{i}(-p_{i})}{\partial p_{i}^{\sigma}}\frac{\partial\Xi_{i}(-p_{i})}{ \partial p_{i}^{\rho}}\] \[+\frac{4i}{e_{i}}\mathcal{B}_{\rho\sigma}^{i}(-p_{i})\Xi_{i}(-p_{i}) \Big{\}}+2ip_{i}.k\varepsilon^{\rho}\frac{\partial\mathcal{K}_{i}(-p_{i})}{ \partial p_{i}^{\rho}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p_{i}^{\mu}}+ \mathcal{O}(kk,k\ell,\ell\ell)\Bigg{]}\.\] Let us define a specific tensor structure which will appear together in all the computations in section-3.3, which is also the non-universal contribution to the tree-level subleading soft photon theorem as derived in [5] \[\mathcal{N}^{i}_{\rho\sigma}(-p_{i})\equiv-\frac{i}{8}\Bigg{[}\frac{\partial \mathcal{K}_{i}(-p_{i})}{\partial p^{\rho}_{i}}\frac{\partial\Xi_{i}(-p_{i})}{ \partial p^{\sigma}_{i}}-\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p^{ \sigma}_{i}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p^{\rho}_{i}}+\frac{4i}{e _{i}}\mathcal{B}^{i}_{\rho\sigma}(-p_{i})\Xi_{i}(-p_{i})\Bigg{]}.\] (A.3) With the above definition, the expression of \(\zeta_{1\mu}\) can be written in the following compact form \[\zeta_{1\mu} = e_{i}^{2}\epsilon_{i}^{T}\Bigg{[}2\varepsilon\cdot p_{i}(2p_{i} +2k-\ell)_{\mu}-8\varepsilon\cdot p_{i}\ell^{\rho}\mathcal{N}^{i}_{\mu\rho}(-p _{i})+4\ p_{i\mu}(\varepsilon^{\rho}k^{\sigma}-\varepsilon^{\sigma}k^{\rho}) \mathcal{N}^{i}_{\rho\sigma}(-p_{i})\] (A.4) \[+2i\ p_{i}.k\ \varepsilon^{\rho}\frac{\partial\mathcal{K}_{i}(-p_{i} )}{\partial p^{\rho}_{i}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p^{\mu}_{i} }+\mathcal{O}(kk,k\ell,\ell\ell)\Bigg{]}\.\] Let us evaluate the following expression appears in (3.44) after substituting the vertex (3.13) and using the identities (2.19), (2.20) and (3.25), \[\zeta_{2\mu} \equiv \epsilon_{i}^{T}(-p_{i})\varepsilon^{\rho}(k)\Gamma^{(4)}_{\rho \mu}(p_{i},-p_{i}-k+\ell,k,-\ell)\Xi_{i}(-p_{i}-k+\ell)\] (A.5) \[= ie_{i}^{2}\ e_{i}^{T}\ \epsilon^{\rho}\frac{\partial^{2} \mathcal{K}_{i}(-p_{i})}{\partial p^{\mu}_{i}\partial p^{\rho}_{i}}\Xi_{i}(-p _{i})+\mathcal{O}(\ell,k)\.\] Using the identity in (2.12) and on-shell condition (2.21) the above expression reduces to \[\zeta_{2\mu} = -e_{i}^{2}\epsilon_{i}^{T}\Bigg{[}2\varepsilon_{\mu}+i\varepsilon ^{\rho}\Big{\{}\frac{\partial\mathcal{K}_{i}(-p_{i})}{\partial p^{\mu}_{i}} \frac{\partial\Xi_{i}(-p_{i})}{\partial p^{\rho}_{i}}+\frac{\partial\mathcal{ K}_{i}(-p_{i})}{\partial p^{\rho}_{i}}\frac{\partial\Xi_{i}(-p_{i})}{ \partial p^{\mu}_{i}}\Big{\}}+\mathcal{O}(\ell,k)\Bigg{]}.\] (A.6) The expression below appears in (3.46) and can be evaluated analogous to the evaluation of \(\zeta_{1\mu}\). The final result reads \[\zeta_{3\mu} \equiv \epsilon_{i}^{T}(-p_{i})\varepsilon^{\rho}(k)\Gamma^{(3)}_{\mu} (p_{i},-p_{i}+\ell,-\ell)\Xi_{i}(-p_{i}+\ell)\Gamma^{(3)}_{\rho}(p_{i}-\ell,- p_{i}-k+\ell,k)\Xi_{i}(-p_{i}-k+\ell)\] (A.7) \[= e_{i}^{2}\epsilon_{i}^{T}\Bigg{[}2\varepsilon\cdot p_{i}(2p_{i} -\ell)_{\mu}-4\varepsilon.\ell p_{i\mu}-8\varepsilon\cdot p_{i}\ell^{\rho} \mathcal{N}^{i}_{\mu\rho}(-p_{i})+4\ p_{i\mu}(\varepsilon^{\rho}k^{\sigma}- \varepsilon^{\sigma}k^{\rho})\mathcal{N}^{i}_{\rho\sigma}(-p_{i})\] \[-2i\ p_{i}.\ell\ \varepsilon^{\rho}\frac{\partial\mathcal{K}_{i}(-p_{i})}{ \partial p^{\mu}_{i}}\frac{\partial\Xi_{i}(-p_{i})}{\partial p^{\rho}_{i}}+ \mathcal{O}(kk,k\ell,\ell\ell)\Bigg{]}\.\] Let us evaluate the following expression appears in (3.32), (3.40),(3.44) and (3.46). 
After substituting the vertex from (3.12) and using the identities (2.19), (2.20) and (3.25) we get \[\zeta_{4\nu} \equiv \epsilon_{j}^{T}(-p_{j})\Gamma^{(3)}_{\nu}(p_{j},-p_{j}-\ell,\ell )\Xi_{j}(-p_{j}-\ell)\] (A.8) \[= -ie_{j}\ \epsilon_{j}^{T}\Bigg{[}\frac{\partial\mathcal{K}_{j}(-p_{j} )}{\partial p^{\nu}_{j}}\Xi_{j}(-p_{j})+\ell^{\sigma}\frac{\partial\mathcal{K} _{j}(-p_{j})}{\partial p^{\nu}_{j}}\frac{\partial\Xi_{j}(-p_{j})}{\partial p^{ \sigma}_{j}}+\frac{1}{2}\ell^{\sigma}\frac{\partial^{2}\mathcal{K}_{j}(-p_{j})}{ \partial p^{\nu}_{j}\partial p^{\sigma}_{j}}\Xi_{j}(-p_{j})\] \[+\frac{2i}{e_{j}}\ell^{\sigma}\mathcal{B}^{j}_{\nu\sigma}(-p_{j}) \Xi_{j}(-p_{j})+\mathcal{O}(\ell\ell)\Bigg{]}\.\] Using the identities (11),(12) and the on-shell condition (21), the above expression reduces to \[\zeta_{4\nu} = e_{j}\epsilon_{j}^{T}\Bigg{[}(2p_{j}+\ell)_{\nu}+4\ell^{\sigma} \mathcal{N}^{j}_{j\sigma}(-p_{j})+\mathcal{O}(\ell\ell)\Bigg{]}. \tag{114}\] ## Appendix B Amputated Green's function involving single graviton Following the covariantization prescription described in section-4.1, we compute the amputated Green's function involving \(N\) number of massive spinning particles and one off-shell graviton, where the graviton is not attached to any external spinning particle leg. The resulting expression is given by \[h^{\mu\nu}(\ell)\widetilde{\Gamma}^{(N+1)\alpha_{1}\cdots\alpha_ {N}}_{\mu\nu}(\ell) = \kappa\ (2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\sum_{i=1}^{N} \Bigg{[}\delta^{\alpha_{i}}_{\beta_{i}}\ h(\ell)-\delta^{\alpha_{i}}_{\beta_{i }}\ h^{\mu\nu}(\ell)p_{i\mu}\frac{\partial}{\partial p_{i}^{\nu}} \tag{115}\] \[+h^{\mu\nu}(\ell)(\Sigma_{i\nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{ \partial}{\partial p_{i}^{\mu}}-\frac{1}{2}\delta^{\alpha_{i}}_{\beta_{i}}h^ {\mu\nu}(\ell)\Bigg{\{}\ell^{\rho}p_{i\mu}\frac{\partial^{2}}{\partial p_{i}^ {\rho}\partial p_{i}^{\nu}}+\ell^{\rho}p_{i\nu}\frac{\partial^{2}}{\partial p _{i}^{\rho}\partial p_{i}^{\mu}}\] \[-p_{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu}\partial p_{i} ^{\nu}}\Bigg{\}}+\mathcal{O}(\ell^{2})\Bigg{]}\widehat{\Gamma}^{(N)\alpha_{1} \cdots\alpha_{i-1}\beta_{i}\alpha_{i+1}\cdots\alpha_{N}}\.\] Above \(\widehat{\Gamma}^{(N)}\) is defined after stripping out the momentum conserving delta function \((2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N})\) from the expression of \(\Gamma^{(N)}\), i.e. \(\Gamma^{(N)}\equiv(2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N})\widehat{\Gamma}^ {(N)}\). Now starting from the above covariantized expression, the goal is to express \(\widetilde{\Gamma}^{(N+1)}_{\mu\nu}(\ell)\) in terms of some operator operating on \(\Gamma^{(N)}\) up to linear order in \(\ell\). To do that we mostly follow the analysis of [3] with the only difference being \(h^{\mu\nu}(\ell)\) is an off-shell graviton so we can not impose traceless or transverse condition. Instead in de Donder gauge, we use \(\ell_{\mu}h^{\mu\nu}(\ell)=\frac{1}{2}\ell^{\nu}h(\ell)\) in the intermediate stages of calculation. 
Let us first analyze the first two terms within the square bracket in (115) which is defined as \[J_{1} \equiv \kappa\ (2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Bigg{[}h( \ell)-\ h^{\mu\nu}(\ell)\sum_{i=1}^{N}p_{i\mu}\frac{\partial}{\partial p_{i}^ {\nu}}\Bigg{]}\widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{N}} \tag{116}\] \[= \kappa\ (2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N})\Bigg{[}h(\ell)- \ h^{\mu\nu}(\ell)\sum_{i=1}^{N}p_{i\mu}\frac{\partial}{\partial p_{i}^{\nu}} \Bigg{]}\widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{N}}\] \[+\kappa\ (2\pi)^{4}\ \ell^{\rho}\Big{\{}\frac{\partial}{\partial \ell^{\rho}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Big{\}}_{\ell=0}\Bigg{[}h( \ell)-\ h^{\mu\nu}(\ell)\sum_{i=1}^{N}p_{i\mu}\frac{\partial}{\partial p_{i}^ {\nu}}\Bigg{]}\widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{N}}\] \[+\mathcal{O}(\ell^{2})\.\] Above we have just Taylor expanded the delta function and kept terms up to linear order in \(\ell\). Now to evaluate the second line above we use the property involving momentum conserving delta function \[\sum_{i=1}^{N}p_{i\mu}\frac{\partial}{\partial p_{i}^{\nu}}\delta^{(4 )}(p_{1}+\cdots+p_{N})\ =\sum_{i=1}^{N}p_{i\mu}\frac{\partial}{\partial p_{1}^{\nu}}\delta^{(4)}(p_{1}+ \cdots+p_{N}) \tag{111}\] \[= \frac{\partial}{\partial p_{1}^{\nu}}\sum_{i=1}^{N}p_{i\mu}\delta ^{(4)}(p_{1}+\cdots+p_{N})-\eta_{\mu\nu}\ \delta^{(4)}(p_{1}+\cdots+p_{N})\] \[= -\eta_{\mu\nu}\ \delta^{(4)}(p_{1}+\cdots+p_{N})\.\] So using the above property and commuting the delta function through the momentum derivative the expression of \(J_{1}\) becomes, \[J_{1} = -\kappa\ (2\pi)^{4}\ h^{\mu\nu}(\ell)\sum_{i=1}^{N}p_{i\mu}\frac{ \partial}{\partial p_{i}^{\nu}}\bigg{\{}\delta^{(4)}(p_{1}+\cdots+p_{N}) \widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{N}}\bigg{\}} \tag{112}\] \[+\kappa\ (2\pi)^{4}\ \ell^{\rho}\Big{\{}\frac{\partial}{\partial \ell^{\rho}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Big{\}}_{\ell=0}\Bigg{[}h( \ell)-\ h^{\mu\nu}(\ell)\sum_{i=1}^{N}p_{i\mu}\frac{\partial}{\partial p_{i}^ {\nu}}\Bigg{]}\widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{N}}\] \[+{\cal O}(\ell^{2})\.\] Last two terms within the square bracket in (112) can be evaluated in the following way \[J_{2} \equiv \kappa\ (2\pi)^{4}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)h^{\mu\nu}( \ell)\sum_{i=1}^{N}\Bigg{[}(\Sigma_{i\nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{\partial}{ \partial p_{i}^{\mu}}-\frac{1}{2}\delta_{\beta_{i}}^{\alpha_{i}}\Bigg{\{}\ell^ {\rho}p_{i\mu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p_{i}^{\nu}} \tag{113}\] \[+\ell^{\rho}p_{i\nu}\frac{\partial^{2}}{\partial p_{i}^{\rho} \partial p_{i}^{\mu}}-p_{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu} \partial p_{i}^{\nu}}\Bigg{\}}\Bigg{]}\widehat{\Gamma}^{(N)\alpha_{1}\cdots \alpha_{i-1}\beta_{i}\alpha_{i+1}\cdots\alpha_{N}}\] \[= \kappa\ (2\pi)^{4}h^{\mu\nu}(\ell)\sum_{i=1}^{N}\Bigg{[}(\Sigma_{i \nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{\partial}{\partial p_{i}^{\mu}}-\frac{1}{2}\delta_{\beta_{i}}^{\alpha_{ i}}\Bigg{\{}\ell^{\rho}p_{i\mu}\frac{\partial^{2}}{\partial p_{i}^{\rho} \partial p_{i}^{\nu}}+\ell^{\rho}p_{i\nu}\frac{\partial^{2}}{\partial p_{i}^{ \rho}\partial p_{i}^{\mu}}-p_{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu} \partial p_{i}^{\nu}}\Bigg{\}}\Bigg{]}\] \[\Big{\{}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\widehat{\Gamma}^{(N )\alpha_{1}\cdots\alpha_{i-1}\beta_{i}\alpha_{i+1}\cdots\alpha_{N}}\Big{\}}\ +\ {\cal L}\,\] where \[{\cal L} = -\kappa\ (2\pi)^{4}\ \widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{i-1} 
\beta_{i}\alpha_{i+1}\cdots\alpha_{N}}h^{\mu\nu}(\ell)\sum_{i=1}^{N}\Bigg{[}( \Sigma_{i\nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{\partial}{\partial p_{i}^{\mu}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell) \tag{114}\] \[-\frac{1}{2}\delta_{\beta_{i}}^{\alpha_{i}}\Bigg{\{}\ell^{\rho}p_ {i\mu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p_{i}^{\nu}}+\ell^{ \rho}p_{i\nu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p_{i}^{\mu}}-p _{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu}\partial p_{i}^{\nu}}\Bigg{\}} \delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Bigg{]}\] \[+\frac{\kappa}{2}(2\pi)^{4}h^{\mu\nu}(\ell)\sum_{i=1}^{N}\Big{\{} \frac{\partial}{\partial p_{i}^{\rho}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell) \Big{\}}\frac{\partial\widehat{\Gamma}^{\alpha_{1}\cdots\alpha_{N}}}{\partial p _{i}^{\sigma}}\Big{\{}\ell^{\rho}p_{i\mu}\delta_{\nu}^{\sigma}+\ell^{\sigma}p_ {i\mu}\delta_{\nu}^{\rho}\] \[+\ell^{\rho}p_{i\nu}\delta_{\mu}^{\sigma}+\ell^{\sigma}p_{i\nu} \delta_{\mu}^{\rho}-p_{i}.\ell\delta_{\mu}^{\rho}\delta_{\nu}^{\sigma}-p_{i}. \ell\delta_{\mu}^{\sigma}\delta_{\nu}^{\rho}\Big{\}}\.\] To evaluate \({\cal L}\) we use the same trick as described earlier i.e. derivative w.r.t. \(p_{i}\) on the delta function is same as derivative w.r.t. \(\ell\) and then use the momentum conservation relation enforced by the delta function. After all these steps we get \[{\cal L} = -\kappa\ (2\pi)^{4}\ \widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{i-1} \beta_{i}\alpha_{i+1}\cdots\alpha_{N}}h^{\mu\nu}(\ell)\Bigg{[}\sum_{i=1}^{N}( \Sigma_{i\nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{\partial}{\partial\ell^{\mu}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell) \tag{111}\] \[+\frac{1}{2}\delta_{\beta_{i}}^{\alpha_{i}}\Bigg{\{}\Bigg{(} \ell^{\rho}\frac{\partial^{2}}{\partial\ell^{\rho}\partial\ell^{\nu}}\ell_{\mu }+\ell^{\rho}\frac{\partial^{2}}{\partial\ell^{\rho}\partial\ell^{\mu}}\ell_{ \nu}-\ell^{\rho}\frac{\partial^{2}}{\partial\ell^{\mu}\partial\ell^{\nu}}\ell _{\rho}\Bigg{)}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Bigg{\}}\Bigg{]}\] \[+\kappa(2\pi)^{4}\sum_{i=1}^{N}\Big{\{}\frac{\partial}{\partial \ell^{\rho}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Big{\}}\frac{\partial \widehat{\Gamma}^{\alpha_{1}\cdots\alpha_{N}}}{\partial p_{i}^{\sigma}}\Big{\{} p_{i\mu}\ell^{\rho}h^{\mu\sigma}(\ell)+p_{i\mu}\ell^{\sigma}h^{\mu\rho}(\ell)\] \[-p_{i}.\ell h^{\rho\sigma}(\ell)\Big{\}}\.\] To evaluate the first term within the square bracket above we use the conservation of total angular momenta follows from the Lorentz covariance of \(\widehat{\Gamma}\) \[\sum_{i=1}^{N}\Big{[}(\Sigma_{i\nu b})_{\beta_{i}}^{\ \alpha_{i}}\widehat{\Gamma}^{(N)\alpha_{1}\cdots\alpha_{i-1}\beta_{i}\alpha_{i +1}\cdots\alpha_{N}}-\Big{\{}p_{i\nu}\frac{\partial}{\partial p_{i}^{b}}-p_{ ib}\frac{\partial}{\partial p_{i}^{\nu}}\Big{\}}\widehat{\Gamma}^{(N) \alpha_{1}\cdots\alpha_{N}}\Big{]}\ =0. \tag{112}\] Using the above relation in the first line of the expression of \({\cal L}\) in (111), operating the derivatives w.r.t. 
\(\ell\) in the second term within the square bracket in (111) and using de Donder gauge condition, we find \[{\cal L} = -\frac{\kappa}{2}\ (2\pi)^{4}\widehat{\Gamma}^{\alpha_{1}\cdots \alpha_{N}}\Big{\{}h(\ell)\ell^{\nu}\ell^{\rho}\frac{\partial^{2}}{\partial \ell^{\rho}\partial\ell^{\nu}}-\ell^{2}h^{\mu\nu}(\ell)\frac{\partial^{2}}{ \partial\ell^{\mu}\partial\ell^{\nu}}+2h(\ell)\ell^{\rho}\frac{\partial}{ \partial\ell^{\rho}}\Big{\}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell) \tag{113}\] \[+\kappa(2\pi)^{4}\sum_{i=1}^{N}\Big{\{}\frac{\partial}{\partial \ell^{\rho}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Big{\}}\frac{\partial \widehat{\Gamma}^{\alpha_{1}\cdots\alpha_{N}}}{\partial p_{i}^{\sigma}}p_{i\mu }\ell^{\rho}h^{\mu\sigma}(\ell)\.\] Now after substituting the expression of \({\cal L}\) from (113) in (100), then Taylor expanding the delta function and keeping terms up to linear in \(\ell\) we get \[J_{2} = \kappa\ (2\pi)^{4}h^{\mu\nu}(\ell)\sum_{i=1}^{N}\Bigg{[}(\Sigma_{i \nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{\partial}{\partial p_{i}^{\mu}}-\frac{1}{2}\delta_{\beta_{i}}^{\alpha_{i}} \Bigg{\{}\ell^{\rho}p_{i\mu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p _{i}^{\nu}}+\ell^{\rho}p_{i\nu}\frac{\partial^{2}}{\partial p_{i}^{\rho} \partial p_{i}^{\mu}}-p_{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu} \partial p_{i}^{\nu}}\Bigg{\}}\Bigg{]} \tag{114}\] \[-\kappa\ (2\pi)^{4}\widehat{\Gamma}^{\alpha_{1}\cdots\alpha_{N}}h( \ell)\ell^{\rho}\Big{\{}\frac{\partial}{\partial\ell^{\rho}}\delta^{(4)}(p_{1}+ \cdots+p_{N}+\ell)\Big{\}}_{\ell=0}\] \[+\kappa(2\pi)^{4}\sum_{i=1}^{N}\Big{\{}\frac{\partial}{\partial \ell^{\rho}}\delta^{(4)}(p_{1}+\cdots+p_{N}+\ell)\Big{\}}_{\ell=0}\frac{ \partial\widehat{\Gamma}^{\alpha_{1}\cdots\alpha_{N}}}{\partial p_{i}^{\sigma}}p _{i\mu}\ell^{\rho}h^{\mu\sigma}(\ell)\ +{\cal O}(\ell^{2})\.\] Finally after adding the expressions in (102) and (114) we get \[h^{\mu\nu}(\ell)\widetilde{\Gamma}^{(N+1)\alpha_{1}\cdots\alpha_{ N}}_{\mu\nu}(\ell) \tag{115}\] \[= -\kappa\ (2\pi)^{4}\ h^{\mu\nu}(\ell)\sum_{i=1}^{N}p_{i\mu}\frac{ \partial}{\partial p_{i}^{\nu}}\Big{\{}\delta^{(4)}(p_{1}+\cdots+p_{N})\widehat {\Gamma}^{(N)\alpha_{1}\cdots\alpha_{N}}\Big{\}}\] \[+\kappa\ (2\pi)^{4}h^{\mu\nu}(\ell)\sum_{i=1}^{N}\Bigg{[}(\Sigma_{i \nu b})_{\beta_{i}}^{\ \alpha_{i}}\ \ell^{b}\ \frac{\partial}{\partial p_{i}^{\mu}}-\frac{1}{2}\delta_{\beta_{i}}^{\alpha_{i}} \Bigg{\{}\ell^{\rho}p_{i\mu}\frac{\partial^{2}}{\partial p_{i}^{\rho}\partial p _{i}^{\nu}}+\ell^{\rho}p_{i\nu}\frac{\partial^{2}}{\partial p_{i}^{\rho} \partial p_{i}^{\mu}}-p_{i}.\ell\frac{\partial^{2}}{\partial p_{i}^{\mu} \partial p_{i}^{\nu}}\Bigg{\}}\Bigg{]}\] \[\Big{\{}\delta^{(4)}(p_{1}+\cdots+p_{N})\widehat{\Gamma}^{(N) \alpha_{1}\cdots\alpha_{i-1}\beta_{i}\alpha_{i+1}\cdots\alpha_{N}}\Big{\}}\ +{\cal O}(\ell^{2})\.\] Intermediate steps in deriving soft graviton theorem We want to compute the following expression involving the vertex in (116) which takes the following form once we expanded in power of small \(\ell\) and keep terms up to quadratic order \[\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell)\Xi(-q-\ell) \tag{117}\] \[= i\kappa\Bigg{[}\eta_{\mu\nu}\mathcal{K}(-q)\Xi(-q)+\eta_{\mu\nu} \mathcal{K}(-q)\ell^{\rho}\frac{\partial\Xi(-q)}{\partial q^{\rho}}+\frac{1}{ 2}\eta_{\mu\nu}\mathcal{K}(-q)\ell^{\rho}\ell^{\sigma}\frac{\partial^{2}\Xi(-q )}{\partial q^{\rho}\partial q^{\sigma}}\] \[+\frac{1}{2}\eta_{\mu\nu}\ell^{\rho}\frac{\partial\mathcal{K}(-q )}{\partial 
q^{\rho}}\Xi(-q)+\frac{1}{2}\eta_{\mu\nu}\ell^{\rho}\ell^{\sigma} \frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}+\frac{1}{4}\eta_{\mu\nu}\ell^{\rho}\ell^{\sigma}\frac{ \partial^{2}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\sigma}}\Xi(-q)\] \[-q_{(\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu})}\Xi(- q)-q_{(\mu}\ell^{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu})}\frac{ \partial\Xi(-q)}{\partial q^{\rho}}-\frac{1}{2}q_{(\mu}\ell^{\rho}\ell^{ \sigma}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu})}\frac{\partial^{2}\Xi (-q)}{\partial q^{\rho}\partial q^{\sigma}}\] \[-\frac{1}{2}\ell_{(\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^ {\nu})}\Xi(-q)-\frac{1}{2}\ell_{(\mu}\ell^{\rho}\frac{\partial\mathcal{K}(-q)}{ \partial q^{\nu})}\frac{\partial\Xi(-q)}{\partial q^{\rho}}-\frac{1}{2}q_{( \mu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\nu)}\partial q ^{\rho}}\Xi(-q)\] \[-\frac{1}{2}q_{(\mu}\ell^{\rho}\ell^{\sigma}\frac{\partial^{2} \mathcal{K}(-q)}{\partial q^{\nu})\partial q^{\rho}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}-\frac{1}{4}q_{(\mu}\ell^{\rho}\ell^{\sigma}\frac{ \partial^{3}\mathcal{K}(-q)}{\partial q^{\nu})\partial q^{\rho}\partial q^{ \sigma}}\Xi(-q)\] \[-\frac{1}{2}\ell_{(\mu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(- q)}{\partial q^{\nu})\partial q^{\rho}}\Xi(-q)-\frac{1}{2}\ell^{b}\frac{ \partial\mathcal{K}(-q)}{\partial q^{(\mu}}\Sigma_{\nu)b}\Xi(-q)-\frac{1}{2} \ell^{b}\ell^{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{(\mu}}\Sigma_{ \nu)b}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\] \[+\frac{1}{2}\ell^{b}\ \Sigma^{T}_{(\nu b}\frac{\partial\mathcal{K}(-q)}{ \partial q^{\mu})}\Xi(-q)+\frac{1}{2}\ell^{b}\ell^{\rho}\ \Sigma^{T}_{(\nu b}\frac{\partial \mathcal{K}(-q)}{\partial q^{\mu})}\frac{\partial\Xi(-q)}{\partial q^{\rho}}- \frac{1}{4}\ell^{b}\ell^{\rho}\ \frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{(\mu} \partial q^{\rho})}\Sigma_{\nu)b}\Xi(-q)\] \[+\frac{1}{4}\ell^{b}\ell^{\rho}\Sigma^{T}_{(\nu b}\frac{ \partial^{2}\mathcal{K}(-q)}{\partial q^{\mu})\partial q^{\rho}}\Xi(-q)+\frac{ 1}{4}\Bigg{\{}\ell_{\mu}\ell^{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{ \partial q^{\rho}\partial q^{\nu}}+\ell_{\nu}\ell^{\rho}\frac{\partial^{2} \mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\mu}}-\ell^{2}\frac{\partial^{2} \mathcal{K}(-q)}{\partial q^{\mu}\partial q^{\nu}}\Bigg{\}}\Xi(-q)\] \[+\frac{1}{12}\ell^{\sigma}\Bigg{\{}q_{\mu}\ell^{\rho}\frac{ \partial^{3}\mathcal{K}(-q)}{\partial q^{\rho}\partial q^{\nu}\partial q^{ \sigma}}+q_{\nu}\ell^{\rho}\frac{\partial^{3}\mathcal{K}(-q)}{\partial q^{\rho }\partial q^{\mu}\partial q^{\sigma}}-q.\ell\frac{\partial^{3}\mathcal{K}(-q)}{ \partial q^{\mu}\partial q^{\nu}\partial q^{\sigma}}\Bigg{\}}\Xi(-q)\] \[-\ell^{\rho}\ell^{\sigma}\Big{\{}\mathcal{G}_{(\mu\rho\sigma\nu)} (-q)+\mathcal{G}_{\sigma(\nu\mu)\rho}(-q)-\mathcal{G}_{(\mu\rho\nu)\sigma}(-q)- \mathcal{G}_{\sigma(\nu\rho\mu)}(-q)\Big{\}}\Xi(-q)+\mathcal{O}(\ell^{3}) \Bigg{]}.\] Now to simplify the above expression we need to move the momenta derivatives from \(\mathcal{K}\) to \(\Xi\) as much possible and also move the spin operator to the extreme right using the identities (10)-(17). 
Following these steps the result up to order \(\mathcal{O}(\ell^{2})\) reads \[\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell)\Xi(-q-\ell) \tag{118}\] \[= i\kappa\Bigg{[}i\eta_{\mu\nu}(q^{2}+m^{2}+q.\ell)-2iq_{\mu}q_{\nu }-2iq_{(\mu}\ell_{\nu)}+2i\ell^{b}q_{(\mu}\Sigma^{T}_{\nu)b}+i\ell^{b}\ell_{( \mu}\Sigma^{T}_{\nu)b}\] \[+q_{(\mu}\mathcal{K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\nu})}+ \frac{1}{4}\eta_{\mu\nu}\mathcal{K}(-q)\ell^{\rho}\ell^{\sigma}\frac{\partial^{2} \Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}+\frac{1}{2}q.\ell\mathcal{K}(-q) \frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\nu}}\] \[+\ell_{(\mu}\mathcal{K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\nu})} -\frac{1}{2}\ell^{b}\mathcal{K}(-q)\frac{\partial\Xi(-q)}{\partial q^{(\mu} \Sigma^{T}_{\nu)b}}-\frac{1}{4}\ell^{\rho}\ell^{b}\mathcal{K}(-q)\frac{ \partial^{2}\Xi(-q)}{\partial q^{(\mu}\partial q^{\rho})}\Sigma^{T}_{\nu)b}\] \[-\frac{1}{2}\ell^{b}\ \mathcal{K}(-q)\frac{\partial\Xi(-q)}{\partial q^{(\mu} \Sigma^{T}_{\nu)b}}+\frac{1}{4}\ell^{b}\ell^{\rho}\mathcal{K}(-q)\Sigma_{(\nu b }\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\rho})}+\frac{1}{4} \ell^{2}\mathcal{K}(-q)\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{ \nu}}\] \[+\frac{1}{12}\ell^{\sigma}\Bigg{\{}q_{(\mu}\ell^{\rho}\mathcal{K}(-q )\frac{\partial^{3}\Xi(-q)}{\partial q^{\rho}\partial q^{\nu})\partial q^{ \sigma}}+q.\ell\mathcal{K}(-q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\mu} \partial q^{\nu}\partial q^{\sigma}}\Bigg{\}}\] \[+\frac{1}{3}q.\ell\ell\ell^{\rho}\frac{\partial{\cal K}(-q)}{ \partial q^{\rho}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\nu}}+ \frac{1}{3}\ell^{b}\ell^{\rho}q_{(\mu}\frac{\partial{\cal K}(-q)}{\partial q^{ \nu})}\frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{\rho}}-\frac{1}{ 3}q.\ell\ell^{\rho}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}\partial q^{ \nu}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\nu}\partial q^{\rho}}\] \[-\frac{1}{3}\ell^{b}\ell^{\rho}q_{(\mu}\frac{\partial{\cal K}(-q) }{\partial q^{\rho}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\nu)}\partial q^{ \phi}}+\frac{1}{6}q.\ell\ell\ell^{\rho}\frac{\partial^{2}{\cal K}(-q)}{ \partial q^{(\mu}\partial q^{\rho}\partial q^{\rho}}\frac{\partial\Xi(-q)}{ \partial q^{\nu)}}-\frac{1}{6}\ell^{b}\ell^{\rho}q_{(\mu}\frac{\partial^{2}{ \cal K}(-q)}{\partial q^{\rho}\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q ^{\nu)}}\] \[+\frac{1}{6}q_{(\mu}\ell^{\rho}\ell^{\sigma}\frac{\partial^{2}{ \cal K}(-q)}{\partial q^{\rho}\partial q^{\nu)}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}-\frac{1}{6}q.\ell\ell^{\rho}\frac{\partial^{2}{\cal K}(-q )}{\partial q^{\mu}\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\] \[-\frac{1}{2}\ell^{b}\ell^{\rho}\ \frac{\partial{\cal K}(-q)}{ \partial q^{\rho}}\frac{\partial\Xi(-q)}{\partial q^{(\mu}}\Sigma^{T}_{\nu)b} +\frac{1}{2}\ell^{b}\ell^{\rho}\frac{\partial{\cal K}(-q)}{\partial q^{(\mu} }\frac{\partial\Xi(-q)}{\partial q^{\rho}}\Sigma^{T}_{\nu)b}\] \[-\ell^{\rho}\ell^{\sigma}\Big{\{}{\cal G}_{(\mu\rho\sigma\nu)}( -q)+{\cal G}_{\sigma(\nu\mu)\rho}(-q)-{\cal G}_{(\mu\rho\nu)\sigma}(-q)-{\cal G }_{\sigma(\nu\rho\mu)}(-q)\Big{\}}\Xi(-q)+{\cal O}(\ell^{3})\Bigg{]}. \tag{113}\] The symmetrization in the exchange between \(\mu\) and \(\nu\) can be omitted since any contraction involving the above expression in any Feynman diagram calculation will always exhibit symmetry under the exchange of \(\mu\) and \(\nu\). 
Also the above expression can be written in a compact way by introducing the following tensor structure \[\Delta_{\mu\rho\nu\sigma}(-q) \equiv \frac{1}{3}q_{\mu}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}} \frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}-\frac{1}{6}q _{\rho}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\mu}\partial q^{\nu}}\frac {\partial\Xi(-q)}{\partial q^{\sigma}}+\frac{1}{4}\frac{\partial{\cal K}(-q)}{ \partial q^{\mu}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\Sigma^{T}_{\nu\sigma} \tag{114}\] \[+\ {\cal G}_{\mu\rho\nu\sigma}(-q)\Xi(-q)\.\] Hence removing the \(\mu\leftrightarrow\nu\) symmetrization, the expression in (113) can be compactly written as \[\Gamma^{(3)}_{\mu\nu}(q,-q-\ell,\ell)\Xi(-q-\ell) \tag{115}\] \[= i\kappa\Bigg{[}i\eta_{\mu\nu}(q^{2}+m^{2}+q.\ell)-2iq_{\mu}q_{ \nu}-2iq_{\mu}\ell_{\nu}+2i\ell^{b}q_{\mu}\Sigma^{T}_{\nu b}+i\ell^{b}\ell_{ \mu}\Sigma^{T}_{\nu b}\] \[+q_{\mu}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\nu}}+ \frac{1}{4}\eta_{\mu\nu}{\cal K}(-q)\ell^{\rho}\ell^{\sigma}\frac{\partial^{2} \Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}+\frac{1}{2}q.\ell{\cal K}(-q) \frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\nu}}\] \[+\ell_{\mu}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\nu}}- \ell^{b}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\mu}}\Sigma^{T}_{\nu b }-\frac{1}{4}\ell^{\rho}\ell^{b}{\cal K}(-q)\frac{\partial^{2}\Xi(-q)}{ \partial q^{\mu}\partial q^{\rho}}\Sigma^{T}_{\nu b}\] \[+\frac{1}{4}\ell^{b}\ell^{\rho}{\cal K}(-q)\Sigma_{\nu b}\frac{ \partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\rho}}+\frac{1}{4}\ell^{2}{ \cal K}(-q)\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\nu}}\] \[+\frac{1}{12}\ell^{\sigma}\Bigg{\{}q_{\mu}\ell^{\rho}{\cal K}(-q) \frac{\partial^{3}\Xi(-q)}{\partial q^{\rho}\partial q^{\nu}\partial q^{\sigma}} +q.\ell{\cal K}(-q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\mu}\partial q^{\nu} \partial q^{\sigma}}\Bigg{\}}\] \[+\ell^{\rho}\ell^{\sigma}\Big{\{}\Delta_{\mu\rho\nu\sigma}(-q)+ \Delta_{\rho\mu\sigma\nu}(-q)-\Delta_{\rho\mu\nu\sigma}(-q)-\Delta_{\mu\rho\sigma \nu}(-q)\Big{\}}+{\cal O}(\ell^{3})\Bigg{]}.\] Now we want to compute the following expression involving two vertices of kind (4.16) which takes the following form once we expanded in power of small \(\ell_{1},\ell_{2}\) and keep terms up to quadratic order \[\Gamma^{(3)}_{\mu\nu}(q,-q-\ell_{1},\ell_{1})\Xi(-q-\ell_{1})\Gamma^ {(3)}_{\rho\sigma}(q+\ell_{1},-q-\ell_{1}-\ell_{2},\ell_{2})\Xi(-q-\ell_{1}-\ell_ {2}) \tag{116}\] \[= -\kappa^{2}\Bigg{[}i\eta_{\mu\nu}(q^{2}+m^{2}+q.\ell_{1})-2iq_{\mu }q_{\nu}-2iq_{\mu}\ell_{1\nu}+2i\ell_{1}^{b}q_{\mu}\Sigma^{T}_{\nu b}+i\ell_{1}^ {b}\ell_{1\mu}\Sigma^{T}_{\nu b}\] \[+q_{\mu}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\nu}}+\frac{1} {4}\eta_{\mu\nu}{\cal K}(-q)\ell_{1}^{\lambda}\ell_{1}^{b}\frac{\partial^{2}\Xi( -q)}{\partial q^{\lambda}\partial q^{b}}+\frac{1}{2}q.\ell_{1}{\cal K}(-q) \frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\nu}}\] \[+\ell_{1\mu}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\nu}}- \ell_{1}^{b}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\mu}}\Sigma^{T}_{ \nu b}-\frac{1}{4}\ell_{1}^{\lambda}\ell_{1}^{b}{\cal K}(-q)\frac{\partial^{2 }\Xi(-q)}{\partial q^{\mu}\partial q^{\lambda}}\Sigma^{T}_{\nu b}\] \[+\frac{1}{4}\ell_{1}^{b}\ell_{1}^{\lambda}{\cal K}(-q)\Sigma_{ \nu b}\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu}\partial q^{\lambda}}+\frac{ 1}{4}\ell_{1}^{2}{\cal K}(-q)\frac{\partial^{2}\Xi(-q)}{\partial q^{\mu} \partial q^{\nu}}\] 
\[+\frac{1}{12}\ell_{1}^{b}\Bigg{\{}q_{\mu}\ell_{1}^{\lambda}{\cal K }(-q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\lambda}\partial q^{\nu}\partial q ^{b}}+q.\ell_{1}{\cal K}(-q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\mu} \partial q^{\nu}\partial q^{b}}\Bigg{\}}\] \[+\ell_{1}^{\lambda}\ell_{1}^{b}\Big{\{}\Delta_{\mu\lambda\nu b}(- q)+\Delta_{\lambda\mu b\nu}(-q)-\Delta_{\lambda\mu\nu b}(-q)-\Delta_{\mu\lambda b \nu}(-q)\Big{\}}\Bigg{]}\] \[\times\Bigg{[}i\eta_{\rho\sigma}(q^{2}+m^{2}+2q.\ell_{1}+q.\ell_ {2}+\ell_{1}^{2}+\ell_{1}.\ell_{2})-2i(q+\ell_{1})_{\rho}(q+\ell_{1})_{\sigma} -2i(q+\ell_{1})_{\rho}\ell_{2\sigma}\] \[+2i\ell_{2}^{a}(q+\ell_{1})_{\rho}\Sigma^{T}_{\sigma a}+i\ell_{2 }^{a}\ell_{2\rho}\Sigma^{T}_{\sigma a}\] \[+(q+\ell_{1})_{\rho}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q ^{\sigma}}+(q+\ell_{1})_{\rho}\ell_{1}^{\tau}\frac{\partial{\cal K}(-q)}{ \partial q^{\tau}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+(q+\ell_{1})_{ \rho}\ell_{1}^{\tau}{\cal K}(-q)\frac{\partial^{2}\Xi(-q)}{\partial q^{\sigma} \partial q^{\tau}}\] \[+\frac{1}{2}q_{\rho}\ell_{1}^{\tau}\ell_{1}^{\tau}\Bigg{\{}\frac{ \partial^{2}{\cal K}(-q)}{\partial q^{\tau}\partial q^{\kappa}}\frac{\partial \Xi(-q)}{\partial q^{\sigma}}+2\frac{\partial{\cal K}(-q)}{\partial q^{\tau} }\frac{\partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{\kappa}}+{\cal K} (-q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\sigma}\partial q^{\kappa}\partial q ^{\tau}}\Bigg{\}}\] \[+\frac{1}{4}\eta_{\rho\sigma}{\cal K}(-q)\ell_{2}^{\kappa}\ell_{ 2}^{\tau}\frac{\partial^{2}\Xi(-q)}{\partial q^{\kappa}\partial q^{\tau}}+ \frac{1}{2}(q+\ell_{1}).\ell_{2}{\cal K}(-q)\frac{\partial^{2}\Xi(-q)}{ \partial q^{\rho}\partial q^{\sigma}}\] \[+\frac{1}{2}q.\ell_{2}\ell_{1}^{\tau}\frac{\partial{\cal K}(-q)}{ \partial q^{\tau}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{ \sigma}}+\frac{1}{2}q.\ell_{2}\ell_{1}^{\tau}{\cal K}(-q)\frac{\partial^{3}\Xi (-q)}{\partial q^{\rho}\partial q^{\sigma}\partial q^{\tau}}\] \[+\ell_{2\rho}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\sigma }}+\ell_{2\rho}\ell_{1}^{\tau}\frac{\partial{\cal K}(-q)}{\partial q^{\tau}} \frac{\partial\Xi(-q)}{\partial q^{\sigma}}+\ell_{2\rho}\ell_{1}^{\tau}{\cal K}( -q)\frac{\partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{\tau}}\] \[-\ell_{2}^{a}{\cal K}(-q)\frac{\partial\Xi(-q)}{\partial q^{\rho} }\Sigma^{T}_{\sigma a}-\ell_{2}^{a}\ell_{1}^{\tau}\frac{\partial{\cal K}(-q)}{ \partial q^{\tau}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\Sigma^{T}_{\sigma a }-\ell_{2}^{a}\ell_{1}^{\tau}{\cal K}(-q)\frac{\partial^{2}\Xi(-q)}{\partial q ^{\rho}\partial q^{\tau}}\Sigma^{T}_{\sigma a}\] \[-\frac{1}{4}\ell_{2}^{r}\ell_{2}^{a}{\cal K}(-q)\frac{\partial^{2 }\Xi(-q)}{\partial q^{\rho}\partial q^{\tau}}\Sigma^{T}_{\sigma a}+\frac{1}{4} \ell_{2}^{a}\ell_{2}^{\tau}{\cal K}(-q)\Sigma_{\sigma a}\frac{\partial^{2}\Xi( -q)}{\partial q^{\rho}\partial q^{\tau}}+\frac{1}{4}\ell_{2}^{2}{\cal K}(-q) \frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}\] \[+\frac{1}{12}\ell_{2}^{r}\Bigg{\{}q_{\rho}\ell_{2}^{\kappa}{\cal K}( -q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\kappa}\partial q^{\sigma}\partial q^{ \tau}}+q.\ell_{2}{\cal K}(-q)\frac{\partial^{3}\Xi(-q)}{\partial q^{\rho} \partial q^{\sigma}\partial q^{\tau}}\Bigg{\}}\] \[+\ell_{2}^{\kappa}\ell_{2}^{r}\Big{\{}\Delta_{\rho\kappa\sigma\tau}( -q)+\Delta_{\kappa\rho\tau\sigma}(-q)-\Delta_{\kappa\rho\sigma\tau}(-q)- \Delta_{\rho\kappa\tau\sigma}(-q)\Big{\}}\Bigg{]}\.\] (C.5) In principle the above expression can be evaluated using the 
identities (2.10)-(2.17), but this is tedious and we do not need the full contribution. Instead, we evaluate the above expression only after contracting with \(\epsilon(-q)^{T}\) from the left, taking the particle with momentum \(q\) to be on-shell, i.e., \(q^{2}+m^{2}=0\) and \(\epsilon(-q)^{T}\mathcal{K}(-q)=0\). Implementing these conditions and using the identities (2.10)-(2.17), the above expression simplifies to \[Z_{1,\mu\nu\rho\sigma}\] \[\equiv \epsilon(-q)^{T}\Gamma^{(3)}_{\mu\nu}(q,-q-\ell_{1},\ell_{1})\Xi(-q-\ell_{1})\Gamma^{(3)}_{\rho\sigma}(q+\ell_{1},-q-\ell_{1}-\ell_{2},\ell_{2})\Xi(-q-\ell_{1}-\ell_{2})\] \[= -\kappa^{2}\epsilon^{T}\Bigg{[}-\eta_{\mu\nu}\eta_{\rho\sigma}q.\ell_{1}(2q.\ell_{1}+q.\ell_{2})+2q_{\mu}q_{\nu}\eta_{\rho\sigma}(2q.\ell_{1}+q.\ell_{2}+\ell_{1}^{2}+\ell_{1}.\ell_{2})\] \[+2q_{\mu}\ell_{1\nu}\eta_{\rho\sigma}(2q.\ell_{1}+q.\ell_{2})-2\eta_{\rho\sigma}q_{\mu}\ell_{1}^{b}\Sigma_{\nu b}^{T}(2q.\ell_{1}+q.\ell_{2})+2\eta_{\mu\nu}q.\ell_{1}(q_{\rho}q_{\sigma}+q_{\rho}\ell_{1\sigma}+q_{\sigma}\ell_{1\rho})\] \[-4q_{\mu}q_{\nu}(q+\ell_{1})_{\rho}(q+\ell_{1})_{\sigma}-4q_{\mu}\ell_{1\nu}(q_{\rho}q_{\sigma}+q_{\rho}\ell_{1\sigma}+q_{\sigma}\ell_{1\rho})+4q_{\mu}\ell_{1}^{b}\Sigma_{\nu b}^{T}(q_{\rho}q_{\sigma}+q_{\rho}\ell_{1\sigma}+q_{\sigma}\ell_{1\rho})\] \[+2\ell_{1}^{b}\ell_{1\mu}\Sigma_{\nu b}^{T}q_{\rho}q_{\sigma}+2\eta_{\mu\nu}q.\ell_{1}q_{\rho}\ell_{2\sigma}-4q_{\mu}q_{\nu}(q+\ell_{1})_{\rho}\ell_{2\sigma}-4q_{\mu}\ell_{1\nu}q_{\rho}\ell_{2\sigma}+4q_{\mu}\ell_{1}^{b}\Sigma_{\nu b}^{T}q_{\rho}\ell_{2\sigma}\] \[-2\eta_{\mu\nu}q.\ell_{1}\ell_{2}^{a}q_{\rho}\Sigma_{\sigma a}^{T}+4q_{\mu}q_{\nu}\ell_{2}^{a}(q+\ell_{1})_{\rho}\Sigma_{\sigma a}^{T}+4q_{\mu}\ell_{1\nu}\ell_{2}^{a}q_{\rho}\Sigma_{\sigma a}^{T}-4q_{\mu}\ell_{1}^{b}\Sigma_{\nu b}^{T}\ell_{2}^{a}q_{\rho}\Sigma_{\sigma a}^{T}\] \[+2q_{\mu}q_{\nu}\ell_{2}^{a}\ell_{2\rho}\Sigma_{\sigma a}^{T}-2iq. 
\ell_{1}q_{\mu}q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{ \partial\Xi(-q)}{\partial q^{\sigma}}\] \[-2iq_{\rho}q_{\sigma}\ell_{1}^{b}\Big{\{}\Delta_{\mu\lambda\nu b} (-q)+\Delta_{\lambda\mu b\nu}(-q)-\Delta_{\lambda\mu\nu b}(-q)-\Delta_{\mu \lambda b\nu}(-q)\Big{\}}\] \[-2iq_{\mu}q_{\nu}\ell_{2}^{s}\ell_{2}^{r}\Big{\{}\Delta_{\rho \kappa\sigma\tau}(-q)+\Delta_{\kappa\rho\tau\sigma}(-q)-\Delta_{\kappa\rho \sigma\tau}(-q)-\Delta_{\rho\kappa\tau\sigma}(-q)\Big{\}}\] \[+\ell_{1}^{\lambda}\ell_{1}^{b}\bigg{\{}-\frac{1}{6}q_{\mu}q_{ \rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{ \partial q^{\lambda}}\frac{\partial\mathcal{K}(-q)}{\partial q^{b}}\frac{ \partial\Xi(-q)}{\partial q^{\sigma}}-\frac{1}{6}q_{\mu}q_{\rho}\frac{ \partial\mathcal{K}(-q)}{\partial q^{\lambda}}\frac{\partial\Xi(-q)}{\partial q ^{\nu}}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{\partial\Xi(-q) }{\partial q^{\sigma}}\] \[-\frac{1}{6}q_{\lambda}q_{\rho}\frac{\partial\mathcal{K}(-q)}{ \partial q^{\mu}}\frac{\partial\Xi(-q)}{\partial q^{b}}\frac{\partial\mathcal{K }(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}-\frac{1}{6 }q_{\lambda}q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{b}}\frac{ \partial\Xi(-q)}{\partial q^{\mu}}\frac{\partial\mathcal{K}(-q)}{\partial q ^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}\] \[+\frac{1}{3}q_{\mu}q_{\rho}\frac{\partial\mathcal{K}(-q)}{ \partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\lambda}}\frac{\partial\Xi(- q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+\frac{1}{3}q_{ \lambda}q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{ \partial\Xi(-q)}{\partial q^{b}}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu }}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}\Bigg{\}}\] \[-i\ell_{1\nu}\ell_{1}^{b}q_{\mu}q_{\rho}\frac{\partial\mathcal{K}( -q)}{\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+2i\eta_{\nu \sigma}q_{\rho}q.\ell_{1}\ell_{1}^{b}\frac{\partial\mathcal{K}(-q)}{\partial q ^{\mu}}\frac{\partial\Xi(-q)}{\partial q^{b}}-2i\ell_{1\sigma}q_{\rho}q.\ell_{ 1}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{\partial\Xi(-q)}{ \partial q^{\nu}}\] \[-i(q.\ell_{1})^{2}q_{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{ \partial q^{\mu}\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+iq.\ell_{1}\ell_{1}^{b}q_{\nu}q_{\rho}\frac{\partial^{2}\mathcal{K}(-q)}{ \partial q^{\mu}\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+2iq _{\rho}q.\ell_{1}\ell_{1}^{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}} \frac{\partial\Xi(-q)}{\partial q^{\sigma}}\Sigma_{\nu b}^{T}\] \[+2iq_{\rho}q.\ell_{1}\ell_{1}^{b}q_{\nu}\frac{\partial\mathcal{K}( -q)}{\partial q^{\mu}}\frac{\partial^{2}\Xi(-q)}{\partial q^{b}\partial q^{ \sigma}}-2iq_{\rho}(q.\ell_{1})^{2}\frac{\partial\mathcal{K}(-q)}{\partial q^{ \mu}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\nu}\partial q^{\sigma}}\] \[-2iq_{\rho}.\ell_{1}q_{\mu}\ell_{1\rho}\frac{\partial\mathcal{K}(-q )}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+i\ell_{1}^{b} \ell_{1\mu}q_{\rho}q_{\nu}\frac{\partial\mathcal{K}(-q)}{\partial q^{b}}\frac{ \partial\Xi(-q)}{\partial\partial q^{\rho}}-2iq.\ell_{1}\ell_{1\mu}q_{\rho}\frac{ \partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{ \sigma}}\] \[+2i\eta_{\mu\nu}q.\ell_{1}q_{\rho}\ell_{1}^{\tau}\frac{\partial \mathcal{K}(-q)}{\partial q^{\tau}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}- 
iq.\ell_{1}q_{\mu}q_{\rho}\ell_{1}^{\tau}\frac{\partial^{2}\mathcal{K}(-q)}{ \partial q^{\nu}\partial q^{\tau}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}- i\ell_{1}^{2}q_{\mu}q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}} \frac{\partial\Xi(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q ^{\sigma}}\] \[-2iq_{\ell}.\ell_{1}q_{\mu}q_{\rho}\ell_{1}^{\tau}\frac{\partial \mathcal{K}(-q)}{\partial q^{\nu}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\rho} }+i\ell_{1}^{b}q_{\mu}q.\ell_{2}q_{\nu}\frac{\partial\mathcal{K}(-q)}{ \partial q^{b}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}- iq.\ell_{1}q_{\mu}q_{\rho}\ell_{2}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}} \frac{\partial^{2}\Xi(-q)}{\partial q^{\rho}\partial q^{\sigma}}\] \[-2iq_{\mu}q_{\nu}\ell_{2}\rho_{\ell}^{\tau}\frac{\partial\mathcal{K}( -q)}{\partial q^{\tau}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}-2i\ell_{1}^{b}q_{ \mu}\ell_{2} linear order \[\Gamma^{(4)}_{\mu\nu,\rho\sigma}(q,-q-\ell_{1}-\ell_{2},\ell_{1},\ell _{2})\Xi(-q-\ell_{1}-\ell_{2}) \tag{111}\] \[= i\kappa^{2}\Bigg{[}(\eta_{\mu\nu}\eta_{\rho\sigma}-2\eta_{\mu\rho }\eta_{\nu\sigma})\Bigg{\{}{\cal K}(-q)\Xi(-q)+(\ell_{1}+\ell_{2})\kappa{\cal K }(-q)\frac{\partial\Xi(-q)}{\partial q^{\kappa}}+\frac{1}{2}(\ell_{1}+\ell_{2} )\kappa\frac{\partial{\cal K}(-q)}{\partial q^{\kappa}}\Xi(-q)\Bigg{\}}\] \[-\eta_{\mu\nu}\Bigg{\{}q_{\rho}\frac{\partial{\cal K}(-q)}{ \partial q^{\sigma}}\Xi(-q)+q_{\rho}(\ell_{1}+\ell_{2})\kappa\frac{\partial{ \cal K}(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\kappa}} +\frac{1}{2}(\ell_{1}+\ell_{2})_{\rho}\frac{\partial{\cal K}(-q)}{\partial q ^{\sigma}}\Xi(-q)\] \[+\frac{1}{2}q_{\rho}(\ell_{1}+\ell_{2})\kappa\frac{\partial^{2}{ \cal K}(-q)}{\partial q^{\sigma}\partial q^{\kappa}}\Xi(-q)+\frac{1}{2}\ell_ {2}^{b}\frac{\partial{\cal K}(-q)}{\partial q^{\rho}}\Sigma_{\sigma b}\Xi(-q )-\frac{1}{2}\ell_{2}^{b}\Sigma_{\sigma b}^{T}\frac{\partial{\cal K}(-q)}{ \partial q^{\rho}}\Xi(-q)\Bigg{\}}\] \[-\eta_{\rho\sigma}\Bigg{\{}q_{\mu}\frac{\partial{\cal K}(-q)}{ \partial q^{\nu}}\Xi(-q)+q_{\mu}(\ell_{1}+\ell_{2})\kappa\frac{\partial{\cal K }(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\kappa}}+\frac{1} {2}(\ell_{1}+\ell_{2})_{\mu}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\Xi (-q)\] \[+\frac{1}{2}q_{\mu}(\ell_{1}+\ell_{2})\kappa\frac{\partial^{2}{ \cal K}(-q)}{\partial q^{\nu}\partial q^{\kappa}}\Xi(-q)+\frac{1}{2}\ell_{1}^ {b}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}\Sigma_{\nu b}\Xi(-q)-\frac{1 }{2}\ell_{1}^{b}\Sigma_{\nu b}^{T}\frac{\partial{\cal K}(-q)}{\partial q^{ \mu}}\Xi(-q)\Bigg{\}}\] \[+\frac{3}{2}\eta_{\mu\rho}\Bigg{\{}q_{\sigma}\frac{\partial{\cal K }(-q)}{\partial q^{\nu}}\Xi(-q)+q_{\sigma}(\ell_{1}+\ell_{2})\kappa\frac{ \partial{\cal K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{ \kappa}}+\frac{1}{2}(\ell_{1}+\ell_{2})_{\sigma}\frac{\partial{\cal K}(-q)}{ \partial q^{\nu}}\Xi(-q)\] \[+\frac{1}{2}q_{\sigma}(\ell_{1}+\ell_{2})\kappa\frac{\partial^{2}{ \cal K}(-q)}{\partial q^{\kappa}\partial q^{\nu}}\Xi(-q)+q_{\nu}\frac{ \partial{\cal K}(-q)}{\partial q^{\sigma}}\Xi(-q)+q_{\nu}(\ell_{1}+\ell_{2}) \kappa\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)} {\partial q^{\kappa}}\] \[+\frac{1}{2}(\ell_{1}+\ell_{2})_{\nu}\frac{\partial{\cal K}(-q)}{ \partial q^{\sigma}}\Xi(-q)+\frac{1}{2}q_{\nu}(\ell_{1}+\ell_{2})\kappa\frac{ \partial^{2}{\cal K}(-q)}{\partial q^{\kappa}\partial q^{\sigma}}\Xi(-q) \Bigg{\}}+q_{\mu}q_{\rho}\frac{\partial^{2}{\cal 
K}(-q)}{\partial q^{\nu} \partial q^{\sigma}}\Xi(-q)\] \[+q_{\mu}q_{\rho}(\ell_{1}+\ell_{2})\kappa\frac{\partial^{2}{\cal K }(-q)}{\partial q^{\nu}\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{ \kappa}}+\frac{1}{2}\Big{\{}q_{\mu}(\ell_{1}+\ell_{2})_{\rho}+q_{\rho}(\ell_{1 }+\ell_{2})_{\mu}\Big{\}}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu} \partial q^{\sigma}}\Xi(-q)\] \[+\frac{1}{2}q_{\mu}q_{\rho}(\ell_{1}+\ell_{2})\kappa\frac{\partial ^{3}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\sigma}\partial q^{\kappa}}\Xi(- q)-\frac{1}{4}\eta_{\nu\sigma}(\ell_{2}-\ell_{1})\kappa\Bigg{\{}\frac{ \partial{\cal K}(-q)}{\partial q^{\kappa}}\Sigma_{\rho\mu}-\Sigma_{\rho\mu}^{T} \frac{\partial{\cal K}(-q)}{\partial q^{\kappa}}\Bigg{\}}\Xi(-q)\] \[+\frac{1}{2}\eta_{\mu\rho}\Bigg{\{}\ell_{2}^{b}\frac{\partial{\cal K }(-q)}{\partial q^{\nu}}\Sigma_{\sigma b}-\ell_{2}^{b}\Sigma_{\sigma b}^{T} \frac{\partial{\cal K}(-q)}{\partial q^{\nu}}+\ell_{1}^{b}\frac{\partial{\cal K }(-q)}{\partial q^{\sigma}}\Sigma_{\nu b}-\ell_{1}^{b}\Sigma_{\nu b}^{T}\frac{ \partial{\cal K}(-q)}{\partial q^{\sigma}}\Bigg{\}}\Xi(-q)\] \[+\frac{1}{4}q_{\mu}\ell_{2}\Bigg{\{}\frac{\partial^{2}{\cal K}(-q)}{ \partial q^{\nu}\partial q^{\rho}}\Sigma_{\sigma b}-\Sigma_{\sigma b}^{T}\frac{ \partial^{2}{\cal K}(-q)}{\partial q^{\nu}\partial q^{\rho}}\Bigg{\}}\Xi(-q)+ \frac{1}{4}q_{\rho}\ell_{1}^{b}\Bigg{\{}\frac{\partial^{2}{\cal K}(-q)}{ \partial q^{\sigma}\partial q^{\mu}}\Sigma_{\nu b}-\Sigma_{\nu b}^{T}\frac{ \partial^{2}{\cal K}(-q)}{\partial q^{\sigma}\partial q^{\mu}}\Bigg{\}}\Xi(-q)\] \[-\frac{1}{2}\eta_{\nu\sigma}\ell_{2}^{b}\Bigg{\{}\frac{\partial{\cal K }(-q)}{\partial q^{\rho}}\Sigma_{a\mu}-\Sigma_{a\mu}^{T}\frac{\partial{\cal K }(-q)}{\partial q^{\rho}}\Bigg{\}}\Xi(-q)-\frac{1}{2}\eta_{\nu\sigma}\ell_{1}^{a} \Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}\Sigma_{a\rho}-\Sigma_{a \rho}^{T}\frac{\partial{\cal K}(-q)}{\partial q^{\mu}}\Bigg{\}}\Xi(-q)\] \[-\frac{1}{2}\ell_{2\nu}\Bigg{\{}\frac{\partial{\cal K}(-q)}{ \partial q^{\sigma}}\Sigma_{\mu\rho}-\Sigma_{\mu\rho}^{T}\frac{\partial{\cal K }(-q)}{\partial q^{\sigma}}\Bigg{\}}\Xi(-q)-\frac{1}{2}\ell_{1\sigma} \Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\Sigma_{\rho\mu}-\Sigma_{ \rho\mu}^{T}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\Bigg{\}}\Xi(-q)\] \[+{\cal O}(\ell_{1}^{2},\ell_{1}\ell_{2},\ell_{2}^{2})\Bigg{]}\.\] Now we contract the above expression with \(\epsilon(-q)^{T}\) from the left considering the particle with momentum \(q\) being on-shell i.e. \(q^{2}+m^{2}=0\) and \(\epsilon(-q)^{T}{\cal K}(-q)=0\). 
Then using the identities (2.10)-(2.17), the above expression up to order \({\cal O}(\ell_{1},\ell_{2})\) simplifies to \[Z_{2,\mu\nu\rho\sigma}\] \[\equiv \epsilon(-q)^{T}\ \Gamma^{(4)}_{\mu\nu,\rho\sigma}(q,-q-\ell_{1}-\ell_{2}, \ell_{1},\ell_{2})\Xi(-q-\ell_{1}-\ell_{2})\] \[= i\kappa^{2}\epsilon^{T}\Bigg{[}(\eta_{\mu\nu}\eta_{\rho\sigma}-2 \eta_{\mu\rho}\eta_{\nu\sigma})\Big{\{}i(q^{2}+m^{2})+iq.(\ell_{1}+\ell_{2}) \Big{\}}-\eta_{\mu\nu}\Big{\{}2iq_{\rho}q_{\sigma}+i(\ell_{1}+\ell_{2})_{\rho }q_{\sigma}\] \[+iq_{\rho}(\ell_{1}+\ell_{2})_{\sigma}\Big{\}}-\eta_{\rho\sigma} \Big{\{}2iq_{\mu}q_{\nu}+i(\ell_{1}+\ell_{2})_{\mu}q_{\nu}+iq_{\mu}(\ell_{1}+ \ell_{2})_{\nu}\Big{\}}+3\eta_{\mu\rho}\Big{\{}2iq_{\nu}q_{\sigma}\] \[+iq_{\nu}(\ell_{1}+\ell_{2})_{\sigma}+iq_{\sigma}(\ell_{1}+\ell _{2})_{\nu}\Big{\}}+\eta_{\nu\sigma}\Big{\{}2iq_{\mu}q_{\nu}+iq_{\mu}(\ell_{1} +\ell_{2})_{\rho}+iq_{\rho}(\ell_{1}+\ell_{2})_{\mu}\Big{\}}\] \[-i\eta_{\nu\rho}q_{\mu}\ell_{2}^{b}\Sigma^{T}_{\sigma b}-i\eta_{ \sigma\mu}q_{\rho}\ell_{1}^{b}\Sigma^{T}_{\nu b}+2i\eta_{\mu\nu}\ell_{2}^{b}q _{\rho}\Sigma^{T}_{\sigma b}+2i\eta_{\rho\sigma}\ell_{1}^{b}q_{\mu}\Sigma^{T} _{\nu b}\] \[+2i\eta_{\nu\sigma}q_{\rho}\ell_{2}^{b}\Sigma^{T}_{\sigma a}+2i \eta_{\nu\sigma}q_{\mu}\ell_{1}^{a}\Sigma^{T}_{\sigma a}+2i\eta_{\sigma\nu} \sigma^{T}_{\mu\rho}+2i\ell_{1\sigma}q_{\nu}\Sigma^{T}_{\sigma\mu}\] \[-2i\eta_{\mu\rho}q_{\nu}\ell_{2}^{b}\Sigma^{T}_{\sigma b}-2i\eta_ {\mu\rho}q_{\sigma}\ell_{1}^{b}\Sigma^{T}_{\nu b}+i\eta_{\nu\sigma}q.(\ell_{ 2}-\ell_{1})\Sigma^{T}_{\rho\mu}\] \[-\eta_{\mu\nu}\Bigg{\{}\frac{1}{2}q_{\rho}\ell_{1}^{\kappa}\frac {\partial{\cal K}(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q ^{\kappa}}-\frac{1}{2}q_{\rho}\ell_{1}^{\kappa}\frac{\partial{\cal K}(-q)}{ \partial q^{\kappa}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}\Bigg{\}}\] \[-\eta_{\rho\sigma}\Bigg{\{}\frac{1}{2}q_{\mu}\ell_{2}^{\kappa} \frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q ^{\kappa}}-\frac{1}{2}q_{\mu}\ell_{2}^{\kappa}\frac{\partial{\cal K}(-q)}{ \partial q^{\kappa}}\frac{\partial\Xi(-q)}{\partial q^{\nu}}\Bigg{\}}\] \[+\frac{3}{2}\eta_{\mu\rho}\Bigg{\{}-\frac{1}{2}q_{\sigma}(\ell_{1 }+\ell_{2})^{\kappa}\frac{\partial{\cal K}(-q)}{\partial q^{\kappa}}\frac{ \partial\Xi(-q)}{\partial q^{\nu}}+\frac{1}{2}q_{\sigma}(\ell_{1}+\ell_{2})^{ \kappa}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{ \partial q^{\kappa}}\] \[-\frac{1}{2}q_{\nu}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial{ \cal K}(-q)}{\partial q^{\kappa}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+ \frac{1}{2}q_{\nu}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial{\cal K}(-q)}{ \partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\kappa}}\Bigg{\}}\] \[-q_{\mu}q_{\rho}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}} \frac{\partial\Xi(-q)}{\partial q^{\sigma}}-q_{\mu}q_{\rho}\frac{\partial{\cal K }(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\nu}}+q_{\mu}q _{\rho}(\ell_{1}+\ell_{2})^{\kappa}\frac{\partial^{2}{\cal K}(-q)}{\partial q ^{\nu}\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\kappa}}\] \[-\frac{1}{2}\Big{\{}q_{\mu}(\ell_{1}+\ell_{2})_{\rho}+q_{\rho}( \ell_{1}+\ell_{2})_{\mu}\Big{\}}\Bigg{\{}\frac{\partial{\cal K}(-q)}{\partial q ^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+\frac{\partial{\cal K}(-q)}{ \partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\nu}}\Bigg{\}}\] \[-\frac{1}{2}q_{\mu}q_{\rho}(\ell_{1}+\ell_{2})^{\kappa}\Bigg{\{} \frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu}\partial 
q^{\sigma}}\frac{ \partial\Xi(-q)}{\partial q^{\kappa}}+\frac{\partial^{2}{\cal K}(-q)}{ \partial q^{\nu}\partial q^{\kappa}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+ \frac{\partial^{2}{\cal K}(-q)}{\partial q^{\kappa}\partial q^{\sigma}}\frac{ \partial\Xi(-q)}{\partial q^{\nu}}\] \[+\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\frac{\partial^{2} \Xi(-q)}{\partial q^{\kappa}\partial q^{\sigma}}+\frac{\partial{\cal K}(-q)}{ \partial q^{\kappa}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{ \nu}}+\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\frac{\partial^{2}\Xi(-q)}{ \partial q^{\nu}\partial q^{\kappa}}\Bigg{\}}\] \[+\frac{1}{2}\eta_{\mu\rho}\ell_{2}^{b}\Bigg{\{}-q_{\sigma}\frac{ \partial{\cal K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{b}}+q_{ \sigma}\frac{\partial{\cal K}(-q)}{\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q ^{\nu}}+q_{b}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}-q_{b}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}} \frac{\partial\Xi(-q)}{\partial q^{\nu}}\Bigg{\}}\] \[+\frac{1}{2}\eta_{\mu\rho}\ell_{1}^{b}\Bigg{\{}-q_{\nu}\frac{ \partial{\cal K}(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{b}}+q_{ \nu}\frac{\partial{\cal K}(-q)}{\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q ^{\sigma}}+q_{b}\frac{\partial{\cal K}(-q)}{\partial q^{\sigma}}\frac{\partial \Xi(-q)}{\partial q^{\nu}}-q_{b}\frac{\partial{\cal K}(-q)}{\partial q^{\nu}} \frac{\partial\Xi(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{ \sigma}}\Bigg{\}}\] \[+\frac{1}{4}q_{\mu}\ell_{2}^{b}\Bigg{\{}2\frac{\partial{\cal K}(-q)}{ \partial q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\Sigma^{T}_{\sigma b}+2 \frac{\partial{\cal K}(-q)}{\partial q^{\rho}}\frac{\partial\Xi(-q)}{ \partial q^{\rho}}\Sigma^{T}_{\sigma b}-q_{\sigma}\frac{\partial^{2}{\cal K}(-q)}{ \partial q^{\nu}\partial q^{\rho}}\frac{\partial\Xi(-q)}{\partial q^{b}}\] \[+q_{\sigma}\frac{\partial^{2}{\cal K}(-q)}{\partial q^{\nu} \partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}+q_{\sigma}\frac{ \partial^{2}{\cal K}(-q)}{\partial q^{b}\partial q^{\rho}}\frac{\partial\Xi(-q)}{ \partial q^{\nu}}-q_{b}\frac{\partial^{2}{\cal K}(- \[-\eta_{b\nu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\sigma}}\frac{ \partial\Xi(-q)}{\partial q^{\rho}}+\eta_{\sigma\rho}\frac{\partial\mathcal{K}(- q)}{\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\nu}}-\eta_{b\rho}\frac{\partial \mathcal{K}(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\nu}}+ \eta_{\sigma\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{ \partial\Xi(-q)}{\partial q^{b}}\] \[-\eta_{b\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}} \frac{\partial\Xi(-q)}{\partial q^{\sigma}}+\eta_{\sigma\nu}\frac{\partial \mathcal{K}(-q)}{\partial q^{\rho}}\frac{\partial\Xi(-q)}{\partial q^{b}}- \eta_{b\nu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{\partial \Xi(-q)}{\partial q^{\sigma}}+q_{\sigma}\frac{\partial\mathcal{K}(-q)}{ \partial q^{b}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\nu}\partial q^{\rho}}\] \[-q_{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\sigma}}\frac{ \partial^{2}\Xi(-q)}{\partial q^{\nu}\partial q^{\rho}}+q_{\sigma}\frac{ \partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{\partial^{2}\Xi(-q)}{ \partial q^{b}\partial q^{\rho}}-q_{b}\frac{\partial\mathcal{K}(-q)}{\partial q ^{\nu}}\frac{\partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{\rho}}+q_{ \sigma}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{\partial^{2} \Xi(-q)}{\partial 
q^{\nu}\partial q^{b}}\] \[-q_{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{ \partial^{2}\Xi(-q)}{\partial q^{\nu}\partial q^{\sigma}}\Bigg{\}}\] \[+\frac{1}{4}q_{\rho}\ell_{1}^{b}\Bigg{\{}2\frac{\partial\mathcal{ K}(-q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\mu}}\Sigma_{\nu b}^{T}+2 \frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}\Sigma_{\nu b}^{T}-q_{\nu}\frac{\partial^{2}\mathcal{K}( -q)}{\partial q^{\sigma}\partial q^{\mu}}\frac{\partial\Xi(-q)}{\partial q^{ b}}\] \[+q_{\nu}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\sigma} \partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\mu}}+q_{\nu}\frac{\partial ^{2}\mathcal{K}(-q)}{\partial q^{b}\partial q^{\mu}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}-q_{b}\frac{\partial^{2}\mathcal{K}(-q)}{\partial q^{\nu }\partial q^{\mu}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}+\eta_{\sigma\nu} \frac{\partial\mathcal{K}(-q)}{\partial q^{b}}\frac{\partial\Xi(-q)}{ \partial q^{\mu}}\] \[-\eta_{b\sigma}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}} \frac{\partial\Xi(-q)}{\partial q^{\mu}}+\eta_{\nu\mu}\frac{\partial\mathcal{ K}(-q)}{\partial q^{b}}\frac{\partial\Xi(-q)}{\partial q^{\sigma}}-\eta_{b\mu} \frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{\partial\Xi(-q)}{ \partial q^{\sigma}}+\eta_{b\nu\mu}\frac{\partial\mathcal{K}(-q)}{\partial q ^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{b}}\] \[-\eta_{b\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\sigma}} \frac{\partial\Xi(-q)}{\partial q^{\nu}}+\eta_{\sigma\nu}\frac{\partial \mathcal{K}(-q)}{\partial q^{\mu}}\frac{\partial\Xi(-q)}{\partial q^{b}}-\eta_{b \sigma}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{\partial\Xi(-q)}{ \partial q^{\mu}}+q_{\nu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}} \frac{\partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{\mu}}\] \[-q_{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\nu}}\frac{ \partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{\mu}}+q_{\nu}\frac{\partial \mathcal{K}(-q)}{\partial q^{\sigma}}\frac{\partial^{2}\Xi(-q)}{\partial q^{b} \partial q^{\mu}}-q_{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\sigma}} \frac{\partial^{2}\Xi(-q)}{\partial q^{\nu}\partial q^{\mu}}+q_{\nu}\frac{ \partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{\partial^{2}\Xi(-q)}{ \partial q^{\sigma}\partial q^{b}}\] \[-q_{b}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{ \partial^{2}\Xi(-q)}{\partial q^{\sigma}\partial q^{\nu}}\Bigg{\}}\] \[-\frac{1}{4}\eta_{\nu\sigma}(\ell_{2}+\ell_{1})^{\kappa}\Bigg{\{}- q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\kappa}}\frac{\partial\Xi(-q)}{ \partial q^{\mu}}+q_{\rho}\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}} \frac{\partial\Xi(-q)}{\partial q^{\kappa}}-q_{\mu}\frac{\partial\mathcal{K}(- q)}{\partial q^{\kappa}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\] \[+q_{\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{ \partial\Xi(-q)}{\partial q^{\kappa}}\Bigg{\}}-\frac{1}{2}\eta_{\nu\sigma}q_{.}(\ell_{1}-\ell_{2})\Bigg{\{}-\frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}} \frac{\partial\Xi(-q)}{\partial q^{\rho}}+\frac{\partial\mathcal{K}(-q)}{ \partial q^{\rho}}\frac{\partial\Xi(-q)}{\partial q^{\mu}}\Bigg{\}}\] \[-\frac{1}{2}\ell_{2\nu}\Bigg{\{}q_{\rho}\frac{\partial\mathcal{K}( -q)}{\partial q^{\sigma}}\frac{\partial\Xi(-q)}{\partial q^{\mu}}-q_{\rho} \frac{\partial\mathcal{K}(-q)}{\partial q^{\mu}}\frac{\partial\Xi(-q)}{\partial q ^{\sigma}}\Bigg{\}}-\frac{1}{2}\ell_{1\sigma}\Bigg{\{}q_{\mu}\frac{\partial \mathcal{K}(-q)}{\partial 
q^{\nu}}\frac{\partial\Xi(-q)}{\partial q^{\rho}}\] \[-q_{\mu}\frac{\partial\mathcal{K}(-q)}{\partial q^{\rho}}\frac{\partial\Xi(-q)}{\partial q^{\nu}}\Bigg{\}}+\mathcal{O}(\ell_{1}^{2},\ell_{1}\ell_{2},\ell_{2}^{2})\Bigg{]}\,. \tag{C.8}\] Above, we removed the terms that are antisymmetric under the \(\mu\leftrightarrow\nu\) and/or \(\rho\leftrightarrow\sigma\) exchanges.
2310.20127
Improving Prompt Tuning with Learned Prompting Layers
Prompt tuning prepends a soft prompt to the input embeddings or hidden states and only optimizes the prompt to adapt pretrained models (PTMs) to downstream tasks. Previous work manually selects prompt layers, which is far from optimal and fails to exploit the potential of prompt tuning. In this work, we propose a novel framework, \underline{S}elective \underline{P}rompt \underline{T}uning (SPT), that learns to select the proper prompt layers by inserting a prompt controlled by a learnable probabilistic gate at each intermediate layer. We further propose a novel bi-level optimization framework, SPT-DARTS, that can better optimize the learnable gates and improve the final prompt tuning performance of the learned prompt layer settings. We conduct extensive experiments with ten benchmark datasets under the full-data and few-shot scenarios. The results demonstrate that our SPT framework can perform better than the previous state-of-the-art PETuning baselines with comparable or fewer tunable parameters.
Wei Zhu, Ming Tan
2023-10-31T02:07:51Z
http://arxiv.org/abs/2310.20127v1
# Improving Prompt Tuning with Learned Prompting Layers ###### Abstract Prompt tuning prepends a soft prompt to the input embeddings or hidden states and only optimizes the prompt to adapt pretrained models (PTMs) to downstream tasks. Previous work manually selects prompt layers, which is far from optimal and fails to exploit the potential of prompt tuning. In this work, we propose a novel framework, Selective Prompt Tuning (SPT), that learns to select the proper prompt layers by inserting a prompt controlled by a learnable probabilistic gate at each intermediate layer. We further propose a novel bi-level optimization framework, SPT-DARTS, that can better optimize the learnable gates and improve the final prompt tuning performance of the learned prompt layer settings. We conduct extensive experiments with ten benchmark datasets under the full-data and few-shot scenarios. The results demonstrate that our SPT framework can perform better than the previous state-of-the-art PETuning baselines with comparable or fewer tunable parameters. ## 1 Introduction Increasingly large pre-trained models (PTMs) [16, 17, 18, 19, 20, 21, 22, 23, 24, 25] have achieved state-of-the-art (SOTA) performance on most NLP tasks. Full-model fine-tuning is one of the most widely used methods for utilizing PTMs. However, fine-tuning [17, 16, 16, 15, 14, 23, 22, 23] needs to tune all parameters of the PTM for each task, resulting in large GPU memory and storage costs, especially for supersized PTMs [16, 15]. Parameter-efficient tuning (PETuning) is a new fine-tuning paradigm that can reduce the adaptation costs of PTMs by only tuning a very small number of internal or additional parameters [14, 23, 22, 23]. Prompt tuning [14] is a simple and popular PETuning method that prepends a sequence of soft prompt tokens to the input sequence and only tunes the prompts to adapt the PTM backbones to downstream tasks. Despite its advantages in parameter efficiency and convenience in PTM deployment, prompt tuning suffers from lower performance and a slower convergence rate than other PETuning methods like Adapters [11, 22, 23, 23, 22, 23] and BitFit [16]. Recently, there has been a branch of literature investigating advanced techniques for improving the performance of prompt tuning. P-tuning v2 [15] improves the performance of prompt tuning by inserting soft prompts into every hidden layer of PTMs. However, it is difficult to optimize and needs more training steps to attain competitive performance. BBT [15] optimizes the inserted prompts with derivative-free optimization. Figure 1: Overall comparison between our SPT method and baselines under the few-shot scenario with 100 training samples for each task. All methods are evaluated on ten text classification tasks using RoBERTa-large. The radius of every circle indicates training speed (tokens per millisecond). IDPG (Wu et al., 2022) employs a prompt generator with parameterized hyper-complex multiplication (Le et al., 2021) to generate instance-aware soft prompts. LPT (Liu et al., 2022) inserts an instance-aware late prompt, generated by a neural prompt generator, into an intermediate layer of the PTM instead of the input layer or all layers, and achieves competitive performance under both full-data and few-shot scenarios. Note that the above methods adopt heuristic strategies to determine where to insert prompts into the PTMs. In this paper, we first conduct a pilot experiment to show that simple modifications to the prompt-inserting strategies in Liu et al. 
(2022) can result in better performance than these baselines with comparable tunable parameters. The pilot experiments demonstrate that there is a clear need for an optimal strategy for setting prompt layers in PTMs. Predictably, such an optimal strategy may vary across tasks or PTM backbones and is difficult to construct manually. Therefore, we propose the **S**elective **P**rompt **T**uning (SPT) framework (Figure 2), which automatically searches for the optimal strategy of inserting prompts into the PTMs. Our SPT framework considers a simple search space: whether to insert a generated instance-aware prompt at each intermediate layer of the PTM. As depicted in Figure 2, we initialize a prompt hyper-network where each intermediate layer of the PTM inserts a prompt controlled by a learnable probabilistic gate \(\alpha_{m}\). We follow the bi-level optimization strategy of Liu et al. (2019) to optimize the learnable probabilistic gates. After optimization, we keep the top prompt layers that receive the highest probabilities, so as to meet the tunable parameter budget. To better optimize the learnable gates and obtain better prompt layer settings, we propose SPT-DARTS, which consists of two novel techniques that improve the optimization process of Liu et al. (2019). Our SPT framework can automatically determine a suitable prompt-inserting strategy that achieves a high-quality trade-off between parameter efficiency and model performance. Extensive experiments are conducted on six benchmark datasets from the GLUE benchmark and four widely studied text classification benchmarks. The results show that SPT performs comparably to or outperforms the previous SOTA PETuning methods. Especially in the few-shot scenario with 100 training samples, SPT outperforms the PETuning baselines by a clear margin. Figure 1 depicts the overall comparison between our SPT method and the baselines. To summarize, our contributions are: * We propose the SPT framework, which automatically learns to insert instance-aware prompts at the proper intermediate layers of PTMs. * We propose SPT-DARTS, which contains two novel techniques to improve the optimization process of the prompt hyper-network. * We verify our SPT framework in the full-data and few-shot scenarios across ten benchmark text classification tasks and three different PTM backbones. ## 2 Related work ### Prompt-based tuning A major research line of PETuning is prompt-based tuning, which inserts additional soft prompts into the embeddings or hidden states of specific layers of PTMs. Prompt tuning (Lester et al., 2021) and P-tuning (Liu et al., 2021) insert a soft prompt at the word embedding layer only and can achieve competitive results when applied to large-scale PTMs. Prefix tuning (Li and Liang, 2021) and P-tuning v2 (Liu et al., 2021) insert prompts into every hidden layer of the PTM. Zuo et al. (2022) propose to utilize prompt tuning in the continual learning of rumor detection. BBT (Sun et al., 2022) optimizes the inserted prompt with derivative-free optimization. Recently, there have been a few works investigating instance-aware prompting. IDPG (Wu et al., 2022) uses a prompt generator to generate a soft prompt for every instance. Context tuning (Tang et al., 2022) uses the pretrained BERT model (Devlin et al., 2019) as the prompt generator and focuses on NLG tasks. IPL (Jin et al., 2022) first calculates relevance scores between prompt tokens and inputs, then uses the scores to re-weight the original prompt tokens. 
However, IPL needs to tune all parameters of the PTM. LPT inserts an instance-aware late prompt into a middle intermediate layer of the PTM instead of the embedding layer or all the Transformer layers, and achieves competitive performance despite its simple design. Our work contributes to the literature by proposing SPT, which selectively inserts prompts at certain intermediate layers of PTMs and effectively boosts prompt tuning performance while maintaining high parameter efficiency. ### Other PETuning methods One important research line of PETuning is adapter-based tuning (Ding et al., 2022), which inserts adapter modules between or around the self-attention or feed-forward modules of the Transformer layer and only tunes these adapters in downstream training for model adaptation. Adapter (Houlsby et al., 2019) inserts adapter modules with a bottleneck architecture between every pair of consecutive Transformer (Vaswani et al., 2017) sublayers. AdapterDrop (Rückle et al., 2020) improves efficiency by removing adapters from lower layers. Compacter (Mahabadi et al., 2021) compresses adapters using low-rank optimization and parameterized hypercomplex multiplication (Le et al., 2021). Zhang et al. (2023) propose to optimize the adapter architectures in order to obtain better fine-tuning performance. Adapter-based tuning methods have comparable results with model tuning when training data is sufficient, but work less well in the few-shot scenario (Wang et al., 2021). There are also other popular PETuning methods, such as BitFit (Ben-Zaken et al., 2021), which only tunes the bias terms, and LoRA (Hu et al., 2021), which optimizes low-rank decomposition matrices of the weights within self-attention layers. Recently, there have been works on automatic configuration of PETuning modules, such as Hu et al. (2022) and Zhang et al. (2023). Compared to these works, we focus on prompt layer selection, which is not included in the search space of Hu et al. (2022). Thus our work can be seen as a meaningful complement to the existing literature. ## 3 Problem Formulation For PTM full fine-tuning, the input samples are usually reformulated as \(\left[\text{CLS}\right]\left\langle S_{1}\right\rangle\)\(\left[\text{SEP}\right]\) if the inputs are single sentences, and as \(\left[\text{CLS}\right]\left\langle S_{1}\right\rangle\)\(\left[\text{SEP}\right]\)\(\left\langle S_{2}\right\rangle\)\(\left[\text{SEP}\right]\) if the inputs are sentence pairs. After the PTM backbone encodes the inputs, the final hidden state of the \(\left[\text{CLS}\right]\) token is used to predict the classification label with a linear classification head. In the setting of prompt tuning, the downstream tasks are reformulated as masked language modeling tasks to close the gap between pre-training and fine-tuning. Specifically, we insert a randomly initialized soft prompt \(p\) on the word embeddings, and also modify the original inputs using different manually designed templates with a \(\left[\text{MASK}\right]\) token for task adaptation. For example, in single-sentence tasks, the input will be transformed into a template like \[\text{concat}(p,\text{E}(\left[\text{CLS}\right]\left\langle S_{1}\right\rangle \text{ It was }\left[\text{MASK}\right].\text{ }\left[\text{SEP}\right]))\] where \(\text{E}(x)\) means to map the tokens in the input sequence \(x\) into embedding vectors. Then, we map the original labels \(\mathcal{Y}\) to some words (label words) in the vocabulary \(\mathcal{V}\) of the PTM \(\mathcal{M}\). 
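As a concrete illustration of this reformulation, the following minimal sketch builds the template and verbalizer with HuggingFace Transformers; the template text and label words here are illustrative assumptions rather than the paper's exact choices.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

# Hypothetical verbalizer: map the labels Y to label words in the vocabulary V.
label_words = {0: " terrible", 1: " great"}  # e.g., for a sentiment task

def build_mlm_input(sentence: str):
    # Single-sentence template: "<s> {S1} It was <mask>. </s>";
    # the soft prompt p (not shown) is prepended at the embedding level.
    text = f"{sentence} It was {tokenizer.mask_token}."
    return tokenizer(text, return_tensors="pt")

encoding = build_mlm_input("A gorgeous, witty, seductive movie.")
# Downstream, only the MLM head's logits at the <mask> position,
# restricted to the label words, are used for classification.
```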
Then the final hidden state of the \([\text{MASK}]\) token will be fed into the pre-trained masked language modeling (MLM) head to predict label words. During downstream task tuning, the PTM backbone and the MLM head are frozen, and only the soft prompt \(p\) is tuned. This way, the downstream task is formulated as a masked language modeling task, closing the gap between pre-training and downstream task tuning. Figure 2: The overview of our SPT framework. **Left**: a prompt generator with a bottleneck architecture. **Right**: The forward pass of SPT. At each transformer layer of the PLM, one has to decide whether to use the prompt propagated from the previous layer or to inject a newly generated prompt. In the setting of our proposed SPT framework (depicted in Figure 2), we investigate the problem of whether to insert newly generated instance-aware prompts at the word embeddings or at certain intermediate layers of the PTM. For convenience, we will refer to the word embedding layer as the 0-th layer of the PTM. We refer to the layers at which new prompts are inserted as the prompt layers (**PLs**). At a certain prompt layer \(i\), we will use a prompt generator \(\textbf{PG}_{i}\) to generate a prompt \(\textbf{p}_{i}\) from a given input's hidden states at layer \(i\). ## 4 SPT: Selective Prompt Tuning In this section, we elaborate on our Selective Prompt Tuning (SPT) framework, which is depicted in Figure 2. We first discuss our motivations, and then elaborate on our method for determining the prompt layers. ### Motivation We have conducted a pilot experiment on the RTE (Wang et al., 2018) and Subj (Pang and Lee, 2004) datasets in which we manually design a series of strategies to set the prompt layers of RoBERTa-large (Liu et al., 2019). The details are deferred to Appendix A due to length limits. The experimental results demonstrate that: (a) simple manually designed strategies of inserting prompts can perform comparably with a recent strong baseline prompt tuning method (Liu et al., 2022), with comparable numbers of tunable parameters; (b) setting too many prompt layers will instead hurt the tuning performance. The above observations raise a vital research question which we will address: **RQ1**: _how do we find the optimal strategy of prompt injection, given the task at hand?_ ### Prompt generators A prompt generator is a simple feed-forward layer with a bottleneck architecture. It first down-projects the hidden states \(\mathbf{h}\) of the PTM from dimension \(d\) to dimension \(m\) (\(m\ll d\)) via a linear layer \(\text{MLP}_{down}\). Then it obtains the prompt \(\mathbf{p}\) with length \(l\) through an average pooling operation \(\text{Pooling}()\). The pooled prompt goes through an activation function \(g\) and is up-projected to dimension \(d\) via another linear layer \(\text{MLP}_{up}\): \[\mathbf{p}=\text{MLP}_{up}(g(\text{Pooling}(\text{MLP}_{down}(\mathbf{h})))). \tag{1}\] Following Mahabadi et al. (2021) and Wu et al. (2022), we employ the parameterized hyper-complex multiplication (PHM) layer (Le et al., 2021) with parameter \(n\) to reduce the parameters of \(\text{MLP}_{down}\) and \(\text{MLP}_{up}\). PHM substitutes the weight matrix of a linear layer with a sum of Kronecker products, thus having a parameter complexity of \(\mathcal{O}(md/n)\) and reducing the parameters of the projection layers to roughly \(\frac{1}{n}\) of the original. ### Prompt hyper-network We aim to search for the optimal setting of prompt layers under a limited tunable parameter budget. Assume the parameter budget allows \(K\) prompt layers. 
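Before turning to the layer-selection mechanism, the bottleneck generator of equation (1) can be sketched in a few lines of PyTorch. The dimensions below are illustrative, the activation is assumed to be ReLU, and the PHM factorization of the two projections is omitted in favor of plain linear layers.

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    def __init__(self, d: int = 1024, m: int = 32, prompt_len: int = 10):
        super().__init__()
        self.down = nn.Linear(d, m)    # MLP_down: d -> m, with m << d
        self.up = nn.Linear(m, d)      # MLP_up: m -> d
        self.act = nn.ReLU()           # activation g (assumed)
        self.pool = nn.AdaptiveAvgPool1d(prompt_len)  # Pooling() to length l

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d) hidden states at this layer
        x = self.down(h)                                    # (batch, seq_len, m)
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)    # (batch, l, m)
        return self.up(self.act(x))                         # prompt p: (batch, l, d)

p = PromptGenerator()(torch.randn(2, 128, 1024))
assert p.shape == (2, 10, 1024)
```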
Since not all prompt layers contribute equally to task performance, only a fraction of the layers should be selected as prompt layers, to avoid redundancy in the tunable parameters. Thus, we initialize a prompt hyper-network in which the embedding layer and all the intermediate layers have a prompt generation layer controlled by a _learnable probabilistic gate_. Introducing a zero-initialized learnable parameter \(\alpha_{i}\in\mathbf{R}\), the learnable gate at layer \(i\) is given by \[a_{i}=\text{Sigmoid}(\alpha_{i}), \tag{2}\] where \(\text{Sigmoid}()\) is the Sigmoid activation function. \(a_{i}\in(0,1)\) can be seen as the probability of activating the prompt generator at layer \(i\). At each layer of the hyper-network, the prompt \(\textbf{p}_{i}\) consists of the prompt \(\textbf{p}_{i}^{(prev)}\) propagated from the previous layer and the prompt \(\textbf{p}_{i}^{(new)}\) generated by the prompt generator \(\textbf{PG}_{i}\) at layer \(i\). Formally, the prompt \(\textbf{p}_{i}\) at layer \(i\) is given by \[\textbf{p}_{i}=(1-\tau*a_{i})\textbf{p}_{i}^{(prev)}+\tau*a_{i}*\textbf{p}_{i}^{(new)}, \tag{3}\] where \(\tau\in\{0.5,1.0\}\) is a hyper-parameter determining whether to discard the previous layer's prompt \(\textbf{p}_{i}^{(prev)}\) when a new prompt is generated at layer \(i\). Note that \(\tau=1.0\) is similar to Liu et al. (2021), where instance-independent new prompts are inserted at each intermediate layer. Through optimization, the values of the probabilistic gates \(a_{i}\) move toward \(0\) or \(1\), acting as importance scores for the prompt layers. The top \(K\) layers that receive the highest probabilistic gate values are set as prompt layers to meet the parameter budget, and the model with such a group of prompt layers is referred to as the learned SPT model. Our hyper-network is the backbone model with soft prompts at each layer, where the prompts are controlled by the learnable gates \(\alpha_{i}\). The parameters \(\alpha_{i}\) are learned jointly with the model parameters, so they are not hyper-parameters and do not require hyper-parameter tuning to determine their values. ### Optimization of the prompt hyper-network Following DARTS (Liu et al., 2019), we consider all the parameters \(\alpha_{i}\) of the learnable probabilistic gates as architectural parameters, denoted as \(\alpha\), and optimize them via bi-level optimization. Denote the hyper-network's prompt generator parameters as \(\omega\). The bi-level optimization optimizes \(\alpha\) conditioned on the optimized parameters of the prompt generators, \(\omega^{*}\). At each epoch, the training set is split into two splits, \(\mathcal{D}_{\alpha}\) and \(\mathcal{D}_{\omega}\). The inner and outer levels of optimization are conducted on these two separate splits, which is analogous to validating architectures trained on \(\mathcal{D}_{\omega}\) using a different split \(\mathcal{D}_{\alpha}\) to avoid over-fitting. Thus the optimization objective is: \[\min_{\alpha}\mathcal{L}(\mathcal{D}_{\alpha},\omega^{*},\alpha),\] \[\textit{s.t.}\ \ \omega^{*}=\arg\min_{\omega}\mathcal{L}(\mathcal{D}_{\omega},\omega,\alpha), \tag{4}\] where \(\mathcal{L}()\) is the objective function of the given downstream task. The above bi-level optimization problem is approximated with an alternating optimization strategy: the gradients of the prompt generators are calculated with batches of samples from \(\mathcal{D}_{\omega}\), and the gradients of \(\alpha\) are calculated on \(\mathcal{D}_{\alpha}\). 
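A schematic sketch of this alternating scheme, including the final top-\(K\) discretization, is given below; `model`, `task_loss`, the two data loaders, `num_layers`, and `K` are placeholders standing in for the prompt hyper-network, the task objective, the \(\mathcal{D}_{\omega}\)/\(\mathcal{D}_{\alpha}\) splits, and the budget.

```python
import torch

# Placeholders: `model` is the prompt hyper-network, `task_loss` the task
# objective, `loader_omega`/`loader_alpha` iterate over D_omega / D_alpha.
alpha = torch.zeros(num_layers + 1, requires_grad=True)  # gate logits, layers 0..L
opt_alpha = torch.optim.Adam([alpha], lr=1e-3)
opt_omega = torch.optim.AdamW(model.prompt_generator_parameters(), lr=1e-4)

for batch_w, batch_a in zip(loader_omega, loader_alpha):
    # Inner level: update the prompt generator parameters omega on D_omega.
    opt_omega.zero_grad()
    task_loss(model, batch_w, torch.sigmoid(alpha)).backward()
    opt_omega.step()

    # Outer level: update the architectural parameters alpha on D_alpha.
    opt_alpha.zero_grad()
    task_loss(model, batch_a, torch.sigmoid(alpha)).backward()
    opt_alpha.step()

# Discretization: keep the K layers with the largest gates as prompt layers.
prompt_layers = torch.topk(torch.sigmoid(alpha), k=K).indices.tolist()
```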
Although DARTS is widely applied, it is known to produce unstable gradients and sub-optimal performance (Dong and Yang, 2020). We propose two novel techniques to improve the optimization of the architectural parameters \(\alpha\). We will refer to our modifications to DARTS as SPT-DARTS. **Re-parameterization of probabilistic gates** Note that the probabilistic gates \(a_{i}\) are calculated by equation 2. Their optimization does not explicitly consider the trade-offs across different layers, and thus may fail to fully reveal the differences in their contributions to the prompt hyper-network. We now introduce a re-parameterization step to \(a_{i}\) before the calculation of equation 3: \[\hat{a}_{i}=a_{i}*C=a_{i}*\frac{\sum_{i}\text{GD}(a_{i})}{\sum_{i}a_{i}}, \tag{5}\] where \(\text{GD}()\) detaches the parameter from the computational graph, so that it never receives gradients. The above equation does not change the value of \(a_{i}\), since \(C\) has a value of 1. Equation 3 then becomes \[\mathbf{p}_{i}=(1-\tau*\hat{a}_{i})*\mathbf{p}_{i}^{(prev)}+\tau*\hat{a}_{i}*\mathbf{p}_{i}^{(new)}. \tag{6}\] Now the gradient of \(\alpha_{i}\) is given by: \[\frac{\partial\mathcal{L}}{\partial\alpha_{i}}=\sum_{k}\frac{\partial\mathcal{L}}{\partial\hat{a}_{k}}\frac{\partial\hat{a}_{k}}{\partial\alpha_{i}}\] \[= C*\frac{\partial\mathcal{L}}{\partial\hat{a}_{i}}\frac{\partial a_{i}}{\partial\alpha_{i}}-\sum_{k}a_{k}\frac{\partial\mathcal{L}}{\partial\hat{a}_{k}}\frac{\sum_{i}\text{GD}(a_{i})}{\left(\sum_{i}a_{i}\right)^{2}}\frac{\partial a_{i}}{\partial\alpha_{i}}\] \[= \frac{\partial a_{i}}{\partial\alpha_{i}}*\left(\frac{\partial\mathcal{L}}{\partial\hat{a}_{i}}-\sum_{k}\frac{a_{k}}{\sum_{j}a_{j}}\frac{\partial\mathcal{L}}{\partial\hat{a}_{k}}\right). \tag{7}\] We can see that our re-parameterization technique introduces an extra term \(\sum_{k}\frac{a_{k}}{\sum_{j}a_{j}}\frac{\partial\mathcal{L}}{\partial\hat{a}_{k}}\) in the gradient. This way, we explicitly introduce interactions among the gating parameters of different layers during gradient computation. **Architectural consistency learning** Note that the final optimized model we want is sparse, with most layers' prompt generators pruned. To close the gap between the hyper-network and the final model, we assign a Bernoulli-distributed random mask \(m_{i}\in\{0,1\}\) with mean value \(s\in(0,1)\) to each learnable probabilistic gate \(a_{i}\). Thus, equation 6 becomes \[\mathbf{p}_{i}=(1-m_{i}*\tau*\hat{a}_{i})*\mathbf{p}_{i}^{(prev)}+m_{i}*\tau*\hat{a}_{i}*\mathbf{p}_{i}^{(new)}. \tag{8}\] Now we ask the same input \(x\) to go through the forward pass twice, once with the architectural masks applied (Equation 8) and once with the architectural masks turned off (Equation 6), resulting in different hidden representations \(h_{x}^{(1)}\) and \(h_{x}^{(2)}\) for the input sample. We then introduce a consistency regularization objective in addition to the task's objective function: \[\mathcal{L}_{c}=\textbf{MSE}(h_{x}^{(1)},h_{x}^{(2)}), \tag{9}\] where **MSE** is the mean squared error loss function. Note that this regularization term is added to both the inner and outer objectives in Equation 4. Footnote 1: We set the coefficient \(\lambda_{c}\) of this term to 1.0. Without further hyper-parameter tuning, this regularization can already help improve the model performance. Our consistency regularization objective is inspired by recent works in consistency learning (Liang et al., 2021). 
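The two techniques can be expressed in a few lines of PyTorch; here `a` collects the gates of all layers, and the hidden states `h1`/`h2` are random stand-ins for the hyper-network outputs of the two forward passes.

```python
import torch
import torch.nn.functional as F

alpha = torch.zeros(25, requires_grad=True)  # one gate logit per layer
a = torch.sigmoid(alpha)                     # gates a_i, equation (2)

# Re-parameterization (equation 5): C equals 1 in value, but detaching the
# numerator couples the gate gradients across layers, as in equation (7).
a_hat = a * (a.detach().sum() / a.sum())

# Architectural consistency masks (equation 8): Bernoulli masks with mean s.
s = 0.5
m = torch.bernoulli(torch.full_like(a_hat, s))
masked_gates = m * a_hat  # used in the masked forward pass

# Consistency loss (equation 9) between the two forward passes.
h1, h2 = torch.randn(4, 16), torch.randn(4, 16)  # stand-in hidden states
loss_consistency = F.mse_loss(h1, h2)
```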
Here, we apply the idea of consistency learning to enhance the optimization process of the learnable probabilistic gates. Intuitively, this regularization term encourages the hyper-network to output consistent hidden states when different sets of prompt generators are pruned. It ensures that each prompt generator is well-trained and bridges the gap between the hyper-network and the final discretized SPT model. As a result, the optimization of \(a_{i}\) can better reflect the contribution of each prompt generator, and thus the final learned model obtains better performance. ## 5 Experiments ### Evaluation datasets We evaluate our method on five single-sentence and five sentence-pair classification tasks, including six tasks from the GLUE benchmark (Wang et al., 2018) and four other popular tasks: MPQA (Wiebe et al., 2005), MR (Pang and Lee, 2005), Subj (Pang and Lee, 2004), and TREC (Voorhees and Tice, 2000). All details about the dataset statistics and evaluation metrics can be found in Appendix B. ### Experiment Settings All experiments are conducted on NVIDIA A40 GPUs. We use PyTorch (Paszke et al., 2019) and HuggingFace's Transformers (Wolf et al., 2020) libraries to implement our SPT method. We evaluate our method in both full-data and few-shot scenarios on three PTM backbones: RoBERTa-large (Liu et al., 2019), DeBERTa-large (He et al., 2020), and GPT2-large (Radford et al., 2019). Unless otherwise specified, the number of prompt layers \(K\) is set to 4, the prompt length \(l\) is 10, and we set \(\tau=0.5\) and the coefficient \(\lambda_{c}\) of the consistency regularization term in Eq 9 to 1.0. That is, our method keeps the previous layer's prompt when inserting new prompts. Moreover, we report the average performance and standard deviation on the test set of the learned SPT model across 5 random seeds under the full-data scenario and 10 random seeds under the few-shot scenario. More implementation details are provided in Appendix C. ### Baselines We compare our SPT framework with the current SOTA baseline methods. **Fine-tune**: the traditional fine-tuning method that trains all parameters of the PTM backbone. **Adapter-based tuning**: we compare with (1) Adapter (Houlsby et al., 2019) and (2) AdapterDrop (Rückle et al., 2020). **Prompt-based tuning**: we compare with (1) Prompt Tuning (Lester et al., 2021), (2) P-tuning v2 (Liu et al., 2022), (3) IDPG (Wu et al., 2022), and (4) LPT (Liu et al., 2022). The prompt length for all these methods is set to 10. **Other PETuning methods**: we also compare with (1) BitFit (Ben-Zaken et al., 2021), (2) LoRA (Hu et al., 2021), and (3) S\({}^{3}\) (Hu et al., 2022). We implement Adapter, AdapterDrop, BitFit, and LoRA using the OpenDelta2 library. Other baselines are implemented using their open-sourced codes. For a fair comparison, we do not use supplementary training like Wu et al. (2022) to enhance performance. Footnote 2: [https://github.com/thunlp/OpenDelta](https://github.com/thunlp/OpenDelta) ### Results in the few-shot scenario We first evaluate our SPT framework in the few-shot scenario. Following Wu et al. (2022) and Liu et al. (2022), we consider three settings where the training set size is 100, 200, and 500, respectively. Under a given random seed, we randomly sample the training samples from the original training set. For every baseline and our SPT method, we run the experiments over 10 different random seeds and report the mean and standard deviation on each task. 
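This few-shot protocol amounts to the following sketch, where `train_set` and `train_and_evaluate` are placeholders for a task's training data and for tuning and scoring the learned SPT model:

```python
import random
import statistics

def few_shot_run(full_train_set, num_samples: int, seed: int) -> float:
    rng = random.Random(seed)
    subset = rng.sample(full_train_set, num_samples)
    return train_and_evaluate(subset)  # placeholder: returns a test-set metric

scores = [few_shot_run(train_set, 100, seed) for seed in range(10)]
print(f"{statistics.mean(scores):.1f} ({statistics.stdev(scores):.1f})")
```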
The development and test sets are the same as in the full-data scenario. The results for the few-shot scenario with 100 samples are presented in Table 1. The results for the few-shot scenarios with 200 and 500 samples are in Table 12 of Appendix E. Our SPT method outperforms all the baseline methods in the few-shot settings. Especially when the training set has only 100 samples, the SPT method outperforms model tuning by 3.3 points on average and Adapter by 3.7 points. Our method also outperforms all the prompt-based baseline methods with comparable or fewer tunable parameters. The results demonstrate that our method can achieve good generalization performance when the training data is very scarce. From Tables 1 and 12, we can see that, although it outperforms the prompt-tuning-based baseline methods, SPT with \(\tau=1\) generally performs less well than SPT with \(\tau=0.5\). This result is intuitive: the prompt propagated from the previous layers carries different semantic or syntactic information (Clark et al., 2019), which can help the current and future layers to better encode the input sample. ### Results in the full-data scenario The overall comparison of our SPT framework and the baselines in the full-data scenario is reported in Table 11 in Appendix D. From the experimental results, we can see that our SPT method can outperform the PETuning baselines with comparable or fewer tunable parameters. We can also observe from Table 11 that: (a) generally, the prompt-based methods are weaker than the adapter-based methods under the full-data settings, especially on sentence-pair tasks, which is consistent with the results from Sun et al. (2022); however, our method overcomes this shortcoming by properly setting the prompt generators at certain intermediate layers. (b) Our SPT method with the learned prompt layer setting is comparable with or outperforms the strong baselines, like AdapterDrop, BitFit, and LoRA, with even fewer tunable parameters. ### Analysis and ablation studies **Visualization and discussions of the learned SPT models** We visualize the learned SPT models on the ten tasks with the RoBERTa-large backbone via a heat map, as depicted in Figure 3. In Figure 3, the rows represent different tasks, and the columns correspond to the layer indices. For each cell, dark green represents a prompt layer, while bright yellow means not. We can observe the following patterns: (a) all the tasks decide to insert prompts after the embedding layer (layer 0) and the first four transformer layers, which is a similar observation to Liu et al. (2022). (b) Layers 10 to 19 of RoBERTa-large are frequently chosen as prompt layers; similarly, Liu et al. (2022) observe that the middle intermediate layers are the best-performing prompt layers. (c) SPT discards the last four layers. We hypothesize that if we set prompt layers among these last layers, the newly generated prompts will not be propagated through enough layers to accumulate useful task-related information. **The effects of the number of prompt layers** In the main experiments (Tables 1 and 11), we mainly set the number of prompt layers \(K\) to 4. To investigate how \(K\) affects SPT's performance, we now run the SPT method with \(K\in\{1,2,8,16\}\). We adjust \(n\) so that different settings of \(K\) have comparable tunable parameters. 
\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & **Tunable** & **SST-2** & **MPQA** & **MR** & **Subj** & **TREC** & **MNLI** & **MRPC** & **QNLI** & **QQP** & **RTE** \\ & **Params** & (acc) & (acc) & (acc) & (acc) & (acc) & (acc) & (acc) & (acc) & (acc) & (acc and f1) & (acc) & **Avg** \\ \hline Model tuning & 355M & 87.6 (1.2) & 80.5 (2.0) & 82.5 (2.5) & 88.6 (1.5) & 89.3 (1.9) & 51.5 (3.3) & 77.3 (1.0) & 79.6 (6.9) & 69.9 (2.1) & 48.6 (3.0) & 74.7 \\ Adapter & 1.6M & 85.3 (1.3) & 80.3 (1.0) & 82.9 (1.5) & 88.7 (1.7) & 47.9 (1.2) & 76.8 (1.4) & 68.5 (2.7) & 67.3 (1.8) & 53.1 (2.4) & 74.5 \\ AdapterDrop & 811.6 & 86.8 (1.8) & 80.3 (9.2) & 83.3 (1.1) & 88.3 (1.2) & 88.9 (2.2) & 45.2 (0.8) & 68.4 (0.9) & 67.4 (3.9) & 65.7 (1.7) & 54.1 (4.8) & 73.5 \\ BitFit & 273.5 & 86.7 (1.9) & 82.0 (1.1) & 83.1 (1.0) & 83.1 (1.6) & 87.3 (1.5) & 76.7 (1.5) & 76.3 (1.5) & 78.4 (1.7) & 69.3 (5.6) & 67.9 (3.9) & 54.1 (2.2) & 75.5 \\ LoRA & 788K & 88.5 (1.3) & 82.3 (1.3) & 83.5 (0.9) & 88.6 (1.4) & 89.9 (0.8) & 51.3 (2.7) & 77.8 (1.7) & 69.9 (5.7) & 70.3 (1.3) & 56.3 (2.0) & 75.8 \\ S\({}^{3}\) & 293K & 89.2 (1.2) & 82.5 (2.5) & 83.4 (0.8) & 99.1 (1.3) & 89.8 (1.5) & 51.8 (1.7) & 73.3 (1.3) & 72.0 (2.6) & 70.6 (1.1) & 56.9 (1.5) & 76.2 \\ Prompt Tuning & 213K & 87.1 (2.7) & 75.5 (8.8) & 82.1 (2.6) & 81.2 (2.6) & 81.5 (3.7) & 46.3 (1.8) & 74.2 (1.3) & 62.5 (2.5) & 59.7 (2.1) & 56.6 (2.5) & 70.5 \\ P-tuning v2 & 985K & 87.8 (0.6) & 78.6 (1.6) & 81.6 (2.1) & 87.7 (1.4) & 84.1 (3.1) & 75.2 (1.6) & 66.2 (3.3) & 66.7 (3.0) & 53.8 (2.3) & 72.3 \\ IDPG & 296K & 88.6 (1.7) & 77.5 (5.8) & 82.7 (1.8) & 86.6 (1.5) & 85.6 (2.7) & 48.8 (1.3) & 76.1 (1.6) & 68.6 (3.1) & 64.5 (1.6) & 55.7 (2.8) & 73.5 \\ LPT & 263K & 89.7 (0.8) & 82.5 (1.4) & 83.3 (1.5) & 89.7 (1.7) & 89.3 (1.8) & 52.5 (2.0) & 78.1 (2.0) & 71.6 (1.7) & 70.8 (1.9) & 57.1 (3.5) & 76.5 \\ \hline \multicolumn{10}{c}{**Our SPT method**} \\ \hline SPT (ours) & 152K & **90.8** (1.0) & **84.5** (1.6) & **84.3** (0.5) & **90.8** (0.9) & **90.5** (1.8) & **54.9** (1.7) & **79.2** (1.5) & **73.2** (2.2) & **72.3** (1.3) & **58.9** (2.3) & **78.0** \\ SPT (\(\tau=1.0\)) & 152K & **90.1** (1.1) & **83.2** (1.5) & **83.6** (1.1) & 89.6 (1.2) & **89.6** (1.2) & **89.4** (2.2) & **53.1** (2.1) & 79.9 (1.9) & 72.1 (2.5) & 71.4 (1.1) & 57.5 (2.1) & 76.6 \\ \hline \hline \end{tabular} \end{table} Table 1: Results in the few-shot scenario of 100 training samples. We report mean and standard deviation of performance across 10 random seeds. Bold and Underline indicate the best and the second best results. All the results are obtained using RoBERTa-large. Figure 3: The heat map representing the chosen prompt layers by our SPT method on each task, using RoBERTa-large as backbone. For each cell, dark green represents a prompt layer, while bright yellow means not. The results of the learned SPT models can be found in Table 2. The results show that: (a) our main setting \(K=4\) performs comparably to or better than other settings with larger \(K\), showing that we cannot achieve performance bumps simply by adding more prompt layers. (b) Note that by learning the placement of a single prompt layer, our SPT (\(K=1\)) model performs comparably to or better than the strong baseline (Liu et al., 2022). The results demonstrate that our method indeed has the ability to discover the proper prompt layers. 
**The effects of prompt length** In the main experiments (Table 1), the prompt length \(l\) for our SPT method and for LPT and IDPG is set to 10, following LPT (Liu et al., 2022). Using the same prompt length for the three methods ensures that their comparison is fair. We now change \(l\) to 5 or 20, and the performances on 4 tasks are reported in Table 14 of Appendix G. From the results, we can see that: (a) our method consistently outperforms the baseline methods under different prompt lengths; (b) our method is less affected by the prompt-length hyper-parameter, which is important for real-world applications, since increasing the prompt length increases the computation quadratically.

**Transferability of the learned PG settings** We now evaluate the transferability of the learned SPT models. In Table 3, we select four datasets, SST-2, MNLI, RTE, and Subj, and treat them as the source or target dataset. We search for the prompt-layer setting on the source dataset and train with the learned prompt layers on the target task. We can see from Table 3 that the transferred prompt-layer settings perform close to the directly learned settings and already achieve better performance than most of the baseline models. This transferability guarantees the re-usability of our SPT framework.

**Working with other pre-trained encoders** To demonstrate that our method's superiority does not rely on a specific pre-trained backbone, we run our SPT method and the baselines on the DeBERTa-large (He et al., 2020) and GPT2-large (Radford et al., 2019) backbones. The results are reported in Table 4 and show that our method works well on both backbones and successfully outperforms the baselines.

**Training efficiency of the SPT framework** Compared with LPT (Liu et al., 2022), our optimization framework consumes 4-5 times the training time and 1.6 times the GPU memory due to the bi-level optimization and multiple forward passes. However, considering that the training is done offline, this is affordable compared to manually designing different prompt-layer settings and running numerous evaluations.

**Inference efficiency** We run inference on the RTE test set with three different methods: prompt tuning, LPT, and the learned SPT model, with batch size 32 and maximum length 128. The memory footprint and speed are recorded in Table 13 (Appendix F). During inference, all three methods consume almost equal GPU memory, and SPT is 3.1% slower than LPT. The results show that our SPT method achieves superior performance while remaining efficient.

**Experiments with large language models** We further conduct the following three groups of experiments with open-sourced large language models.

**Classification tasks** Continuing the experiments in Tables 1 and 4, we first experiment with LLaMA-13B (Touvron et al., 2023), a model with 13 billion parameters, on the SST-2, MNLI, RTE, and Subj tasks. The results are presented in Table 5. We can see that by selecting proper prompt layers, our SPT method successfully helps the LLaMA-13B backbone achieve clear performance gains over the LPT baseline. In addition, we can see that LLMs present strong performance under the few-data settings; however, they require much higher computational complexity and memory costs.

**Other English tasks** We also conduct experiments on other English tasks of different types: (a) COPA, a task focused on commonsense reasoning; (b) ShARe-13, a nested named entity recognition (NER) task within the biomedical domain; (c) MultiArith, a task centered around arithmetic reasoning.
To be consistent with Tables 1, 4, and 5, we sample 100 examples from the training sets of these tasks as our training set. The hyper-parameter settings are the same as in Table 1. The results of these experiments are presented in Table 6. From the table, the following observations can be made: (1) ChatGPT demonstrates strong performance on many NLP tasks without fine-tuning; in contrast, when employing our SPT method with 100 training samples, the fine-tuned LLaMA-2 13B exhibits comparable or superior performance to ChatGPT. (2) On all the above tasks, with comparable tunable parameters, SPT outperforms LoRA and LPT under the few-data setting.

### Validating our SPT-DARTS method

To validate the effectiveness of our SPT-DARTS method, we conduct two experiments.

**Ablation on the hyper-network optimization method** The first experiment substitutes SPT-DARTS with DARTS (Liu et al., 2019) or its variants P-DARTS (Chen et al., 2021) and FairNAS (Chu et al., 2021), or with the \(L_{0}\) regularization method (Louizos et al., 2017). The results are presented in Table 15 (Appendix H.1). We can see that our SPT method obtains better learned SPT models than the other methods.

**SPT-DARTS on the NAS benchmark** Note that the architectural consistency learning regularization of our SPT-DARTS method is generally applicable to neural architecture search. We therefore evaluate SPT-DARTS on the widely used NAS benchmark NAS-Bench-201 (Dong and Yang, 2020), following the same search setting as DARTS. The results are in Table 16 of Appendix H.2 and show that our SPT-DARTS outperforms ENAS and DARTS by a clear margin on NAS-Bench-201.

## 6 Conclusion

In this work, we propose the SPT framework to automatically determine the optimal settings for prompt layers under a given PTM backbone and downstream task. We initialize a prompt hyper-network in which each layer has a prompt generator controlled by a learnable probabilistic gate. To better optimize the prompt hyper-network, we propose a novel SPT-DARTS method containing two novel modifications to the original DARTS bi-level optimization process. Experimental results in full-data and few-shot scenarios demonstrate that SPT achieves comparable or better performance than state-of-the-art PETuning methods while maintaining parameter and inference efficiency.

\begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{1}{c}{COPA} & ShARe-13 & MultiArith \\ & (acc) & (f1) & (acc) \\ \hline ChatGPT & 0.732 & 0.331 & 0.953 \\ LLaMA-2 13B + LoRA & 0.718 & 0.532 & 0.888 \\ LLaMA-13B + LPT & 0.823 & 0.536 & 0.879 \\ \hline LLaMA-13B + SPT & 0.836 & 0.553 & 0.907 \\ \hline \hline \end{tabular} \end{table} Table 6: Results of fine-tuned LLaMA-13B on 3 tasks. The fine-tuning is done in the few-data scenario (100 samples).

\begin{table} \begin{tabular}{c|c|c c c c} \hline \hline \multirow{2}{*}{**Method**} & **Tunable** & **SST-2** & **MNLI** & **RTE** & **Subj** \\ & **Params** & (acc) & (acc) & (acc) & (acc) \\ \hline RoBERTa-large + LPT & 263K & 89.7 (0.8) & 52.5 (2.0) & 57.1 (1.3) & 89.7 (1.7) \\ RoBERTa-large + SPT & 152K & 90.8 (1.0) & 54.9 (1.7) & 58.9 (2.3) & 90.8 (0.9) \\ \hline LLaMA-13B + LPT & 1310K & 90.6 (0.6) & 52.9 (2.3) & 57.9 (3.0) & 90.8 (1.5) \\ LLaMA-13B + SPT & 672K & 91.3 (0.8) & 55.7 (1.9) & 59.6 (2.1) & 91.2 (0.7) \\ \hline \hline \end{tabular} \end{table} Table 5: Results on 4 GLUE tasks using the popular LLM LLaMA-13B. The fine-tuning is done in the few-data scenario (100 samples).
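To make the prompt hyper-network concrete, below is a minimal PyTorch sketch of a prompt generator controlled by a learnable probabilistic gate, in the spirit of the framework described above; the module structure, the mean-pooling choice, and the keep-the-largest-gates rule are our own illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedPromptGenerator(nn.Module):
    """One candidate prompt layer of the hyper-network: a bottleneck
    generator whose output is mixed with the propagated prompt through
    a learnable probabilistic gate. Names and shapes are illustrative."""

    def __init__(self, hidden_size: int, prompt_len: int = 10, bottleneck: int = 32):
        super().__init__()
        self.prompt_len = prompt_len
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, prompt_len * hidden_size)
        # Architecture parameter: sigmoid(alpha) is the gate's open probability.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, hidden_states: torch.Tensor, prev_prompt: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden); prev_prompt: (batch, prompt_len, hidden)
        pooled = hidden_states.mean(dim=1)
        new_prompt = self.up(torch.tanh(self.down(pooled)))
        new_prompt = new_prompt.view(-1, self.prompt_len, hidden_states.size(-1))
        gate = torch.sigmoid(self.alpha)
        # gate -> 1: this layer behaves as a prompt layer (fresh prompt injected);
        # gate -> 0: the propagated prompt passes through unchanged.
        return gate * new_prompt + (1.0 - gate) * prev_prompt

# After hyper-network training, the K layers with the largest gate values
# would be kept as prompt layers and the others discarded.
```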
## Limitations

We showed that our proposed method can greatly improve the performance of prompt tuning on diverse NLU tasks and three different pre-trained models (i.e., RoBERTa-large, DeBERTa-large, and GPT2-large). However, larger-scale pre-trained models with tens of billions or more parameters were not studied due to limited computational resources. In addition, a more comprehensive range of tasks, such as text generation, should be investigated. Our framework can be easily transferred to other backbone architectures and different types of tasks, and we are eager to validate it in a broader range of scenarios in future work.

## Ethics Statement

Our proposed method aims to improve prompt tuning in terms of performance under a budget of parameter efficiency. The datasets we experiment with are widely used in previous work and, to our knowledge, do not have any attached privacy or ethical issues.
2309.12790
NTO3D: Neural Target Object 3D Reconstruction with Segment Anything
Neural 3D reconstruction from multi-view images has recently attracted increasing attention from the community. Existing methods normally learn a neural field for the whole scene, while it is still under-explored how to reconstruct a target object indicated by users. Considering the Segment Anything Model (SAM) has shown effectiveness in segmenting any 2D images, in this paper, we propose NTO3D, a novel high-quality Neural Target Object 3D (NTO3D) reconstruction method, which leverages the benefits of both neural field and SAM. We first propose a novel strategy to lift the multi-view 2D segmentation masks of SAM into a unified 3D occupancy field. The 3D occupancy field is then projected into 2D space and generates the new prompts for SAM. This process is iterative until convergence to separate the target object from the scene. After this, we then lift the 2D features of the SAM encoder into a 3D feature field in order to improve the reconstruction quality of the target object. NTO3D lifts the 2D masks and features of SAM into the 3D neural field for high-quality neural target object 3D reconstruction. We conduct detailed experiments on several benchmark datasets to demonstrate the advantages of our method. The code will be available at: https://github.com/ucwxb/NTO3D.
Xiaobao Wei, Renrui Zhang, Jiarui Wu, Jiaming Liu, Ming Lu, Yandong Guo, Shanghang Zhang
2023-09-22T11:02:57Z
http://arxiv.org/abs/2309.12790v2
# NOC: High-Quality Neural Object Cloning with 3D Lifting of Segment Anything

###### Abstract

With the development of the neural field, reconstructing the 3D model of a target object from multi-view inputs has recently attracted increasing attention from the community. Existing methods normally learn a neural field for the whole scene, while it is still under-explored how to reconstruct a certain object indicated by users on-the-fly. Considering that the Segment Anything Model (SAM) has shown effectiveness in segmenting any 2D image, in this paper we propose Neural Object Cloning (NOC), a novel high-quality 3D object reconstruction method, which leverages the benefits of both the neural field and SAM from two aspects. Firstly, to separate the target object from the scene, we propose a novel strategy to lift the multi-view 2D segmentation masks of SAM into a unified 3D variation field. The 3D variation field is then projected into 2D space to generate new prompts for SAM. This process is iterated until convergence to separate the target object from the scene. Then, apart from 2D masks, we further lift the 2D features of the SAM encoder into a 3D SAM field in order to improve the reconstruction quality of the target object. NOC lifts the 2D masks and features of SAM into the 3D neural field for high-quality target object reconstruction. We conduct detailed experiments on several benchmark datasets to demonstrate the advantages of our method. The code will be released.

## 1 Introduction

The neural field has made significant progress over the past few years and has become one of the most popular 3D representations. The pioneering Neural Radiance Field (NeRF) [33] and its variants [35; 47; 2; 64; 10] learn neural networks that regress density and color from an input position and use volume rendering to conduct novel view synthesis. NeuS [52] improves the 3D object reconstruction of NeRF by representing a surface with a Signed Distance Function (SDF) and develops a new volume rendering method to train the neural SDF representation. Many studies have been proposed to improve the reconstruction quality and reduce the training cost [56; 49]. However, existing methods usually learn a neural field for the whole scene, ignoring the reconstruction quality of a particular object in the scene, which can be indicated by users on-the-fly. Although traditional techniques have been proposed for target 3D reconstruction, such as in-hand scanning [55], the task is still non-trivial for the neural field, since we need to obtain multi-view consistent target segmentation, which is labor-intensive and time-consuming. Recently, the Segment Anything Model (SAM) [20] has shown promising potential for zero-shot segmentation, which can help segment one target object out of a scene. However, with a single human prompt, SAM can only achieve 2D segmentation within a single image rather than across multiple views. In addition, how to leverage the pre-trained features of the SAM encoder to improve reconstruction quality remains, to the best of our knowledge, an open question. To address these issues, we propose Neural Object Cloning (NOC), a novel high-quality 3D target object reconstruction method that leverages the benefits of both the neural field and SAM. Specifically, to separate the target object designated by users from the neural field, we train a unified 3D variation field to merge the multi-view 2D segmentation masks.
Our variation field is based on two assumptions: (1) if a pixel is foreground, then at least one of the positions passed through by its ray is foreground; (2) if a pixel belongs to the background, then all the positions passed through by its ray are background. We design a corresponding loss for these assumptions to optimize the variation field, lifting the 2D masks into a 3D neural field. The variation field is then projected into 2D space to generate new prompts for SAM. This process is iterated until convergence, finally separating the target object from the scene. After this, to improve the reconstruction quality with the aid of SAM's semantics, we further lift the features of the SAM encoder into a 3D SAM field. We introduce a lightweight output head to the neural field for learning the features and use volume rendering to render the 2D features. The 3D SAM field is directly supervised with the SAM-generated 2D features. By lifting the SAM-generated 2D features, our method can reconstruct more accurate 3D models of target objects. Our main contributions are summarized as follows:

* We propose NOC, a novel method that iteratively lifts SAM's 2D masks of single images into a unified 3D variation field, segmenting the target object out of the multi-view scene. In this way, we can reconstruct any object prompted by users without any mask annotation.
* To leverage the pre-trained visual semantics in SAM, we present an effective strategy to lift SAM's 2D features into a 3D SAM field via knowledge distillation, which boosts the reconstruction quality of the target object.
* We conduct detailed experiments on the DTU and LLFF datasets, where our NOC achieves competitive rendering quality (33.06 PSNR) and surpasses state-of-the-art reconstruction methods with a Chamfer distance of 0.73, demonstrating the advantages of our method.

## 2 Related Work

**Neural Implicit Representation.** Neural implicit representation, also known as the neural field, has recently emerged in the field of neural processing of 3D data and multi-view 2D images [29; 38; 61; 62]. It employs coordinate-based neural networks to encode continuous signals over arbitrary dimensions at any resolution. These representations have demonstrated promising results in shape representation [29; 30; 38; 5; 1; 11; 63; 39], novel view synthesis [46; 27; 16; 32; 25; 41; 42; 51], and multi-view 3D reconstruction [61; 36; 17; 15; 26]. In particular, Neural Radiance Fields (NeRF) [33] and its variants [35; 47; 2; 64; 10] enable the recovery of continuous volume density and color from a limited number of observations, facilitating high-resolution image generation from arbitrary views through volume rendering. To enhance the surface reconstruction quality of NeRF, NeuS [52] leveraged a Signed Distance Function (SDF) to represent surfaces. Follow-up methods [56; 49] have been proposed to reduce training costs and improve reconstruction quality. However, there is a lack of effective methods for target 3D reconstruction in neural representations, particularly for achieving multi-view consistency of the segmentation maps of a target object. In this study, we introduce a unified 3D variation field to separate a target object from the neural field, utilizing multi-view 2D segmentation masks obtained from the zero-shot Segment Anything Model (SAM) [20].

**2D to 3D Lifting.** The problem of lifting 2D to 3D has been extensively studied in the field of 3D vision, with numerous methods proposed for various applications.
For instance, [59; 60] reconstruct 3D scenes from multiple 2D images by inferring the depth and geometry of the scene; [28; 48; 21] predict the 3D pose or shape of a person from a single or multiple 2D images; and [7; 22] focus on lifting object representations from 2D images to 3D space for better object understanding and reconstruction. Traditional and neural methods have been explored for lifting 2D images to 3D representations, including voxel volumes [57; 8], bird's-eye-view maps [24], 3D point clouds [13; 34], and neural implicit representations [61; 33; 52]. Although 2D-to-3D lifting is well studied, lifting multi-view 2D segmentation into a 3D neural field is, as far as we know, under-explored. In this work, we propose a method to lift the 2D SAM outputs into a variation field and a 3D SAM field for high-quality target object 3D reconstruction.

**Image Segmentation.** Different from classification and detection, segmentation involves pixel-level interpretation and dense prediction for input images. Many efforts have been made on different segmentation tasks, including semantic segmentation for per-pixel classification [4; 69; 6; 58], instance segmentation for identifying individual object instances [12; 53; 50], and panoptic segmentation, which combines semantic and instance segmentation [19; 23]. Various models and techniques have been developed for segmentation, including encoder-decoder structures [40], dilated convolutions [65], pyramid structures [68; 3], attention mechanisms [23], and transformers [9; 58]. Recently, the Segment Anything Model (SAM) [20] and its variants [71; 54; 66] have demonstrated strong zero-shot generalization ability, enabling consistent multi-view 2D segmentation masks for diverse real-world target objects in a given scene. However, SAM is currently limited to 2D segmentation of a single image, which is insufficient for achieving multi-view consistent segmentation masks for target object reconstruction in a neural field. Moreover, the potential of utilizing the encoder features of the SAM model as a foundation for improving reconstruction quality remains under-explored. In this study, we aim to leverage SAM to achieve multi-view consistent 2D masks for objects in a given scene. To achieve this, we iteratively lift the 2D masks to a 3D field to enforce consistency and generate prompts by projecting the 3D field back to 2D space. We also utilize the features extracted by the SAM model to improve the reconstruction quality of the target object.

## 3 Method

In this section, we first briefly review neural surface reconstruction and the Segment Anything Model (SAM) [20]. Subsequently, we elaborate on the pipeline of the proposed Neural Object Cloning (NOC). Finally, we further elucidate the novel designs incorporated in NOC.

### Preliminaries

In recent neural implicit representations, Neural Radiance Fields (NeRF) [33] and NeuS [52] both exploit coordinate-based scene representations. They first construct a mapping function from spatial location \(\mathbf{x}\in\mathbb{R}^{3}\) and view direction \(\mathbf{d}\in\mathbb{R}^{2}\) to color \(\mathbf{c}\in\mathbb{R}^{3}\) and volume density \(\sigma\).

Figure 1: The overall pipeline of NOC. First, the user specifies the target object to be reconstructed and sends prompts to SAM for segmentation on the initial view. With multi-view images as input, we train the Unified Variation Field iteratively to lift cross-view masks into 3D space. When the Unified Variation Field converges to high-quality masks of the target object, we finetune the pre-trained neural field on the masked images and distill SAM encoder features into 3D space to obtain better reconstruction quality.
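As a small illustration of the prompt aggregation used in this loop, the sketch below derives a point prompt and a box prompt from a rendered coarse binary mask. It simplifies the clustering-based strategy of Sec. 3.2 to a single centroid, so it is an illustrative stand-in rather than the authors' implementation.

```python
import numpy as np

def mask_to_prompts(mask: np.ndarray):
    """Turn a (coarse) binary mask into SAM-style prompts: a point at the
    foreground centroid and the minimum axis-aligned bounding box.
    A simplified stand-in for the cluster-center strategy of Sec. 3.2."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, None                       # empty mask: nothing to prompt
    point = np.array([xs.mean(), ys.mean()])    # (x, y) center prompt
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])  # x0, y0, x1, y1
    return point, box

# Toy usage: a rectangular foreground blob.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:30] = True
print(mask_to_prompts(mask))  # -> (array([19.5, 29.5]), array([10, 20, 29, 39]))
```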
Different from NeRF, NeuS replaces the density field in the vanilla NeRF with a signed distance field (SDF). We can extract the geometry surface \(\mathbf{S}\) of the scene as the zero-level set of the SDF values:

\[\mathbf{S}=\{\mathbf{x}\in\mathbb{R}^{3}\,|\,\mathbf{f}_{sdf}(\mathbf{x})=0\}, \tag{1}\]

where \(\mathbf{f}_{sdf}\) is the signed distance function. Based on this function, we can further calculate the opaque density \(\rho\) and the opacity values \(\alpha\). Finally, the pixel color \(\mathbf{\hat{C}}\) of a ray \(\mathbf{t}\) can be computed by the classical volume rendering function:

\[\mathbf{\hat{C}}(\mathbf{t})=\sum_{i=1}^{n}\mathbf{T}(\mathbf{t}_{i})\alpha(\mathbf{t}_{i})\mathbf{c}(\mathbf{t}_{i}), \tag{2}\]

where \(n\) is the number of sample points along one ray and \(\mathbf{T}\) represents the discrete accumulated transmittance, defined as \(\mathbf{T}_{i}=\Pi_{j=1}^{i-1}(1-\alpha_{j})\).
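To make Eq. 2 concrete, here is a minimal NumPy sketch of the discrete compositing step; the per-sample opacities and colors are assumed to be given by the network, and the function name is our own.

```python
import numpy as np

def render_ray_color(alpha: np.ndarray, color: np.ndarray) -> np.ndarray:
    """Composite per-sample colors along one ray (Eq. 2).

    alpha: (n,) opacity of each of the n samples on the ray.
    color: (n, 3) RGB color of each sample.
    Returns the rendered pixel color (3,).
    """
    # T_i = prod_{j<i} (1 - alpha_j): transmittance up to sample i.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha                      # contribution of each sample
    return (weights[:, None] * color).sum(axis=0)

# Example with 4 samples on a ray.
alpha = np.array([0.1, 0.3, 0.5, 0.2])
color = np.random.rand(4, 3)
pixel = render_ray_color(alpha, color)
```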
### Overall Pipeline

As shown in Fig. 1, based on Instant-NSR, NOC is composed of two major techniques: (1) the Unified Variation Field (Sec. 3.2) and (2) the 3D SAM Feature Field (Sec. 3.3). With the user's specified prompts, we obtain masks generated by SAM in the initial view. We first pre-train the Unified Variation Field on the initial view and a full-scene Instant-NSR [67] on all views. Based on this pre-training, the Unified Variation Field can render coarse masks in the other views and further aggregate prompts sent to SAM. The more precise 2D masks generated by SAM then supervise the variation field and iteratively refine the rendered masks until convergence. Once we have obtained accurate masks of the target object in all views, we finally finetune the full-scene Instant-NSR on the foreground object. For better rendering and reconstruction quality, we further distill 2D features from the SAM encoder into the 3D SAM Feature Field.

### Unified Variation Field

In this section, we introduce a unified variation field to lift 2D segmentation masks from different views into 3D space. The unified variation field can be used to identify foreground and background voxels and to project voxels to pixels as 2D masks. By constructing the variation field, we can conveniently obtain view-consistent masks within a short time. The core of the proposed unified variation field is to use the multi-view pixel-level precise masks generated by SAM as supervision to estimate the voxel-level mask, which is further illustrated in Fig. 2.

Figure 2: The illustration of the Unified Variation Field. Implicit interaction between multiple rays decides which point is foreground or background. For a background ray, all points on it belong to the background; for a foreground ray, at least one point on it is foreground.

Formally, given a pixel's camera ray \(\mathbf{r}(t)=\mathbf{o}+t\cdot\mathbf{d}\) with camera position \(\mathbf{o}\) and view direction \(\mathbf{d}\), the corresponding pixel mask is defined as \(\mathbf{M}(\mathbf{r})\), and its supervision is the precise mask \(\mathbf{M}_{SAM}\) generated by SAM on image \(\mathbf{I}\). Although SAM supports a variety of prompts as input, including points, boxes, masks, and text, we take points and boxes as prompts, which works better and saves computation memory. We compute the clustering centers and minimum bounding rectangles of the 2D masks as points \(\mathbf{Point}\) and boxes \(\mathbf{Box}\) and combine them as prompts. Then SAM encodes image \(\mathbf{I}\) and prompts \(\mathbf{P}=(\mathbf{Point},\mathbf{Box})\) as features and decodes the features into more fine-grained masks \(\mathbf{M}_{SAM}\). The process can be formulated as:

\[\mathbf{M}_{SAM}=Dec_{M}(Enc_{I}(\mathbf{I}),Enc_{P}(\mathbf{P})), \tag{3}\]

where \(Enc_{I}\), \(Enc_{P}\) and \(Dec_{M}\) are the image encoder, prompt encoder and mask decoder in SAM, respectively. Next, \(\mathbf{M}_{SAM}\) is used as the pseudo label to supervise the voxel-level masks \(\mathbf{M}_{V}\). The relations between the pixel-level masks \(\mathbf{M}\) and the voxel-level masks \(\mathbf{M}_{V}\) lie in the following aspects: (1) if a pixel belongs to the foreground, then at least one of the positions passed through by the ray is foreground; (2) if a pixel belongs to the background, then all the positions passed through by the ray are background. With the above two assumptions, we design a unified variation field \(\mathbf{V}\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}}\) to lift the multi-view 2D pseudo masks \(\mathbf{M}_{SAM}\) and render 2D predictive masks \(\mathbf{M}\). Specifically, we can simply apply a maximization operation to formulate the relation between the \(N_{s}\) points and the ray \(\mathbf{r}\) they belong to:

\[\mathbf{M}(\mathbf{r})=\mathrm{sigmoid}(\max(\{\mathbf{V}_{\mathbf{r}(s_{i})}\cdot\omega_{\mathbf{r}(s_{i})}\,|\,i\in\{1,\dots,N_{s}\}\})), \tag{4}\]

where \(\omega_{\mathbf{r}(s_{i})}\) represents point-wise weights, which are computed from the SDF field with stopped gradients. Then we can train the unified variation field with the binary cross-entropy loss:

\[\mathbf{L}_{v}=\frac{1}{\mathbf{R}}\sum_{\mathbf{r}}-\mathbf{M}_{SAM}\cdot\log(\mathbf{M})-(1-\mathbf{M}_{SAM})\log(1-\mathbf{M}). \tag{5}\]

By minimizing this loss function, we can transfer the foreground segmentation ability from SAM to the variation field. At the beginning of the 3D lifting iterations, the predicted masks may exhibit defects or be excessively large due to the incomplete convergence of the variation field, but the pseudo masks generated by SAM are mostly correct thanks to proper prompting. After several iterations of 3D lifting, the variation field corrects erroneous predictions, ensuring stable and view-consistent 2D masks.
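As an illustration of how the two ray assumptions become a differentiable objective, the following PyTorch sketch implements Eqs. 4 and 5 for a batch of rays; the tensor shapes and the function name are our own assumptions, not the authors' code.

```python
import torch

def variation_field_loss(field_vals: torch.Tensor,
                         render_weights: torch.Tensor,
                         sam_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of Eqs. 4-5 (shapes are illustrative assumptions).

    field_vals:     (R, N_s) variation-field values V at the samples of R rays.
    render_weights: (R, N_s) point-wise weights from the SDF field.
    sam_mask:       (R,) SAM pseudo-labels in {0, 1} for the R pixels.
    """
    # Weights come from the SDF branch with gradients stopped (Eq. 4).
    w = render_weights.detach()
    # A foreground ray needs only one foreground sample -> max over the ray.
    logits, _ = (field_vals * w).max(dim=-1)     # (R,)
    pred = torch.sigmoid(logits)                 # M(r) in Eq. 4
    # Binary cross-entropy against the SAM pseudo-mask (Eq. 5).
    eps = 1e-6
    return -(sam_mask * torch.log(pred + eps)
             + (1 - sam_mask) * torch.log(1 - pred + eps)).mean()
```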
### 3D SAM Feature Field

NeRF and NeuS learn a neural field to store the density and color of spatial points. Existing works such as SemanticNeRF [70] and PanopticLifting [44] add another semantic branch to learn semantic and instance knowledge from 2D segmentation models. However, as a foundation segmentation model, SAM surpasses all previous segmentation models and contains abundant knowledge. To better leverage the dense knowledge learned by SAM, we add a lightweight output branch to the neural field and lift the features of the SAM encoder into 3D space. This branch takes 3D points, geometry features, and color features as input and outputs the semantic features of each spatial point \(\mathbf{f}(\mathbf{t}_{i})\). Similar to the color branch, volume rendering of the 3D SAM feature field can be formulated as:

\[\mathbf{\hat{F}}(\mathbf{t})=\sum_{i=1}^{n}\mathbf{T}(\mathbf{t}_{i})\alpha(\mathbf{t}_{i})\mathbf{f}(\mathbf{t}_{i}). \tag{6}\]

To optimize the feature field \(\mathbf{f}\), we adopt the L1 loss between the rendered features \(\mathbf{\hat{F}}(\mathbf{t})\) and the SAM encoder features \(Enc_{I}(\mathbf{I})\), which distills knowledge from SAM and helps boost reconstruction quality:

\[\mathbf{L}_{f}=\frac{1}{\mathbf{R}}\sum_{\mathbf{r}}\left\|\mathbf{\hat{F}}(\mathbf{r})-Enc_{I}(\mathbf{I})\right\|_{1}. \tag{7}\]

As for the color branch and the SDF branch, we follow previous work and adopt the photometric loss and the Eikonal loss [11], respectively, to supervise their training:

\[\mathbf{L}_{c}=\frac{1}{\mathbf{R}}\sum_{\mathbf{r}}\left\|\mathbf{\hat{C}}(\mathbf{r})-\mathbf{C}(\mathbf{r})\right\|_{2}^{2},\quad\mathbf{L}_{eik}=\frac{1}{\mathbf{R}\cdot N_{s}}\sum_{\mathbf{r},i}(\|\mathbf{n}_{\mathbf{r},i}\|_{2}-1)^{2}, \tag{8}\]

where \(\mathbf{n}_{\mathbf{r},i}\) denotes the SDF gradient at the \(i\)-th sample of ray \(\mathbf{r}\). Finally, we minimize the weighted sum of the above losses:

\[\mathbf{L}_{total}=\mathbf{L}_{c}+\lambda_{eik}\mathbf{L}_{eik}+\lambda_{f}\mathbf{L}_{f}+\lambda_{v}\mathbf{L}_{v}. \tag{9}\]
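A minimal sketch of the feature distillation in Eqs. 6 and 7, with the total objective of Eq. 9 indicated in a comment; the shapes, names, and loss weights are illustrative assumptions.

```python
import torch

def sam_feature_loss(point_feats: torch.Tensor,
                     weights: torch.Tensor,
                     sam_feats: torch.Tensor) -> torch.Tensor:
    """Sketch of Eqs. 6-7 (shapes are illustrative assumptions).

    point_feats: (R, N_s, C) features f(t_i) from the lightweight branch.
    weights:     (R, N_s)    rendering weights T(t_i) * alpha(t_i).
    sam_feats:   (R, C)      SAM encoder features sampled at the R pixels.
    """
    rendered = (weights.unsqueeze(-1) * point_feats).sum(dim=1)  # Eq. 6
    return (rendered - sam_feats).abs().mean()                   # L1, Eq. 7

# Total objective of Eq. 9, with assumed weights lambda_*:
# loss = l_color + lam_eik * l_eik + lam_f * sam_feature_loss(...) + lam_v * l_var
```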
## 4 Experiments

We conduct extensive experiments to evaluate the effectiveness of our proposed method for reconstructing the 3D model of a target object from multi-view inputs. In Section 4.1, we describe the experimental settings. In Section 4.2, we quantitatively compare the proposed method with other state-of-the-art rendering approaches [52; 32; 61; 37] on the DTU dataset [14], and the qualitative analysis on the LLFF [31] dataset is presented in Section 4.3. We conduct a comprehensive ablation study in Section 4.4 to evaluate the contribution of each component. In addition, we provide more quantitative comparisons and qualitative analysis in the Appendix.

### Experimental Settings

**Datasets.** For the **quantitative comparison**, following previous works [61; 52], we evaluate our proposed method on the 15 selected scenes from the DTU dataset [14], which contains sufficient materials and challenging cases for reconstruction methods. Specifically, each scene has 64 or 49 images with a resolution of \(1600\times 1200\). Since the DTU dataset provides foreground mask ground truth, we first compare the quality of the masks generated by the proposed variation field and SAM during training and testing. We then divide the rendering and reconstruction evaluation of the baselines into two settings: training with mask supervision (w/ mask) and without mask supervision (w/o mask). For the **qualitative comparison**, following previous work [32], we show visualizations on 8 challenging scenes from the LLFF [31] dataset, which contains complex real-world scenes captured with roughly forward-facing images. Each scene has 20 to 62 images with a fixed resolution of \(1008\times 756\), and we randomly select 1/8 of the images to construct the test set. Since LLFF does not contain object-specific masks as ground truth, we manually annotate one object per scene and compare the visual quality with the ground truth.

**Implementation details.** We follow the implementation details specified in NeuS [52] and Instant-NSR [67]. We adopt the Instant-NSR network architecture, which consists of two MLPs and a multi-resolution hash table to encode the SDF and color, respectively. We utilize the Adam optimizer [18] with \((\beta_{1},\beta_{2})=(0.9,0.999)\) to update our neural networks; the learning rate warms up from 0 to \(1\times 10^{-3}\) in the first 5k iterations and is then controlled by a linear decay scheme down to a final learning rate of \(1\times 10^{-5}\). We set the number of rays to 4096 and sample 80 points for each ray. We first briefly pre-train NOC on the full scene and then iteratively train the unified variation field for 3 pseudo-mask updates at 1k-iteration intervals. Finally, we finetune NOC for 20k iterations. All experiments are conducted on NVIDIA 3090 GPUs. More implementation details are reported in the Appendix.

Figure 3: Illustration of iterative mask lifting, in which \(M_{V}\) represents the prediction masks generated by the Unified Variation Field and \(M_{SAM}\) indicates the masks provided by SAM based on prompts. Given the user's prompts for a specific object, the Unified Variation Field renders a coarse mask in another view, which leads to bad prompts for SAM and defective masks. But the Unified Variation Field lifts 2D masks from all views into 3D space and efficiently corrects its false judgments of voxels in other views. With the iterative training, \(M_{V}\) and \(M_{SAM}\) begin to shrink and finally converge to the same masks.

### Quantitative Comparison

**Mask quality.** We first compare the masks generated by our method with the foreground mask ground truth in DTU. As shown in Table 1 and Fig. 3, after iterative training, NOC achieves mask quality approaching the ground truth. On one hand, the precise masks generated by SAM guide the variation field in distinguishing foreground and background voxels effectively. On the other hand, the variation field lifts the 2D masks to 3D space, enabling the generation of cross-view prompts that exhibit view consistency even with only a one-shot annotated view.

**Novel View Synthesis.** We compare our method with previous SOTA volume rendering approaches, including NeRF [32] and NeuS [52]. We hold out 10% of the images in the DTU dataset as the testing set and use the rest as the training set. During training, we split the baselines into two settings: training without ground-truth masks and with ground-truth masks. For NOC, we train with the masks generated by SAM after iterative lifting. We compare the synthesis quality on the testing set with masks in terms of PSNR and SSIM. During testing, we use the masks predicted by each method to mask the rendered images and then compute the metrics between the prediction and the masked ground truth. As shown in Table 2, the baselines trained without object masks render at lower quality than those trained with object masks. Our method achieves comparable performance after iterative mask lifting and SAM feature distillation, showing improvements in PSNR and SSIM.
The results demonstrate that our method can lift the 2D knowledge from SAM into the 3D neural field and boost the view synthesis quality of a target object.

\begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c c c} \hline \hline **Scan ID** & 24 & 37 & 40 & 55 & 63 & 65 & 69 & 83 & 97 & 105 & 106 & 110 & 114 & 118 & 122 & Mean \\ \hline \multicolumn{17}{c}{_Train w/o mask setting_} \\ PSNR(NeuS) & 23.46 & 26.01 & 28.34 & 22.41 & 14.99 & 16.66 & 24.09 & 15.37 & 19.87 & 15.10 & 26.74 & 24.73 & 28.91 & 35.92 & 35.20 & 23.85 \\ PSNR(NeRF) & 21.13 & 23.00 & 18.91 & 22.97 & 25.55 & 12.7 & 22.76 & 25.19 & 28.54 & 35.29 & 18.31 & 17.91 & 21.44 & 21.48 & 22.68 & 22.52 \\ \hline SSIM(NeuS) & 0.876 & 0.611 & 0.911 & 0.903 & 0.897 & 0.744 & 0.844 & 0.840 & 0.856 & 0.826 & 0.838 & 0.871 & 0.914 & 0.968 & 0.972 & 0.858 \\ SSIM(NeRF) & 0.791 & 0.860 & 0.817 & 0.853 & 0.921 & 0.798 & 0.829 & 0.938 & 0.932 & 0.960 & 0.757 & 0.742 & 0.826 & 0.781 & 0.821 & 0.842 \\ \hline \multicolumn{17}{c}{_Train w/ mask setting_} \\ PSNR(NeuS) & 28.38 & 23.75 & 30.47 & 29.52 & 29.83 & 32.29 & 29.01 & 32.28 & 28.39 & 29.43 & 32.26 & 36.15 & 30.58 & 36.22 & 33.60 & 30.81 \\ PSNR(Ours) & 28.71 & 26.96 & 30.92 & 31.87 & 34.11 & 32.92 & 31.61 & 36.98 & 28.07 & 33.10 & 35.98 & 36.85 & 32.64 & 37.69 & 37.55 & **33.06** \\ \hline SSIM(NeuS) & 0.887 & 0.891 & 0.917 & 0.958 & 0.951 & 0.969 & 0.935 & 0.969 & 0.920 & 0.924 & 0.954 & 0.970 & 0.942 & 0.969 & 0.968 & 0.942 \\ SSIM(Ours) & 0.897 & 0.913 & 0.925 & 0.968 & 0.958 & 0.956 & 0.948 & 0.969 & 0.920 & 0.956 & 0.979 & 0.973 & 0.967 & 0.969 & 0.980 & **0.952** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparisons with other methods on the task of novel view synthesis. Mean represents the average value of PSNR and SSIM.

\begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c c|c} \hline \hline **Scan ID** & 24 & 37 & 40 & 55 & 63 & 65 & 69 & 83 & 97 & 105 & 106 & 110 & 114 & 118 & 122 & Mean \\ \hline \multicolumn{17}{c}{_Train w/o mask setting_} \\ COLMAP & 0.81 & 2.05 & 0.73 & 1.22 & 1.79 & 1.58 & 1.02 & 3.05 & 1.40 & 2.05 & 1.00 & 1.32 & 0.49 & 0.78 & 1.17 & 1.36 \\ NeRF & 1.90 & 1.60 & 1.85 & 0.58 & 2.28 & 1.27 & 1.47 & 1.67 & 2.05 & 1.07 & 0.88 & 2.53 & 1.06 & 1.15 & 0.96 & 1.49 \\ UNISURF & 1.32 & 1.36 & 1.72 & 0.44 & 1.35 & 0.79 & 0.80 & 1.49 & 1.37 & 0.89 & 0.59 & 1.47 & 0.46 & 0.59 & 0.62 & 1.02 \\ NeuS & 1.00 & 1.37 & 0.93 & 0.43 & 1.10 & 0.65 & **0.57** & 1.48 & 1.09 & 0.83 & 0.52 & 1.20 & **0.35** & 0.49 & 0.54 & 0.84 \\ \hline \multicolumn{17}{c}{_Train w/ mask setting_} \\ IDR & 1.63 & 1.87 & 0.63 & 0.48 & 1.04 & 0.79 & 0.77 & 1.33 & 1.16 & 0.76 & 0.67 & **0.90** & 0.42 & 0.51 & 0.53 & 0.90 \\ NeuS & 0.83 & **0.98** & **0.56** & 0.37 & 1.13 & 0.59 & 0.60 & 1.45 & 0.95 & 0.78 & 0.52 & 1.43 & 0.36 & 0.45 & **0.45** & 0.77 \\ Ours & **0.82** & 1.14 & 0.60 & **0.35** & **1.01** & **0.53** & 0.63 & **1.31** & **0.86** & **0.73** & **0.51** & 1.15 & 0.45 & **0.42** & 0.46 & **0.73** \\ \hline \hline \end{tabular} \end{table} Table 3: Chamfer distance comparisons with other methods on the DTU dataset. COLMAP results are achieved with trim=0.

**Surface Reconstruction.** Besides, we also measure the reconstruction quality with Chamfer distances and compare NOC with IDR [61], NeRF [32], COLMAP [43], UNISURF [37], and NeuS [52], as shown in Table 3.
As with novel view synthesis, training without masks introduces more background noise into the neural field, while with the help of the variation field and the feature field, our approach reduces the Chamfer distance to 0.73 and outperforms the baseline methods. The results demonstrate that neural reconstruction can benefit from the NOC pipeline.

### Qualitative Analysis

We conduct qualitative comparisons on the DTU and LLFF datasets. As shown in Fig. 4 and Fig. 5, we provide reference images and the user's prompts on the initial view. On DTU, we compare the visual quality with NeRF, NeuS trained without masks, and NeuS trained with masks. NeRF shows the worst visual quality, since it models the whole scene and lacks surface constraints. Although NeuS contains a background model, which helps it focus on the foreground objects, its reconstruction results still inevitably include background near the target objects. With the ground-truth segmentation masks, NeuS is able to segment the object according to those masks but fails to reconstruct the object according to the user's intention. Thanks to the Unified Variation Field, NOC can generate high-quality masks of foreground objects through iterative lifting. It does not rely on arduous annotation work on all views and needs only a one-shot annotation. Additionally, we can see that NOC reconstructs surfaces of higher quality with the help of the 3D SAM feature field. On LLFF, we choose one object in each scene as the target object. Since the dataset does not contain pixel-level masks for each object, we present the visual quality here. Whether the selected object is minor or prominent in the scene, NOC can segment the target object according to the user's wishes and obtain impressive reconstruction quality. Besides, we also provide results for one object among several foreground objects in the last two columns. This further demonstrates that, with the help of NOC, we are able to reconstruct anything in the scene. Due to space limitations, more qualitative images are included in the supplementary material.

Figure 4: Qualitative comparison on DTU.

### Ablation study

In the ablation study, we further examine the influence of the proposed Unified Variation Field and 3D SAM Feature Field. All experiments in the ablation study are conducted on the DTU dataset and averaged over all scenes. Since NOC is based on Instant-NSR, we use it as the baseline. As shown in Tab. 4, with the aid of the Unified Variation Field, Instant-NSR is not influenced by the background and focuses on the reconstruction of the target objects. Since the SAM encoder contains high-level knowledge, the 2D foundation-model features distilled into 3D space help boost the reconstruction quality. With the two proposed contributions, NOC can efficiently segment and reconstruct the target object indicated by users in the scene.

## 5 Conclusion

To boost the reconstruction and rendering quality of a target object in a scene, this paper applies the Segment Anything Model to aid 3D object reconstruction. The proposed method, Neural Object Cloning (NOC), first leverages a unified 3D variation field to lift the multi-view 2D segmentation masks generated by SAM. With the help of the proposed variation field, NOC is capable of segmenting target objects and eliminating background interference. To boost the reconstruction quality, we further propose a 3D SAM field to lift pixel-level features into voxel space.
Finally, we conduct several experiments on the DTU and LLFF datasets and demonstrate that NOC obtains better reconstruction quality. For objects with highly complex textures, given the current state of neural reconstruction, NOC may not be able to achieve desirable reconstruction results. However, our work has demonstrated the potential of large foundation models in aiding neural fields. In the future, we will further explore the connection between neural radiance fields and large foundation models.

\begin{table} \begin{tabular}{l l l l} \hline \hline Variant & PSNR & SSIM & CD \\ \hline Instant-NSR & 29.94 & 0.8914 & 0.82 \\ \hline + Unified Variation Field & 32.44 & 0.9319 & 0.76 \\ + 3D SAM Feature Field & 30.76 & 0.9136 & 0.78 \\ \hline NOC (ours) & 33.06 & 0.9520 & 0.73 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study of NOC. CD indicates the Chamfer distance.**

Figure 5: Qualitative comparison on LLFF and the fruit scene in DTU. These scenes have the following characteristics: the foreground consists of multiple independent objects, and the background is more complex. Best viewed in color.
2310.20641
Performance Improvement in Multi-class Classification via Automated Hierarchy Generation and Exploitation through Extended LCPN Schemes
Hierarchical classification (HC) plays a pivotal role in multi-class classification tasks, where objects are organized into a hierarchical structure. This study explores the performance of HC through a comprehensive analysis that encompasses both hierarchy generation and hierarchy exploitation. This analysis is particularly relevant in scenarios where a predefined hierarchy structure is not readily accessible. Notably, two novel hierarchy exploitation schemes, LCPN+ and LCPN+F, which extend the capabilities of LCPN and combine the strengths of global and local classification, have been introduced and evaluated alongside existing methods. The findings reveal the consistent superiority of LCPN+F, which outperforms other schemes across various datasets and scenarios. Moreover, this research emphasizes not only effectiveness but also efficiency, as LCPN+ and LCPN+F maintain runtime performance comparable to Flat Classification (FC). Additionally, this study underscores the importance of selecting the right hierarchy exploitation scheme to maximize classification performance. This work extends our understanding of HC and establishes a benchmark for future research, fostering advancements in multi-class classification methodologies.
Celal Alagoz
2023-10-31T17:11:29Z
http://arxiv.org/abs/2310.20641v1
Performance Improvement in Multi-class Classification via Automated Hierarchy Generation and Exploitation through Extended LCPN Schemes

###### Abstract

Hierarchical classification (HC) plays a pivotal role in multi-class classification tasks, where objects are organized into a hierarchical structure. This study explores the performance of HC through a comprehensive analysis that encompasses both hierarchy generation and hierarchy exploitation. This analysis is particularly relevant in scenarios where a predefined hierarchy structure is not readily accessible. Notably, two novel hierarchy exploitation schemes, LCPN+ and LCPN+F, which extend the capabilities of LCPN and combine the strengths of global and local classification, have been introduced and evaluated alongside existing methods. The findings reveal the consistent superiority of LCPN+F, which outperforms other schemes across various datasets and scenarios. Moreover, this research emphasizes not only effectiveness but also efficiency, as LCPN+ and LCPN+F maintain runtime performance comparable to Flat Classification (FC). Additionally, this study underscores the importance of selecting the right hierarchy exploitation scheme to maximize classification performance. This work extends our understanding of HC and establishes a benchmark for future research, fostering advancements in multi-class classification methodologies.

_Index Terms_--Hierarchical Classification, Multi-class Classification, Hierarchical Divisive Clustering, LCPN

## I Introduction

In various domains such as text classification [1, 2], image understanding [3], annotation [4, 5], and bioinformatics, particularly in the realm of protein function prediction [6, 7, 8, 9], HC has emerged as a powerful approach. HC involves the organization of data or objects into a tree-like structure with nested categories or groups, establishing a hierarchy where each category is a subset of a larger one. This hierarchical structure offers advantages in handling complex datasets with structured labels. Algorithms have been devised to harness this hierarchical structure for enhanced classification accuracy in cases where structured labels are readily available. However, in the majority of multi-class classification problems, structured labels are unavailable, and the potential performance benefits observed in hierarchical problems warrant further investigation. Only a limited number of studies have explored the advantages of inducing hierarchies from datasets typically associated with flat labels. For instance, in a web content analysis study [2], an automatic taxonomy of documents was derived without pre-defined labels through the construction of a binary tree. The tree construction involved hierarchical clustering of classes using a top-down approach, employing Spherical K-means for cluster splitting. Subsequently, in a follow-up study [10], the superiority of n-ary trees over binary trees was reported in terms of classification performance. In another investigation, researchers assumed the presence of a latent hierarchy in synthetic and various image data, resulting in a significant improvement in top-down classification tasks [11]. They explored two distinct clustering methods for constructing hierarchies. The first method involved estimating the conditional means and clustering them using Gaussian Mixture Models (GMM), while the second method measured pairwise task similarity between conditional distributions and utilized a combination of spectral embedding and GMM for clustering.
These innovative approaches shed light on the potential of hierarchical structures in classification scenarios where structured labels are not readily available, opening up new avenues for research in this domain.

The application of HC varies based on several criteria [12, 13]. The first criterion concerns the type of hierarchical structure utilized, which is typically either a tree or a directed acyclic graph, the latter allowing nodes to have multiple parent nodes. The second criterion relates to the depth of classification in the hierarchy: classification can either always proceed down to the leaf nodes, known as mandatory leaf node prediction, or stop at any node level, known as non-mandatory leaf node prediction. The third criterion pertains to how the classifier is deployed within the hierarchical structure. Two approaches are commonly used: global (or big-bang) classification, where a single classifier considers the entire class hierarchy but lacks modularity, and local classification, where local classifiers are placed at different regions of the hierarchical structure. Local classifiers can be situated at each node (local classifier per node, LCN), at each level (local classifier per level, LCL), or at parent nodes (local classifier per parent node, LCPN). While these local classifiers differ during training, they share a similar top-down approach during the testing phase. In the top-down approach, more general and coarse labels are predicted at higher levels, while more specific and fine labels are predicted at lower levels as the choices are narrowed down moving downward. Notably, the prediction of lower-level classes depends on the higher-level predictions, meaning that misclassifications at higher levels propagate down the hierarchy.

In this study, the process of automatic hierarchy generation is executed by applying hierarchical clustering to class conditional means. In terms of modifying and extending the LCPN scheme, two strategies have been employed. Firstly, LCPN+ was developed to prevent predictions from occurring exclusively along a single path in the hierarchy. Additionally, LCPN+F was introduced to harmonize the LCPN and FC methods. This choice was made with the aim of striking a balance between the lack of modularity observed in global classification and the error propagation typically associated with local classification. The overarching objective of this approach is to optimize classification performance on multi-class datasets.

This study presents several noteworthy contributions to the field of hierarchical multi-class classification:

* **Novel Hierarchy Exploitation Schemes:** This study introduces two innovative hierarchy exploitation schemes, LCPN+ and LCPN+F, which significantly contribute to the field of HC. These novel schemes offer a fresh perspective and advanced techniques for improving multi-class classification performance.
* **Consistent Superiority of LCPN+F:** The research highlights the consistently superior performance of the LCPN+F scheme compared to other hierarchy exploitation schemes. This contribution provides a valuable solution for enhancing HC across various datasets and scenarios.
* **Balancing Modularity and Error Propagation:** LCPN+F, in particular, strikes a balance between the lack of modularity of global classification and the error propagation associated with local classification. This approach enhances classification performance on multi-class datasets, providing a valuable contribution to the field.
* **Efficiency in Classification:** The study emphasizes not only the effectiveness but also the efficiency of the introduced schemes. LCPN+ and LCPN+F maintain runtime performance comparable to FC, making them practical choices for real-world applications.
* **Novel Hierarchy Generation Approach:** This study introduces an automatic hierarchy generation approach that applies hierarchical clustering to class conditional means. Furthermore, the study highlights the advantages of employing dimension reduction before obtaining the class conditional means. This methodology offers a fresh perspective for creating hierarchies when predefined structures are unavailable.
* **Informed Decision-Making:** The research underscores the importance of informed decision-making in selecting hierarchy generation and exploitation configurations. It provides insights into the relationship between dataset characteristics and suitable hierarchy generation techniques, aiding researchers and practitioners in making better choices for their specific tasks.
* **Extended Understanding of HC:** By delving into the intricate interplay between hierarchy generation and exploitation, this work contributes to a deeper understanding of HC. It offers valuable insights that will guide future research in the domain, fostering advancements in multi-class classification methodologies.
* **Benchmark for Further Research:** This study establishes a benchmark for evaluating and comparing hierarchy exploitation schemes, paving the way for further research and experimentation in HC. It sets a standard for assessing the performance of new techniques and approaches in this field.

In summary, this study's contributions offer insights into addressing the challenges of hierarchical multi-class classification, introducing a novel approach and validating its efficacy through comprehensive experimentation. The findings presented here expand the understanding of HC methods and pave the way for further advancements in this evolving field. Open-source code1 is available for interested readers.

Footnote 1: [https://github.com/alagoz/hge_extended_lcpn](https://github.com/alagoz/hge_extended_lcpn)

## II Methods

This study is structured into two distinct phases to enhance the performance of multi-class classification: hierarchy generation and hierarchy exploitation, the latter leveraging extended LCPN schemes. The subsequent sections provide in-depth discussions of these phases and detailed explanations of the classifier selection and evaluation methodologies.

### _Hierarchy Generation_

In HC, it is a common convention for the dataset to include a pre-defined hierarchy. Typically, this hierarchy is constructed based on intuitive, semantic, or ontological taxonomies, which provide a clear structure for organizing the classes. However, when a dataset only provides flat labels without any predefined hierarchy, the task of deriving a meaningful hierarchy can be quite challenging. Relying on intuition or semantics to construct a hierarchy may not necessarily lead to improvements in classification performance. In such cases, a more pragmatic approach involves considering the attributes inherent to the flat class labels, as learned by the algorithms during training. Consequently, in this study, the challenge of hierarchy generation is addressed by treating each flat class label as an independent object and applying clustering techniques to organize them into a hierarchy.
**Class Conditional Means.** Representing flat classes as objects and deriving their attributes can be achieved through the computation of class conditional means from the dataset. Class conditional means denote the average values of the features (attributes) within each class. Formally, given a dataset \(D\) comprising \(n\) instances and \(m\) features, where each instance \(i\) is represented as \(x^{(i)}=(x^{(i1)},x^{(i2)},...,x^{(im)})\), and a set of \(c\) flat classes \(\{C_{1},C_{2},...,C_{c}\}\), the class conditional means for each class \(C_{j}\), \((1\leq j\leq c)\), can be calculated as follows:

\[\mu(C_{j})=(\mu(C_{j}^{1}),\mu(C_{j}^{2}),...,\mu(C_{j}^{m})) \tag{1}\]

Here, \(\mu(C_{j}^{i})\) represents the mean value of the \(i\)-th feature across all instances belonging to class \(C_{j}\). Consequently, all data points associated with a particular class, represented as vectors in an \(m\)-dimensional space, are consolidated into a single point within that space. Within the context of hierarchical divisive clustering, these class conditional means serve as the input data for the clustering algorithm.

**Linear Discriminant Analysis.** In this specific context, applying dimension reduction before computing class conditional means serves as a valuable strategy to enhance the quality of hierarchical clustering. This approach focuses on extracting the most discriminative features while reducing the impact of noise in the data. To accomplish this, Linear Discriminant Analysis (LDA) is chosen as the dimension reduction technique. LDA, as originally proposed by Fisher (1936) [14], is a dimensionality reduction method whose primary objective is to reduce the dimensionality of a dataset while preserving the discriminative information between different classes. In essence, LDA seeks to identify a new set of dimensions, represented as linear combinations of the original features, in a manner that maximizes the variance between classes while minimizing the variance within each class. A key advantage of LDA is its supervised nature: it takes class labels into consideration during its dimension reduction process. By focusing on the class-related variance, LDA can reveal the most relevant features for distinguishing between classes, contributing to more effective clustering and classification results. Dimension reduction, in this context, involves transforming data from the high-dimensional space \(\mathbb{R}^{m}\) into a lower-dimensional space \(\mathbb{R}^{k}\). The selection of the appropriate value for \(k\) is implemented as an automated process. Prior to performing the actual dimension reduction, the dimension reduction algorithm is first fitted with all available components. This fitting yields the percentage of variance explained by each component, so that the explained variances across all components sum to 1.0. The value of \(k\) is then determined by identifying the point at which the cumulative sum of variances reaches or exceeds 0.95; in other words, \(k\) is chosen such that the sum of the variances of the first \(k\) components is at least 0.95. This method ensures that a significant portion of the variance in the data is retained in the lower-dimensional representation, helping to balance dimensionality reduction with the preservation of information.
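For illustration, below is a small scikit-learn sketch of this procedure, assuming a feature matrix `X` and flat labels `y`; the 0.95 threshold follows the text, while the function name and structure are our own rather than the released code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def class_conditional_means(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Reduce dimensionality with LDA (k chosen so the cumulative explained
    variance reaches 0.95), then average each class to a single point."""
    # Fit with all available components to read the explained-variance ratios.
    lda = LinearDiscriminantAnalysis().fit(X, y)
    cum_var = np.cumsum(lda.explained_variance_ratio_)
    k = int(np.searchsorted(cum_var, 0.95)) + 1   # first k with cum. var >= 0.95
    Z = LinearDiscriminantAnalysis(n_components=k).fit_transform(X, y)
    classes = np.unique(y)
    # One row per flat class: the class conditional mean in the reduced space.
    return np.stack([Z[y == c].mean(axis=0) for c in classes])
```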
**Hierarchical Clustering.** When generating a hierarchy through clustering, two distinct approaches can be applied: agglomerative and divisive clustering. Agglomerative clustering, often referred to as a bottom-up approach, initiates with individual objects considered as separate clusters, progressively merging them as the hierarchy is built from the bottom to the top. In contrast, divisive clustering begins with a single cluster encompassing all objects and divides it as the hierarchy develops from the top to the bottom. In both approaches, the clustering process relies on assessing the (dis)similarity between objects. To enable a comprehensive comparative analysis, both agglomerative and divisive clustering methods are implemented in this study. In the case of hierarchical divisive clustering, the input to the clustering algorithm is provided by the class conditional means. For this purpose, the k-medoids algorithm [15] is chosen as the clustering method. K-medoids, also known as partitioning around medoids, represents a variation of the k-means algorithm. In contrast to k-means, where cluster centers are computed as the averages of data points, k-medoids selects actual data points as cluster centers (referred to as medoids or exemplars). This characteristic enhances the interpretability of cluster centers, as they correspond to real data points within the dataset. Additionally, k-medoids demonstrates robustness against noise and outliers, making it a suitable choice for data with irregularities. While the k-medoids algorithm has the flexibility to work with a variety of dissimilarity measures, this study specifically opts for the Euclidean distance metric. Conversely, in the hierarchical agglomerative clustering approach, the class conditional means also serve as input data. These means are supplied to the linkage function from the SciPy library, which is capable of handling either a distance matrix or observation vectors as input. In the present study, observation vectors are provided, and a pairwise distance matrix is constructed using the Euclidean distance metric. This distance matrix facilitates the hierarchical agglomerative clustering process, enabling the linkage function to create the hierarchical structure based on the provided dissimilarity information.

Figure 1 illustrates the hierarchy generation process for the Glass dataset. In this case, dimension reduction is performed prior to computing class conditional means, followed by hierarchical divisive clustering. The resulting hierarchical structure takes the form of a binary tree. It is important to emphasize that this hierarchy structure is generated exclusively using the training set.

Fig. 1: Demonstration of hierarchy generation via hierarchical divisive clustering using k-medoids as the clustering method on the Glass types dataset. (a) Data points are initially represented in a lower-dimensional 3D space and visualized as a point cloud. (b) Each class is subsequently represented as a point in the attribute space by computing class conditional means; in this case, there are six points corresponding to the six flat classes. (c) The final hierarchical structure is depicted as a binary tree after processing the class points using k-medoids.
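As a sketch of the agglomerative variant, SciPy's `linkage` can build the tree directly from the class conditional means; the example data below are placeholders, and the `average` linkage choice is our own assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Class conditional means (one row per flat class), e.g., from the
# function sketched above; the values here are placeholders.
means = np.random.rand(6, 3)            # 6 classes in a 3-D reduced space

# linkage accepts observation vectors directly and computes pairwise
# Euclidean distances internally.
Z = linkage(means, method="average", metric="euclidean")
# Z encodes the binary tree: each row merges two nodes, so a dataset
# with c classes yields 2c - 1 nodes (c leaves + c - 1 internal nodes).
print(Z)
```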
### _Hierarchy Exploitation via Extended LCPN Schemes_

Enhancing the quality of the tree structure is a crucial step in addressing multi-class classification enhancement problems. Another critical phase is effectively utilizing this hierarchy, which involves deploying the classifier algorithm within the hierarchy structure. Depending on how the labels are considered by the classifier, two primary classification approaches emerge: global and local variants. In the global classification scheme, all labels are simultaneously considered by the classifier. This approach offers the advantage of internalizing attributes provided by the hierarchy in an encompassing way. However, it may exhibit weaknesses in handling intricate details and maintaining modularity. In contrast, local variants employ the classifier at each node, level, or parent node in the hierarchy. These schemes preserve modularity and internalize more details by leveraging local information. However, especially during the top-down prediction phase, misclassifications at higher levels can propagate errors affecting predictions at lower levels. This phenomenon is known as error propagation. To address these issues and combine the strengths of both global and local classification approaches, this study introduces novel algorithms that extend the LCPN scheme, named LCPN+ and LCPN+F. Before delving into the details of these algorithms, let us establish some notation. For a dataset with \(c\) classes, there will be a total of \(2c-1\) nodes, including leaf (singleton) and non-leaf (non-singleton) nodes within the hierarchy. Each node is assigned a label within the hierarchy, denoted as \(y_{p,q}^{r,s}\), characterized by four indices: two subscripts and two superscripts. The first subscript \(p\), \((0\leq p\leq 2c-2)\), is the current node's index, and the second subscript is the index of the current node's parent. The superscripts describe child indices: given the binary tree structure, a child index is either 0 or 1, where 0 denotes the left child and 1 the right child. The first superscript is the child index of the current node, while the second superscript is the child index of the current node's parent. With this notation, \(y_{0}\) denotes the root node, written without superscripts since it has no child index and without the second subscript since it has no parent. Additionally, \(f\) and \(f_{i}^{h}\) represent the flat and hierarchical classifier functions, respectively; the subscript of the hierarchical classifier \(f_{i}^{h}\) denotes the parent node at which the classifier is located. Lastly, \(p(y|f)\) is the probability of label \(y\) predicted by the classifier function \(f\). Figure 2 illustrates the label setting in both the LCPN+ and LCPN+F hierarchical configurations.

**LCPN+ Scheme.** The LCPN+ scheme introduces a modification specifically for the prediction phase, while retaining the conventional LCPN approach during training. In the training phase, as in conventional LCPN, only the classifiers at parent nodes are trained. During the prediction phase, however, a departure from the conventional setting occurs. In standard LCPN, an instance follows a single path of prediction. For example, if an instance is predicted at the root node to belong to the right child label, it can no longer receive any label falling under the left child, which effectively narrows the choices down to a single flat label. In LCPN+, labeling is carried out probabilistically, and every instance is scored by all the classifiers. The prediction for a leaf node is then determined by considering the cascaded probabilities of that leaf node along with all of its ancestor nodes.
The classification of an instance is determined using the argmax operation, which selects the label with the highest predicted probability. This is formally defined as: \[\operatorname*{argmax}_{C_{i}\in C}\;\prod_{(p,q,r,s)\in\Omega_{C_{i}}}p(y_{p,q}^{r,s}|f_{q}^{h}) \tag{2}\] Here, \(C_{i}\) represents a flat label within the hierarchy from the set of flat labels denoted \(C\), and \(\Omega_{C_{i}}\) represents the path of labels for \(C_{i}\): a composite set encompassing the \(p\), \(q\), \(r\), and \(s\) index sets of \(C_{i}\) itself and its ancestor labels. For example, for the label \(y_{6,3}^{1,0}\) in Figure 2 (a), the individual paths are \(p=\{6,3,1\}\), \(q=\{3,1,0\}\), \(r=\{1,0,1\}\), and \(s=\{0,1\}\). Consequently, the complete path is represented as \(\Omega_{C_{6}}=\{\{6,3,1\},\{3,1,0\},\{1,0,1\},\{0,1\}\}\).

**LCPN+F Scheme.** In the LCPN+F scheme, LCPN+ is further enhanced by combining flat classifiers with hierarchical ones within the structure. This configuration introduces modifications during both the training and testing phases. In the conventional LCPN approach, classifiers are exclusively employed at parent nodes, which may have both non-leaf and leaf (flat) children. In the LCPN+F scheme, classifiers at the parent nodes are dedicated solely to classifying non-leaf labels, leaving leaf labels to be classified by flat classifiers. When a parent node has both a leaf and a non-leaf child, the classifier at that node is trained conventionally but used only for predicting the non-leaf label during testing, bypassing the leaf label. If a parent node has both of its children as leaves, it is deactivated completely, used neither for training nor for prediction. Consequently, in the training phase, only the classifiers that remain active are trained. The prediction phase in the LCPN+F scheme is similar to LCPN+, the main difference being that leaf label predictions are made by a multi-class flat classifier. As in LCPN+, predictions are made probabilistically, where the prediction for each leaf node is computed through the chain rule of conditional probabilities down to that leaf, and the leaf with the highest probability is predicted. This is formally defined as: \[\operatorname*{argmax}_{C_{i}\in C}\prod_{(p^{\prime}\cup t,\,q^{\prime}\cup u,\,r^{\prime}\cup v,\,s^{\prime}\cup w)\in\,\Omega_{C_{i}}}p(y^{r^{\prime},s^{\prime}}_{p^{\prime},q^{\prime}}|f^{h}_{q^{\prime}})\,p(y^{v,w}_{t,u}|f) \tag{3}\] Here, \(t\), \(u\), \(v\), and \(w\) are the last elements of the individual paths in \(\Omega_{C_{i}}\), while \(p^{\prime}\), \(q^{\prime}\), \(r^{\prime}\), and \(s^{\prime}\) are the remaining elements of the respective paths. The last factor in the probability chain is always the probabilistic output of a flat classifier. In both LCPN+ and LCPN+F, the error propagation typically encountered in LCPN is circumvented by not constraining test instances to follow a single path. Furthermore, the incorporation of local information addresses a limitation of the global classification approach, giving both schemes the potential to improve overall performance. For both schemes, the modifications also allow the prediction phase to be run in parallel, potentially resulting in more efficient computation. Finally, the LCPN+F scheme offers enhanced efficiency, as some parent nodes can be deactivated depending on the hierarchy's structure.

Fig. 2: Hierarchy exploitation using extended LCPN schemes LCPN+ (a) and LCPN+F (b). Predictions in terms of conditional probabilities for each leaf (flat) and non-leaf node are displayed within individual boxes, including the node labels annotated according to the hierarchical structure and the classifier used for predicting each label. Predictions for each label are made probabilistically. In LCPN+, hierarchical classifiers predict all labels, whereas in LCPN+F, only non-leaf nodes are predicted by hierarchical classifiers, with the leaf labels predicted by the flat classifier. In the LCPN+F scheme in this specific hierarchy, hierarchical classifiers \(f^{h}_{2}\) and \(f^{h}_{5}\) are deactivated since both of their children are leaf nodes.
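As an illustration of the cascaded rule in Eqs. (2)-(3), the following Python sketch (our own encoding of the tree: a dict of root-to-leaf edge lists and a dict of fitted local classifiers exposing `predict_proba`) scores one instance under LCPN+; the LCPN+F variant would simply replace the last edge's factor with the flat classifier's probability for that leaf.

```python
def lcpn_plus_predict(x, paths, classifiers):
    """LCPN+ prediction for a single instance.

    paths:       {leaf_label: [(parent_node, child_index), ...]}  root-to-leaf
    classifiers: {parent_node: fitted binary classifier with predict_proba}
    """
    scores = {}
    for leaf, edges in paths.items():
        prob = 1.0
        for parent, child in edges:
            branch = classifiers[parent].predict_proba(x.reshape(1, -1))[0]
            prob *= branch[child]           # probability of taking this branch
        scores[leaf] = prob                 # cascaded probability of the leaf
    return max(scores, key=scores.get)      # argmax over flat labels (Eq. 2)
```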
### _Classifiers and Evaluation_

For each dataset, various classifiers are tested, and the best-performing classifier is selected for use in the experiments. Time Series Forest (tsf) [18] is exclusively chosen for time series datasets. The classifiers and their abbreviations are as follows:

* **xgb** Extreme Gradient Boosting [16] is a high-speed, accurate machine learning algorithm that combines multiple decision trees for robust predictions.
* **rf** Random Forest [17] is an ensemble learning method that constructs multiple decision trees to enhance accuracy and reduce overfitting.
* **lda** LDA [14] is a technique used in both classification and dimensionality reduction, aiming to find a lower-dimensional representation of data while maximizing class separation.
* **nb** Naive Bayes is a probabilistic classification algorithm based on Bayes' theorem, assuming conditional independence of the features.
* **tsf** Time Series Forest is a machine learning algorithm specifically designed for time series classification tasks. It is an ensemble method that leverages randomization and decision trees to make accurate predictions on time series data.

Before comparing performances, some hyperparameters of the selected models are fine-tuned based on cross-validation results using the GridSearchCV function from the scikit-learn library. Table I presents the datasets and the classifiers forming the baseline models for each dataset, with their tuned parameters. Performance assessment relies on the F1-score metric with macro averaging for multi-class classification, evaluated via 5-fold cross-validation. Since hierarchical clustering is supervised, the training set examples are used for hierarchy tree construction, potentially generating up to 5 distinct trees. The performance improvement of HC over FC is quantified through a metric called Learning Efficiency (LE) [19], calculated as the ratio of the F1-scores obtained from HC and FC. When LE is greater than 1 (LE \(>\) 1), it indicates an enhancement in classification performance when employing HC.

## III Experiments

The experiments are conducted utilizing five real-world datasets sourced from various databases such as UCR [20], UCI [21], and scikit-learn [22]. None of these datasets is furnished with pre-defined hierarchical labels, rendering them well-suited for the purposes of this study. This section offers a concise overview of the datasets used in the research, followed by the presentation of the experimental results.

### _Datasets_

**Glass Identification dataset** [23] contains information about various chemical properties of glass, and the task is to classify different types of glass based on these attributes. The data consists of 214 instances, where each instance represents a different type of glass.
There are ten attributes provided for each instance: the refractive index; the sodium, magnesium, aluminum, silicon, potassium, calcium, barium, and iron oxide contents; and the type of glass. The target variable in the Glass dataset is the type of glass, which can belong to one of seven classes representing different glass types. The classification task aims to predict the type of glass based on the given chemical composition attributes.

**ProximalPhalanxTW dataset** [24] contains time series data, specifically related to hand movement. The dataset records measurements of the position of the proximal phalanx of a human hand over time. The dataset consists of several numerical attributes, where each attribute represents a different aspect of the hand movement. The specific number of attributes and instances may vary depending on the version of the dataset. Researchers often use the ProximalPhalanxTW dataset for tasks related to time series classification and pattern recognition. It can be employed to develop and evaluate algorithms for recognizing and categorizing hand movements based on the recorded data.

**Yeast dataset** [25] represents a challenging problem of predicting the cellular localization sites of proteins in yeast cells. The dataset contains information about 1,484 proteins from Saccharomyces cerevisiae (commonly known as baker's yeast). Each protein is described by eight numerical attributes representing features extracted from the protein sequence. The target variable in the Yeast dataset is the cellular localization site, which can belong to one of ten classes representing different subcellular compartments where the proteins are located within the yeast cells. The primary task with the Yeast dataset is to classify proteins into their respective localization sites based on the provided attributes. Due to its complexity and multi-class nature, this dataset is widely used for evaluating and comparing the performance of various classification algorithms.

**Faces dataset** [26] The Olivetti Faces dataset, sourced from AT&T Laboratories Cambridge, includes 400 grayscale images portraying 40 unique individuals. Each person is represented by 10 distinct images, showcasing variations in lighting, facial expressions, and poses. These images have a resolution of 64x64 pixels and are commonly employed by researchers in computer vision and machine learning for tasks such as face recognition and facial expression analysis.

**FiftyWords dataset** [27] consists of a collection of handwritten texts. It contains samples of 50 different words, each handwritten by multiple individuals. Each sample is represented as a time series of pen-tip coordinates. The dataset consists of numerical attributes that represent the (x, y) coordinates of the pen-tip as it moves across a writing surface. Each time series captures the trajectory of the pen for a specific word. Researchers often use the FiftyWords dataset for tasks related to time series classification and pattern recognition. It is a valuable resource for developing and evaluating algorithms for handwritten word recognition.

### _Results_

All datasets undergo a one-time shuffling process before implementing cross-validation, with the random state set to 0 to ensure a consistent data point distribution across multiple runs.
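For concreteness, a minimal sketch of the evaluation loop described above (5-fold cross-validation, macro-F1, and LE as the ratio of HC to FC scores; all names are ours) is given below; `fit_predict_hc` and `fit_predict_fc` stand in for whichever hierarchical scheme and flat baseline are being compared, with the hierarchy tree rebuilt from each training fold.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold

def learning_efficiency(X, y, fit_predict_hc, fit_predict_fc, seed=0):
    """LE = mean macro-F1 of HC over the folds divided by that of FC;
    LE > 1 signals an improvement from using the hierarchy."""
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    f1_hc, f1_fc = [], []
    for train, test in cv.split(X):
        f1_hc.append(f1_score(y[test], fit_predict_hc(X[train], y[train], X[test]),
                              average="macro"))
        f1_fc.append(f1_score(y[test], fit_predict_fc(X[train], y[train], X[test]),
                              average="macro"))
    return np.mean(f1_hc) / np.mean(f1_fc)
```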
In addition to LDA, various dimension reduction techniques, including PCA, Neighborhood Components Analysis [28], Isomap [29], and Locally Linear Embedding [30], are employed. Among these, LDA consistently delivers the best results. For hierarchical clustering, distance metrics other than Euclidean are experimented with for both the k-medoids and linkage functions. Surprisingly, the Euclidean distance metric consistently yields higher-quality hierarchical trees, enhancing HC performance. Additionally, apart from k-medoids, the k-means and Gaussian Mixture Model (GMM) clustering methods are also explored. However, k-medoids predominantly produces superior hierarchies, leading to improved HC performance. After these initial configurations, results are obtained by generating hierarchies following dimensionality reduction applied to the datasets. Clustering is performed using both divisive and agglomerative approaches. Hierarchy exploitation involves four distinct schemes: global, LCPN, LCPN+, and LCPN+F, each of which is systematically evaluated. The LE results for each scheme are presented in Table II, with cases of performance improvement over the baseline highlighted in bold font. It is evident that the choice of hierarchy exploitation scheme has a discernible impact on the performance of HC. Notably, the standout finding is the consistent superiority of the LCPN+F scheme over the other schemes. This trend is particularly pronounced when the LCPN+F scheme is employed with hierarchical divisive clustering, where performance improvements over FC are observed across all datasets except Yeast. When hierarchical agglomerative clustering is used instead, suboptimal results are noted, especially for the Yeast, Glass, and FiftyWords datasets. This underscores the suitability of the LCPN+F scheme for hierarchical divisive clustering during the hierarchy generation process. The second-best performing scheme is LCPN+, which consistently delivers modest enhancements over the LCPN scheme in most instances. Notable exceptions to this pattern are the ProximalPhalanxTW and Yeast datasets. Surprisingly, for both divisive and agglomerative clustering, these two datasets exhibit a performance trend contrary to that observed in the other datasets when utilizing the LCPN, LCPN+, and LCPN+F schemes: in most other datasets, performance increases progressively from LCPN to LCPN+ and then to LCPN+F, whereas LCPN achieves the best performance on ProximalPhalanxTW and Yeast. Regarding global classification, erratic fluctuations in performance are observed, with notable declines, particularly when hierarchical agglomerative clustering is applied to datasets like Yeast, Faces, and FiftyWords. Overall, global classification exhibits the poorest performance among the tested schemes. In a general sense, hierarchies generated through the hierarchical divisive clustering approach consistently outperform those generated using hierarchical agglomerative clustering. An exception to this trend is observed only in the case of LCPN, which achieves better results when employing hierarchies generated through hierarchical agglomerative clustering. Hierarchy exploitation schemes are further examined while generating hierarchies without applying dimension reduction to the data points during the hierarchy generation phase.
The LE results are displayed in Table III, with performance improvements over the baseline (FC) once again highlighted in bold font. In general, a decline in performance is observed when dimension reduction is omitted. Remarkably, global classification demonstrates significantly better performance on the ProximalPhalanxTW dataset than it typically exhibits in other cases. However, it still yields notably low results for the Yeast, Faces, and FiftyWords datasets when hierarchies are generated using hierarchical agglomerative clustering. Additionally, in the case of hierarchical divisive clustering, the behavior of the ProximalPhalanxTW and Yeast datasets differs from their performance when dimension reduction is applied: they exhibit the best performance with the LCPN+F scheme, declining progressively when moving to LCPN+ and then LCPN. It is essential to highlight that the ProximalPhalanxTW dataset consistently shows performance improvement across LCPN and its extended variants, LCPN+ and LCPN+F, in all cases, suggesting that certain datasets are particularly well-suited for HC. As an exception, an improvement is noted in the Yeast dataset when using LCPN for hierarchy exploitation and hierarchical agglomerative clustering for hierarchy generation. In contrast, LCPN+F exhibits more modest performance, achieving improvements over FC only in the ProximalPhalanxTW and FiftyWords datasets, both of which are time series datasets. Meanwhile, when employing hierarchies generated through hierarchical agglomerative clustering, both time series datasets, ProximalPhalanxTW and FiftyWords, consistently demonstrate high performance, regardless of whether dimension reduction is applied. This observation provides valuable insights into the characteristics of time series datasets, whether or not dimension reduction is employed before the hierarchy generation phase. However, further analysis of additional examples is required to elucidate these properties fully and arrive at more conclusive findings. Lastly, an empirical runtime analysis was conducted to assess the speed and efficiency of the hierarchy generation and exploitation schemes. For the empirical computational cost analysis, the total running time of the 5-fold cross-validation procedure was considered. It is important to emphasize that the HC schemes include an additional hierarchy generation step during the training phase, which inherently involves dimension reduction; this step was factored into the total running time. The empirical runtime analysis, as shown in Figure 3, focuses on scenarios involving hierarchy trees generated through both hierarchical divisive and hierarchical agglomerative clustering, in conjunction with dimension reduction. As anticipated, FC consistently emerged as the fastest-running algorithm in the experiments. The LCPN+, LCPN+F, and global classification schemes exhibited runtime performance comparable to FC in both the divisive and agglomerative scenarios, operating at a pace approximately three times slower than FC. LCPN, however, stood out as the slowest algorithm in this context, running approximately 100 times slower than FC. Furthermore, LCPN+ and LCPN+F consistently displayed very similar runtime performance, while global classification was, in general, observed to be slower.
An exception to this trend was identified in the Glass dataset within the hierarchical agglomerative clustering context, where global classification exhibited a faster runtime.

Fig. 3: Empirical runtime of the hierarchy exploitation schemes when the hierarchy was generated using the divisive (a) and agglomerative (b) hierarchical clustering approaches. The speeds of the global, LCPN+, and LCPN+F schemes are comparable to FC, while LCPN is the slowest among them.

## IV Discussions

In this study, a comprehensive empirical analysis was conducted to examine various aspects of HC performance. Multiple experiments and considerations were performed to understand the factors influencing HC outcomes. In this discussion, the implications of the findings are discussed, considering aspects such as data preprocessing, dimensionality reduction, distance metrics, clustering methods, and hierarchy exploitation schemes.

**Data Preprocessing and Dimensionality Reduction.** The experiments began with a data preprocessing step involving a one-time shuffling process. A consistent data point distribution was maintained across multiple runs by setting the random state to 0. Additionally, different dimension reduction techniques, including PCA, NCA, Isomap, and LLE, were employed. Remarkably, LDA consistently outperformed the other techniques, highlighting its effectiveness in feature extraction for HC.

**Choice of Distance Metric and Clustering Method.** For hierarchical clustering, various distance metrics beyond the Euclidean distance were explored, testing their suitability for both the k-medoids and linkage functions. Surprisingly, the Euclidean distance metric consistently produced higher-quality hierarchical trees, resulting in improved HC performance. Additionally, different clustering methods, including k-means and GMM, were experimented with, but k-medoids predominantly generated superior hierarchies, leading to improved HC performance.

**Hierarchy Exploitation Schemes.** The analysis extended to hierarchy exploitation schemes, where four distinct schemes (global, LCPN, LCPN+, and LCPN+F) were systematically evaluated. The results indicated a discernible impact on HC performance based on the choice of hierarchy exploitation scheme. Notably, the consistent superiority of the LCPN+F scheme compared to the other schemes was observed. This trend was prevalent across most datasets, except for the Yeast dataset. Conversely, suboptimal results in certain cases, including the Yeast, Glass, and FiftyWords datasets, were observed when hierarchical agglomerative clustering was used. These findings underscored the suitability of the LCPN+F scheme for hierarchical divisive clustering during hierarchy generation. The second-best performing scheme was LCPN+, consistently delivering modest enhancements over the LCPN scheme in most cases. However, exceptions were observed, notably in the ProximalPhalanxTW and Yeast datasets. Surprisingly, these two datasets exhibited performance trends contrary to those observed in other datasets when utilizing the LCPN, LCPN+, and LCPN+F schemes. In most other datasets, there was a progressive increase in performance in the transition from LCPN to LCPN+ and then to LCPN+F, while LCPN achieved the best performance in the ProximalPhalanxTW and Yeast datasets. The analysis also touched upon global classification, which exhibited erratic fluctuations in performance, with notable declines, particularly in the case of hierarchical agglomerative clustering applied to datasets like Yeast, Faces, and FiftyWords.
Overall, global classification demonstrated the poorest performance among the tested schemes.

**Impact of Dimension Reduction.** An analysis was conducted that excluded dimension reduction from the hierarchy generation phase. The results indicated a decline in performance compared with the scenarios that used dimension reduction. Notably, global classification surprisingly demonstrated significantly better performance on the ProximalPhalanxTW dataset than in other cases, although it still yielded notably low results for the Yeast, Faces, and FiftyWords datasets when hierarchies were generated using hierarchical agglomerative clustering. The behavior of the ProximalPhalanxTW and Yeast datasets differed in the case of hierarchical divisive clustering without dimension reduction, with a preference for the LCPN+F scheme and a progressive decline as LCPN+ and LCPN were employed.

**Time Series Datasets and Dimension Reduction.** In the context of time series datasets, specifically the ProximalPhalanxTW and FiftyWords datasets, a notable observation emerged. The ProximalPhalanxTW dataset consistently exhibited performance improvements across all LCPN variants (LCPN, LCPN+, and LCPN+F) in all cases. This suggests that certain datasets are inherently well-suited for HC, irrespective of the hierarchy exploitation scheme employed. Furthermore, in both scenarios, whether dimension reduction was applied or not, LCPN+F consistently demonstrated improvements over FC in the context of time series datasets. This observation hints at the potential versatility and effectiveness of the LCPN+F scheme in managing time series data, regardless of whether dimension reduction is integrated into the process. However, to validate and explore this finding further, additional time series datasets should be investigated.

**Runtime Analysis.** An empirical runtime analysis provided insights into the efficiency of the hierarchy generation and exploitation schemes. Despite an additional hierarchy generation step during the training phase, the LCPN+, LCPN+F, and global classification schemes exhibited runtime performance comparable to FC in both the hierarchical divisive and hierarchical agglomerative clustering scenarios, operating at approximately three times the runtime of FC. In conclusion, a comprehensive analysis was conducted to understand the factors influencing HC performance. The findings contribute to informed decisions in designing HC systems tailored to specific datasets and applications.

### _Future Directions_

The analyses conducted in this study have provided valuable insights into the impact of both hierarchy generation and hierarchy exploitation on HC performance. Consequently, the task of selecting the most appropriate configuration for both hierarchy generation and hierarchy exploitation for a given dataset emerges as a critical endeavor in achieving the desired enhancements in multi-class classification. Specifically, establishing a deeper understanding of the relationship between dataset characteristics and suitable hierarchy generation techniques can facilitate more informed decision-making when choosing an appropriate technique. Furthermore, once a hierarchy is generated, it may possess unique properties that can inform the selection of the most effective exploitation scheme. Another promising avenue for future research involves the exploration of additional exploitation schemes.
Identifying the optimal configuration for a specific dataset necessitates a comprehensive analysis across multiple datasets. As a result, future work will encompass an expansion of this analysis to include a broader range of datasets, further enhancing the understanding of the intricate interplay between data characteristics, hierarchy generation techniques, and exploitation schemes in the realm of HC. ## V Conclusions This study has delved deeply into the realm of HC, providing a comprehensive analysis of its performance. Notably, novel hierarchy exploitation schemes have been developed, specifically, LCPN+ and LCPN+F, marking significant advancements in this field. Throughout the analyses conducted, LCPN+F emerged as the standout performer, consistently delivering exceptional results in terms of both classification efficiency and effectiveness. The findings underscore the pivotal role of selecting the appropriate configuration for both hierarchy generation and exploitation in HC. These newly introduced schemes, LCPN+ and LCPN+F, have demonstrated their prowess in this context. Additionally, the empirical runtime analysis has highlighted the efficiency of these schemes, with LCPN+ and LCPN+F performing commendably while maintaining runtime performance comparable to FC. In summary, this study represents a substantial stride forward in the domain of HC, introducing novel schemes in the form of LCPN+ and LCPN+F, which exhibit superior performance and efficiency. These schemes hold immense promise for enhancing multi-class classification tasks, and the findings presented here provide valuable insights that will guide future research endeavors in this field.
2309.15799
Infinite Size-Biased Orders
The infinite random size-biased order with arbitrary positive size parameters is introduced in terms of independent exponential random variables. We collect basic properties and constructions of the order, some of which belong to the folklore, and show how the order type (e.g. ${\mathbb Z}_{>0}, {\mathbb Q}$ or any other possible) depends on parameters.
Alexander Gnedin
2023-09-27T17:19:47Z
http://arxiv.org/abs/2309.15799v1
# Infinite Size-Biased Orders ###### Abstract The infinite random size-biased order with arbitrary positive size parameters is introduced in terms of independent exponential random variables. We collect basic properties and constructions of the order, some of which belong to the folklore, and show how the order type (e.g. \(\mathbb{Z}_{>0},\mathbb{Q}\) or any other possible) depends on parameters. ## 1 Introduction Permutations of finite sets are particularly exciting combinatorial structures due to the interplay between their roles as bijections and (linear) orders. Though these are inseparable, from the time of Cauchy the early literature was careful to distinguish the term 'permutation' from 'substitution', the latter being understood as the operation of replacing each element of the set by some other. For infinite sets the hypostases truly disentangle. The self-bijections of \(\mathbb{Z}_{>0}\), introduced by Vitali [41] as infinite substitutions, correspond to the orders obtained by re-arranging the positive integers in a sequence. In this paper, under infinite permutation we shall mean this sort of order, corresponding to Vitali's substitution; see [9] for a wider use of the term. But such infinite permutations constitute just one isomorphism type from the continuum universe of types of countable (infinite) orders [33]. Recall that an order is _dense_ if for any two distinct elements there is a third element strictly between them. By Cantor's isomorphism theorem every countable dense order without first and last elements is of the type \(\mathbb{Q}\), while any other type can be realised as a suborder of the rationals. For many reasons, however, it is more natural to consider countable orders embedded in reals, though a dense order may be realised by a set not dense in the topological sense. For a random order chosen from some probability distribution over the space of orders on a given countable set, the type may be a nontrivial random variable. We say that a random order is _pure_ if it has one type almost surely. For instance, pure orders of types \(\mathbb{Z}_{>0}\) and \(\mathbb{Z}\) have been introduced recently as extensions of the Mallows distribution on finite permutations [18, 19]. To compare, the analogue of a finite uniformly distributed permutation is the _exchangeable_ order, which has the type \(\mathbb{Q}\) and is derived from the natural order on the values sampled in the i.i.d. fashion from a continuous distribution over reals. The idea of the latter construction was just sketched in an abstract by Rubin [34] and employed later as a model for the 'secretary' problem with infinitely many choice options [15]. We will focus on a rich infinite-parameter family of random orders whose type varies with parameters. A good starting point is the following explicit construction in the spirit of the finite 'order statistics' model of ranking [8]. Let \(I\) be a countable set of items equipped with a _size_ function \(w(i)>0\). For independent exponential random variables \(X_{i},i\in I\), with \(X_{i}\) having rate \(w(i)\), define a random relation \(\triangleleft\) on \(I\) by setting \[i\triangleleft j\Longleftrightarrow X_{i}<X_{j},\quad i\neq j. \tag{1}\] With probability one the values of \(X_{i}\)'s are all pairwise distinct, therefore (1) defines a strict linear order, modulo a null event. We call \(\triangleleft\) a _size-biased order_ on \(I\).
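As a quick illustration, a finite restriction of this construction can be simulated directly from (1); the following Python sketch (ours) draws the exponentials and lists the items from \(\triangleleft\)-first to \(\triangleleft\)-last.

```python
import numpy as np

rng = np.random.default_rng(0)

def size_biased_order(w):
    """Realise the size-biased order of Eq. (1) on items 0..n-1:
    X_i ~ Exponential(rate w[i]), and i precedes j iff X_i < X_j."""
    w = np.asarray(w, dtype=float)
    x = rng.exponential(scale=1.0 / w)   # numpy parametrises by scale = 1/rate
    return np.argsort(x)                 # items listed from first to last

# Larger sizes tend to come earlier: size_biased_order([8.0, 4.0, 2.0, 1.0])
```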
The probability of relation \(i\triangleleft j\) is \(w(i)/(w(i)+w(j))\), which exceeds \(1/2\) whenever \(w(i)>w(j)\), so the intuitive effect of size-biasing is that the items of bigger size are more likely to precede those of smaller size. Finite orders of the kind emerge from the statistical procedure of size-biased sampling without replacement from a finite stratified population. Yates and Grundy [43] introduced the method as follows: select the first unit [stratum, item] with probability proportional to size, and the second with probability proportional to the size of the remaining units, and so on. The list of size-biased picks makes up what is now called a size-biased permutation. Another eminent origin of the concept is the Luce model of choice [20], which postulates a condition on the probability that an item is recognised as 'the best' within a set of choosable alternatives. Yellott [44] explored the connection between Luce's axiomatic approach and the representation of choice via independent random variables from the Gumbel translation family (equivalent to (1) by passing to \(-\log X_{i}\)). Fishburn [13] studied conditions on random rankings compatible with Luce's axiom. Steele [36] identified all possible limits of empirical distributions constructed from the choice probabilities. Many results related to the finite size-biased orders appeared (sometimes in disguise) in the work on nonuniform permutation models [30], reliability [39], nonstationary models for records [6, 22, 25, 42] and exponential order statistics [10, 21, 22]. See [5, 12, 28] for different aspects and further pointers to the extensive literature. For a countable ground set \(I\) and a summable size function, i.e. satisfying \(\sum_{i\in I}w(i)<\infty\), constructing the order by size-biased sampling works without change. The term 'size-biased permutation' was first applied to this infinite case in a study of ecological diversity [23]. We can indeed speak of a permutation, because eventually every element gets sampled. Another interpretation of \(\triangleleft\) as 'age ordering' derives from sampling _with_ replacement from the discrete probability distribution with masses \(w(i)/\sum_{j\in I}w(j),\ i\in I\). By the i.i.d. sampling, where repetitions are inevitable, the order \(\triangleleft\) corresponds to the random succession of distinct items listed as they appear for the first time in the sample. The induced ordering of \(w(i)\)'s (which need not be strict) is also called size-biased; this offers a useful alternative to the arrangement of the collection of sizes by decrease, especially in the analysis of discrete distributions with random masses such as the Poisson-Dirichlet random measures and their relatives [11, 16, 24]. The infinite exchangeable order appears in the special case of constant size, e.g. \(w(i)\equiv 1\). This and other size-biased orders with nonsummable size functions provide a natural framework to model processes of records. Suppose \(I=\mathbb{Z}_{>0}\), so the items are arranged in a sequence. In the event \(A_{i}:=\{X_{i}=\min(X_{1},\ldots,X_{i})\}\) we speak of a record at index \(i\). The setting (1) of exponential variables is an instance of Nevzorov's model for records, which has a characteristic feature that the \(A_{i}\)'s are independent [6, 22].\({}^{1}\) The summable case has only limited interest for the theory, because the number of records is then finite (see [1], Section 6.3).
In the exchangeable case the infinitude of records follows from \(\mathbb{P}[A_{i}]=1/i\), and the situation is similar for record models with \(w(i)=i^{\alpha}\), \(\alpha\geq-1\), found in Pfeifer [25], as well as for more general regularly varying size functions considered by Steele [36]. Footnote 1: Our records are _lower_ (or minimal). To literally fit in Nevzorov's \(F^{\alpha}\) scheme for _upper_ records we need to consider the negatives \(-X_{i}\). The infinitude of records is an intrinsic property, which does not depend on a particular enumeration of items by \(\mathbb{Z}_{>0}\), and is equivalent to the non-existence of a first element in \(\triangleleft\). The dichotomy begs a deeper question about the type of an infinite size-biased order. For a summable size function the type is \(\mathbb{Z}_{>0}\), while in the exchangeable case it is \(\mathbb{Q}\). What are the other possibilities? Our interest in the question stems from the Arratia, Barbour and Tavare conjecture ([2] p. 38, further cited as ABT), which suggests that \(\triangleleft\) has the type of some suborder in \(\mathbb{Z}\) (which may be \(\mathbb{Z}_{<0},\mathbb{Z}_{>0}\) or \(\mathbb{Z}\)), provided the multiset of sizes \((w(i),i\in I)\) has no accumulation points strictly inside \((0,\infty)\). A related question concerning existence of few _last_ elements in the order was addressed in [7]. The rest of the paper is organised as follows. In Section 2 we review constructions of size-biased orders and their properties. A novelty here is an insertion algorithm that outputs the order for an arbitrary size function which need not be summable. In Section 3 we show that the ABT conjecture fails without additional conditions and give a complete classification into possible types. Section 4 is devoted to examples. ## 2 Constructions and properties ### Distribution Fix countable \(I\) equipped with a size function \(w(i)>0\). We may label \(I\) by \(\mathbb{Z}_{>0}\) where appropriate, but many properties we are interested in will not depend on such labelling. The algebraic approach to size-biased orders relies on the homogeneous rational functions \[p_{n}(x_{1},\ldots,x_{n})=\prod_{k=1}^{n}\frac{x_{k}}{x_{k}+x_{k+1}+\cdots+x_{n}} \tag{2}\] of any arity \(n\geq 1\). The symmetrisation of \(p_{n}\) is constant \(1\). Other useful identities are a cycle reversion formula \[p_{2}(x_{1},x_{2})p_{2}(x_{2},x_{3})\cdots p_{2}(x_{n-1},x_{n})=p_{2}(x_{n},x_{n-1})\cdots p_{2}(x_{3},x_{2})p_{2}(x_{2},x_{1})\] and a transposition identity \[\frac{p_{n+m+2}(a_{1}\ldots,a_{n},x,y,b_{1},\ldots,b_{m})}{p_{n+m+2}(a_{1}\ldots,a_{n},y,x,b_{1},\ldots,b_{m})}=\frac{x+(b_{1}+\cdots+b_{m})}{y+(b_{1}+\cdots+b_{m})}. \tag{3}\] The _chain_ or _ranking_ probabilities are obtained by the specialisation of indeterminates in (2) as \(x_{k}=w(i_{k})\), that is \[\mathbb{P}[i_{1}\triangleleft\cdots\triangleleft i_{n}]=p_{n}(w(i_{1}),\ldots,w(i_{n})) \tag{4}\] for \(\{i_{1},\ldots,i_{n}\}\subset I\). This is concluded from (1) using the memoryless property of the exponential distribution. Permutations of labels yield probabilities of other chains of relations. Formula (4) is sometimes adopted as the definition of the size-biased order on a finite set \(J=\{i_{1},\ldots,i_{n}\}\).
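For a quick numerical check of (4) against the construction (1), the chain probability \(p_{n}\) can be computed directly; the sketch below (ours) compares it with the empirical frequency of the corresponding ranking of independent exponentials.

```python
import numpy as np
from math import prod

rng = np.random.default_rng(0)

def p_n(xs):
    """Chain probability p_n(x_1,...,x_n) = prod_k x_k / (x_k + ... + x_n)."""
    return prod(xs[k] / sum(xs[k:]) for k in range(len(xs)))

w = np.array([3.0, 2.0, 1.0])
X = rng.exponential(scale=1.0 / w, size=(100_000, 3))
empirical = np.mean((X[:, 0] < X[:, 1]) & (X[:, 1] < X[:, 2]))
print(p_n(list(w)), empirical)   # both should be close to 1/3
```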
If \(K\supset J\) is another finite set, then the restriction of the size-biased order from \(K\) to \(J\) has distribution (4), which can be verified from identities like \[p_{2}(x,y)=p_{3}(x,y,z)+p_{3}(x,z,y)+p_{3}(z,x,y).\] In this sense the finite size-biased orders are _consistent_. Intrinsic definition of the order. For an infinite set, (4) assumes the role of a 'finite-dimensional distribution'. By consistency, the existence of a unique random order satisfying (4) follows by Kolmogorov's measure extension theorem. This approach may be regarded as an intrinsic definition of the size-biased order, i.e. not relying on (1) or another explicit construction. Many formulas involving the \(p_{n}\)'s are most elementarily shown from the representation (1). For instance, the recursion \[p_{n+m}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})=\] \[p_{n+1}(x_{1},\ldots,x_{n},y_{1}+\cdots+y_{m})\,p_{m}(y_{1},\ldots,y_{m}) \tag{5}\] follows by recognising in the left-hand side the ranking probability for \(n+m\) (independent) exponentials, and in the right-hand side identifying the first factor with the ranking probability for \(n+1\) exponentials of which the last is the minimum of \(m\) other exponentials. For \(\{i_{1},\ldots,i_{n}\}\cap J=\varnothing,|J|<\infty\) and \(w(J):=\sum_{j\in J}w(j)\), we have by (5) in obvious notation \[\mathbb{P}[i_{1}\triangleleft\cdots\triangleleft i_{n}\triangleleft J]=p_{n+1}(w(i_{1}),\ldots,w(i_{n}),w(J)). \tag{6}\] Further features of the size-biased order obvious from (1) are

* (i) dissociation: restrictions of \(\triangleleft\) to disjoint \(J\) and \(K\) are independent,
* (ii) Luce's property: for \(i\in J\subset K\) (finite sets), the probability that \(i\) is a size-biased pick from \(K\) (i.e. \(\triangleleft\)-first in \(K\)) is equal to the product of the probability that the size-biased pick from \(K\) falls in \(J\) and the probability that a size-biased pick from \(J\) is \(i\).

The term 'dissociation' derives from the eponymous property of the bivariate array of indicators \(1_{\{i\triangleleft j\}}\), which means independence of disjoint principal subarrays (cf [38] p. 339). Proving the property directly from (2) is a nice exercise in algebra, which amounts to showing that \[\sum p_{n+m}(z_{1},z_{2},\ldots,z_{n+m})=p_{n}(x_{1},\ldots,x_{n})p_{m}(y_{1},\ldots,y_{m}),\] where the sum is over all 'shuffles' (as in Stanley [35] p. 482) of \(x_{1},\ldots,x_{n}\) and \(y_{1},\ldots,y_{m}\) with variables from each block appearing in their original succession, for instance \(x_{1},x_{2},y_{1},x_{3},y_{2},y_{3},\ldots,x_{n}\). Luce's theory starts with a function representing the probability of recognising \(i\) as 'best' in context \(J\); then, with the analogue of (ii) postulated, one concludes that the function is realisable as a size-biased pick for some assignment \(w\) of sizes to items. See [36] for a concise derivation. Continuing the above list:

* (iii) for finite chains \(\cdots\triangleleft i\triangleleft j\triangleleft\cdots\) and \(\cdots\triangleleft j\triangleleft i\triangleleft\cdots\) that only differ by a transposition of adjacent terms, the first has higher probability if \(w(i)>w(j)\),
* (iv) for finite \(J\subset I\), the most likely ordering is the one which has the items arranged by decreasing size, and the least likely by increasing.

Property (iv) follows from (iii) by induction, and (iii) is clear from (3). For _random_ subsets of \(I\) the size-biased ordering (permutation) is defined by conditioning.
Let \(\tau_{1},\ldots,\tau_{n}\) be a random sequence from \(I\) with pairwise distinct elements. We say that the sequence is in size-biased order if \[\mathbb{P}[\tau_{1}=i_{1},\ldots,\tau_{n}=i_{n}\,|\,\{\tau_{1},\ldots,\tau_{n}\}=\{i_{1},\ldots,i_{n}\}]=\mathbb{P}[i_{1}\triangleleft\cdots\triangleleft i_{n}].\] The property is preserved by prefix deletion:

* (v) if \(\tau_{1},\ldots,\tau_{n}\) is in size-biased order, then \(\tau_{m},\ldots,\tau_{n}\) is in size-biased order too, for \(1\leq m\leq n\),

as concluded from (5). There is a minor paradox lurking here: typically a prefix of a size-biased sequence is not in size-biased order, so the property fails under _suffix_ deletion. To illustrate this, let items \(i,j,k\) have sizes \(x,y,z\) with \(x\neq y\); then for their size-biased permutation \(\tau_{1},\tau_{2},\tau_{3}\) formula (3) yields \[\frac{\mathbb{P}[\tau_{1}=i,\tau_{2}=j]}{\mathbb{P}[\tau_{1}=j,\tau_{2}=i]}=\frac{x+z}{y+z}\neq\frac{x}{y}.\] This has an explanation in terms of (1): for independent exponential variables \(X_{1},X_{2},X_{3}\), given that \(X_{3}\) is the largest, the probability of \(X_{1}<X_{2}\) depends on the rate of \(X_{3}\). To extend the deletion theme, let \(\tau_{1},\ldots,\tau_{n}\) be a size-biased permutation of \(J\). As follows from consistency, if an item is chosen independently (of \(\tau_{1},\ldots,\tau_{n}\)) according to some distribution on \(J\) and deleted from the permutation, then the resulting sequence is in size-biased order. In the case of uniform choice from \(J\) the deletion is equivalent to removing a term appearing in a uniformly random place of the permutation. Hence deleting \(\tau_{i}\) with uniformly chosen \(i\) yields a size-biased permutation. The latter implies a property observed in [28] (Corollary 6) for certain random size functions.

### Construction from a Poisson scatter

Suppose \(I=\mathbb{Z}_{>0}\). Independent exponential variables can be realised as in the Ballerini-Resnick embedding of records in an extremal process [3]. To that end, consider a unit rate Poisson point process in the domain \(D=[0,\,\sum_{i}w(i))\times[0,\infty)\) of the \((t,x)\)-plane. Split \(D\) in semi-infinite strips by vertical lines at points \(w(1)+\cdots+w(i)\) and let \((T_{i},X_{i})\) be the lowest Poisson atom in the \(i\)th strip. Clearly, \(X_{i}\) is \(w(i)\)-rate exponential, \(T_{i}\) is uniformly distributed between two division points, and all variables \(T_{i},X_{i}\) are jointly independent. This realisation makes obvious many features of \(\triangleleft\), in particular that the type of the order is \(\mathbb{Z}_{>0}\) if and only if \(\sum_{i}w(i)<\infty\), in which case the overall lowest point in \(D\) has the \(t\)-component uniform on \([0,\,\sum_{i}w(i))\), hence coinciding with \(T_{i}\) with probability proportional to \(w(i)\).

### Algorithms

Infinite Lehmer codes. For a sequence of integers \(r_{1},r_{2},\ldots\) where \(r_{i}\) takes values in \(\{1,\ldots,i\}\), there is a unique order on \(\mathbb{Z}_{>0}\) which places \(i\) in position \(r_{i}\) relative to \(\{1,\ldots,i\}\). For instance \(1,2,1,3,\ldots\) means that the order restricted to \(\{1,2,3,4\}\) is the permutation \(3,1,4,2\). The value \(r_{i}=1\) may be interpreted as a record in the sense of extreme-value theory, or as \(i\) being the best item among \(\{1,\ldots,i\}\) in Luce's theory of choice.
We shall call \(r_{i}\) the _relative rank_ and, by analogy with finite permutations, call the whole sequence the _Lehmer code_, since \(i-r_{i}\) measures the discordance between \(i\) and its relative position in another order. Note that \(i-r_{i}+1,i\geq 1\), is the Lehmer code of the order reversal. A random Lehmer code \(R_{1},R_{2},\ldots\) corresponds to a random order on \(\mathbb{Z}_{>0}\). For the exchangeable order, the relative ranks are independent, with \(R_{i}\) uniform over \(\{1,\ldots,i\}\). For the infinite Mallows order [18] with parameter \(q>0\), the relative ranks are independent, with \(i-R_{i}\) having the truncated geometric distribution on \(\{0,1,\ldots,i-1\}\) whose weights are proportional to \(1,q,\ldots,q^{i-1}\).

Uniform shuffles. Fisher and Yates [14] in the introduction to their statistical tables suggested two algorithms to generate a uniform random permutation of \(n\) items (Examples 12 and 12.2 in the 4th edition). Both algorithms start with a source list \(S\) with \(n\) elements that are moved, one at a time, to a target list \(T\) where the items appear shuffled. The list \(T\) is originally empty. The algorithm now known as the Fisher-Yates shuffle iterates the following elemental operation:

(aI) pick uniformly at random an item from list \(S\) and move it to the rear of list \(T\).

The second algorithm iterates another operation:

(aII) remove the front item from list \(S\) and insert it in list \(T\) in a uniformly random position.

Note that when \(T\) has \(i-1\) elements there are \(i\) gaps where the next element from \(S\) can be inserted. Though perhaps less practical computationally, the second method has the advantage that it allows one to simulate the infinite exchangeable random order in terms of the Lehmer code: let \(R_{1},R_{2},\ldots\) be independent, with \(R_{i}\) uniformly distributed on \(\{1,\ldots,i\}\). The variable \(R_{i}\), the relative rank, is the position occupied by \(i\) in the list \(T\) when the item is inserted. Since there are infinitely many records, the list \(T\) will not converge, coordinatewise, to some infinite permutation.

Size-biased shuffles. Suppose now that the list \(S\) is infinite, with items having positive sizes whose total is finite. The size-biased version of (aI) is the familiar size-biased pick whose iterates yield a size-biased permutation. Nevertheless, it is instructive to represent a single size-biased pick as a cycle, where a pointer driven by a Bernoulli process moves through \(S\). A cycle begins with the pointer at the front element of \(S\). Each time the pointer is at an item of size \(x\), let \(t\) be the total of the sizes strictly to the right of the pointer. With probability \(t/(t+x)\) the pointer passes to the next item. Otherwise the current item is placed to the rear of list \(T\), and the pointer returns to the front of the now reduced \(S\) to start the next cycle. Showing that a size-biased pick gets moved relies on \[\frac{t_{1}}{t_{1}+x_{1}}\cdots\frac{t_{k-1}}{t_{k-1}+x_{k-1}}\frac{x_{k}}{t_{k}+x_{k}}=\frac{x_{k}}{t_{0}},\quad\text{where}\ \ t_{i-1}=t_{i}+x_{i},\ i\geq 1.\] The analogue of (aII) is the operation of size-biased insertion, which we introduce as a cycle of moves:

(aII-sb) A cycle begins with the pointer in the leftmost gap of \(T\). Let \(x\) be the size of the front item in \(S\), fixed for the length of the cycle, and let \(t\) be the variable sum of the sizes in \(T\) to the right of the pointer. With probability \(t/(t+x)\) the pointer moves to the next gap in \(T\).
Otherwise the item is removed from \(S\) and placed in the gap with the pointer, and the pointer then returns to the leftmost gap of \(T\) to start the next cycle.

For the \(i\)th cycle, if the front item in \(S\) has size \(x\), the conditional probability of \(R_{i}=k,1\leq k\leq i\), given the sizes in \(T\) are \(x_{1},\ldots,x_{i-1}\), is \[\left(\prod_{\ell=1}^{k-1}\frac{x_{\ell}+\cdots+x_{i-1}}{x_{\ell}+\cdots+x_{i-1}+x}\right)\frac{x}{x_{k}+\cdots+x_{i-1}+x}. \tag{7}\] To explain the insertion rule in terms of (1), we condition on the ranking event \(X_{1}<\cdots<X_{i-1}\). Given that, the spacings can be represented as \[X_{1}=\frac{Y_{1}}{x_{1}+\cdots+x_{i-1}},\quad X_{2}-X_{1}=\frac{Y_{2}}{x_{2}+\cdots+x_{i-1}},\quad\ldots,\quad X_{i-1}-X_{i-2}=\frac{Y_{i-1}}{x_{i-1}},\] where \(x_{k}=w(k)\) and \(Y_{1},Y_{2},\ldots\) are independent standard exponential variables, see [22] p. 19. Now for \(X_{i}\) another independent exponential variable with rate \(x=w(i)\), the variable falls below \(X_{1}\) with (conditional) probability \(x/(x_{1}+\cdots+x_{i-1}+x)\), and given this does not happen falls between \(X_{1}\) and \(X_{2}\) with probability \(x/(x_{2}+\cdots+x_{i-1}+x)\), and so on. The case of any other ranking of \(X_{1},\ldots,X_{i-1}\) is reduced to this by re-labelling. After each insertion the list \(T\) is in size-biased order by virtue of (1). Confirming this algebraically means checking that (7) coincides with the quotient \[\frac{p_{i}(x_{1},\ldots,x_{k-1},x,x_{k+1},\ldots,x_{i-1})}{p_{i-1}(x_{1},\ldots,x_{i-1})},\] which in turn determines the conditional distribution of \(R_{i}\) given \(R_{1},\ldots,R_{i-1}\), hence the law of the infinite size-biased order expressed in terms of the relative ranks. Although the relative ranks are not independent, the events \(A_{i}=\{R_{i}=1\}\) are independent. This property characterises Nevzorov's model for records among a class of order statistics models [6]. By virtue of \(\mathbb{P}[A_{i}]=w(i)/(w(1)+\cdots+w(i))\) the number of records is finite iff \(\sum_{i}w(i)<\infty\), iff \(\triangleleft\) belongs to the type \(\mathbb{Z}_{>0}\).

Time reversal of the Tsetlin library. Consider the size-biased version of the familiar top-to-random shuffle. Let \(S\) be a finite list. In one step, the front element of \(S\) is taken and size-biasedly inserted (i.e. using the iterates of (aII-sb)) back into \(S\). It is intuitively clear that the equilibrium of this Markov chain is the size-biased permutation of \(S\). Indeed, if \(S\) is a size-biased permutation (of the set of elements of \(S\)) then removing the front element leaves the other elements in size-biased order, and the subsequent insertion yields again a size-biased permutation. In fact, the described process is the time reversal of the Tsetlin library, as has been observed by Pitman and Yakubovich [29]. The formula for the transition probability of the time-reversed Markov chain in equilibrium becomes the obvious algebraic identity \[\frac{p_{n+1}(x_{1},\ldots,x_{k},y,x_{k+1},\ldots,x_{n})}{p_{n+1}(y,x_{1},\ldots,x_{n})}\,p_{2}(y,y+x_{1}+\cdots+x_{n})=\frac{p_{n+1}(x_{1},\ldots,x_{k},y,x_{k+1},\ldots,x_{n})}{p_{n}(x_{1},\ldots,x_{n})}.\]
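To make (aII-sb) concrete, here is a short Python sketch (ours) of one insertion cycle: scanning the gaps of \(T\) from the left, the pointer advances past a gap with probability \(t/(t+x)\) and otherwise inserts the new item there, so that the insertion position is distributed according to (7).

```python
import random

def size_biased_insert(T, x, rng=random):
    """Insert an item of size x into the list T of sizes by one (aII-sb)
    cycle; returns the 1-based relative rank of the inserted item."""
    k = 0
    while k < len(T):
        t = sum(T[k:])                     # size mass strictly right of pointer
        if rng.random() < x / (t + x):     # stop: insert into the current gap
            break
        k += 1                             # otherwise move to the next gap
    T.insert(k, x)
    return k + 1

# Iterating over the items of S yields a list T in size-biased order.
```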
## 3 Classification in types

We proceed to the classification of size-biased orders. The (noncommutative) operation \(\nearrow\) will be used to denote the type of an order obtained by putting one set on top of the other (the notation adopted in [33] is \(+\)). For instance, \(\mathbb{Q}\nearrow\mathbb{Z}\) is the order type of \([\mathbb{Q}\cap(0,1)]\cup\{2^{k}+1:k\in\mathbb{Z}\}\). Using the realisation (1), we may relate the type of \(\triangleleft\) to the topological properties of the random set of points \(\{X_{1},X_{2},\ldots\}\). Let \(\mu\) be the mean measure of the set, which assigns to a finite interval \((x,y]\subset(0,\infty)\) the mass \[\mu(x,y)=\sum_{i=1}^{\infty}\mathbb{P}[X_{i}\in(x,y]]=\sum_{i=1}^{\infty}(e^{-w(i)x}-e^{-w(i)y}), \tag{8}\] equal to the expected number of exponential points falling in the interval. By the Borel-Cantelli lemma we have a dichotomy: the number of points in \((x,y]\) is almost surely finite if \(\mu(x,y)<\infty\), and almost surely infinite if \(\mu(x,y)=\infty\). The measure \(\mu\) is absolutely continuous in the sense that it has a finite decreasing density \[\varphi(x):=\sum_{i\in I}w(i)e^{-w(i)x},\] for \(x>\beta\), where \(\beta\in[0,\infty]\) is the convergence abscissa of the series. Note that for a finite interval \((x,y)\subset(0,\infty)\) we have \(\mu(x,y)<\infty\) for \(x>\beta\), while \(\mu(x,y)=\infty\) for \(x<\beta\). In case \(\beta=\infty\) the random set \(\{X_{1},X_{2},\ldots\}\) is dense in \(\mathbb{R}_{\geq 0}\). Moreover, in this case by Tsirelson's universality result [37] it is possible to define the \(X_{j}\)'s on some probability space together with a sequence of i.i.d. standard exponential variables \((Y_{1},Y_{2},\ldots)\) in such a way that the countable sets \(\{X_{1},X_{2},\ldots\}\) and \(\{Y_{1},Y_{2},\ldots\}\) coincide almost surely.

Case \(\sum_{i}w(i)<\infty\). We have seen already that \(\triangleleft\) is of the type \(\mathbb{Z}_{>0}\). This is confirmed by arguing that since \(w(i)\to 0\) and \(e^{-xw(i)}-e^{-yw(i)}\sim w(i)(y-x)\), the series (8) converges everywhere, \(\beta=0\) and \(X_{i}\to\infty\) as \(i\to\infty\).

Case \(w(i)\to 0\) and \(\sum_{i}w(i)=\infty\). The series (8) diverges everywhere, since the generic term is asymptotic to \(w(i)(y-x)\), hence \(\triangleleft\) has the type \(\mathbb{Q}\). This instance disproves the ABT conjecture stated in the Introduction.

Case \((w(i),i\in I)\) has an accumulation point in \((0,\infty)\). The series (8) diverges everywhere, because infinitely many terms \(e^{-xw(i)}-e^{-yw(i)}\) are bounded away from \(0\). Hence \(\beta=\infty\) and \(\{X_{1},X_{2},\ldots\}\) is dense in \(\mathbb{R}_{>0}\). Thus \(\triangleleft\) is a dense order of the type \(\mathbb{Q}\). The picture is less obvious for size functions with \(w(i)\to\infty\), when large values \(X_{i}\) are rare. To treat this class we need a lemma.

**Lemma 1**: _Suppose \(w(i)\to\infty\), then \(\beta\) is given by_ \[\beta=\limsup_{i\to\infty}\frac{\log i}{w(i)}. \tag{9}\] _Thus the series (8) converges for \(x>\beta\) and diverges for \(0\leq x<\beta\) regardless of \(y\in(x,\infty]\) (so including \(y=\infty\))._

_Proof._ Factoring \(e^{-xw(i)}-e^{-yw(i)}=e^{-xw(i)}(1-e^{-(y-x)w(i)})\) we see that \(\mu(x,y)<\infty\) is equivalent to convergence of the Dirichlet series \[\sum_{i=1}^{\infty}e^{-xw(i)}. \tag{10}\] Formula (9) is a specialisation of the general formula for the convergence abscissa of the series (cf. [4] p. 489).
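As a numerical illustration of (9), one can approximate \(\beta\) by the tail maximum of \((\log i)/w(i)\); the sketch below (a crude heuristic, not a proof) reproduces the values behind two of the slow-variation examples given in Section 4.

```python
import numpy as np

def beta_estimate(w, n=10**6):
    """Crude proxy for beta = limsup_i (log i)/w(i): the maximum of the
    ratio over the tail half of the first n indices."""
    i = np.arange(2, n)
    return (np.log(i) / w(i))[n // 2:].max()

print(beta_estimate(lambda i: np.log(i)))          # = 1: type Q above Z_<0
print(beta_estimate(lambda i: np.log(i + 1)**2))   # tends to 0 slowly: type Z_<0
```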
**Case \(w(i)\to\infty\) and \(0<\beta<\infty\).** By the lemma, the closure of \(\{X_{i}\}\cap(0,\beta)\) is \([0,\beta]\). Within \((\beta,\infty)\) the sequence \(\{X_{i}\}\) has some finite random number of points if (10) converges at \(x=\beta\), or has infinitely many points converging to \(\beta\) if (10) diverges at \(x=\beta\). Accordingly, the type of \(\triangleleft\) is either \(\mathbb{Q}\nearrow\mathbb{F}\) with \(\mathbb{F}\) being a finite ordered set of random cardinality, or \(\mathbb{Q}\nearrow\mathbb{Z}_{<0}\). In the former case the type is not pure, since \(\mathbb{F}\) may have any finite cardinality with positive probability.

**Case \(w(i)\to\infty\) and \(\beta=0\).** We have then \(X_{i}\to 0\), hence \(\triangleleft\) has the order type \(\mathbb{Z}_{<0}\). In this case \(\triangleleft\) is representable as a left-sided sequence. The construction by insertion (aII-sb) will produce a list \(T\) converging coordinatewise if its elements are enumerated from rear to front.

**Case \(w(i)\to\infty\) and \(\beta=\infty\).** Then \(\{X_{i}\}\) is dense in \((0,\infty)\), hence the order type of \(\triangleleft\) is \(\mathbb{Q}\).

It remains to consider the combined cases where the collection of items can be split into two infinite sequences, say \((i^{\prime}_{k}:w(i^{\prime}_{k})\leq 1)\) and \((i^{\prime\prime}_{k}:w(i^{\prime\prime}_{k})>1)\), such that \(\sum_{k}w(i^{\prime}_{k})<\infty\) and \(w(i^{\prime\prime}_{k})\to\infty\). For the subseries of (10) taken over \((i^{\prime\prime}_{k})\) the convergence abscissa is \(\beta^{\prime\prime}=\limsup_{k\to\infty}(\log i^{\prime\prime}_{k})/w(i^{\prime\prime}_{k})\), as in Lemma 1. Proceeding with these assumptions, we distinguish the following possibilities.

**Case \(0<\beta^{\prime\prime}<\infty\).** If the subseries converges at \(x=\beta^{\prime\prime}\) the type of \(\triangleleft\) is \(\mathbb{Q}\nearrow\mathbb{Z}_{>0}\), otherwise it is \(\mathbb{Q}\nearrow\mathbb{Z}\).

**Case \(\beta^{\prime\prime}=0\).** Then \(\{X_{i}\}\) has accumulation points \(0\) and \(\infty\), and \(\triangleleft\) is of type \(\mathbb{Z}\).

**Case \(\beta^{\prime\prime}=\infty\).** Then \(\triangleleft\) is of type \(\mathbb{Q}\).

To summarise our findings, we have shown:

**Theorem 1**: _The above classification of order types of \(\triangleleft\) is complete._

Regarding the ABT conjecture we obtain:

**Corollary 1**: _The size-biased order \(\triangleleft\) can be embedded in \(\mathbb{Z}\) if and only if each of the following three conditions holds:_

* \((w(i),i\in I)\) _has no accumulation points other than_ \(0\) _and_ \(\infty\)_;_
* \(\sum_{\{i:w(i)\leq 1\}}w(i)<\infty\)_;_
* _if_ \(\infty\) _is an accumulation point, then_ \(\limsup_{\{i:w(i)>1\}}\frac{\log i}{w(i)}=0\)_._

## 4 Examples

Throughout we assume \(I=\mathbb{Z}_{>0}\). There are some natural choices for \(w\), leading to a variety of order types.

**Karamata-Stirling indicators.** For \(\theta>0\), \[w(i)=\frac{(\theta)_{i-1}}{(i-1)!},\ i\in\mathbb{Z}_{>0}, \tag{11}\] is the only choice of sizes (with \(w(1)=1\)) leading to the record indicators with \[\mathbb{P}[A_{i}]=\frac{\theta}{\theta+i-1},\] see [17]. This profile of success probabilities plays an important role in the study of random partitions and other combinatorial structures [27]. The size-biased order type is \(\mathbb{Q}\).

**Regularly varying size functions.** Steele [36] showed that if the distributions \[F_{n}(t)=\sum_{i\leq nt}\frac{w(i)}{w(1)+\cdots+w(n)},\quad t\in[0,1],\] weakly converge, then the limit must be a beta distribution \(F(t)=t^{\theta}\) for some \(\theta\in[0,\infty]\) (where the limits are \(\delta_{0},\delta_{1}\) in the edge cases). Convergence of \(F_{n}(t_{0})\) at a single point \(t_{0}\in(0,1)\) suffices for this.
The limit appears if \(w\) is regularly varying as \(i\to\infty\) with index \(\theta-1\), and if in addition \(w\) is monotone then \(i\,\mathbb{P}[A_{i}]\to\theta\). Recall that for \(0<\theta<\infty\) the regular variation amounts to the asymptotics \(w(i)\sim i^{\theta-1}L(i),\ i\to\infty\), with some function \(L\) of slow variation. The order \(\triangleleft\) is of the type \(\mathbb{Q}\). Clearly, (11) is a special case.

In the case \(\theta=1\) (slow variation) there is some diversity of types, in particular:

1. For \(w(i)=(\log(i+1))^{1/2}\) we have \(\beta=\infty\), hence the order type is \(\mathbb{Q}\).
2. For \(w(i)=(\log(i+1))^{2}\) we have \(\beta=0\). The order type is \(\mathbb{Z}_{<0}\).
3. For \(w(i)=\log i\) the series (10) is the Riemann zeta function, diverging for \(x\leq 1\). The order has the combined type \(\mathbb{Q}\nearrow\mathbb{Z}_{<0}\).
4. For \(w(i)=\log(i+1)+2\log\log(i+1)\) the series (8) converges for \(x\geq\beta=1\). The order \(\triangleleft\) is isomorphic to \(\mathbb{Q}\nearrow\mathbb{F}\), where \(\mathbb{F}\) is a finite order of random cardinality. This example was observed in [5] as a situation where (in our terms) the order is not of the form \(\cdots\nearrow\mathbb{Z}_{<0}\), that is, the reversal of the insertion list \(T\) does not converge.

If \(w\) varies regularly with index \(\theta-1<-1\), the size function is summable, hence the order type is \(\mathbb{Z}_{>0}\).

**Geometric size function.** Consider the size-biased order \(\triangleleft\) with the geometric size function \(w(i)=q^{i}\), \(q>0\). The case \(q>1\) was studied by Yang [42] as a model for records in an exponentially growing population, proposed to explain the pattern of Olympic records, which get broken more often than the i.i.d. theory predicts. The size-biased order shares some features with the Mallows order. Recall that the Mallows order has independent relative ranks, with \(i-R_{i}\) truncated geometric. The common features are:

(A) the restrictions of \(\triangleleft\) to \(\{1,\ldots,i\}\) and \(\{i+1,i+2,\ldots\}\) are independent;

(B) the restriction of \(\triangleleft\) to \(\{i+1,i+2,\ldots\}\) under the shift \(i+k\mapsto k\) is pushed forward to a distributional copy of \(\triangleleft\);

(C) the order type is \(\mathbb{Z}_{>0}\) for \(q<1\), and \(\mathbb{Z}_{<0}\) for \(q>1\).

These deserve some comments. Property (A) for the size-biased order is an instance of dissociation, which does not hold for the Mallows permutation (as one checks from the restrictions to \(\{1,3\}\) and \(\{2,4\}\)). Property (B) follows from homogeneity identities like \(p_{n}(q^{i+1},\ldots,q^{i+n})=p_{n}(1,q,\ldots,q^{n-1})\). Note also that if \(\tau_{1},\ldots,\tau_{n}\) is a size-biased permutation of \(\{1,\ldots,n\}\) with parameter \(q\) then \(n+1-\tau_{1},\ldots,n+1-\tau_{n}\) has the same distribution as the size-biased permutation with parameter \(q^{-1}\); the same property is valid for the Mallows permutation.

Suppose \(0<q<1\). If \(j\triangleleft i\) for \(i<j\) we speak of an inversion. Denote by \(D_{n}\) the number of inversions within \(\{1,\ldots,n\}\); this is 'Kendall's tau' measuring the distance of \(\triangleleft\) from the standard order. In terms of the Lehmer code, \(D_{n}=\sum_{i=1}^{n}(i-R_{i})\).
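Before deriving the asymptotics of \(D_{n}\), here is a short simulation sketch (ours, via the exponential representation (1)) that estimates \(\mathbb{E}[D_{n}]\) and compares it with the exact pairwise sum and the linear approximation \(c_{q}n\) obtained just below:

```python
import numpy as np

def mean_inversions_mc(n, q, trials=2000, seed=1):
    """Monte Carlo estimate of E[D_n] for w(i) = q**i using the exponential
    representation: X_i ~ Exp(rate q**i) and i <| j iff X_i < X_j, so an
    inversion is a pair i < j with X_i > X_j."""
    rng = np.random.default_rng(seed)
    rates = q ** np.arange(1, n + 1)
    total = 0
    for _ in range(trials):
        x = rng.exponential(1.0 / rates)
        # count pairs i < j (strict upper triangle) with x_i > x_j
        total += int(np.triu(x[:, None] > x[None, :], k=1).sum())
    return total / trials

n, q = 30, 0.6
mc = mean_inversions_mc(n, q)
exact = sum(q ** j / (q ** j + q ** i)
            for i in range(1, n + 1) for j in range(i + 1, n + 1))
c_q = sum(1.0 / (1.0 + q ** (-k)) for k in range(1, 200))
print(f"simulated {mc:.2f}, exact {exact:.2f}, c_q * n = {c_q * n:.2f}")
```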
For items \(i<j\) an inversion occurs with probability \(q^{j}/(q^{j}+q^{i})\); thus for the expected number of inversions we obtain \[\mathbb{E}[D_{n}]=\sum_{(i,j):1\leq i<j\leq n}\frac{q^{j}}{q^{j}+q^{i}}=\sum_{k=1}^{n-1}\frac{n-k}{1+q^{-k}},\] and for \(n\to\infty\) the asymptotics becomes \[\mathbb{E}[D_{n}]\sim c_{q}\,n,\quad\mbox{with }c_{q}=\sum_{k=1}^{\infty}\frac{1}{1+q^{-k}}. \tag{12}\] Moreover, we even have the asymptotics for \(D_{n}\) in a strong sense: \[D_{n}\sim c_{q}n\quad\mbox{a.s.}\] Indeed, for two disjoint integer intervals \([a,b]\) and \([c,d]\) with \(b-a=d-c\), the numbers of inversions of the restrictions of \(\triangleleft\) are independent and have the same distributions as \(D_{b-a}\) and \(D_{d-c}\), respectively, by properties (A) and (B). On the other hand, the number of inversions for the union of two disjoint sets is not smaller than the sum of the inversion counts for each of the sets. Taking into account the already established asymptotics of the mean (12), the result now follows by applying the i.i.d. version of Kingman's subadditive ergodic theorem (see, e.g., [32], Theorem A.3). To compare, the number of inversions on \(\{1,\ldots,n\}\) for the Mallows permutation is asymptotic to \((q^{-1}-1)^{-1}n\), as follows from the distribution of the relative ranks.

**Acknowledgement.** I am indebted to Jim Pitman for useful comments and for drawing my attention to the paper [29].
2309.14487
Probing the limits of the Talbot-Plateau law
The Talbot-Plateau law specifies what combinations of flash frequency, duration, and intensity will yield a flicker-fused stimulus that matches the brightness of a steady stimulus. It has proven to be remarkably robust in its predictions, and here we provide additional support through the use of a contrast discrimination task. However, we also find that the visual system can register contrast when the combination of frequency and duration is relatively low. Flicker-fused letters are recognized even though they have the same physical luminance as the background. We propose that this anomalous result is produced by large disparities in the duration of the bright and dark components of the flash cycle, which bring about unexpected differential activation of ON and OFF retinal channels.
Ernest Greene, Jack Morrison
2023-09-25T19:30:49Z
http://arxiv.org/abs/2309.14487v1
## Probing the Limits of the Talbot-Plateau Law

## Abstract

The Talbot-Plateau law specifies what combinations of flash frequency, duration, and intensity will yield a flicker-fused stimulus that matches the brightness of a steady stimulus. It has proven to be remarkably robust in its predictions, and here we provide additional support through the use of a contrast discrimination task. However, we also find that the visual system can register contrast when the combination of frequency and duration is relatively low. Flicker-fused letters are recognized even though they have the same physical luminance as the background. We propose that this anomalous result is produced by large disparities in the duration of the bright and dark components of the flash cycle, which bring about unexpected differential activation of ON and OFF retinal channels.

Talbot-Plateau flicker-fusion luminance contrast retinal oscillations

## Significance Statement

Oscillations of retinal ON and OFF circuits appear to have a role in the encoding of stimulus attributes. The ability to use frequency and the timing of flash components to modify perception should prove useful for discovering the function of these oscillations.

## Introduction

_[W]e have only to take a white circle, with one of its sectors painted black, and make it revolve rapidly. It will appear, as everyone knows, of a uniform gray tint... In every point of the circle the white and black parts meet the eye during the same proportion of time, and therefore the tint is uniform throughout. H.F. Talbot (1834, pg. 329)_

The goal of the research reported here is to press the limits of the Talbot-Plateau law, which concerns the perceived brightness of flash sequences. The Talbot-Plateau law specifies when a flash sequence will deliver light energy that makes it appear equal in brightness to a steady stimulus source. The law requires that the frequency of the flash sequence be high enough to produce
2309.04694
Redundancy-Free Self-Supervised Relational Learning for Graph Clustering
Graph clustering, which learns the node representations for effective cluster assignments, is a fundamental yet challenging task in data analysis and has received considerable attention accompanied by graph neural networks in recent years. However, most existing methods overlook the inherent relational information among the non-independent and non-identically distributed nodes in a graph. Due to the lack of exploration of relational attributes, the semantic information of the graph-structured data fails to be fully exploited which leads to poor clustering performance. In this paper, we propose a novel self-supervised deep graph clustering method named Relational Redundancy-Free Graph Clustering (R$^2$FGC) to tackle the problem. It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder. To obtain effective representations of the semantic information, we preserve the consistent relation among augmented nodes, whereas the redundant relation is further reduced for learning discriminative embeddings. In addition, a simple yet valid strategy is utilized to alleviate the over-smoothing issue. Extensive experiments are performed on widely used benchmark datasets to validate the superiority of our R$^2$FGC over state-of-the-art baselines. Our codes are available at https://github.com/yisiyu95/R2FGC.
Si-Yu Yi, Wei Ju, Yifang Qin, Xiao Luo, Luchen Liu, Yong-Dao Zhou, Ming Zhang
2023-09-09T06:18:50Z
http://arxiv.org/abs/2309.04694v1
# Redundancy-Free Self-Supervised Relational Learning for Graph Clustering

###### Abstract

Graph clustering, which learns the node representations for effective cluster assignments, is a fundamental yet challenging task in data analysis and has received considerable attention accompanied by graph neural networks in recent years. However, most existing methods overlook the inherent relational information among the non-independent and non-identically distributed nodes in a graph. Due to the lack of exploration of relational attributes, the semantic information of the graph-structured data fails to be fully exploited, which leads to poor clustering performance. In this paper, we propose a novel self-supervised deep graph clustering method named Relational Redundancy-Free Graph Clustering (R\({}^{2}\)FGC) to tackle the problem. It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder. To obtain effective representations of the semantic information, we preserve the consistent relation among augmented nodes, whereas the redundant relation is further reduced for learning discriminative embeddings. In addition, a simple yet valid strategy is utilized to alleviate the over-smoothing issue. Extensive experiments are performed on widely used benchmark datasets to validate the superiority of our R\({}^{2}\)FGC over state-of-the-art baselines. Our codes are available at [https://github.com/yisiyu95/R2FGC](https://github.com/yisiyu95/R2FGC).

Deep Clustering, Graph Representation Learning, Relation Preservation, Redundancy Reduction.

## I Introduction

Clustering, as one of the most classical and fundamental components in the machine learning and data mining communities, has attracted significant attention. It serves as a critical preprocessing step in a variety of real-world applications such as community detection [1], anomaly detection [2], domain adaptation [3], and representation learning [4, 5, 6]. The underlying idea of clustering is to assign the samples to different groups such that similar samples are pulled into the same cluster while dissimilar samples are pushed into different clusters. Hence, clustering intuitively reflects the characteristics of the whole dataset, which can provide a priori information for various downstream domains, including computer vision and natural language processing. Among the many challenges therein, how to effectively partition the whole dataset into different clusters while preserving its intrinsic distribution information remains a fundamental open problem. To achieve this goal, a large number of advanced approaches have been developed over the past decades [7, 8]. Traditional clustering methods such as subspace clustering [8] and spectral clustering [7] aim at projecting the data samples into a low-dimensional space, coupled with additional constraint information, so that the samples in the latent space can be clearly separated. However, the two-stage training paradigm of the traditional methods is typically sub-optimal, since representation learning and clustering are dependent on each other and should be jointly optimized. Moreover, traditional algorithms have limited model capacity, which unavoidably limits their applicability and potential.
Recently, benefiting from the strong representation capability of deep learning, massive deep clustering algorithms have been proposed and show great potential and advantages over traditional approaches [9, 10, 11, 12, 13]. The core idea of deep clustering is to group the data samples into different clusters through deep neural networks in an end-to-end fashion. In this way, clustering and representation learning are jointly optimized to learn clustering-friendly representations without manual feature extraction. For example, CC [9] jointly learned effective representations and cluster assignments by leveraging the power of instance- and cluster-level contrastive learning in an end-to-end manner. With the prevalence of graph-structured data, Graph Neural Networks (GNNs) have been extensively studied and achieved remarkable progress for many promising graph-related tasks and applications [14, 15, 16]. One fundamental problem therein is graph clustering, which divides the nodes in a graph into different clusters. GNNs can be well utilized for enhancing graph clustering performance to learn effective cluster assignments [17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Recently, there has been an increasing body of work on graph clustering. For example, SDCN [18] first incorporated the topological structure knowledge into deep clustering, accompanied by an autoencoder (AE) [27] and a GNN. To better combine node attributes and structure information, DFCN [21] improved the graph autoencoder (GAE) [28] and developed a fusion mechanism to dynamically integrate both sides for robust target distribution generation. Based on AE, AGCC [24] incorporated the attention mechanism to fuse the learned node representations and leveraged a self-supervised mechanism to guide the clustering optimization procedure.

Despite the promising achievements of previous methods, a vast majority of existing graph clustering approaches still suffer from two key limitations: (i) **Neglect of the exploration of relational information.** Most existing GNN-based methods only use message passing to aggregate neighboring information of the nodes in a graph. The high-order attributive and structural relationships of the non-IID graph-structured data are not well exploited, so that the underlying distribution information cannot be well revealed for meaningful representations; (ii) **Failure to reduce redundant information.** Many clustering methods mainly focus on exploring graph information from multiple perspectives, unavoidably incorporating much redundant information into the learned representations, while redundancy reduction is not taken into account, which prevents obtaining discriminative representations and excellent clustering performance. As such, it is highly promising to develop an approach that can fully explore the intrinsic relational information among nodes and decrease the redundant information for effective cluster assignments. Towards this end, this paper proposes a novel deep clustering method called Relational Redundancy-Free Graph Clustering (R\({}^{2}\)FGC). The key idea of R\({}^{2}\)FGC is to exploit attribute- and structure-level relational information among the nodes from both global and local views in a redundancy-free manner. To achieve the goal effectively, R\({}^{2}\)FGC first learns compact representations from an AE and a GAE to explore the attributive and structural information from complementary perspectives.
Then, the relational information is extracted based on the learned representations from global and local views. Moreover, to fully benefit from the extracted relations, we preserve the consistent relationship such that the relational information for the same node is invariant to augmentations, whereas the correlations of the relational distribution for different nodes are reduced for learning discriminative representations. Further, R\({}^{2}\)FGC combines the redundancy-free relational learning from both attribute and structure levels with an augmentation-based fusion mechanism to optimize the embedded representations in a self-supervised fashion. Comprehensive experiments are conducted to show that the proposed method can greatly improve the clustering performance compared with the existing state-of-the-art approaches over multiple benchmark datasets. To summarize, the main contributions of our work are as follows:

* **General Aspects:** This paper studies the inherent relational learning for non-IID graph-structured data and explores redundancy-free representations based on relational information for the graph clustering task.
* **Novel Methodologies:** We propose a novel approach to exploit attribute- and structure-level relational information among the nodes, which aims to extract augmentation-invariant relationships for the same node and decrease the redundant correlations between different nodes. Our R\({}^{2}\)FGC is beneficial for obtaining effective and discriminative representations for clustering.
* **Multifaceted Experiments:** We perform extensive experiments on various commonly used datasets to demonstrate the effectiveness of the proposed approach.

## II Related Work

### _Graph Neural Networks_

Recent years have witnessed great progress in Graph Neural Networks (GNNs), which have achieved state-of-the-art performance. The concept of GNNs was proposed [29] before 2010 and has since become an ever-growing research theme. A general paradigm of GNNs is to iteratively update node representations by aggregating neighboring information based on message passing [30]. The representative Graph Convolutional Network (GCN) [31] extended classical convolutional neural networks to the case of graph-structured data. The subsequent Graph Attention Network (GAT) [32] further leveraged the attention mechanism [33] to dynamically aggregate the features of neighbors. With the powerful capability of GNNs, the learned graph representations can be used to serve a variety of downstream tasks, such as node classification [34, 31], graph classification [35, 36, 37], and graph clustering [38, 18].

### _Deep Clustering_

Deep clustering focuses on utilizing the excellent representation ability of deep learning to serve the clustering process, and it has achieved remarkable progress. Existing methods can be categorized into three main groups based on their training objectives: (i) reconstruction-based methods, (ii) self-augmentation-based methods, and (iii) spectral-clustering-based methods. The first group uses the AE to reconstruct the original input, which incorporates desired constraints on the feature embeddings in the latent space. For instance, DEC [39] iteratively conducted the process of representation learning and clustering assignment via minimizing the Kullback-Leibler divergence. To preserve important data structure, IDEC [40] introduced the AE to improve the clustering so that the local structure of the data-generating distribution can be maintained.
The second group aims to encourage the consistency between original samples and their augmented samples by optimizing the networks. For example, IIC [41] sought to achieve the consistency of assignment probabilities by maximizing the mutual information of paired samples. The third group aims at constructing a robust affinity matrix for effective data partitioning. For instance, RCFE [42] utilized the idea of rank constraints and clustered data points in a low-dimensional subspace. Li et al. [43] utilized multiple features to construct affinity graphs for spectral clustering. Benefiting from the breakthroughs of GNNs on graph-structured data, GNNs are capable of organically integrating node attributes and graph structures in a united way and have emerged as a promising direction for graph clustering. The basic idea is to group the nodes in the graph into several disjoint clusters. Similar to the deep clustering methods, a vast majority of existing graph clustering approaches [18, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] also continue the paradigm of AE, in which the GAE and the variational GAE (VGAE) are leveraged to operate on graph-structured data. For example, to dynamically learn the importance of the neighboring nodes to the center node, DAEGC [45] employed the GAE to capture a compact representation by encoding the graph structures and node attributes. EGAE [23] learned explainable representations based on the GAE that can also be used for various tasks. Compared with previous methods, our work further explores graph clustering by simultaneously preserving the relational similarity and reducing the redundancy of the learned representations based on both AE and GAE.

### _Self-supervised Learning_

Recently, self-supervised learning (SSL) has been revitalized and has achieved superior performance across numerous domains. This technique is completely free of the need for explicit labels [48], due to its powerful capability of learning effective representations from unlabeled data. The core procedure of SSL is to first design a domain-specific pretext task and train the networks on the pretext task, such that the learned representations are more discriminative and applicable. Recently, many SSL approaches have been proposed to marry the power of SSL and deep learning [49, 50, 51, 52] and have shown competitive performance in various downstream applications [53, 54, 55, 56, 57]. For example, SimCLR [49] employed multiple data augmentations and a learnable nonlinear transformation to train an encoder, such that the model can pull the feature representations of the same samples together. To alleviate the issue of the large batch size of SimCLR, MoCo [50] introduced a moving-averaged encoder to set up a dynamic dictionary for SSL. Furthermore, our proposed R\({}^{2}\)FGC inherits the advantages of SSL to preserve the consistent relation and reduce the redundant information among nodes from global and local views for graph clustering.

## III Notations and problem definition

In this section, we first briefly give the basic notations and formal terminologies in a graph. Then we introduce the concept of the Graph Convolutional Network (GCN) and the problem formalization of graph clustering.
**Notations.** Let \(\mathcal{G}=(V,E,\mathbf{X})\) denote an arbitrary undirected graph, where \(V=\{v_{1},\ldots,v_{n}\}\) is the vertex set with \(n\) nodes, \(E\) is the edge set, \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{\top}\in\mathbb{R}^{n\times d}\) is the node attribute matrix with \(\mathbf{x}_{i}\) corresponding to node \(i\) for \(i=1,\ldots,n\), and \(d\) is the dimensionality of the node attributes. Let \(\mathbf{A}=(a_{ij})\in\mathbb{R}^{n\times n}\) denote the adjacency matrix, which is generated according to the adjacency relationships in \(E\): \(a_{ij}=1\) if \((v_{i},v_{j})\in E\), i.e., there is an edge from node \(v_{i}\) to node \(v_{j}\), and \(a_{ij}=0\) otherwise. The adjacency matrix can be normalized as \(\mathbf{S}=\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\), where \(\tilde{\mathbf{A}}=(\tilde{a}_{ij})=\mathbf{A}+\mathbf{I}\), \(\mathbf{I}\in\mathbb{R}^{n\times n}\) is the identity matrix for adding self-connections, and \(\tilde{\mathbf{D}}=\text{diag}(\tilde{d}_{1},\ldots,\tilde{d}_{n})\) is the corresponding degree matrix with \(\tilde{d}_{i}=\sum_{j=1}^{n}\tilde{a}_{ij}\).

**Graph Convolutional Network.** GCN generalizes classical Convolutional Neural Networks to the case of graph-structured data. It utilizes the graph directly and learns new representations by aggregating the information of a node and its neighbors. In general, a layer of GCN has the form \[\mathbf{H}^{(l+1)}=\sigma(\mathbf{S}\mathbf{H}^{(l)}\mathbf{W}^{(l)}),\] where \(\mathbf{H}^{(0)}\) is the input data, \(\sigma(\cdot)\) is an activation function, such as Tanh or ReLU, and \(\mathbf{H}^{(l)}\) and \(\mathbf{W}^{(l)}\) are the learned embedded representation and the trainable weight matrix in the \(l\)-th (\(l>0\)) layer, respectively.

**Graph Clustering.** Given an unlabeled graph with \(n\) nodes, the target of the graph clustering task is to divide these unlabeled nodes into \(K\) disjoint clusters \(\{C_{1},\ldots,C_{K}\}\) based on a well-learned embedding matrix \(\tilde{\mathbf{Z}}\in\mathbb{R}^{n\times d^{\prime}}\), where \(d^{\prime}\) is the dimension of the latent embeddings. The nodes in the same cluster are highly similar and cohesive, while the nodes in different clusters are discriminative and separable.

## IV The proposed method

In this section, we introduce our proposed method, named Relational Redundancy-Free Graph Clustering (R\({}^{2}\)FGC). R\({}^{2}\)FGC mainly contains four parts, i.e., an attribute- and structure-level representation learning module, a relation preservation and de-redundancy module, an augmentation-based representation fusion module, and a joint optimization module for graph clustering. Figure 1 shows the framework overview of the proposed R\({}^{2}\)FGC. In the following, we present the four components and the complexity analysis for R\({}^{2}\)FGC.

### _Attribute- and Structure-level Learning Module_

AE can reasonably explore the node attribute information, whereas the GAE can effectively capture the topological structure information. To gain a more comprehensive embedding and better performance on downstream tasks, we consider both AE and GAE to reconstruct the input and learn a fused representation. The AE module feeds the attribute information into multi-layer perceptrons and extracts the latent representations by minimizing the reconstruction loss between the input raw data and the reconstructed data.
The corresponding optimization objective is formalized as \[\min\ L_{AE}=\frac{1}{n}||\mathbf{X}-\hat{\mathbf{X}}_{AE}||_{F}^{2},\quad s.t.\ \ \mathbf{Z}_{AE}=\phi_{e}(\mathbf{X}),\ \ \hat{\mathbf{X}}_{AE}=\phi_{d}(\mathbf{Z}_{AE}), \tag{1}\] where \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is the input attribute matrix, \(\hat{\mathbf{X}}_{AE}\in\mathbb{R}^{n\times d}\) is the reconstructed data, \(||\cdot||_{F}\) is the Frobenius norm, \(\mathbf{Z}_{AE}\in\mathbb{R}^{n\times d^{\prime}}\) is the learned latent representation in AE, and \(\phi_{e}\) and \(\phi_{d}\) are the encoder and decoder networks, respectively.

In the GAE module, following the improved version in [21], a multi-layer GCN is adopted to reconstruct the adjacency matrix and the attribute information. The corresponding reconstruction loss is formalized as \[\min\ L_{GAE}=\frac{\alpha}{n}||\mathbf{S}-\hat{\mathbf{S}}||_{F}^{2}+\frac{1}{n}||\mathbf{S}\mathbf{X}-\hat{\mathbf{X}}_{GAE}||_{F}^{2},\quad s.t.\ \ \mathbf{H}_{e}^{(l+1)}=\sigma(\mathbf{S}\mathbf{H}_{e}^{(l)}\mathbf{W}_{e}^{(l)}),\ \ \mathbf{H}_{d}^{(l+1)}=\sigma(\mathbf{S}\mathbf{H}_{d}^{(l)}\mathbf{W}_{d}^{(l)}),\ \ \mathbf{H}_{e}^{(0)}=\mathbf{X}, \tag{2}\] where \(\alpha\) is a pre-defined hyper-parameter, \(\mathbf{S}\) is the normalized adjacency matrix, \(\hat{\mathbf{S}}\) is the reconstructed adjacency matrix produced by fusing the respective inner products of the learned latent representation \(\mathbf{Z}_{GAE}\in\mathbb{R}^{n\times d^{\prime}}\) resulting from the graph encoder and the attribute representations \(\hat{\mathbf{X}}_{GAE}\) (i.e., the reconstructed weighted attribute matrix) resulting from the graph decoder, and \(\mathbf{W}_{e}^{(l)}\) and \(\mathbf{W}_{d}^{(l)}\) are the layer-specific trainable weight matrices in the \(l\)-th graph encoder and decoder layers, respectively. The detailed fusion mechanism is discussed in the following Section IV-C, which unites the embedded representations from both AE and GAE to promote latent representation learning in the graph augmentation fashion.
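For concreteness, here is a minimal PyTorch sketch of the two reconstruction objectives (1) and (2); the function and tensor names are ours, and the encoder/decoder internals are abstracted away:

```python
import torch

def ae_loss(X, X_hat):
    # L_AE = (1/n) * ||X - X_hat||_F^2, cf. (1)
    n = X.shape[0]
    return ((X - X_hat) ** 2).sum() / n

def gae_loss(S, S_hat, X, X_hat_gae, alpha=0.1):
    # L_GAE = (alpha/n) * ||S - S_hat||_F^2
    #       + (1/n) * ||S X - X_hat_gae||_F^2, cf. (2)
    n = X.shape[0]
    adj_term = alpha * ((S - S_hat) ** 2).sum() / n
    attr_term = ((S @ X - X_hat_gae) ** 2).sum() / n
    return adj_term + attr_term
```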
### _Relation Preservation and De-redundancy Module_

In this module, we learn the inherent relational information among the nodes based on augmentations of a given graph. One of the basic ideas is to preserve the similarity of the relational information from two augmented views, while the latent representation of the same node can vary after graph augmentation. Hence, we aim to increase the consistency of the relational information in each positive pair. This allows fine-grained mining of the node relationships. On the other hand, it is necessary to improve the discriminative capability of the resulting representations for graph clustering, thus we also decrease the correlation of the relational information in each negative pair. In the following, we first introduce the adopted graph augmentation strategies and relation extraction methods. Then, we describe the details of the subsequent relation preservation and relation de-redundancy.

Based on the given graph, we first construct two different graph views through augmentations, including

* **Attribute perturbation.** For each value in the attribute matrix, we disturb it by multiplying it by a Gaussian random number with a small variance. This strategy performs a slight disturbance on the node features, which does not essentially change the semantic information.
* **Edge deletion.** We remove some edges based on the node similarity obtained from the pre-learned latent embeddings. For each node, the edges that connect the nodes with low similarity are dropped in a certain proportion. Compared with random deletion, more semantic information can be preserved by referring to the node similarity.
* **Graph diffusion.** We transform the adjacency matrix to a diffusion matrix by leveraging graph diffusion [58], which contributes to providing additional local information. Technically, given the transition matrix \(\mathbf{T}\), the graph diffusion matrix \(\mathbf{U}\) is formulated as \[\mathbf{U}=\sum_{j=0}^{\infty}\theta_{j}\mathbf{T}^{j},\] where \(\theta_{j}\) is the weight coefficient. We adopt the personalized PageRank [59] to characterize graph diffusion, which is a special case. Specifically, \(\mathbf{T}\) is chosen as the normalized adjacency matrix \(\mathbf{S}\) and \(\theta_{j}=\eta(1-\eta)^{j}\) with teleport probability \(\eta\in(0,1)\). Then, the resulting diffusion matrix \(\mathbf{U}\) has the form \[\mathbf{U}=\eta(\mathbf{I}-(1-\eta)\mathbf{S})^{-1}. \tag{3}\]

After obtaining two augmented graph views \(\mathcal{G}^{1}=\{\mathbf{X}^{1},\mathbf{S}^{1}\}\) and \(\mathcal{G}^{2}=\{\mathbf{X}^{2},\mathbf{S}^{2}\}\), we perform AE and GAE on \(\mathbf{X}^{1}\) and \(\mathbf{X}^{2}\), which generates the attribute-level latent representations \(\mathbf{Z}^{1}_{AE},\mathbf{Z}^{2}_{AE}\) and the structure-level latent representations \(\mathbf{Z}^{1}_{GAE},\mathbf{Z}^{2}_{GAE}\). To meticulously characterize the relational information, we explore the similarities of each node to some anchor nodes from both global and local perspectives based on these representations.

Fig. 1: Framework overview of the proposed method R\({}^{2}\)FGC. Relational learning and representation fusion are performed to jointly guide the self-supervised graph clustering based on the latent representations from the encoders of AE and GAE. The relation preservation and de-redundancy contribute to exploring the inherent node relationships and filtering redundant relations to learn effective and discriminative representations.

**Extraction of Global Anchors.** For capturing the global relationship of a query node \(v_{i}\in V\), the target is to sample diverse anchors from the whole set of graph nodes. Due to the neighborhood aggregation mechanism in GNNs, we argue that the high-degree nodes may receive more information when passing messages, while the low-degree nodes would receive less information. This may result in poor representations for the nodes with low degrees. Hence, we perform non-uniform sampling on the nodes to balance the qualities of the representations for low- and high-degree nodes. Specifically, we adopt an inverse degree-weighted distribution for sampling anchors, which puts a larger sampling probability on a lower-degree node. The sampling weight and probability for each node, respectively, are as follows, \[w_{i}=\beta^{\log(d_{i}+1)},\qquad p_{i}=\frac{w_{i}}{\sum_{v_{j}\in V}w_{j}},\quad\text{for any }v_{i}\in V,\] where \(\beta\in(0,1)\) is a hyper-parameter to control the skewness of the distribution, and \(d_{i}\) is the degree of node \(v_{i}\).
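The following NumPy sketch (ours; dense matrices are used for readability) assembles the ingredients introduced so far: the normalized adjacency \(\mathbf{S}\), the PPR diffusion matrix \(\mathbf{U}\) from (3), and the inverse degree-weighted anchor distribution; the default \(\eta=0.2\) and \(\beta=0.8\) follow the settings reported later in Section V-A:

```python
import numpy as np

def normalized_adjacency(A):
    """S = D^{-1/2} (A + I) D^{-1/2}, with D the degree matrix of A + I."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def ppr_diffusion(S, eta=0.2):
    """U = eta * (I - (1 - eta) * S)^{-1}, cf. (3)."""
    n = S.shape[0]
    return eta * np.linalg.inv(np.eye(n) - (1.0 - eta) * S)

def anchor_distribution(A, beta=0.8):
    """Inverse degree-weighted probabilities:
    w_i = beta ** log(d_i + 1), p_i = w_i / sum_j w_j."""
    d = A.sum(axis=1)
    w = beta ** np.log(d + 1.0)
    return w / w.sum()
```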
Moreover, quasi-Monte Carlo (QMC) sampling methods can usually achieve a higher convergence rate than Monte Carlo (MC) methods [60]. Hence, based on the defined discrete distribution, we perform multinomial sampling in the QMC fashion [61, 62]. Instead of uniform random numbers (the MC fashion), we leverage the randomized one-dimensional low-discrepancy point set \(\{(2i-1)/(2M_{1})+\omega\mod 1\in[0,1]:\omega\sim U(0,1),i=1,\ldots,M_{1}\}\) to do multinomial sampling on the discrete distribution in each training epoch. Randomization is used to avoid drawing the same sample in different epochs and to increase the randomness for extracting more diverse anchors. For each node \(v_{i}\in V\), we denote the index set of the sampled anchors from the global view as \(A_{i}^{g}\), with \(|A_{i}^{g}|=M_{1}\).

**Extraction of Local Anchors.** To fully explore the relational information, besides the global anchor sampling, we also concentrate on the local relational information. Graph diffusion removes the restriction of using only the direct neighbors and alleviates the problem of noisy and often arbitrarily defined edges. As a result, the diffusion matrix \(\mathbf{U}\) in (3) can acquire richer structural information in the local view compared with traditional GNNs. Hence, we leverage graph diffusion to generate the local anchors according to the scores in \(\mathbf{U}\). Specifically, the values in the \(i\)-th row of \(\mathbf{U}\) reflect the influence between node \(v_{i}\) and all the other nodes. We select the nodes with the \(M_{2}\) largest scores in the \(i\)-th row of \(\mathbf{U}\) as the local anchors of node \(v_{i}\). This ensures that the local anchors of \(v_{i}\) share similar semantic information with \(v_{i}\), which allows us to extract more effective local relations. For each node \(v_{i}\in V\), we denote the index set of the local anchors as \(A_{i}^{l}\), with \(|A_{i}^{l}|=M_{2}\).

Based on these global- and local-view anchor sets \(A_{i}^{g},A_{i}^{l},i=1,\ldots,n\), we extract the relational information of the nodes in the sense of similarity. We use the AE latent representations \(\mathbf{Z}_{AE}^{1}=(\mathbf{z}_{AE,1}^{1},\ldots,\mathbf{z}_{AE,n}^{1})^{\top},\mathbf{Z}_{AE}^{2}=(\mathbf{z}_{AE,1}^{2},\ldots,\mathbf{z}_{AE,n}^{2})^{\top}\) to illustrate the detailed process. Specifically, given a query node \(v_{i}\in V\), we calculate the similarities between the embedded representation of \(v_{i}\) in \(\mathbf{Z}_{AE}^{1}\) and the embeddings of these anchors in \(\mathbf{Z}_{AE}^{2}\) by \[r_{g}^{1}(i,k_{g})=(\mathbf{z}_{AE,i}^{1})^{\top}\mathbf{z}_{AE,k_{g}}^{2},\ k_{g}\in A_{i}^{g},\qquad r_{l}^{1}(i,k_{l})=(\mathbf{z}_{AE,i}^{1})^{\top}\mathbf{z}_{AE,k_{l}}^{2},\ k_{l}\in A_{i}^{l}.\] Similarly, we also calculate the similarities between the embedding of \(v_{i}\) in \(\mathbf{Z}_{AE}^{2}\) and those of the anchors in \(\mathbf{Z}_{AE}^{2}\) by \[r_{g}^{2}(i,k_{g})=(\mathbf{z}_{AE,i}^{2})^{\top}\mathbf{z}_{AE,k_{g}}^{2},\ k_{g}\in A_{i}^{g},\qquad r_{l}^{2}(i,k_{l})=(\mathbf{z}_{AE,i}^{2})^{\top}\mathbf{z}_{AE,k_{l}}^{2},\ k_{l}\in A_{i}^{l}.\] Hereafter, let \(\mathbf{r}_{c}^{u}(i)\) be the relation vector composed of \(r_{c}^{u}(i,k)\) with \(k\) traversing the whole index set \(A_{i}^{c}\) of node \(v_{i}\), \(i\in\{1,\ldots,n\}\), and let \(\mathbf{R}_{c}^{u}=(\mathbf{r}_{c}^{u}(1),\ldots,\mathbf{r}_{c}^{u}(n))^{\top}\) be the relation matrix for any \(u\in\{1,2\}\), \(c\in\{g,l\}\).
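A small sketch of the randomized low-discrepancy multinomial sampling just described (our own rendering; inverse-CDF lookup is one standard way to realize it):

```python
import numpy as np

def qmc_multinomial(p, M, rng):
    """Draw M anchor indices from the discrete distribution p using the
    randomized point set u_i = ((2i - 1)/(2M) + omega) mod 1, omega ~ U(0,1),
    pushed through the inverse CDF of p."""
    omega = rng.random()
    u = ((2.0 * np.arange(1, M + 1) - 1.0) / (2.0 * M) + omega) % 1.0
    cdf = np.cumsum(p)
    idx = np.searchsorted(cdf, u, side="left")
    return np.minimum(idx, len(p) - 1)  # guard against float round-off

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])           # toy distribution; in practice, the
anchors = qmc_multinomial(p, 8, rng)    # inverse degree-weighted p_i is used
```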
**Relation Preservation.** To make the relational information invariant to augmentation, we maximize the proximity of \(\mathbf{r}_{c}^{1}(i)\) and \(\mathbf{r}_{c}^{2}(i)\) from both global and local views, i.e., we maximize the attribute-level relational similarities of all the positive pairs under augmentation, which are formulated as \[R_{AE}^{g}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\mathbf{r}_{g}^{1}(i)^{\top}\mathbf{r}_{g}^{2}(i)}{||\mathbf{r}_{g}^{1}(i)||\cdot||\mathbf{r}_{g}^{2}(i)||}\right)^{2}\] and \[R_{AE}^{l}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\mathbf{r}_{l}^{1}(i)^{\top}\mathbf{r}_{l}^{2}(i)}{||\mathbf{r}_{l}^{1}(i)||\cdot||\mathbf{r}_{l}^{2}(i)||}\right)^{2}.\] We can similarly obtain the structure-level relational similarities \(R_{GAE}^{g}\) and \(R_{GAE}^{l}\) corresponding to GAE from both views. This operation helps to learn representations that are more reflective of the relationships between the attribute and topological information of all the nodes.

**Relation De-redundancy.** In addition, besides preserving the relational similarity under augmentations, the discriminative capability of the latent representation is also important for the downstream graph clustering task. Hence, we decrease the correlations of the relation vectors for different nodes from both global and local views. This contributes to filtering redundant information and improving the separating capability for better clustering performance. Specifically, we minimize the attribute-level relational correlations of all the negative pairs, which are formulated as follows, \[C_{AE}^{g}=\frac{1}{n(n-1)}\sum_{i,j=1,i\neq j}^{n}\left(\frac{\mathbf{r}_{g}^{1}(i)^{\top}\mathbf{r}_{g}^{2}(j)}{||\mathbf{r}_{g}^{1}(i)||\cdot||\mathbf{r}_{g}^{2}(j)||}\right)^{2}\] and \[C_{AE}^{l}=\frac{1}{n(n-1)}\sum_{i,j=1,i\neq j}^{n}\left(\frac{\mathbf{r}_{l}^{1}(i)^{\top}\mathbf{r}_{l}^{2}(j)}{||\mathbf{r}_{l}^{1}(i)||\cdot||\mathbf{r}_{l}^{2}(j)||}\right)^{2}.\] In like manner, we can obtain the corresponding structure-level losses under GAE from global and local views, denoted by \(C_{GAE}^{g}\) and \(C_{GAE}^{l}\), respectively.

Based on the above discussion, we can capture the augmentation-invariant relational information and conduct redundancy-free relational learning by minimizing the total relation loss \(L_{RE}=L_{REA}+L_{REG}\) with \[L_{REA}=C_{AE}^{g}+C_{AE}^{l}-R_{AE}^{g}-R_{AE}^{l},\qquad L_{REG}=C_{GAE}^{g}+C_{GAE}^{l}-R_{GAE}^{g}-R_{GAE}^{l}. \tag{4}\] The loss \(L_{RE}\) takes into account both efficient representation learning and the reduction of redundant information upon the relation extraction of the nodes, which allows for better guidance of downstream tasks.
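A compact PyTorch sketch of the paired terms in (4) for one view pair (ours; it operates on the relation matrices \(\mathbf{R}^{1}_{c}\), \(\mathbf{R}^{2}_{c}\) defined above):

```python
import torch
import torch.nn.functional as F

def relation_terms(R1, R2):
    """R1, R2: (n, M) relation matrices for the two augmented views.
    Returns (R_c, C_c): mean squared cosine similarity over positive pairs
    (diagonal, to be maximized) and over negative pairs (off-diagonal,
    to be minimized)."""
    n = R1.shape[0]
    cos = F.normalize(R1, dim=1) @ F.normalize(R2, dim=1).t()  # (n, n)
    sq = cos ** 2
    R_c = sq.diagonal().mean()
    C_c = (sq.sum() - sq.diagonal().sum()) / (n * (n - 1))
    return R_c, C_c

# L_REA = (C_g + C_l) - (R_g + R_l), applied to the global- and local-view
# relation matrices of the AE branch; L_REG is analogous for the GAE branch.
```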
### _Augmentation-based Representation Fusion Module_

In this section, to obtain fine-grained representations of the nodes, we discuss the fusion mechanism of the attribute- and structure-level latent representations based on augmentations. First, we take a weighted summation of the four parts to fuse the embedded representations from the two levels as follows, \[\tilde{\mathbf{Z}}_{c}=\mathbf{W}_{1}\odot(\mathbf{Z}_{AE}^{1}+\mathbf{Z}_{AE}^{2})+\mathbf{W}_{2}\odot(\mathbf{Z}_{GAE}^{1}+\mathbf{Z}_{GAE}^{2}),\] where \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{n\times d^{\prime}}\) are trainable weight matrices that control the importance of the two types of representations and \(\odot\) is the Hadamard product. Based on \(\tilde{\mathbf{Z}}_{c}\), we further blend the embeddings from both global and local views to refine the fused information. From the local view, we adopt the neighborhood aggregation operation on \(\tilde{\mathbf{Z}}_{c}\) to enhance the local information, whereas, from the global view, we utilize the self-correlation matrix of the nodes characterized by \(\tilde{\mathbf{Z}}_{c}\), normalized by the softmax function, to improve the exploitation of the global information. Specifically, the final formula of the fused representation is \[\tilde{\mathbf{Z}}=\delta\mathbf{S}\tilde{\mathbf{Z}}_{c}+softmax(\mathbf{S}\tilde{\mathbf{Z}}_{c}\tilde{\mathbf{Z}}_{c}^{\top}\mathbf{S}^{\top})\mathbf{S}\tilde{\mathbf{Z}}_{c}, \tag{5}\] where \(\delta\) is a trainable weight parameter. With \(\tilde{\mathbf{Z}}\), we can obtain the reconstructed attribute matrix \(\hat{\mathbf{X}}_{AE}\) in (1) and the weighted attribute matrix \(\hat{\mathbf{X}}_{GAE}\) in (2) by feeding \(\tilde{\mathbf{Z}}\) into the decoders of AE and GAE, respectively. The reconstructed adjacency matrix is calculated by fusing the self-correlations of the learned representations in GAE, which is formulated as \[\hat{\mathbf{S}}=\frac{1}{2}(\mathbf{Z}_{GAE}^{1}(\mathbf{Z}_{GAE}^{1})^{\top}+\mathbf{Z}_{GAE}^{2}(\mathbf{Z}_{GAE}^{2})^{\top})+\hat{\mathbf{X}}_{GAE}\hat{\mathbf{X}}_{GAE}^{\top}.\] The above fusion process is similar to [21].

In addition, under the neighbor aggregation mechanism, GCN updates node representations by aggregating information from the neighbors. However, when stacking multiple layers, the learned representations would become indistinguishable, seriously degrading the performance, which is the so-called over-smoothing issue [63, 64]. Hence, it is important to balance the message aggregation ability and the over-smoothing issue. To alleviate the problem in GAE, we incorporate a novel propagation-regularization loss to enhance information capturing while alleviating over-smoothing, defined as \[L_{PR}=\sum_{\mathbf{H}\in\mathcal{E}}\nu(\mathbf{H},\mathbf{S}\mathbf{H}),\] where \(\mathcal{E}\) contains the embedding matrix in each layer of both the encoder and decoder in GAE, and \(\nu(\cdot)\) is a metric function, such as the cross entropy, the Kullback-Leibler (KL) divergence, or the Jensen-Shannon divergence. Propagation regularization simulates a deep GCN by supervision at a low cost, which enables the current embeddings to capture the further information contained in deeper layers. Compared with directly increasing the number of GCN layers, we can more finely balance the information capture ability and the over-smoothing problem by adjusting the weight of the loss. Thereby, the total reconstruction loss is computed by \[L_{REC}=L_{AE}+L_{GAE}+\epsilon L_{PR}, \tag{6}\] where \(\epsilon\) is the pre-defined hyper-parameter to adjust the influence ratio, and the reconstruction losses \(L_{AE}\) and \(L_{GAE}\) in AE and GAE are defined in (1) and (2), respectively.
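As an illustration of the propagation-regularization term, here is a sketch (ours) that picks the KL divergence between row-wise softmax-normalized embeddings as the metric \(\nu\), one of the choices named above:

```python
import torch
import torch.nn.functional as F

def propagation_reg(layer_embeddings, S):
    """L_PR = sum over stored GAE layer embeddings H of nu(H, S @ H),
    where S @ H simulates one extra (deeper) propagation step."""
    loss = 0.0
    for H in layer_embeddings:
        log_p = F.log_softmax(S @ H, dim=1)  # simulated deeper embedding
        q = F.softmax(H, dim=1)              # current embedding
        loss = loss + F.kl_div(log_p, q, reduction="batchmean")
    return loss
```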
### _Joint Optimization Module for Graph Clustering_

Graph clustering is essentially an unsupervised task with no feedback available as reliable guidance. To this end, we perform a clustering layer on the fused representation \(\tilde{\mathbf{Z}}\) in (5) and use the soft labels derived from a probability distribution as a self-supervised signal to jointly optimize the redundancy-free relational learning framework for graph clustering. First, by using Student's \(t\)-distribution as a kernel, we calculate the soft cluster assignment probabilities \(\mathbf{Q}_{1}=(q_{1,ij}),\mathbf{Q}_{2}=(q_{2,ij}),\mathbf{Q}_{3}=(q_{3,ij})\in\mathbb{R}^{n\times K}\) upon the latent embeddings \(\tilde{\mathbf{Z}}\), \((\mathbf{Z}_{AE}^{1}+\mathbf{Z}_{AE}^{2})/2\), \((\mathbf{Z}_{GAE}^{1}+\mathbf{Z}_{GAE}^{2})/2\), respectively, to measure the similarities between the latent representations and the cluster centroids, i.e., each value indicates the probability of assigning the \(i\)-th node to the \(j\)-th cluster. For example, \(q_{1,ij}\) is computed as follows, \[q_{1,ij}=\frac{(1+||\tilde{\mathbf{z}}_{i}-\boldsymbol{\mu}_{j}||^{2})^{-1}}{\sum_{k=1}^{K}(1+||\tilde{\mathbf{z}}_{i}-\boldsymbol{\mu}_{k}||^{2})^{-1}},\] where \(\tilde{\mathbf{Z}}=(\tilde{\mathbf{z}}_{1}^{\top},\ldots,\tilde{\mathbf{z}}_{n}^{\top})^{\top}\) and the \(\boldsymbol{\mu}_{j}\)'s are the cluster centroids. The \(q_{2,ij}\) and \(q_{3,ij}\) can be calculated similarly. The \(\boldsymbol{\mu}_{j}\)'s are initialized by performing \(k\)-means on the pre-trained fused representation. When the network is well trained, we adopt the fusion-based assignment matrix \(\mathbf{Q}_{1}\) to measure the cluster assignment probability of all the nodes, i.e., \[y_{i}=\text{argmax}_{j\in\{1,\ldots,K\}}\ q_{1,ij}, \tag{7}\] where \(y_{i}\) is the predicted cluster of node \(v_{i}\) for \(i=1,\ldots,n\).

Next, we introduce an auxiliary confident probability distribution \(\mathbf{P}=(p_{ij})\in\mathbb{R}^{n\times K}\) to improve the confidence of the soft assignment, which is derived from \(\mathbf{Q}_{1}\) and formulated as \[p_{ij}=\frac{q_{1,ij}^{2}/\sum_{i=1}^{n}q_{1,ij}}{\sum_{k=1}^{K}(q_{1,ik}^{2}/\sum_{i=1}^{n}q_{1,ik})}.\] To make the data representations close to the cluster centroids and improve cluster cohesion, we minimize the KL divergence loss between \(\mathbf{P}\) and \(\mathbf{Q}_{1},\mathbf{Q}_{2},\mathbf{Q}_{3}\) as follows, \[L_{CLU}=\sum_{i=1}^{n}\sum_{j=1}^{K}p_{ij}\log\frac{p_{ij}}{(q_{1,ij}+q_{2,ij}+q_{3,ij})/3}. \tag{8}\] By utilizing the confident distribution \(\mathbf{P}\), this process self-supervises the cluster assignment without any label guidance. We integrate the latent representations from AE, GAE, and the fusion mechanism in the self-supervised clustering procedure to obtain more accurate clustering results. To sum up, the total loss \(L\) in the whole framework of R\({}^{2}\)FGC is composed of the relation loss, the reconstruction loss, and the self-supervised clustering loss, i.e., \[L=L_{RE}+L_{REC}+\kappa L_{CLU}, \tag{9}\] where \(\kappa\) is a pre-defined hyper-parameter to balance the weight of the clustering loss. The training process of our proposed R\({}^{2}\)FGC is summarized in Algorithm 1.
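The self-supervised clustering signal can be sketched in a few lines of PyTorch (ours; `mu` holds the cluster centroids):

```python
import torch

def soft_assignment(Z, mu):
    """Student's t kernel: q_ij proportional to (1 + ||z_i - mu_j||^2)^{-1},
    normalized over clusters j."""
    q = 1.0 / (1.0 + torch.cdist(Z, mu) ** 2)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(Q1):
    """Auxiliary confident distribution P derived from Q_1."""
    weight = Q1 ** 2 / Q1.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_loss(P, Q1, Q2, Q3):
    """L_CLU = KL(P || (Q1 + Q2 + Q3)/3), cf. (8)."""
    Q_avg = (Q1 + Q2 + Q3) / 3.0
    return (P * (P / Q_avg).log()).sum()

# predicted cluster of node i: y_i = argmax_j Q1[i, j], cf. (7)
```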
### _Computational Complexity Analysis_

For scalability to large-scale datasets, we adopt mini-batch stochastic gradient descent to optimize our method. Assume that the batch size is \(B\) and the dimensions of the layers of AE and GAE are \(\tilde{d}_{1},\ldots,\tilde{d}_{L_{1}}\) and \(\bar{d}_{1},\ldots,\bar{d}_{L_{2}}\), respectively. Given a graph with \(n\) nodes and \(|E|\) edges, the dimension of the original attributes is \(d\). The time complexities of AE and GAE are \(O(n\sum_{i=1}^{L_{1}}\tilde{d}_{i}\tilde{d}_{i-1})\) and \(O(|E|\sum_{i=1}^{L_{2}}\bar{d}_{i}\bar{d}_{i-1})\) with \(\tilde{d}_{0}=\bar{d}_{0}=d\), respectively. For each batch, the complexity of the relation learning module is \(O(B(B+d^{\prime})(M_{1}+M_{2}))\) based on the \(d^{\prime}\)-dimensional latent representations. Moreover, we perform the representation fusion and propagation regularization in \(O(B^{2}d^{\prime}+B\log B)\) time and conduct the self-supervised clustering in \(O(BK+B\log B)\) time with \(K\) classes in the task. Hence, the total computational complexity of our method R\({}^{2}\)FGC is \(O(n\sum_{i=1}^{L_{1}}\tilde{d}_{i}\tilde{d}_{i-1}+|E|\sum_{i=1}^{L_{2}}\bar{d}_{i}\bar{d}_{i-1}+n(B+d^{\prime})(M_{1}+M_{2})+n(Bd^{\prime}+K))\), which is linearly related to the numbers of nodes and edges.

## V Experiments

In this section, we first introduce the experimental settings and then conduct experiments to validate the effectiveness of R\({}^{2}\)FGC. We aim to answer the following research questions.

* **RQ1:** Compared with state-of-the-art methods, does our method R\({}^{2}\)FGC achieve better performance for self-supervised graph clustering?
* **RQ2:** How do different components of the proposed method contribute to the clustering performance?
* **RQ3:** How do the hyper-parameters in R\({}^{2}\)FGC affect the final clustering performance?
* **RQ4:** How is the convergence of the proposed model under different datasets?
* **RQ5:** Is there any supplementary analysis that can illustrate the superiority of R\({}^{2}\)FGC?

### _Experimental Settings_

**Datasets.** For comparison, we perform the proposed method R\({}^{2}\)FGC on five commonly used benchmark datasets. Four of them are graph datasets, including a paper network ACM1, a shopping network AMAP2, a citation network CITE3, and an author network DBLP4; the other is a non-graph dataset, i.e., the record dataset HHAR [65]. Following [18], for the non-graph data, the adjacency matrix is generated from the undirected \(k\)-nearest neighbor graph. Table I briefly summarizes the information of these benchmark datasets.

Footnote 1: [http://dl.acm.org/](http://dl.acm.org/)

Footnote 2: [https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz](https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz)

Footnote 3: [http://citeseer.ist.psu.edu/index](http://citeseer.ist.psu.edu/index)

Footnote 4: [https://dblp.uni-trier.de](https://dblp.uni-trier.de)

**Compared Methods.** To illustrate the superiority of our proposed R\({}^{2}\)FGC, we compare its clustering performance with some state-of-the-art clustering methods, which are divided into four categories, i.e., the classical shallow clustering method \(k\)-means, the AE-based methods, the GCN-based methods, and the combinations of AE and GCN. The AE-based methods contain AE [27], DEC [39], and IDEC [40]. They convert the raw data to low-dimensional codes to learn feature representations by AE and then perform clustering over the learned latent embeddings. The GCN-based methods include GAE, VGAE [28], DAEGC [45], and ARGA [46]. They adopt the GCN encoder to learn the node content and topological information for clustering. In addition, some methods combine AE and GCN to boost the embedded representations for clustering, including SDCN [18], DFCN [21], and AGCC [24]. These methods integrate GCN with AE from different perspectives to jointly train the clustering network.

**Training Procedure.** The training of our method R\({}^{2}\)FGC includes two phases. First, following [21], the AE and GAE are pre-trained independently for 30 epochs to minimize their respective reconstruction loss functions.
Both sub-networks are then integrated into a united framework and trained for another 100 epochs to obtain the initial representations and cluster centroids. Then, we train the whole network for at least 300 epochs until convergence to minimize the total loss in (9). Following the compared methods, to alleviate the adverse influence of randomness, we repeat each experiment 10 times to evaluate our method and report the means and the standard deviations (i.e., mean\(\pm\)std) of the considered metric values. We implement our method using PyTorch 1.8.0 and PyTorch Geometric 1.7.2, which can easily train GNNs for a variety of applications associated with graph-structured data.

**Parameter Settings.** For a fair comparison, we adopt the same parameter setting for AE and GAE as [21], i.e., the numbers of layers of the encoder (/decoder) for AE and GAE are set to 4 and 3, respectively; the dimensions of the encoder (/decoder) for AE are set to 128, 256, 512, 20 in turn; the dimensions of the encoder (/decoder) for GAE are set to 128, 256, 20 in turn. The network is trained with the Adam optimizer. The learning rate is set to 5e-5 for ACM, 1e-4 for DBLP, and 1e-3 for AMAP, CITE, and HHAR. The hyper-parameters \(M_{1}\) and \(M_{2}\) are set to 256 and 8, respectively. Moreover, the parameters \(\alpha,\eta,\beta,\epsilon,\kappa\) are set to 0.1, 0.2, 0.8, 5e3, 10, respectively. The optimization stops when the validation loss reaches a plateau.

**Evaluation Metrics.** To evaluate the clustering performance of each compared method, we adopt four widely used evaluation metrics following [18], i.e., Accuracy (ACC), Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and macro F1-score (F1). For each metric, a larger value implies a better clustering result.

### _Performance Comparison (RQ1)_

The experimental results of our method and the eleven compared methods on the five benchmark datasets are reported in Table II, in which the bold and underlined values indicate the best results among all the methods and among all the baselines, respectively. From these results, we have the following observations.

* Compared with the shallow clustering method \(k\)-means, the deep graph clustering methods clearly show preferable performance. It indicates that the strong representation learning capability of deep neural networks enables them to exploit more meaningful information from graph-structured data for clustering.
* The purely AE-based methods (AE, DEC, and IDEC) perform worse than the methods combining AE and GCN (SDCN, DFCN, and AGCC) in most cases. The reason may be that the AE-based methods only leverage the attribute information to learn the latent representation, which overlooks the structure-level semantic information. Similarly, the purely GCN-based methods (GAE, VGAE, DAEGC, and ARGA) also show inferior performance to SDCN, DFCN, and AGCC in most circumstances. It indicates that integrating AE into GCN can capture the attribute and structure information more effectively from complementary views.
* Our method R\({}^{2}\)FGC achieves the best clustering performance compared with all the baselines in terms of the four considered metrics over all the datasets. For both graph and non-graph data, our approach represents a significant improvement over the baselines.
For example, compared with the best results among all the baselines, for the ACM dataset, our method relatively improves by 1.68%, 4.35%, 5.10%, and 1.82% on ACC, NMI, ARI, and F1; for the AMAP dataset, our method improves by 5.72%, 6.75%, 11.55%, and 5.18% on ACC, NMI, ARI, and F1; for the DBLP dataset, our method improves by 6.51%, 16.29%, 19.87%, and 6.39% on ACC, NMI, ARI, and F1, respectively.

* The reasons for the superiority of our method R\({}^{2}\)FGC are that a) R\({}^{2}\)FGC extracts the inherent relational information based on AE and GAE from both local and global views under augmentation, which allows for better exploration of both attribute and structure information; b) under augmentation, R\({}^{2}\)FGC preserves the consistent relationship among the nodes rather than the latent representations, which is expected to learn more essential representations of the semantic information; c) R\({}^{2}\)FGC decreases the redundant relations among the nodes for learning discriminative and meaningful representations, which can better serve the graph clustering; d) R\({}^{2}\)FGC couples AE and GAE together in the representation fusion mechanism to fully integrate and refine the attribute and structure information; e) R\({}^{2}\)FGC also brings in the propagation regularization to mitigate the possible over-smoothing problem caused by GAE to promote the clustering performance. With the addition of the relation extraction, relation preservation, and de-redundancy strategies, R\({}^{2}\)FGC outperforms all the baselines upon the fusion mechanism of AE and GAE and the regularization method for alleviating over-smoothing.

### _Ablation Study (RQ2)_

In this section, to further investigate the validity of our proposed method, we conduct some ablation experiments to study the contribution of each component of R\({}^{2}\)FGC. We mainly focus on the influence of global-view relation extraction (gloRE), local-view relation extraction (locRE), relation preservation (REpre), relation de-redundancy (REder), and propagation regularization (PR). In addition, we make some discussion on the proposed global sampling strategy.

**Effects of gloRE and locRE.** In the relation extraction module, we explore the inherent relation from both global and local views. The former learns the global relations of the nodes, and the latter concerns the neighbor relations. We perform some ablation experiments to verify the respective effectiveness of the global- and local-view strategies. Specifically, we consider the following two cases:

* R\({}^{2}\)FGC w/o gloRE: R\({}^{2}\)FGC without considering the global-view relation extraction;
* R\({}^{2}\)FGC w/o locRE: R\({}^{2}\)FGC without considering the local-view relation extraction.

The corresponding results are displayed in Table III. From the comparison of R\({}^{2}\)FGC w/o gloRE and R\({}^{2}\)FGC w/o locRE, for the ACM dataset, local-view relation extraction has a greater effect than the global-view one on clustering in terms of ACC, NMI, ARI, and F1, while for the CITE and DBLP datasets, the global-view relation extraction may have a more prominent contribution. As for the AMAP and HHAR datasets, R\({}^{2}\)FGC w/o gloRE and R\({}^{2}\)FGC w/o locRE show close metric values, which indicates that the global- and local-view extractions play almost equal roles. Moreover, R\({}^{2}\)FGC consistently shows better performance than R\({}^{2}\)FGC w/o gloRE and R\({}^{2}\)FGC w/o locRE over the five considered datasets.
Hence, these results illustrate that both views are necessary and important for achieving good clustering performance.

**Effects of REpre, REder, and PR.** Additionally, relation preservation is used to learn effective representations by preserving the consistent relation information, whereas relation de-redundancy helps reduce confusing, redundant information, which benefits obtaining discriminative embeddings. Moreover, we adopt propagation regularization to relieve the over-smoothing issue. Hence, we also explore their respective effects in the ablation experiments, i.e., four cases are considered as follows:

* R\({}^{2}\)FGC w/o REpre: R\({}^{2}\)FGC without considering the relation preservation;
* R\({}^{2}\)FGC w/o REder: R\({}^{2}\)FGC without considering the relation de-redundancy;
* R\({}^{2}\)FGC w/o REpre & REder: R\({}^{2}\)FGC without both relation preservation and de-redundancy;
* R\({}^{2}\)FGC w/o PR: R\({}^{2}\)FGC without adopting the propagation-regularization trick.

The corresponding results are also shown in Table III. Comparing R\({}^{2}\)FGC w/o REpre with R\({}^{2}\)FGC w/o REder, it is observed that relation preservation outperforms relation de-redundancy for the ACM dataset; relation de-redundancy contributes more to improving the clustering performance for the AMAP, CITE, and DBLP datasets; these two strategies have almost equal impact on the HHAR dataset. Moreover, by contrast with R\({}^{2}\)FGC w/o REpre & REder, both R\({}^{2}\)FGC w/o REpre and R\({}^{2}\)FGC w/o REder give better clustering results with higher metric values, which implies that both relation preservation and relation de-redundancy are able to promote graph clustering. In addition, by comparing R\({}^{2}\)FGC w/o PR and R\({}^{2}\)FGC w/o REpre & REder, for the ACM, AMAP, and HHAR datasets, the over-smoothing issue has a more significant impact on clustering performance, whereas, for the CITE and DBLP datasets, the relation extraction is more important for good performance. For example, R\({}^{2}\)FGC on the ACM dataset has a 1.99% relative improvement over R\({}^{2}\)FGC w/o PR in terms of NMI; R\({}^{2}\)FGC on the CITE dataset obtains a 6.28% improvement over R\({}^{2}\)FGC w/o REpre & REder in terms of ARI. Hence, these results demonstrate that all of the proposed components in R\({}^{2}\)FGC are effective for learning informative representations and achieving good clustering performance.

**Discussion on Global Sampling Strategy.** To further illustrate the effectiveness of the QMC inverse degree-weighted distribution sampling for extracting global anchors, we perform experiments to compare it with two MC cases on the AMAP, DBLP, and HHAR datasets, i.e., we consider the following three cases (a code sketch of the three strategies follows at the end of this discussion):

* R\({}^{2}\)FGC with QMC global sampling (Ours): R\({}^{2}\)FGC with considering QMC inverse degree-weighted distribution sampling, i.e., the low-discrepancy point set is used in the multinomial sampling;
* R\({}^{2}\)FGC with MC global sampling: R\({}^{2}\)FGC with considering MC inverse degree-weighted distribution sampling, i.e., the uniform random numbers are used;
* R\({}^{2}\)FGC with SRS: R\({}^{2}\)FGC with considering simple random sampling for drawing global anchors.

The results are depicted in Figure 2. Comparing the three strategies, R\({}^{2}\)FGC with QMC global sampling shows better performance over the three considered datasets in terms of the average ACC, NMI, ARI, and F1 scores. Moreover, R\({}^{2}\)FGC with MC global sampling outperforms R\({}^{2}\)FGC with SRS, which implies that inverse degree-weighted distribution sampling is indeed effective in avoiding poor representations. In addition, from the error bars in Figure 2, we can also find that R\({}^{2}\)FGC with QMC global sampling leads to smaller variances of the metric values, which benefits from the high convergence rate of the QMC sampling strategy. The sampled global anchor set better represents the target distribution, which allows the subsequent representation learning to achieve better and more stable performance. In this way, our proposed sampling method lends robustness to the relation extraction and thus to the clustering performance.

Fig. 2: The performance comparisons \(w.r.t.\) different global sampling strategies on the AMAP, DBLP, and HHAR datasets.
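The sketch below illustrates the three sampling strategies compared above: QMC multinomial sampling (a scrambled Sobol point set pushed through the inverse CDF of the inverse degree-weighted distribution), plain MC multinomial sampling with uniform randoms, and SRS. It is a minimal illustration; the function name and synthetic degrees are placeholders, not the authors' code.

```python
import numpy as np
from scipy.stats import qmc

def sample_global_anchors(degrees, m, strategy="qmc", seed=0):
    """Draw m anchor indices from the inverse degree-weighted
    distribution by inverting its CDF, using either a low-discrepancy
    (Sobol) point set (QMC) or uniform randoms (MC); SRS ignores the
    weights entirely. Duplicated indices are possible for QMC/MC."""
    w = 1.0 / np.asarray(degrees, dtype=float)
    cdf = np.cumsum(w / w.sum())
    rng = np.random.default_rng(seed)
    if strategy == "qmc":
        u = qmc.Sobol(d=1, scramble=True, seed=seed).random(m).ravel()
    elif strategy == "mc":
        u = rng.random(m)
    else:  # simple random sampling
        return rng.choice(len(w), size=m, replace=False)
    return np.searchsorted(cdf, u)  # invert the CDF (multinomial draw)

degrees = np.random.default_rng(1).integers(1, 50, size=2000)  # toy graph
anchors = sample_global_anchors(degrees, m=256, strategy="qmc")
```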
### _Parameter Sensitivity Analysis (RQ3)_

In this section, we examine the sensitivity of the proposed R\({}^{2}\)FGC to the hyper-parameters. For the global- and local-view relation extraction in Section IV-B, we need to pre-define the numbers of global and local anchors \(M_{1}\) and \(M_{2}\) for sampling. Hence, we investigate the effect of varying \(M_{1}\) and \(M_{2}\) on the ACM, AMAP, CITE, and HHAR datasets. For each dataset, we consider \(M_{1}=\{128,256,512,1024\}\) and \(M_{2}=\{4,8,16,32\}\). When \(M_{1}\) is varied, we fix \(M_{2}\) to its optimal setting as in Section V-A, and vice versa. Additionally, we explore the impact of the two loss weight parameters \(\epsilon\) and \(\kappa\) on the ACM, AMAP, CITE, and DBLP datasets. We vary \(\epsilon\) across \(\{5{\rm e}2,1{\rm e}3,5{\rm e}3,1{\rm e}4\}\) and \(\kappa\) across \(\{1,5,10,50\}\). The results are depicted in Figures 3 and 4, respectively.

**Performance of Different Amounts of Global Anchors.** From Figure 3, it can be seen that the average accuracies on the considered datasets are relatively stable as \(M_{1}\) changes. This may be because, with the QMC multinomial sampling, the drawn anchors can well mimic the defined inverse degree-weighted distribution even if \(M_{1}\) is small. It helps to solve the problem caused by the varying quality of the learned representations for nodes with different degrees. On the other hand, we draw different samples in different training epochs based on the randomization strategy, which increases the diversity of the samples to capture a broad relationship, even with a small number of global anchors. Therefore, the clustering performance is robust to the number of global anchors based on the proposed sampling strategy.

**Performance of Different Amounts of Local Anchors.** As for \(M_{2}\), it can be found that on the four datasets, as \(M_{2}\) increases, the clustering performance first improves and then shows a weakening tendency. The possible reason is that a small \(M_{2}\) cannot well collect the neighboring information, whereas a large \(M_{2}\) may absorb nodes belonging to other clusters, which can disturb the extraction of local relations. Hence, a moderate number of local anchors is preferable, and a well-designed deterministic sampling is desirable to avoid the intake of inconsistent information from other nodes.

**Performance of Different Loss Weights.** As shown in Figure 4, when \(\kappa\) is small, increasing \(\epsilon\) leads to a decrease in model performance.
This is because a large \(\epsilon\) enhances the information aggregation ability of the nodes, which is equivalent to a deep GCN and thus increases the risk of over-smoothing, while a small \(\kappa\) implies weak self-supervision, which results in poor cohesion and insufficient discrimination in the node representations. As \(\kappa\) increases, better representation cohesion is achieved, and appropriately increasing \(\epsilon\) can improve the performance by balancing the strength of neighbor aggregation in GCN against the risk of over-smoothing. However, when \(\epsilon\) becomes excessively large, there may be a slight decline in performance due to the over-smoothing issue on some datasets. In addition, when \(\epsilon\) is fixed, increasing \(\kappa\) results in an increasing trend in model performance on the ACM and AMAP datasets. However, on the CITE and DBLP datasets, an excessively large \(\kappa\) leads to a decreasing trend. One possible reason is that CITE and DBLP represent citation networks and author networks, respectively, where different articles and individuals may belong to distinct disciplines or communities. Forcing strong cohesion in these cases may lead to suboptimal results. Overall, we recommend setting \(\kappa\) around 10 and \(\epsilon\) around 5e3 for satisfactory performance. When dealing with a new dataset, a small-scale hyper-parameter search around the recommended values may be needed to account for the dataset's specific characteristics.

Fig. 3: The performance comparisons \(w.r.t.\) different amounts of global anchors \(M_{1}\) and local anchors \(M_{2}\) on the ACM, AMAP, CITE, and HHAR datasets.

Fig. 4: The performance comparisons \(w.r.t.\) different loss weight parameters \(\kappa\) and \(\epsilon\) on the ACM, AMAP, CITE, and DBLP datasets.

### _Empirical Convergence Analysis (RQ4)_

In this section, we analyze the convergence of our proposed method R\({}^{2}\)FGC; the curves of the training losses are shown in Figure 5. It can be observed that our method demonstrates graceful convergence on the AMAP and HHAR datasets. The reason can be attributed to our pre-training based on AE and GAE, which provides us with a well-initialized representation. As a result, the initial loss optimization has the correct gradient direction, leading to a rapid decrease in loss. Additionally, our method effectively maintains the relational similarity between nodes in the graph while reducing the redundancy of the learned representations. This allows the learned representations to possess highly rich semantic information and strong discriminative capabilities. It draws similar nodes closer to each other while better distinguishing unrelated nodes, facilitating the formation of clusters. This also drives the training objective to converge to a lower value, leading to better clustering performance.

### _Analysis of Over-smoothing Issue (RQ5)_

To verify the superiority of the proposed propagation-regularization loss in alleviating the over-smoothing issue, we compare the effects of different numbers of GCN layers in the GAE encoder and different values of \(\epsilon\) by the mean average distance (MAD) and the clustering performance (i.e., accuracy). MAD reflects the smoothness of node representations by calculating the mean of the average cosine distance between each node and the other nodes [64]. A smaller MAD indicates a higher global smoothness. The analysis results are shown in Figure 6.
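A minimal sketch of the MAD computation just described follows; this is a simplified global variant (each node averaged against all other nodes), with the masking refinements of [64] omitted.

```python
import numpy as np

def mad(H):
    """Mean average (cosine) distance of node representations H with
    shape (n_nodes, dim); smaller MAD = smoother representations."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    cos_dist = 1.0 - Hn @ Hn.T           # pairwise cosine distances
    np.fill_diagonal(cos_dist, 0.0)      # exclude self-distances
    n = H.shape[0]
    return (cos_dist.sum(axis=1) / (n - 1)).mean()
```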
It can be observed that as the number of GCN layers in the GAE encoder increases, indicated by the dashed line in the figure, both MAD and accuracy exhibit a decreasing trend. It implies that more GCN layers cause nodes to absorb too much information from farther neighbors, which can lead to indistinguishable node representations, exacerbating the over-smoothing issue and resulting in performance degradation. On the other hand, our proposed propagation-regularization loss shows a slight decrease in MAD and a gradual increase in clustering performance as \(\epsilon\) increases up to a certain value (e.g., 2e3 for AMAP, 1e4 for DBLP). This suggests that our propagation-regularization loss is equivalent to simulating a GCN with a fractional number of layers, which can curb the growth of smoothness while enhancing the expressive power of node representations, thereby promoting the clustering performance. However, when \(\epsilon\) is particularly large, both MAD and accuracy decrease sharply. This is because an excessively large \(\epsilon\) amplifies the risk of over-smoothing. Therefore, selecting an appropriate weight is crucial for balancing node expressiveness and the over-smoothing issue.

### _Visualization of Clustering Results (RQ6)_

To visually verify the validity of our proposed R\({}^{2}\)FGC, we plot 2D \(t\)-distributed stochastic neighbor embedding (\(t\)-SNE) visualizations [66] of the learned representations on the ACM, CITE, DBLP, and HHAR datasets. We compare the \(t\)-SNE visualizations of the embeddings resulting from R\({}^{2}\)FGC with those from the raw data and DFCN (the best method among the baselines in Section V-B) to enable a visual comparison. The plots are shown in Figure 7. The \(t\)-SNE results on the four raw datasets clearly have poor separability for different clusters. Compared with the raw data, the more separable visualizations of R\({}^{2}\)FGC and DFCN demonstrate that deep graph clustering methods indeed bring substantial performance improvements. Comparing R\({}^{2}\)FGC with DFCN, the latent representations obtained by our method R\({}^{2}\)FGC show better separability for different clusters, where samples from the same cluster are more tightly aggregated and those from different clusters are more clearly separated. Such a phenomenon illustrates that our proposed method learns more discriminative representations and produces more effective cluster assignments compared with state-of-the-art methods.

## VI Conclusion

In this paper, we study self-supervised deep graph clustering and propose a novel method termed R\({}^{2}\)FGC. R\({}^{2}\)FGC introduces relational learning for graph-structured data, in which the attribute- and structure-level relational information among nodes is extracted based on AE and GAE. To achieve effective representations, R\({}^{2}\)FGC preserves consistent relations among the nodes under augmentation, whereas redundant relations are filtered out to obtain discriminative representations. R\({}^{2}\)FGC also couples a representation fusion mechanism with the relational learning to jointly guide the downstream self-supervised clustering task. Experimental results on various benchmark datasets demonstrate the validity and superiority of the proposed method. In the future, we aim to extend relational learning to other scenarios including multi-view graph clustering and interpretable clustering, and to other promising applications such as face clustering and text clustering.

Fig. 5: The curves of the training loss against the number of epochs on the AMAP and HHAR datasets.
Fig. 6: The comparisons of the mean average distance (MAD) and clustering accuracy \(w.r.t.\) different numbers of GCN layers in the GAE encoder and the regularization parameter \(\epsilon\) on the AMAP and DBLP datasets.

## Acknowledgments

The authors are grateful to the anonymous reviewers for critically reading the manuscript and for giving important suggestions to improve the paper.
2309.16290
Hodge and Frobenius colevels of algebraic varieties
We provide new improved lower bounds for the Hodge and Frobenius colevels of algebraic varieties (over $\mathbf{C}$ or over a finite field) in all cohomological degrees. These bounds are expressed in terms of the dimension of the variety and multi-degrees of its defining equations. Our results lead to an enhanced positive answer to a question raised by Esnault and the first author.
Daqing Wan, Dingxin Zhang
2023-09-28T09:36:36Z
http://arxiv.org/abs/2309.16290v1
# Hodge and Frobenius colevels of algebraic varieties ###### Abstract. We provide new improved lower bounds for the Hodge and Frobenius colevels of algebraic varieties (over \(\mathbf{C}\) or over a finite field) in all cohomological degrees. These bounds are expressed in terms of the dimension of the variety and multi-degrees of its defining equations. Our results lead to an enhanced positive answer to a question raised by Esnault and the first author. ## 1. Background and motivation These notes present our study on cohomological colevels of algebraic varieties in both the Betti and \(\ell\)-adic contexts, which is motivated by our previous \(p\)-adic results [25]. We provide new improved lower bounds for the Hodge and Frobenius colevels of algebraic varieties (over \(\mathbf{C}\) or over a finite field) in all cohomological degrees. These bounds are expressed in terms of the dimension of the variety and multi-degrees of its defining equations. See Theorems 2.2, 2.3 and 2.5. These theorems lead to an enhanced positive answer to a question (Question 1.10) raised by Esnault and the first author in [15]. Before delving into the specific results to be stated in Section 2, let us, in this introductory section, review some background materials that illustrate the flavors of the problems we are about to deal with. **Hodge colevel and Frobenius colevel.** Let us recall, in two different contexts, the definitions of "cohomological colevel". **Definition 1.1** (Colevels of cohomology).: Let \(k\) be a field. Let \(X\) be a finite type, separated scheme over \(k\). * (\(\ell\)-adic case). If \(k\) is a finite field of \(q\) elements, let \(\overline{k}\) be a fixed algebraic closure of \(k\). Let \(\ell\) be a prime number different from the characteristic \(p\) of \(k\). In this situation, the geometric Frobenius operates on the compactly supported \(\ell\)-adic cohomology \(\operatorname{H}^{i}_{\mathrm{c}}(X_{\overline{k}};\mathbf{Q}_{\ell})\). We say \(\operatorname{H}^{i}_{\mathrm{c}}(X_{\overline{k}};\mathbf{Q}_{\ell})\) has _(Frobenius) colevel \(\geq m\)_, if all the Frobenius eigenvalues are divisible, as algebraic integers, by \(q^{m}\). * (Betti case). Assume \(k=\mathbf{C}\). The compactly supported cohomology \(\operatorname{H}^{i}_{\mathrm{c}}(X^{\mathrm{an}};\mathbf{Q})\) underlies a canonical mixed Hodge structure. We say \(\operatorname{H}^{i}_{\mathrm{c}}(X^{\mathrm{an}};\mathbf{Q})\) has _(Hodge) colevel
2309.10839
Anharmonic effects in nuclear recoils from sub-GeV dark matter
Direct detection experiments are looking for nuclear recoils from scattering of sub-GeV dark matter (DM) in crystals, and have thresholds as low as ~ 10 eV, corresponding to DM masses of ~ 100 MeV. Future experiments are aiming for even lower thresholds. At such low energies, the free nuclear recoil prescription breaks down, and the relevant final states are phonons in the crystal. Scattering rates into single as well as multiple phonons have already been computed for a harmonic crystal. However, crystals typically exhibit some anharmonicity, which can significantly impact scattering rates in certain kinematic regimes. In this work, we estimate the impact of anharmonic effects on scattering rates for DM in the mass range ~ 1-10 MeV, where the details of multiphonon production are most important. Using a simple model of a nucleus in a bound potential, we find that anharmonicity can modify the scattering rates by up to two orders of magnitude for DM masses of O(MeV). However, such effects are primarily present at high energies where the rates are suppressed, and thus only relevant for very large DM cross sections. We show that anharmonic effects are negligible for masses larger than ~ 10 MeV.
Tongyan Lin, Chia-Hsien Shen, Mukul Sholapurkar, Ethan Villarama
2023-09-19T18:00:00Z
http://arxiv.org/abs/2309.10839v1
# Anharmonic effects in nuclear recoils from sub-GeV dark matter

###### Abstract

Direct detection experiments are looking for nuclear recoils from scattering of sub-GeV dark matter (DM) in crystals, and have thresholds as low as \(\sim 10\) eV, corresponding to DM masses of \(\sim 100\) MeV. Future experiments are aiming for even lower thresholds. At such low energies, the free nuclear recoil prescription breaks down, and the relevant final states are phonons in the crystal. Scattering rates into single as well as multiple phonons have already been computed for a harmonic crystal. However, crystals typically exhibit some anharmonicity, which can significantly impact scattering rates in certain kinematic regimes. In this work, we estimate the impact of anharmonic effects on scattering rates for DM in the mass range \(\sim 1-10\) MeV, where the details of multiphonon production are most important. Using a simple model of a nucleus in a bound potential, we find that anharmonicity can modify the scattering rates by up to two orders of magnitude for DM masses of \(\mathcal{O}(\text{MeV})\). However, such effects are primarily present at high energies where the rates are suppressed, and thus only relevant for very large DM cross sections. We show that anharmonic effects are negligible for masses larger than \(\sim 10\) MeV.

###### Contents

* I Introduction
* II Dark matter scattering in a crystal
  * II.1 Harmonic approximation
  * II.2 Anharmonic crystal properties
  * II.3 Toy anharmonic potential
* III Analytic results for structure factor
  * III.1 Harmonic oscillator
  * III.2 Perturbation theory for anharmonic oscillator: \(q\ll\sqrt{2m\omega_{0}}\)
    * III.2.1 Limitations of perturbation theory
  * III.3 Impulse approximation for \(q\gg\sqrt{2m\omega_{0}}\)
* IV Numerical results for 1D anharmonic oscillator
  * IV.1 Impact on DM scattering rates
* V Conclusions
* A Interatomic potentials
* B Power counting in perturbation theory
* C Impulse approximation
* D Exact results for Morse potential

## I Introduction

Over the past few decades, a significant theoretical and experimental effort has been dedicated to detecting dark matter (DM), but the particle nature of DM still remains a mystery. Direct detection experiments look for the direct signatures left by halo DM depositing energy inside the detectors. Traditionally, such experiments have looked for elastic nuclear recoils induced by DM particles in detectors [1]. This strategy has had tremendous sensitivity for DM particles with masses above the GeV scale that interact with nuclei [2; 3; 4]. However, in recent years it has also been recognized that sub-GeV dark matter models are compelling and well-motivated dark matter candidates [5; 6; 7; 8; 9; 10; 11]. These DM particles would leave much lower energy nuclear recoils, motivating experimental efforts to lower the detector thresholds for nuclear recoils. Inelastic processes like the Migdal effect [12; 13; 14; 15; 16] or bremsstrahlung [17] provide alternative channels to detect nuclear scattering in the sub-GeV DM regime. The majority of experiments achieving lower thresholds in nuclear recoils (down to \(\sim 10\) eV) are doing so with crystal targets [18; 19; 20; 21], although there is also progress in using liquid helium [22]. Future experiments like SPICE [23] will reach even lower thresholds by measuring athermal phonons produced in crystals like GaAs and Sapphire (i.e. Al\({}_{2}\)O\({}_{3}\)).
In crystal targets, DM-nucleus scattering can deviate substantially from the picture of a free nucleus undergoing elastic recoils. Nuclei (or atoms) are subject to forces from the rest of the lattice, which play a role at the lower energies relevant for sub-GeV DM. For recoil energies below the typical binding energy of the atom to the lattice (\(\mathcal{O}(10\) eV)), the atoms are instead treated as being bound in a potential well. At even lower energies, the relevant degrees of freedom are the collective excitations of the lattice, known as phonons. In this regime, single phonon excitations with typical energies \(\lesssim 0.1\) eV are possible. In the DM scattering rate, crystal scattering effects are all encoded within a quantity known as the dynamic structure factor, \(S(\mathbf{q},\omega)\). The differential cross section for a DM particle of velocity \(\mathbf{v}\) and mass \(m_{\chi}\) to scatter with energy deposition \(\omega\) and momentum transfer \(\mathbf{q}\) can be written in terms of \(S(\mathbf{q},\omega)\) as: \[\frac{d\sigma}{d^{3}\mathbf{q}d\omega}=\frac{b_{p}^{2}}{\mu_{\chi}^{2}}\frac{1}{v}\frac{\Omega_{c}}{2\pi}|\tilde{F}(\mathbf{q})|^{2}S(\mathbf{q},\omega)\delta(\omega-\omega_{\mathbf{q}}), \tag{1}\] Here \(b_{p}\) is the scattering length of the DM with a proton, \(\mu_{\chi}\) is the reduced DM-proton mass, \(\Omega_{c}\equiv V/N\) is the volume of the unit cell in the crystal with total volume \(V\) and \(N\) unit cells, and \(\omega_{\mathbf{q}}=\mathbf{q}\cdot\mathbf{v}-q^{2}/2m_{\chi}\) is equal to the energy \(\omega\) lost by the DM particle when it transfers momentum \(\mathbf{q}\) to the lattice. The \(\mathbf{q}\)-dependence of the DM-nucleus interaction is encapsulated in the DM form factor \(\tilde{F}(\mathbf{q})\). \(S(\mathbf{q},\omega)\) can thus be viewed as a form factor for the crystal response. For a recent review, see Ref. [24]. Understanding \(S(\mathbf{q},\omega)\) in crystals is critical to direct detection of sub-GeV dark matter. Thus far, the limiting behavior of \(S(\mathbf{q},\omega)\) is well understood [25]. In the limit of large \(\omega\) and \(q\) (\(\omega\gtrsim\) eV and \(q\sim\sqrt{2m_{N}\omega}\) for a nucleus of mass \(m_{N}\)), the structure factor behaves as \(S(\mathbf{q},\omega)\propto\delta\left(q^{2}/(2m_{N})-\omega\right)\), reproducing the cross section for free elastic recoils. At low \(\omega\) comparable to the typical phonon energy \(\omega_{0}\) and \(q\) comparable to the inverse lattice spacing, \(S(\mathbf{q},\omega)\) is instead dominated by single phonon production. The intermediate regime, particularly \(q\sim\sqrt{2m_{N}\omega_{0}}\), is dominated by multiphonon production. For a large number of phonons being produced, this should merge into the free nuclear recoil limit. For DM masses below \(\sim\)MeV, the momentum transfers are smaller than the typical inverse lattice spacing of crystals, \(q<2\pi/a\sim\mathcal{O}(\text{keV})\), where \(a\) is the lattice spacing. The dominant process is then the production of a single phonon. In recent years, the single phonon contribution to \(S(\mathbf{q},\omega)\) has been computed extensively in a variety of materials, often using first-principles approaches for the phonons [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. In most crystals, however, single phonons have a maximum energy of \(\mathcal{O}(100\) meV), requiring extremely low experimental thresholds to detect them.
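As a minimal illustration of Eq. (1), the sketch below evaluates the differential cross section on the kinematic surface \(\omega=\mathbf{q}\cdot\mathbf{v}-q^{2}/2m_{\chi}\), treating the structure factor and DM form factor as user-supplied callables. Natural units are assumed, and the function name and arguments are placeholders rather than an established API.

```python
import numpy as np

def dsigma_d3q(q_vec, v_vec, m_chi, mu_chi, b_p, Omega_c, S, F):
    """Differential cross section of Eq. (1), with the energy delta
    function used to set omega = q.v - q^2/(2 m_chi); i.e. this is
    d(sigma)/d^3q after integrating over omega."""
    q = np.linalg.norm(q_vec)
    v = np.linalg.norm(v_vec)
    omega_q = np.dot(q_vec, v_vec) - q**2 / (2.0 * m_chi)
    if omega_q <= 0.0:
        return 0.0  # energy deposition must be positive
    pref = (b_p**2 / mu_chi**2) / v * Omega_c / (2.0 * np.pi)
    return pref * abs(F(q))**2 * S(q, omega_q)
```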
Production of multiphonons is an enticing channel to look for sub-GeV DM with detectors having thresholds higher than \(\mathcal{O}(100\) meV). They are also important to understand in the near term as experiments lower their thresholds. However, multiphonon production has been more challenging to compute. The numerical first-principles approach taken for single phonon production does not scale well with the number of phonons produced; even the two-phonon rate becomes very complicated. Alternate analytic methods are thus valuable. In Fig. 1, we show a classification of the different regimes in which a multiphonon calculation has been performed, including this work. We discuss the details of these regimes and calculations below. One analytic approach was taken in Ref. [37], which calculated the two-phonon rate in the long-wavelength limit, but this study was limited to the regime \(q<2\pi/a\) and focused on acoustic phonons only. For \(q>2\pi/a\), a different approximation is possible, the incoherent approximation, which drops interference terms between different atoms of the crystal in calculating \(S(\mathbf{q},\omega)\). Then scattering is dominated by recoiling off of individual atoms. This approach was taken in [25], which found a general \(n\)-phonon production rate scaling as \((q^{2}/(2m_{N}\omega_{0}))^{n}\). This result also showed how the free nuclear recoil cross section was reproduced in the multiphonon structure factor as \(q\gg\sqrt{2m_{N}\omega_{0}}\). However, one limitation of the multiphonon production rate in Ref. [25] was that it worked in the harmonic approximation, where higher order phonon interactions like the three-phonon interaction are neglected. Typical crystals have some anharmonicity which introduces phonon self-interactions, leading to various observable effects like phonon decays, thermal expansion, and thermal conductivity of crystals [38; 39; 40]. Using a simplified model of anharmonic phonon interactions, Ref. [25] estimated that anharmonic three-phonon interactions may give the dominant contribution to the two-phonon rate for \(q<2\pi/a\), and are larger than the harmonic piece by almost an order of magnitude in this regime. On the other hand, we do not expect anharmonic effects to be important in the opposite limit of large \(q\) (\(q\gg\sqrt{2m_{N}\omega_{0}}\)), where the nucleus can be treated as a free particle. It is thus necessary to bridge these two extremes and estimate the anharmonic effects in the intermediate regime where multiphonon production dominates the scattering. In this work, we estimate the anharmonic effects on the rate of multiphonon production by working in the incoherent approximation and \(q>2\pi/a\). In this limit, the multiphonon scattering rate looks similar to that of an atom in a potential [41], although the spectrum of states is smeared out due to interactions between neighboring atoms. Given this similarity, we will take a toy model of an atom in a 1D potential. This gives a simple approach to including anharmonic effects, which is also illustrated in the right panel of Fig. 1. The anharmonic corrections to the atomic potential only capture a part of the contributions to anharmonic phonon interactions, but they have a similar size (in the appropriate dimensionless units) and should give a reasonable estimate of the size of the effect. We can therefore use this approach to estimate theoretical uncertainties and gain analytic understanding for the multiphonon production rate.
However, the result should not be taken as a definitive calculation of the anharmonic corrections. Fortunately, we will find that anharmonic corrections are large only in certain parts of the phase space which are more challenging to observe, and that the multiphonon rate quickly converges to the harmonic result for DM masses above a few MeV. The outline of this paper is as follows: In Sec. II, we discuss the formalism of DM scattering in a crystal and the dynamic structure factor, which encodes the information about the crystal response. We consider the calculation of the structure factor under the incoherent approximation, and motivate the anharmonic 1D toy potentials we use in this paper. In Sec. III, we study the behavior of the dynamic structure factor analytically for the anharmonic 1D potentials. Using perturbation theory, we show that anharmonic corrections can dominate for \(q\ll\sqrt{2m_{N}\omega_{0}}\) and become more important for higher phonon number. In the opposite limit \(q\gg\sqrt{2m_{N}\omega_{0}}\), we use the impulse approximation to show that anharmonic corrections are negligible and that the structure factor indeed approaches that of an elastic recoil. In Sec. IV, we present numerical results for the structure factor in anharmonic 1D potentials obtained from realistic atomic potentials in various crystals. In Sec. IV.1, we calculate the impacts of including anharmonic effects on DM scattering rates. We conclude in Sec. V. Appendix A gives the details of the modeling of the interatomic forces on the lattice, used to extract 1D single atom potentials. Appendix B gives additional details of the analytic perturbation theory estimates of the anharmonic structure factor. Appendix C includes additional details relevant to the impulse approximation calculation. Appendix D summarizes the exactly solvable Morse potential model, which further validates the results in the main text.

Figure 1: (**Left**) Due to the computational challenges of obtaining the multiphonon scattering rate in crystals, analytic approximations are valuable. Here we show a classification of regimes in which a multiphonon calculation has been performed, as well as approximations made in each case. In this work, we show that anharmonic corrections can be significant for \(q\lesssim\sqrt{2m_{N}\omega_{0}}\) (Sec. III.2) but are negligible when \(q\gg\sqrt{2m_{N}\omega_{0}}\) (Sec. III.3). We obtain results for all \(q\) using numerical calculations (Sec. IV.1). (**Right**) To estimate anharmonic effects, we take a toy model of dark matter scattering off an atom in a 1D anharmonic potential. We obtain the anharmonicity by fitting to empirical models of interatomic potentials.

## II Dark matter scattering in a crystal

Consider DM that interacts with nuclei in the crystal. We will parameterize the interaction with the lattice by a coupling strength \(f_{\mathbf{\ell}d}\) relative to that of a single proton, where \(\mathbf{\ell}\) denotes the lattice vector of a unit cell and \(d\) denotes the atoms in the unit cell.
In the DM scattering cross section, (1), the material properties of the crystal are encoded in the structure factor \(S(\mathbf{q},\omega)\) which is defined as, \[S(\mathbf{q},\omega)\equiv\frac{2\pi}{V}\sum_{f}\Big{|}\sum_{\mathbf{\ell}}\sum_{d}f_{\mathbf{\ell}d}\langle\Phi_{f}|e^{i\mathbf{q}\cdot\mathbf{r}_{\mathbf{\ell}d}}|0\rangle\Big{|}^{2}\delta(E_{f}-E_{0}-\omega), \tag{2}\] where \(|\Phi_{f}\rangle\) is the final excited state of the crystal with energy \(E_{f}\) and \(\mathbf{r}_{\mathbf{\ell}d}\) denotes the position of the scattered nucleus. The crystal is considered to be in the ground state \(|0\rangle\) initially. Note for simplicity we assume a pure crystal where each atom has a unique coupling strength; the scattering is modified if there is a statistical distribution for the interaction strengths at each lattice site, for instance if different isotopes are present [25]. The states \(|\Phi_{f}\rangle\) are the phonon eigenstates of the lattice Hamiltonian, \[H_{\text{lattice}}=\sum_{\mathbf{\ell}d}\frac{p_{\mathbf{\ell}d}^{2}}{2m_{\mathbf{\ell}d}}+V_{\text{lattice}}+E_{0}, \tag{3}\] where the first term is the kinetic energy of the atoms in the lattice and the lattice potential \(V_{\text{lattice}}\) in general is given by, \[V_{\text{lattice}}=\frac{1}{2}\sum_{\mathbf{\ell}d,\mathbf{\ell}^{\prime}d^{\prime}}\sum_{\alpha,\beta}k_{\alpha\beta}^{(2)}(\mathbf{\ell}d,\mathbf{\ell}^{\prime}d^{\prime})\ u_{\alpha}(\mathbf{\ell}d)\ u_{\beta}(\mathbf{\ell}^{\prime}d^{\prime})+\frac{1}{3!}\sum_{\mathbf{\ell}d,\mathbf{\ell}^{\prime}d^{\prime},\mathbf{\ell}^{\prime\prime}d^{\prime\prime}}\sum_{\alpha,\beta,\gamma}k_{\alpha\beta\gamma}^{(3)}(\mathbf{\ell}d,\mathbf{\ell}^{\prime}d^{\prime},\mathbf{\ell}^{\prime\prime}d^{\prime\prime})\ u_{\alpha}(\mathbf{\ell}d)\ u_{\beta}(\mathbf{\ell}^{\prime}d^{\prime})\ u_{\gamma}(\mathbf{\ell}^{\prime\prime}d^{\prime\prime})+... \tag{4}\] where the \(u_{\alpha}(\mathbf{\ell}d)\) is the displacement from the equilibrium position in the Cartesian direction \(\alpha\) for the atom at the position \(d\) in the unit cell located at \(\mathbf{\ell}\), and \(k_{\alpha\beta}^{(2)}\), \(k_{\alpha\beta\gamma}^{(3)}\) are the second-, and third-order force constants respectively. Note that as the displacements are considered around equilibrium, we do not have a term in the potential which is linear in the displacements. A number of approximations are useful in evaluating \(S(\mathbf{q},\omega)\). The first is the harmonic approximation, which amounts to keeping the terms up to second-order force constants and neglecting the higher order terms (\(k_{\alpha\beta\gamma}^{(3)}=0\)). This vastly simplifies the Hamiltonian into a harmonic oscillator system, and has been used in most previous calculations of DM scattering in crystals. While this is generally an excellent approximation in crystals, including higher order terms in the Hamiltonian (anharmonicity) is necessary to explain a number of observable effects, as we will discuss further below. The second approximation is the incoherent approximation, used for scattering with momentum transfers much larger than the inverse lattice spacing of the crystal, \(q\gg 2\pi/a\). In this limit, we drop the interference terms between different atoms in the crystal in (2).
This amounts to summing over the squared matrix elements of individual atoms in the structure factor in (2), \[S(\mathbf{q},\omega)\approx\frac{2\pi}{V}\sum_{f}\sum_{\mathbf{\ell}}\sum_{d}|f_{\mathbf{\ell}d}|^{2}\Big{|}\langle\Phi_{f}|e^{i\mathbf{q}\cdot\mathbf{r}_{\mathbf{\ell}d}}|0\rangle\Big{|}^{2}\delta(E_{f}-E_{0}-\omega). \tag{5}\] The calculation of the structure factor then simplifies to computing matrix elements \(\Big{|}\langle\Phi_{f}|e^{i\mathbf{q}\cdot\mathbf{r}_{\mathbf{\ell}d}}|0\rangle\Big{|}^{2}\) which are identical for the atoms in all unit cells \(\ell\). Below, we will first discuss this calculation under the approximation of a harmonic crystal, before going on to setting up a model that accounts for anharmonicity in crystals.

### Harmonic approximation

In the harmonic approximation, the lattice Hamiltonian can be written as a sum of harmonic oscillators in Fourier space [42], \[H_{\text{lattice}}^{\text{Harmonic}}=\sum_{\nu}^{3n}\sum_{\mathbf{q}}\omega_{\mathbf{q},\nu}(\hat{a}_{\mathbf{q},\nu}^{\dagger}\hat{a}_{\mathbf{q},\nu}+\frac{1}{2}), \tag{6}\] where the phonon eigenmodes of the lattice are labelled by the momentum \(\mathbf{q}\) and the \(3n\) branches \(\nu\) with \(n\) being the number of atoms in the unit cell. The \(\hat{a}_{\mathbf{q},\nu}^{\dagger}\) (\(\hat{a}_{\mathbf{q},\nu}\)) are the creation (annihilation) operators, and \(\omega_{\mathbf{q},\nu}\) are the energies of the phonons. The lattice eigenstates that appear in (2) can then be written as, \[|\Phi_{n}\rangle=\hat{a}_{\mathbf{q}_{1},\nu_{1}}^{\dagger}\hat{a}_{\mathbf{q}_{2},\nu_{2}}^{\dagger}...\hat{a}_{\mathbf{q}_{n},\nu_{n}}^{\dagger}|0\rangle, \tag{7}\] where \(|\Phi_{n}\rangle\) is an \(n\)-phonon state. The displacement operators in this harmonic approximation are given by, \[\mathbf{u}(\mathbf{\ell}d)=\sum_{\nu}^{3n}\sum_{\mathbf{q}}\sqrt{\frac{1}{2Nm_{d}\omega_{\mathbf{q},\nu}}}\Big{(}\mathbf{e}_{\mathbf{q},\nu}(d)\,\hat{a}_{\mathbf{q},\nu}\,e^{i\mathbf{q}\cdot\mathbf{r}_{\mathbf{\ell}d}^{0}-i\omega_{\mathbf{q},\nu}t}+\ \text{h.c.}\Big{)}, \tag{8}\] where the \(\mathbf{e_{q,\nu}}(d)\) indicates the eigenvector of the displacement of atom \(d\) for that phonon. The equilibrium position of the atom is denoted by \(\mathbf{r}_{\mathbf{\ell}d}^{0}\). Using \(\mathbf{r}_{\mathbf{\ell}d}=\mathbf{r}_{\mathbf{\ell}d}^{0}+\mathbf{u}(\mathbf{\ell}d)\) inside (2), the dynamic structure factor can be calculated in the harmonic approximation. This approach has been applied to calculate single-phonon excitations using numerical results for phonon energies and eigenvectors [27; 28; 29; 30; 32; 33; 34], but becomes computationally much more burdensome for multiphonon final states. Under both the incoherent and harmonic approximations, it is possible to compute the multiphonon structure factor in (5). This was given in Ref. [25] as an expansion in the number of phonons produced \(n\), \[S(\mathbf{q},\omega)\approx 2\pi\sum_{d}n_{d}\,|f_{d}|^{2}e^{-2W_{d}(\mathbf{q})}\sum_{n}\frac{1}{n!}\Big{(}\frac{q^{2}}{2m_{d}}\Big{)}^{n}\left(\prod_{i=1}^{n}\int d\omega_{i}\frac{D_{d}(\omega_{i})}{\omega_{i}}\right)\delta\left(\sum_{j=1}^{n}\omega_{j}-\omega\right), \tag{9}\] where \(D_{d}(\omega)\) is the partial density of states in the crystal, normalized to \(\int d\omega D_{d}(\omega)=1\). \(W_{d}(\mathbf{q})\) is the Debye-Waller factor defined as, \[W_{d}(\mathbf{q})=\frac{q^{2}}{4m_{d}}\int d\omega^{\prime}\,\frac{D_{d}(\omega^{\prime})}{\omega^{\prime}}. \tag{10}\]
(9) shows that with higher momentum \(q\), there is an increased rate of multiphonons; the typical phonon number is \(n\sim\frac{q^{2}}{2m\bar{\omega}}\) with \(\bar{\omega}\) a typical phonon energy. In the limit of \(n\gg 1\), this reproduces the nuclear recoil limit. In the incoherent approximation above, we still assumed the final states \(|\Phi_{f}\rangle\) are the phonon eigenstates of the harmonic lattice Hamiltonian in (6). Let us now make a further approximation that the final states are isolated atomic states, where each atom is bound in a potential. Assuming an isotropic potential, and a single frequency \(\omega_{0}\) for the oscillators, a toy atomic Hamiltonian for atom \(d\) in the lattice can be written as, \[H_{d}^{\text{toy}}=\frac{p_{d}^{2}}{2m_{d}}+\frac{1}{2}m_{d}\omega_{0}^{2}r_{d}^{2}, \tag{11}\] where \(\mathbf{r}_{d}\) is the displacement of the atom \(d\) from its equilibrium position. Following (5), the dynamic structure factor can be written as, \[S_{\text{toy}}(\mathbf{q},\omega)=2\pi\sum_{d}n_{d}|f_{d}|^{2}\sum_{n}\left|\langle\vec{n}|e^{i\mathbf{q}\cdot\mathbf{r}_{d}}|0\rangle\right|^{2}\delta(E_{n}-E_{0}-\omega), \tag{12}\] where \(|\vec{n}\rangle\) are the energy eigenstates of the toy harmonic Hamiltonian considered for atom \(d\), with \(\vec{n}=\{n_{x},n_{y},n_{z}\}\). The energies relative to the ground state are given by \(E_{n}-E_{0}=n\omega_{0}\) with \(n=n_{x}+n_{y}+n_{z}\). We have also absorbed the sum over the lattice vector \(\mathbf{\ell}\) and the volume \(V\) into the density \(n_{d}\) of atom \(d\) in the lattice. As shown in [41], this structure factor is given by, \[S_{\text{toy}}(\mathbf{q},\omega)\approx 2\pi\sum_{d}n_{d}|f_{d}|^{2}e^{-2W_{d}^{\text{toy}}(q)}\sum_{n}\frac{1}{n!}\left(\frac{q^{2}}{2m_{d}\omega_{0}}\right)^{n}\delta\left(n\omega_{0}-\omega\right), \tag{13}\] where the Debye-Waller factor in the toy model is given by \(W_{d}^{\text{toy}}(q)=q^{2}/(4m_{d}\omega_{0})\). This picture can be simplified even further by considering a toy one-dimensional harmonic potential for the atom \(d\) given by \[V_{d}(x)=\frac{1}{2}m_{d}\omega_{0}^{2}x^{2}. \tag{14}\] Note that in general \(\omega_{0}\) will depend on the atom \(d\) within the unit cell, but we suppress this dependence for simplicity. The structure factor in this 1D case is exactly the same expression as the toy three-dimensional case in (13), as expected given the isotropic 3D potential assumed. A derivation of the 1D result is given in Sec. III.1. The toy model of DM scattering off a 1D harmonic potential gives a simple intuitive picture for the result in (9). We see a very similar form of the structure factor in (13), but with a discrete spectrum of states for the isolated oscillator of the toy model. By assuming that the final states are isolated atomic states, we have effectively neglected the interactions between atoms, and the excited states of all the atoms are discrete and degenerate. In a real material, the interaction with neighboring atoms will lead to a splitting of the degenerate levels, and give a broad spectrum of allowed energy levels (the phonon spectrum). The interpretation for the structure factor is therefore also somewhat different in the two cases, as it gives a probability for exciting the \(n\)th excited state in an isolated oscillator. But we will still continue to refer to the \(n\)th excited state as the \(n\)-phonon state to make the connection with the full incoherent structure factor in (9).
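A minimal numerical sketch of Eq. (9) follows, building the \(n\)-phonon terms from iterated convolutions of \(D(\omega)/\omega\) and weighting by the Debye-Waller factor of Eq. (10). It assumes a single atom species with \(f_{d}=1\) and a user-supplied partial density of states on a uniform grid starting at \(\omega=0\); the function name and grid conventions are illustrative.

```python
import math
import numpy as np

def multiphonon_S(q, omega_grid, D, m_d, n_max=10):
    """Incoherent-approximation structure factor of Eq. (9) for one
    atom species; omega_grid must be uniform and start at 0 so that
    the convolutions line up."""
    d_om = omega_grid[1] - omega_grid[0]
    # D(omega)/omega, regulated at the omega = 0 bin
    g = np.where(omega_grid > 0, D / np.maximum(omega_grid, 1e-12), 0.0)
    W = 0.25 * q**2 / m_d * np.sum(g) * d_om   # Debye-Waller, Eq. (10)
    S = np.zeros_like(omega_grid)
    conv = g.copy()                            # one-phonon spectrum
    for n in range(1, n_max + 1):
        S += (q**2 / (2.0 * m_d))**n / math.factorial(n) * conv
        # next order: one more convolution with D(omega)/omega
        conv = np.convolve(conv, g)[:len(omega_grid)] * d_om
    return 2.0 * np.pi * np.exp(-2.0 * W) * S
```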
The similarity in the structure factor gives a route forward to including anharmonic effects, which is much easier to understand in the toy model. We can proceed by including anharmonic corrections to the 1D potential in (14), and in some cases obtain analytic results that illustrate their importance. In order to quantitatively estimate the impact on dark matter scattering rates, a few remaining ingredients are needed. In practice, the toy model can give very different results in certain parts of parameter space due to the discrete spectrum assumed and depending on the choice of \(\omega_{0}\). We therefore need a prescription to identify the appropriate \(\omega_{0}\) for the isolated oscillator, and to smear it out appropriately to mimic a real material. Comparing Eqs. 9 and 13, we see that the complete structure factor can be attained by making a replacement \[\frac{\delta(n\omega_{0}-\omega)}{\omega_{0}^{n}}\rightarrow\left(\prod_{i=1}^{n}\int d\omega_{i}\frac{D(\omega_{i})}{\omega_{i}}\right)\delta\left(\sum_{j=1}^{n}\omega_{j}-\omega\right). \tag{15}\] In this expression, we can identify \(D(\omega)/(\omega\overline{\omega^{-1}})\) as a normalized probability distribution for \(\omega\), where \(\overline{\omega^{-1}}=\int d\omega^{\prime}D(\omega^{\prime})/\omega^{\prime}\). This distribution yields a mean value for \(\omega\) of \((\overline{\omega^{-1}})^{-1}\). The right hand side of (15) is proportional to the joint probability distribution for total energy \(\omega\), and we can simplify it when \(n\gg 1\) by applying the Central Limit Theorem. This allows us to replace the right hand side with a Gaussian, which simplifies computations: \[\frac{\delta(n\omega_{0}-\omega)}{\omega_{0}^{n}}\rightarrow\frac{\left(\overline{\omega^{-1}}\right)^{n}}{\sqrt{2\pi n\sigma^{2}}}e^{-\frac{\left(\omega-n\left(\overline{\omega^{-1}}\right)^{-1}\right)^{2}}{2n\sigma^{2}}}\Theta(\omega_{\rm max}-\omega). \tag{16}\] Note we have included a cutoff at multiples of the maximum allowed energy in the density of states, \(\omega_{\rm max}=n\,\omega_{\rm ph}^{\rm max}\), where \(\omega_{\rm ph}^{\rm max}\) is the maximum phonon energy (the smallest \(\omega\) above which \(D(\omega)\) vanishes), so that we do not include the region where \(D(\omega_{i})=0\) on the right hand side of (15). The width of the Gaussian for \(n=1\) is given by \[\sigma=\sqrt{\frac{\overline{\omega}}{\overline{\omega^{-1}}}-\frac{1}{\left(\overline{\omega^{-1}}\right)^{2}}} \tag{17}\] and \(\overline{\omega}=\int d\omega^{\prime}D(\omega^{\prime})\omega^{\prime}\). This discussion therefore makes it clear that we should identify the frequency of the 1D toy model as \(\omega_{0}=1/\overline{\omega^{-1}}\), which can be calculated numerically given the phonon density of states. This approach is validated in Fig. 2, where we compare our previous result using the full density of states [25] to the prescription described above. Note that small deviations at low mass arise from the lack of a cutoff at the Brillouin zone momentum in the previous density of states result. We reiterate that in this work, we shall include this Brillouin zone cutoff across all rate calculations since the incoherent approximation and subsequent approximations are only valid in this regime. We will utilize this prescription to extend the multiphonon calculations for an anharmonic potential. To set up toy 1D anharmonic potentials, we first need to understand the anharmonic properties of typical crystals to extract the behavior of the potentials. We do this in the following subsection.
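The smearing prescription of Eqs. (15)-(17) is straightforward to implement numerically; a sketch follows, assuming a normalized density of states \(D(\omega)\) tabulated on a uniform grid with \(\omega>0\). Function names are illustrative.

```python
import numpy as np

def smearing_params(omega_grid, D):
    """omega_0 = 1/mean(1/omega) and the one-phonon width sigma of
    Eq. (17), from a normalized D(omega) on a uniform grid (omega > 0)."""
    d_om = omega_grid[1] - omega_grid[0]
    inv_mean = np.sum(D / omega_grid) * d_om   # bar{omega^{-1}}
    mean = np.sum(D * omega_grid) * d_om       # bar{omega}
    omega0 = 1.0 / inv_mean
    sigma = np.sqrt(mean / inv_mean - 1.0 / inv_mean**2)
    return omega0, sigma

def smeared_line(omega, n, omega0, sigma, omega_max):
    """Gaussian replacement of Eq. (16) for the n-phonon line; the
    (bar{omega^{-1}})^n prefactor cancels the 1/omega_0^n on the left."""
    gauss = np.exp(-(omega - n * omega0)**2 / (2.0 * n * sigma**2))
    gauss /= np.sqrt(2.0 * np.pi * n * sigma**2)
    return np.where(omega <= omega_max, gauss, 0.0)
```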
Figure 2: **Comparison of scattering in a harmonic crystal to 1D harmonic oscillator.** The dotted lines show the DM cross section reach computed using the multiphonon structure factor in a harmonic crystal, (9), and assuming the incoherent approximation [25]. Using the structure factor of the toy 1D harmonic oscillator in (13) combined with the energy smearing prescription in (16) gives a very similar result (solid lines). There are some small deviations at low momentum since we place a hard cut on the allowed momentum transfer \(q>2\pi/a\approx 2\) keV for the 1D oscillator.

### Anharmonic crystal properties

In general, a crystal lattice will exhibit some anharmonicity. Anharmonicity technically refers to the presence of non-zero force constants which are higher than second-order in the lattice potential in (4). For example, cubic anharmonicity in the crystal is parameterized by the third-order force constants \(k^{(3)}_{\alpha\beta\gamma}(\mathbf{\ell}d,\mathbf{\ell}^{\prime}d^{\prime},\mathbf{\ell}^{\prime\prime}d^{\prime\prime})\) in (4). Such force constants can be computed with DFT methods, similar to the harmonic case [43]. In the presence of such terms, the phonon eigenstates are no longer the harmonic phonon eigenstates of the crystal, and higher order phonon interactions, such as a three-phonon interaction, will be present. Calculating the full dynamic structure factor in (5) for a crystal with such anharmonicity would require accounting for these higher order force tensors in both the matrix elements and in the final states, which quickly becomes a very challenging numerical problem. The rough size of the anharmonic force constants can be inferred from measurable crystal properties, however. We will briefly discuss some of the anharmonic effects below, and use them to justify our estimate of anharmonic effects. An important effect of keeping cubic or higher order terms in (4) is to introduce interactions between the phonon modes which are the eigenstates of the harmonic Hamiltonian. For example, from (8), we can see that a cubic term in the displacements \(\mathbf{u}(\mathbf{\ell}d)\) will introduce three-phonon interactions like \(\hat{a}^{\dagger}\hat{a}\hat{a}\) (i.e. annihilation of two phonons to create a single phonon) or \(\hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\) (i.e. decay of a single phonon into two phonons) in the Hamiltonian at the first order in the anharmonic force constant \(k^{(3)}\). Phonon lifetimes in crystals are thus directly related to the anharmonic force constants, and can be measured to estimate the size of the anharmonicity [44; 45; 40]. Anharmonicity is also necessary to explain thermal expansion and conductivity in crystals. In particular, the linear volume expansion coefficient of crystals can be directly written in terms of the mode Gruneisen constants \(\gamma_{\mathbf{q}\nu}\) which is defined for phonon modes labelled by the momentum \(\mathbf{q}\) and branch index \(\nu\) as [46], \[\gamma_{\mathbf{q}\nu}=-\frac{V}{\omega_{\mathbf{q}\nu}}\frac{\partial\omega_{\mathbf{q}\nu}}{\partial V}. \tag{18}\] Note that the change in volume in the equation above is at a fixed temperature. In a purely harmonic crystal, the phonon frequencies are determined by the second-order force constants which do not get modified with changes in volume, thus leading to a vanishing Gruneisen constant.
However, in the presence of cubic anharmonicity, the phonon frequencies are determined by the effective second-order force constants, which receive corrections depending on both the third-order force constants \(k^{(3)}\) and the changes in volume, thus giving a non-zero Gruneisen constant [47]. An increase in volume leads to larger displacements of atoms, which typically makes the effective second-order constants and the phonon frequencies smaller, providing a positive Gruneisen constant. In the case of a non-zero Gruneisen constant, the free energy of the crystal, which has a harmonic contribution \(\propto\Delta V^{2}\), receives a volume-dependent correction \(\propto-\Delta V\gamma_{\mathbf{q}\nu}\bar{E}_{\mathbf{q}\nu}\), where \(\bar{E}_{\mathbf{q}\nu}\) is the mean energy in the phonon mode \(\mathbf{q}\nu\) at a particular temperature [38]. As the temperature increases, the mean energy \(\bar{E}_{\mathbf{q}\nu}\) goes up, and thus this leads to a new equilibrium volume which minimizes the free energy. For a positive Gruneisen constant, this leads to thermal volume expansion. The Gruneisen constants are thus directly related to the cubic force constants of the material, and have also been used to extract them [47]. Concretely, the relationship between the mode Gruneisen constants and the anharmonic force constants for weak anharmonicity can be shown to be [48], \[\gamma_{\mathbf{q}\nu}=-\frac{1}{6\omega_{\mathbf{q},\nu}^{2}}\sum_{d,\mathbf{\ell}^{\prime}d^{\prime},\mathbf{\ell}^{\prime\prime}d^{\prime\prime}}\sum_{\alpha\beta\delta}k^{(3)}_{\alpha\beta\delta}(\mathbf{0}d,\mathbf{\ell}^{\prime}d^{\prime},\mathbf{\ell}^{\prime\prime}d^{\prime\prime})\frac{e^{\beta}_{\mathbf{q},\nu}(d^{\prime})^{*}e^{\delta}_{\mathbf{q},\nu}(d^{\prime\prime})}{\sqrt{m_{d^{\prime}}m_{d^{\prime\prime}}}}\ r^{0,\alpha}_{\mathbf{0}d}\ e^{i\mathbf{q}\cdot(\mathbf{\ell}^{\prime\prime}-\mathbf{\ell}^{\prime})}, \tag{19}\] where the \(e^{\beta}_{\mathbf{q},\nu}(d)\) indicates the displacement of atom \(d\) in the Cartesian direction \(\beta\) for the phonon \(\mathbf{q}\nu\), and \(r^{0,\alpha}_{\mathbf{0}d}\) is the equilibrium position of atom \(d\) in the Cartesian direction \(\alpha\) for the unit cell at the origin. To get a rough estimate of the maximum anharmonicity strength in the crystal, the relation in (19) can be inverted and written in terms of the maximal mode Gruneisen constant \(\gamma^{\max}\) found in a crystal, \[k^{(3)}\sim\frac{6m_{d}\omega_{0}^{2}\gamma^{\max}}{l}, \tag{20}\] where \(\omega_{0}\) is the typical phonon energy of the lattice and \(l\) is the nearest-neighbor distance. Now consider a typical displacement \(\sim(\sqrt{2m_{d}\omega_{0}})^{-1}\) of an atom in the crystal; the change in the potential energy \(\delta V_{\mathrm{anh}}\) due to the anharmonic force constant estimated above is given by, \[\frac{\delta V_{\mathrm{anh}}}{\omega_{0}}\sim\frac{1}{\omega_{0}}\frac{1}{3!}\ k^{(3)}(\sqrt{2m_{d}\omega_{0}})^{-3}\sim 0.02\ \Big{(}\frac{m_{d}}{28\ \mathrm{GeV}}\Big{)}^{-0.5}\ \Big{(}\frac{\omega_{0}}{30\ \mathrm{meV}}\Big{)}^{-0.5}\ \Big{(}\frac{\gamma^{\max}}{1.5}\Big{)}\ \Big{(}\frac{l}{2.35\ \text{\AA}}\Big{)}^{-1}, \tag{21}\] where in the second line we use parameters for Si. We use an estimate for the maximal value of the mode Gruneisen constant in Si from [38] at 0K. In Ge, the maximal Gruneisen constant is similar to that in Si, while in GaAs, it could be as high as 3.5 for certain phonon modes [38].
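The estimate in Eq. (21) can be checked with a few lines of arithmetic; the sketch below uses natural units (energies and masses in eV, lengths converted with \(\hbar c\approx 1973\) eV Å) and the representative Si values quoted in the text.

```python
m_d    = 28e9           # ~28 GeV Si nucleus mass, in eV
omega0 = 30e-3          # typical phonon energy, 30 meV
gamma  = 1.5            # maximal mode Gruneisen constant for Si
l      = 2.35 / 1973.0  # nearest-neighbor distance (2.35 A) in eV^-1

k3 = 6.0 * m_d * omega0**2 * gamma / l      # Eq. (20)
x_typ = (2.0 * m_d * omega0) ** -0.5        # typical displacement
print((k3 * x_typ**3 / 6.0) / omega0)       # ~0.015, of order the quoted 0.02
```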
The Gruneisen constant thus provides a rough estimate of the overall anharmonicity in the crystal, including the cubic terms which depend on displacements of multiple atoms. In this paper, we will work with a toy model of anharmonic interactions similar to the 1D oscillator model in Sec. II.1. In particular, we consider excitations for an isolated atom in a 1D anharmonic potential. The anharmonicity is controlled by force constant terms like \(k^{(3)}_{\alpha\beta\gamma}(\mathbf{\ell}d,\mathbf{\ell}^{\prime}d^{\prime},\mathbf{\ell}^{\prime\prime}d^{\prime\prime})\) with \(\mathbf{\ell}d=\mathbf{\ell}^{\prime}d^{\prime}=\mathbf{\ell}^{\prime\prime}d^{\prime\prime}\) which characterize the modification to the potential of a single atom in a lattice. Since the Gruneisen constants involve a sum over many cubic force terms, we instead directly obtain the single-atom anharmonic force constants with an empirical model of the lattice. We model the lattice assuming empirical interatomic potentials, which have been shown to accurately reproduce phonon dispersions and transport properties [49]. Concretely, we assume the Tersoff-Buckingham-Coulomb interatomic potential with the parameter set given in Ref. [49] (see Appendix A for details). We then fix all atoms at their equilibrium positions except for one atom denoted by \(\mathbf{\ell}d\), which is displaced by a small distance in different directions. The single atom potential calculated from this procedure is shown in Fig. 3 for Si, with deviations from the harmonic potential that depend on the direction of displacement. The maximum anharmonicity is along the direction of the nearest-neighbor atom. Along this direction, we find that the typical change in the potential energy for an atom displaced by \(r\sim(\sqrt{2m_{d}\omega_{0}})^{-1}\) is, \[\frac{\delta V_{\text{anh}}}{\omega_{0}}\sim 0.01. \tag{22}\] Comparing this estimate with (21), we see that the anharmonicity strength inferred from the potential of a single atom is roughly of the same size as the overall anharmonicity strength of the lattice inferred from the Gruneisen constant. Thus, even though we do not perform a full calculation of the structure factor for an anharmonic crystal including the modification of the phonon spectrum and the lattice states, the comparison above suggests that the effects in a full calculation are expected to be similar in magnitude to the effects we estimate in this work using single atom potentials.

### Toy anharmonic potential

As shown in Sec. II.1 for the harmonic crystal, the features of the dynamic structure factor under the incoherent approximation can be well-approximated with just a 1D toy potential for an individual atom. This gives a much simpler path to calculating DM scattering in anharmonic crystals for \(q\gg 2\pi/a\), where many phonons may be produced. In contrast, prior work including anharmonicity focused on the limit \(q\ll 2\pi/a\), restricted to two phonons [37], and does not scale well to a large number of phonons. We can then stitch together the two approaches to gain a more complete understanding of anharmonic effects. In this work, we take a 1D anharmonic potential and calculate the 1D structure factor, in order to simplify the problem as much as possible. Taking the 1D approximation is more subtle in the presence of anharmonicity since a generic potential in 3D is not separable, unlike the harmonic case.
Figure 3: **Single atomic potential:** Potential of a single atom displaced along various directions with all other atoms at their equilibrium positions. In zincblende Si, the largest anharmonicity is in the direction of the nearest-neighbor atom, while the smallest anharmonicity is in the direction of the next-nearest-neighbor. We have also included a third direction orthogonal to the other two, with intermediate anharmonicity strength.

Denoting the small displacement around equilibrium by \(r\), and the polar and azimuthal directions by \(\theta\) and \(\phi\) respectively, the potential energy for atom \(d\) in the lattice can be expanded in powers of \(r\) as, \[V_{d}(r,\theta,\phi)=\frac{1}{2}m_{d}\omega_{0}^{2}r^{2}+\sum_{k\geq 3}\lambda_{k}\omega_{0}f_{k}(\theta,\phi)(r\sqrt{2m_{d}\omega_{0}})^{k}, \tag{23}\] where \(\lambda_{k}\) are dimensionless constants parameterizing the degree of anharmonicity at \(k^{\rm th}\) order, and \(f_{k}(\theta,\phi)\) are functions which specify the angular dependence and whose range is \([-1,1]\). Solving the full 3D problem would require numerically finding the eigenstates of this general potential, while in the 1D case we can make much more progress analytically. We will therefore select directions of maximum anharmonicity and use this for our simplified 1D problem. Our expectation is that this gives a conservative estimate of the importance of anharmonic couplings, in that the full 3D calculation would give somewhat reduced effects. As discussed in Sec. II.2, we can extract realistic single atom potentials by modeling the interatomic potentials on the lattice and displacing a single atom (see Appendix A for details). We typically find that, for small displacements around equilibrium, the anharmonicity is dominated by the cubic and quartic terms parametrized by \(\lambda_{3}\) and \(\lambda_{4}\), respectively. Motivated by these observations, we consider the following forms of toy potentials in our study:

* **Single cubic or quartic perturbations:** We first consider a harmonic potential with a single perturbation, \[V_{d}(x)=\frac{1}{2}m_{d}\omega_{0}^{2}x^{2}+\lambda_{k}\omega_{0}(\sqrt{2m_{d}\omega_{0}}x)^{k},\] (24) where \(k=3\) or \(4\). This case is amenable to perturbation theory, and in Sec. III.2, we apply it to discuss the power counting of anharmonic corrections.
* **Morse potential:** It is possible to obtain exact (non-perturbative) analytic results for the Morse potential defined by, \[V_{\rm Morse}(x)=B\Big{(}e^{-2ax}-2e^{-ax}\Big{)},\] (25) where \(a\) is a parameter controlling the width of the potential and \(B\) is the normalization. We fit these two parameters to the cubic anharmonicity estimated from the single atom potentials discussed earlier, and calculate the dynamic structure factor for this potential in App. D (a sketch of its exact level structure follows after this list).
* **Fit to realistic atomic potentials:** We numerically calculate the structure factor in a potential with both cubic and quartic terms, where the dimensionless anharmonic couplings are obtained by fitting to the actual single atom potential. The potential in this case is given by \[V_{d}(x)=\frac{1}{2}m_{d}\omega_{0}^{2}x^{2}+\lambda_{3}\omega_{0}(\sqrt{2m_{d}\omega_{0}}x)^{3}+\lambda_{4}\omega_{0}(\sqrt{2m_{d}\omega_{0}}x)^{4}.\] (26) We find that typically, \(\lambda_{3}\sim 0.01\), and \(\lambda_{4}\sim 10^{-4}\).
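For the Morse potential of Eq. (25), the bound-state spectrum has a standard closed form, which is useful for cross-checking numerics; whether App. D adopts exactly this normalization is an assumption here.

```python
import numpy as np

def morse_levels(B, a, m):
    """Bound-state energies of V(x) = B (exp(-2 a x) - 2 exp(-a x));
    omega0 = a*sqrt(2B/m) is the harmonic frequency at the minimum.
    Standard closed form; conventions assumed, check against App. D."""
    omega0 = a * np.sqrt(2.0 * B / m)
    n_max = int(np.floor(2.0 * B / omega0 - 0.5))  # finite bound spectrum
    n = np.arange(n_max + 1)
    return -B + omega0 * (n + 0.5) - (omega0 * (n + 0.5))**2 / (4.0 * B)
```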
For the 1D toy potentials discussed above, we compute the 1D dynamic structure factor in the incoherent approximation (\(q\gg 2\pi/a\)): \[S_{\rm toy}(q,\omega) =2\pi\sum_{d}n_{d}\left|f_{d}\right|^{2}\sum_{f}\big{|}\left<\Phi_{f}|e^{iqx}|\Phi_{0}\right>\big{|}^{2}\] \[\times\delta(E_{f}-E_{0}-\omega).\] (27) Again, we have summed over all atoms of type \(d\) in the lattice and defined the number density of atom \(d\) by \(n_{d}\). The wavefunctions \(|\Phi\rangle\) are the eigenfunctions of the Hamiltonian, \[H_{\rm toy}=\frac{p^{2}}{2m_{d}}+V_{d}(x).\] (28) The computation of the dynamic structure factor then boils down to computing the ground state \(|\Phi_{0}\rangle\) and the excited eigenstates \(|\Phi_{f}\rangle\) for this Hamiltonian, and calculating the structure factor under the incoherent approximation as in Eq. (27).

As discussed in Sec. II.1, for a 1D toy model the phonon levels are discrete, while in a real crystal there is a broad spectrum of energy levels. Similar to the harmonic case, we need a prescription to account for this smearing of energies. In the case with anharmonicity, the spectrum is shifted. The 1D toy model will instead give a modified energy-conserving delta function: \[\delta(f(n)\omega_{0}-\omega), \tag{29}\] where \(f(n)\omega_{0}\) is the energy difference between the \(n\)th excited state and the ground state. \(f(n)\) will depend on the exact form of the potential. Guided by the harmonic result, we again shall fix \(\omega_{0}=1/\overline{\omega^{-1}}\) and introduce a width to the delta function in a similar fashion: \[\delta(f(n)\omega_{0}-\omega)\rightarrow\frac{1}{\sqrt{2\pi f(n)\sigma^{2}}}e^{-\frac{(\omega-f(n)\omega_{0})^{2}}{2f(n)\sigma^{2}}}. \tag{30}\] This is within the 1D approximation; including the full 3D anharmonic potential would be expected to have an additional effect on the spectrum of states. However, in practice the anharmonicity is sufficiently small that the shift of the spectrum is subdominant to the other anharmonic effects in the structure factor.

This forms the basis of the toy model we consider in this paper. Focusing on the high \(q\) regime where the incoherent approximation applies, we consider independent lattice sites and calculate scattering in them with 1D toy anharmonic potentials. We now describe different approaches to understand the dynamic structure factor in this setting.

## III Analytic results for structure factor

In this section, we study the features of the structure factor for a 1D anharmonic potential with analytic methods. This will allow us to illustrate the general behavior for the limits \(q\ll\sqrt{2m_{d}\omega_{0}}\) and \(q\gg\sqrt{2m_{d}\omega_{0}}\). First, we review the derivation of the structure factor for a 1D harmonic potential. For \(n\)-phonon production in the harmonic limit, the structure factor in the regime \(q\ll\sqrt{2m_{d}\omega_{0}}\) is \(\propto q^{2n}/(2m_{d}\omega_{0})^{n}\). Treating the anharmonic 1D potential as a perturbation, we then show that the \(q\) dependence of the \(n\)-phonon term can be substantially modified in the regime \(q\ll\sqrt{2m_{d}\omega_{0}}\), leading to large anharmonic corrections. In particular, we obtain the power counting of the structure factor in powers of \(q\) and the anharmonicity parameter \(\lambda_{k}\), which allows us to roughly identify the regime of \(q\) where we expect the anharmonic effects to be dominant. As we will see later, this proves useful to explain the numerical results for realistic potentials.
Finally, we will also use the impulse approximation to perform an analytic estimate of the structure factor in the regime \(q>\sqrt{2m_{d}\omega_{0}}\). We show that the nuclear recoil limit is reproduced, with the structure factor approximated by a Gaussian envelope similar to the harmonic case. Anharmonic terms give rise to a slightly modified shape of the Gaussian, which has negligible impact on scattering rates.

### Harmonic oscillator

First, we briefly review the calculation of the dynamic structure factor in the harmonic approximation. In this case the potential \(V_{d}(x)\) is given by \[V_{d}(x)=\frac{1}{2}m_{d}\omega_{0}^{2}x^{2}. \tag{31}\] The energy \(E_{n}\) of the \(n\)-th excited state \(|n\rangle\) of this simple harmonic oscillator is given by, \[E_{n}=\Big{(}n+\frac{1}{2}\Big{)}\omega_{0}. \tag{32}\] The structure factor in Eq. (27) thus becomes, \[S_{\rm toy}(q,\omega)=2\pi\sum_{d}n_{d}\ |f_{d}|^{2}\sum_{n}\big{|}\left<n|e^{iqx}|0\right>\big{|}^{2}\delta(n\omega_{0}-\omega). \tag{33}\] The matrix element can be evaluated in the following way, \[\langle n|e^{iqx}|0\rangle =\frac{1}{\sqrt{n!}}\left<0|a^{n}e^{iqx}|0\right>\] \[=\frac{1}{\sqrt{n!}}\left<0|e^{iqx}\Big{(}a+\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n}|0\right>\] \[=\frac{1}{\sqrt{n!}}\Big{(}\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n}\left<0|e^{iqx}|0\right>\] \[=\frac{1}{\sqrt{n!}}\Big{(}\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n}e^{-\frac{q^{2}}{4m_{d}\omega_{0}}}, \tag{34}\] where we use \(e^{-iqx}ae^{iqx}=a+\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\) in the second equality. Plugging the above matrix element into (33), the structure factor becomes, \[S_{\rm toy}(q,\omega) =2\pi\sum_{d}n_{d}\ |f_{d}|^{2}e^{-2W_{d}^{\rm toy}(q)}\] \[\times\sum_{n}\frac{1}{n!}\left(\frac{q^{2}}{2m_{d}\omega_{0}}\right)^{n}\delta\left(n\omega_{0}-\omega\right), \tag{35}\] where \(W_{d}^{\rm toy}(q)=q^{2}/(4m_{d}\omega_{0})\) is the Debye-Waller factor in the toy model. The structure factor follows a Poisson distribution with mean number of phonons \(\mu=q^{2}/(2m_{d}\omega_{0})\), as also shown in the case of the 3-dimensional harmonic oscillator in [41].

### Perturbation theory for anharmonic oscillator: \(q\ll\sqrt{2m\omega_{0}}\)

We now turn to the more general case where small anharmonic terms are included in the 1D toy potential. An exact solution is no longer possible, but as we will see, in the kinematic regime \(q\ll\sqrt{2m_{d}\omega_{0}}\), we can use perturbation theory to obtain the behavior of the structure factor and illustrate the importance of the anharmonic corrections as a function of momentum and energy deposition. Our goal in this section is then to obtain the power counting of the anharmonic contributions to the structure factor.

The toy Hamiltonian we consider is given by, \[H_{\rm toy}=\frac{p^{2}}{2m_{d}}+\frac{1}{2}m_{d}\omega_{0}^{2}x^{2}+\lambda_{k}\omega_{0}(\sqrt{2m_{d}\omega_{0}}x)^{k}. \tag{36}\] We will concretely consider \(k\) equal to \(3\) and \(4\), corresponding to a leading cubic and quartic anharmonicity, respectively. Treating the dimensionless anharmonicity parameter \(\lambda_{k}\) as a perturbation, the eigenstates \(|\Phi_{n}\rangle\) are given by \[|\Phi_{n}\rangle=|n\rangle+\lambda_{k}\ |\psi_{n}^{(1)}\rangle+\lambda_{k}^{2}\ |\psi_{n}^{(2)}\rangle+..., \tag{37}\] and \(E_{n}^{\prime}\) are the perturbed energies, \[E_{n}^{\prime}=\Big{(}n+\frac{1}{2}\Big{)}\omega_{0}+\lambda_{k}\ c_{n}^{(1)}+\lambda_{k}^{2}\ c_{n}^{(2)}+...
\tag{38}\]

With time-independent perturbation theory, the dynamic structure factor can be explicitly computed at different orders in \(\lambda_{k}\) using (27). We defer the details of the explicit calculation to Appendix B. Instead, from the structure of the expansion we can already learn about the relevant corrections. In general, we can express the dynamic structure factor as an expansion in both \(\lambda_{k}\) and \(q^{2}/(2m_{d}\omega_{0})\). At zeroth order in \(\lambda_{k}\), we see from (35) that the \(n\)-phonon term appears with a \(q\)-scaling of \(q^{2n}/(2m_{d}\omega_{0})^{n}\). As we will show below, anharmonicity introduces departures from this \(q\)-scaling at higher orders of \(\lambda_{k}\). In the kinematic regime under consideration (\(q\ll\sqrt{2m_{d}\omega_{0}}\)), powers of \(q^{2}/(2m_{d}\omega_{0})\) smaller than \(n\) can lead to large anharmonic corrections to the \(n\)-phonon term in the structure factor.1 The aim of this section is thus to illustrate the behavior of the \(q\)-scaling at different orders of \(\lambda_{k}\).

Footnote 1: Perturbation theory in \(\lambda_{k}\) is still valid. For instance, the expansion in (37) still holds. But the harmonic contribution in the structure function could be suppressed by small \(q\) for multi-phonon states.

The general expression for the dynamic structure factor in the toy model can be written as,

\[S_{\rm toy}(q,\omega)=2\pi\sum_{d}n_{d}\ |f_{d}|^{2}e^{-2W_{d}^{\rm toy}(q)}\times \tag{39}\] \[\sum_{n}\delta(E_{n}^{\prime}-E_{0}^{\prime}-\omega)\Bigg{[}\frac{1}{n!}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n}\] \[+\sum_{i\geq 1}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{i}\left(a_{n,i}\ \lambda_{k}^{\nu(n,i)}\ +{\cal O}\Big{(}\lambda_{k}^{\nu(n,i)+1}\Big{)}\right)\Bigg{]}.\]

For each \(n\), the harmonic contribution appears at \({\cal O}((q^{2}/(2m_{d}\omega_{0}))^{n})\) as seen in (13); note that we do not include the Debye-Waller factor in this power counting discussion since it always appears as an overall factor. The anharmonic corrections are included here as an expansion in powers of \(q^{2}/(2m_{d}\omega_{0})\), which are denoted by \(i\). From the orthogonality of the states \(|\Phi_{n}\rangle\) with the ground state, we see that the dynamic structure factor should vanish for \(q\to 0\), which in turn implies that \(i\geq 1\). Each power \(i\) of \(q^{2}/(2m_{d}\omega_{0})\) appears with non-zero powers of \(\lambda_{k}\), denoted by \(\nu(n,i)\). Here the power \(\nu(n,i)\) is the smallest _allowed_ power of \(\lambda_{k}\) for a given phonon number \(n\) and the power \(i\) of \(q^{2}/(2m_{d}\omega_{0})\). However, numerical cancellations can sometimes force this leading behavior to vanish. Typically, the bigger the difference between \(i\) and \(n\), the larger the power of \(\lambda_{k}\) that is required. We will explicitly see the behavior of the powers \(\nu(n,i)\) for \(k\) equal to \(3\) and \(4\) below, but we first discuss the implications of this form.

For the single phonon structure factor (i.e. for \(n=1\)), the anharmonic terms are always suppressed compared to the harmonic term because of the additional powers of \(\lambda_{k}\) and \(q^{2}/(2m_{d}\omega_{0})\). But for phonon numbers \(n>1\), it is possible for anharmonic contributions to dominate for \(q\ll\sqrt{2m_{d}\omega_{0}}\).
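Before working through the analytic example, this dominance is easy to see numerically. The sketch below (a minimal illustration, with an assumed basis truncation and a representative \(\lambda_{3}\)) diagonalizes the cubic toy Hamiltonian (36) exactly in a truncated oscillator basis and compares the 2-phonon matrix element against the harmonic result:

```python
import numpy as np
from scipy.linalg import eigh, expm

# Diagonalize H/w0 = diag(n + 1/2) + lam3 * xi^3 (Eq. (36), k = 3) in a
# truncated harmonic-oscillator basis; xi = a + a^+ = sqrt(2 m_d w0) x.
N, lam3 = 60, 0.01                        # truncation N is an assumption
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
xi = a + a.T
H = np.diag(np.arange(N) + 0.5) + lam3 * np.linalg.matrix_power(xi, 3)
E, U = eigh(H)                            # columns: perturbed |Phi_n>

def msq(n, qt):
    """|<Phi_n|e^{iqx}|Phi_0>|^2 with qt = q / sqrt(2 m_d w0)."""
    return abs(U[:, n].conj() @ expm(1j * qt * xi) @ U[:, 0]) ** 2

for qt in [0.3, 0.1, 0.03, 0.01]:
    # Harmonic 2-phonon term: qt^4/2! * exp(-qt^2); the anharmonic
    # ~lam3^2 qt^2 piece should take over once qt is small enough.
    print(f"qt={qt}:  exact={msq(2, qt):.3e}  "
          f"harmonic={qt**4 / 2 * np.exp(-qt**2):.3e}")
```

As \(qt\) decreases, the exact matrix element squared flattens to the \(\lambda_{3}^{2}q^{2}\) behavior discussed below, while the harmonic piece keeps falling as \(q^{4}\).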
As a simple example, in the 3-phonon state, the harmonic contribution to the structure factor is proportional to \(q^{6}/(2m_{d}\omega_{0})^{3}\), while the anharmonic result contains \(\lambda_{3}^{2}q^{4}/(2m_{d}\omega_{0})^{2}\). So when \(q\ll\sqrt{2m_{d}\omega_{0}}\), the anharmonic effect can lead to a large correction to the dynamic structure factor. In a generic \(n\)-phonon state, the harmonic piece scales as \((q^{2}/(2m_{d}\omega_{0}))^{n}\). Comparing this with the anharmonic term \(\propto\lambda_{k}^{\nu(n,i)}q^{2i}/(2m_{d}\omega_{0})^{i}\), we note that the anharmonic term dominates the harmonic term for \(q\ll\sqrt{2m_{d}\omega_{0}}\lambda_{k}^{\nu(n,i)/(2(n-i))}\). For small enough \(q\), the behavior is governed by the anharmonic effects. Of course, at even smaller \(q\sim q_{\rm BZ}\) one would expect the incoherent approximation to break down. For the values of \(\lambda\) in realistic materials, we find that the dominance of the anharmonic terms can happen for \(q\) above \(q_{\rm BZ}\), particularly for larger \(n\). These corrections become larger with \(n\), since the harmonic piece is progressively more suppressed in \(q^{2}/(2m_{d}\omega_{0})\).

We now illustrate the origin of the \(\lambda_{k}\) powers \(\nu(n,i)\) with an example in the case of \(k=3\). In this case, the perturbation \(x^{3}\sim(a+a^{\dagger})^{3}\) implies the leading correction to the state can change the oscillator number by \(\pm 1\) or \(\pm 3\). Then the perturbed eigenstates have the schematic form: \[|\Phi_{n}\rangle\sim|n\rangle +\lambda_{3}\ (|n-3\rangle+|n-1\rangle+|n+1\rangle+|n+3\rangle)\] \[+{\cal O}(\lambda_{3}^{2}). \tag{40}\] We neglect the numerical prefactor in front of each state. Note that the terms are only present if the integer labelling the state is non-negative, for example for the ground state \(|\Phi_{0}\rangle\sim|0\rangle+\,\lambda_{3}\,\left(|1\rangle+|3\rangle\right)+\mathcal{O}(\lambda_{3}^{2})\). The matrix element appearing in the \(n\)-phonon structure factor can be expressed as, \[\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle\sim b_{0}+\lambda_{3}b_{1}+\lambda_{3}^{2}b_{2}+\mathcal{O}(\lambda_{3}^{3}), \tag{41}\] where the coefficients are schematically given by, \[b_{0} \sim\langle n|e^{iqx}|0\rangle \tag{42}\] \[b_{1} \sim\langle n-3|e^{iqx}|0\rangle\,+\] \[\langle n-1|e^{iqx}|0\rangle+\langle n+1|e^{iqx}|0\rangle\,+\] \[\langle n+3|e^{iqx}|0\rangle+\langle n|e^{iqx}|1\rangle+\langle n|e^{iqx}|3\rangle \tag{43}\] In order for a given term in the coefficient to be nonzero, a minimum number of powers of \(iqx\) is required in the series expansion of \(e^{iqx}\). This therefore links the powers of \(q\) with powers of \(\lambda_{3}\). Taking \(n=3\) as an example, \(b_{0}\propto(iq)^{3}\) at leading order in the \(q\) expansion. Meanwhile, \(b_{1}\propto(iq)^{2}+(iq)^{4}+...\). Note that the matrix elements \(\langle 0|e^{iqx}|0\rangle\) and \(\langle 3|e^{iqx}|3\rangle\) in \(b_{1}\) contain terms proportional to \((iq)^{0}\), but they cancel each other, consistent with a matrix element that always vanishes as \(q\to 0\). Also note that the coefficients \(b_{0},b_{1}\) always alternate in even or odd powers of \((iqx)\) and therefore alternate in being purely real or imaginary.
The resulting matrix element squared thus goes as \[|\langle\Phi_{3}|e^{iqx}|\Phi_{0}\rangle|^{2} \sim|b_{0}+\lambda_{3}^{2}b_{2}+\mathcal{O}(\lambda_{3}^{4})|^{2}+|\lambda_{3}b_{1}+\mathcal{O}(\lambda_{3}^{3})|^{2},\] \[\sim q^{6}+\lambda_{3}^{2}(q^{4}+\mathcal{O}(q^{6}))+\mathcal{O}(\lambda_{3}^{4}). \tag{44}\] For the cubic interaction, only even powers of \(\lambda_{3}\) appear in the matrix element squared due to the alternating even and odd powers of \((iqx)\) in the \(b\) coefficients. In this example, in order to achieve the minimum \(q\) scaling of \(q^{2}\), higher powers of \(\lambda_{3}\) are required, which will introduce more terms in the expansion. Here we see a correction to the matrix element squared at \(\mathcal{O}(q^{2}\lambda_{3}^{4})\).

The explicit derivation of \(\nu(n,i)\) is given in Appendix B. The minimum power of \(\lambda_{3}\) required to get the leading behavior \(\propto q^{2}/(2m_{d}\omega_{0})\) in the anharmonic terms is given by, \[\nu(n,1)=\begin{cases}\max\Bigl{(}4\times\left\lceil\tfrac{(n-1)}{6}\right\rceil\,\ 2\Bigr{)}&\text{for odd}\ n\\ 4\times\left\lceil\tfrac{(n+2)}{6}\right\rceil-2&\text{for even}\ n\end{cases} \tag{45}\] The minimum power of \(\lambda_{3}\) as a function of the phonon number \(n\) and the power \(i\) of \(q^{2}/(2m_{d}\omega_{0})\) for \(i>1\) is given by, \[\nu(n,i)=\max\Bigl{(}2\times\left\lceil\frac{|n-i|}{3}\right\rceil\,\ 2\Bigr{)},\quad i>1. \tag{46}\] We show the expansion of the structure factor in the powers of \(\lambda_{3}\) and \(q^{2}/(2m_{d}\omega_{0})\) schematically in Fig. 4, where we drop the numerical coefficients for all the terms and only illustrate the behavior of the powers of \(\lambda_{3}\) and \(q^{2}/(2m_{d}\omega_{0})\). In the right part of the schematic, we show the behavior of the \(n\)-phonon term for \(n>3\), and in the left part of the schematic, we show the expansion for \(n=1,\,2,\) and \(3\).

The relationship between the powers in \(\lambda_{3}\) and the powers of \(q^{2}/(2m_{d}\omega_{0})\) in (46) can also be understood in the following way. The powers of \(q^{2}/(2m_{d}\omega_{0})\) that appear at \(\mathcal{O}(\lambda_{3}^{\nu})\) can range from \(n-3\nu/2\) to \(n+3\nu/2\), with the minimum power allowed being \(1\), and \(\nu\) being an even positive integer. Contributions from powers larger than \(n\) are suppressed in the kinematic regime \(q\ll\sqrt{2m_{d}\omega_{0}}\). But powers smaller than \(n\) can lead to significant corrections in the same regime. For example, the anharmonic contribution to the 2-phonon structure factor has a leading behavior \(\propto\ \lambda_{3}^{2}q^{2}/(2m_{d}\omega_{0})\), which is expected to dominate the harmonic behavior \(\propto\ q^{4}/(2m_{d}\omega_{0})^{2}\) for small enough \(q\) (explicitly for \(q\lesssim\sqrt{2m_{d}\omega_{0}}\lambda_{3}\)). Assuming \(m_{d}\sim 28\) GeV, \(\omega_{0}\sim 40\) meV, and a typical value of \(\lambda_{3}\sim 0.01\), we expect the anharmonic contribution to start to dominate for \(q\lesssim 0.5\) keV. This kinematic regime does not strictly satisfy the conditions for the incoherent approximation which are assumed in this calculation. However, it is interesting to note here that the size of this anharmonic correction roughly matches onto the result for the 2-phonon structure factor in the long-wavelength limit (\(q\ll 1/a\)) [25; 37], where it was found that anharmonic interactions give up to an order of magnitude correction to the structure factor.
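The pattern in (45)-(46) is straightforward to tabulate. The snippet below is a direct transcription of the cubic power counting (with the caveat, noted above, that numerical cancellations can remove individual leading terms):

```python
from math import ceil

# Minimum power nu of lambda_3 accompanying (q^2/(2 m_d w0))^i in the
# n-phonon structure factor, Eqs. (45)-(46).
def nu_cubic(n: int, i: int) -> int:
    if i == 1:                                   # Eq. (45)
        if n % 2 == 1:
            return max(4 * ceil((n - 1) / 6), 2)
        return 4 * ceil((n + 2) / 6) - 2
    return max(2 * ceil(abs(n - i) / 3), 2)      # Eq. (46), i > 1

# Examples from the text: the 3-phonon term has a lambda_3^2 q^4 piece
# (i = 2) and a lambda_3^4 q^2 piece (i = 1); 2-phonon: lambda_3^2 q^2.
assert nu_cubic(3, 2) == 2 and nu_cubic(3, 1) == 4
assert nu_cubic(2, 1) == 2

for n in range(1, 7):                            # rows of the schematic
    print(n, [nu_cubic(n, i) for i in range(1, 7)])
```

The printed rows reproduce the exponents shown schematically in Fig. 4.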
At the edge of the Brillouin Zone \(q\sim 2\pi/a\sim\mathcal{O}(\text{keV})\), with the typical values used above, we find in the toy model an \(\mathcal{O}(25\%)\) correction at the boundary of the valid region for the incoherent approximation.

For \(k\) equal to \(4\), which corresponds to a quartic perturbation to the harmonic potential, the calculation proceeds similarly to the cubic case discussed above, except for some key differences. All the coefficients \(b_{i}\) are either real or imaginary based on whether \(n\) is even or odd respectively, and hence the anharmonic corrections appear in all orders of \(\lambda_{4}\). We thus have corrections at \(\mathcal{O}(\lambda_{4})\). For even \(n\), coefficients \(b_{i}\) only have even powers of \(q\), and thus cannot generate terms \(\propto q^{2}\) in the squared matrix element. The leading behavior for even \(n\) is thus \(\propto q^{4}\). For odd \(n\), however, the leading behavior is \(\propto q^{2}\), and the minimum power of \(\lambda_{4}\) is given by, \[\nu(n,1)=\max\Bigl{(}2\times\left\lceil\frac{(n-1)}{4}\right\rceil\,\ 1\Bigr{)}. \tag{47}\] For powers \(i\) greater than 1, the minimum power of \(\lambda_{4}\) for any phonon number \(n\) is given by, \[\nu(n,i>1)=\max\Bigl{(}\left\lceil\frac{|n-i|}{2}\right\rceil\,\ 1\Bigr{)}. \tag{48}\] We show the expansion of the structure factor in the powers of \(\lambda_{4}\) and \(q^{2}/(2m_{d}\omega_{0})\) schematically in Fig. 5, where we drop the numerical coefficients for all the terms and only illustrate the behavior of the powers of \(\lambda_{4}\) and \(q^{2}/(2m_{d}\omega_{0})\). Similar to Fig. 4, we are only illustrating the minimum allowed powers of \(\lambda_{k}\) in perturbation theory for \(n>3\). Due to numerical cancellations, the leading \(\lambda_{k}\) power can vanish in some cases.

Figure 4: Expansion of the structure factor in phonon number \(n\), powers of \(q^{2}/(2m_{d}\omega_{0})\), and powers of \(\lambda_{3}\) for a cubic perturbation (\(k=3\) in (36)). The right part shows the general behavior of the \(n\)-phonon term for \(n>3\), while the left part shows the expansion for \(n=1\), 2, and 3. Shaded terms show the dominant contributions when \(q\ll\sqrt{2m_{d}\omega_{0}}\), which comes from the anharmonic terms for \(n\geq 2\). Here we just illustrate the power counting; individual terms might not be present if there is a numerical cancellation in the coefficients.

Figure 5: Expansion of the structure factor in phonon number \(n\), \(q^{2}/(2m_{d}\omega_{0})\), and \(\lambda_{4}\) for a quartic perturbation (\(k=4\) in (36)). The right part shows the general behavior of the \(n\)-phonon term for \(n>3\), while the left part shows the expansion for \(n=1\), 2, and 3. Shaded terms show the dominant contributions when \(q\ll\sqrt{2m_{d}\omega_{0}}\), which comes from the anharmonic terms for \(n>2\). Similar to the above, individual terms might not be present if there is a numerical cancellation in the coefficients.

#### Limitations of perturbation theory

Our analysis has focused on the regime \(q\ll\sqrt{2m_{d}\omega_{0}}\) because this corresponds to a low mean phonon number. For large enough \(n\), perturbation theory will start to break down. Equivalently, for a given \(n\), perturbation theory will only be valid for \(\lambda_{k}\) sufficiently small.
For a particular phonon number \(n\), if the energy correction in (38) is of the same order as the unperturbed energy eigenvalue \((n+\frac{1}{2})\omega_{0}\), the perturbation can no longer be treated as small. Based on this, we set an upper bound on \(|\lambda_{k}|\) by requiring that \[|E_{n}^{\prime}-\Bigl{(}n+\frac{1}{2}\Bigr{)}\omega_{0}|\sim 0.1\times\Bigl{(}n+\frac{1}{2}\Bigr{)}\omega_{0}. \tag{49}\] At leading order, the correction for \(k\) equal to 3 (i.e. a cubic perturbation) is given by \[E_{n}^{\prime}-\Bigl{(}n+\frac{1}{2}\Bigr{)}\omega_{0} =\lambda_{3}^{2}\ \omega_{0}\ (9n^{3}+9(n+1)^{3}\] \[+(n+3)(n+2)(n+1)\] \[+n(n-1)(n-2))+\mathcal{O}(\lambda_{3}^{4}). \tag{50}\] The equivalent result for \(k=4\) reads, \[E_{n}^{\prime}-\Bigl{(}n+\frac{1}{2}\Bigr{)}\omega_{0} =\lambda_{4}\ \omega_{0}\ ((n+1)(n+2)+(n+1)^{2}\] \[+2(n+1)(n+2)+n^{2}\] \[+n(n-1))+\mathcal{O}(\lambda_{4}^{2}). \tag{51}\] Using the equations above, we get the critical values of \(\lambda_{3}^{2}\) and \(\lambda_{4}\) compatible with the perturbation theory expansion. These are shown in Fig. 6. With the analytic structures of the energy corrections shown above, we see that the perturbativity bound on \(\lambda_{3}^{2}\) (\(\lambda_{4}\)) has a scaling \(\propto 1/n^{2}\) (\(\propto 1/n\)), where \(n\) is the phonon number. For typical values of \(\lambda_{3}\sim 0.01\), we see that perturbation theory is valid only up to \(n\sim 6-7\).

Furthermore, perturbation theory is impractical for calculating corrections at small \(q\) and very high phonon number \(n\), since these corrections will be at very high order in the anharmonicity parameter. To deal with these limitations, we consider two different approaches in this paper. Since high \(n\) is associated with high \(\omega\) and \(q\), in the next section we will use the impulse approximation to account for anharmonic effects at high \(q\). In Appendix D, we also study a special anharmonic potential, the Morse potential, where it is possible to obtain exact results. We use this as a case study to validate the perturbation theory and impulse approximation results.

### Impulse Approximation for \(q\gg\sqrt{2m\omega_{0}}\)

As we have shown, perturbation theory quickly goes out of control beyond the first few phonons. Resumming the anharmonic interaction is usually needed for the structure factor when \(q\) or \(\omega\) is large. Consider the following phase space: \[\text{Impulse regime:}\quad q\gg\sqrt{2m\omega_{0}}, \tag{52}\] \[\omega\sim\frac{q^{2}}{2m}+\mathcal{O}(\sqrt{\omega\omega_{0}}),\]

Figure 6: Perturbativity bound on \(\lambda_{3}^{2}\) and \(\lambda_{4}\) as a function of phonon number \(n\). The bound is based on the criterion of (49) that the leading correction to the energy \(E_{n}\) is at most 10%. The dashed line shows the typical coupling sizes in Si and Ge crystals.

It has previously been shown [25; 50] in the harmonic case that one can calculate the structure factor by using a saddle point approximation in the time-integral representation of the structure factor. This is called the "impulse approximation" since the steepest-descent contour is dominated by small times, which can be interpreted physically as an impulse. We begin with the structure factor in Eq. (27), which can be decomposed as contributions from each atom \(d\), \(S_{\text{toy}}(q,\omega)=\sum_{d}n_{d}|f_{d}|^{2}S_{\text{toy},d}(q,\omega)\).
Then we rewrite the energy conservation delta function as a time integral,

\[S_{\text{toy},d}(q,\omega) =2\pi\sum_{f}\big{|}\langle\Phi_{f}|e^{iqx}|\Phi_{0}\rangle\big{|}^{2}\,\delta(E_{f}-E_{0}-\omega)\] \[=\sum_{f}\int dt\,e^{-i\omega t}\,\langle\Phi_{0}|e^{-iqx}|\Phi_{f}\rangle\,\langle\Phi_{f}|e^{iHt}e^{iqx}e^{-iHt}|\Phi_{0}\rangle\] \[=\int dt\,e^{-i\omega t}\,\langle\Phi_{0}|e^{-iqx}e^{iqx(t)}|\Phi_{0}\rangle\,, \tag{53}\]

where in the second equality we use the fact that \(|\Phi_{0}\rangle\) and \(|\Phi_{f}\rangle\) are eigenfunctions of \(H\), and in the third equality we use the completeness relation and the time-dependent position operator \(x(t)=e^{iHt}xe^{-iHt}\). The final expression is the well-known structure factor in the time domain. Using the above representation of the structure factor, \[S_{\text{toy},d}(q,\omega) =\int_{-\infty}^{\infty}dt\,\,\langle e^{-iqx}e^{iqx(t)}\rangle\,\,e^{-i\omega t}\] \[=\int_{-\infty}^{\infty}dt\,\,\langle e^{-iqx}e^{iHt}e^{iqx}\rangle\,\,e^{-i(E_{0}+\omega)t}, \tag{54}\] we can further simplify this using the fact that \(e^{iqx}\) acts as a translation operator on momentum \(p\), \(e^{-iqx}\,p\,e^{iqx}=p+q\). Applying the translation on the full Hamiltonian yields \[e^{-iqx}H(x,p)e^{iqx}=H(x,p+q). \tag{55}\] Here we generalize the impulse approximation to any 1D Hamiltonian, \(H(x,p)=\frac{p^{2}}{2m}+V(x)\), which satisfies \[H(x,p+q)=H(x,p)+\frac{q^{2}}{2m}+\frac{q}{m}p. \tag{56}\] One can also generalize the impulse approximation to a generic potential \(V(x,p)\) as long as the above holds in the limit of large \(q\).2 In other words, we require that the Hamiltonian in the large momentum limit is dominated by the kinetic energy \(\frac{p^{2}}{2m}\), not the potential. We can then obtain reliable theoretical predictions in the impulse regime even with a large number of phonons.

Footnote 2: In this case, the impulse regime in Eq. (52) needs to be replaced by \(\omega\sim\frac{q^{2}}{2m}+\frac{q}{m}\langle p\rangle\) and we impose that Eq. (56) holds up to \(\mathcal{O}\left(\omega_{0}^{2}/q\right)\) corrections.

Applying the above to Eq. (54), the structure function now reads \[S_{\text{toy},d}(q,\omega) =\int_{-\infty}^{\infty}dt\,\,\Big{\langle}e^{iH(x,p+q)t}\Big{\rangle}\,\,e^{-i(E_{0}+\omega)t}\] \[\approx\int_{-\infty}^{\infty}dt\,\,\Big{\langle}e^{i\left(H+\frac{pq}{m}\right)t}\Big{\rangle}\,e^{-i\left(E_{0}+\omega-\frac{q^{2}}{2m}\right)t}, \tag{57}\] where we translate the momentum in the first line and use Eq. (56) in the second line. Note that \(H=H(x,p)\) throughout and we drop the argument for brevity. The last line is exact for potentials that depend only on \(x\). Now we can apply the saddle point approximation to evaluate the time integral. Defining \(H^{\prime}\equiv H+\frac{pq}{m}\), we can write \[S_{\text{toy},d}(q,\omega)=\int_{-\infty}^{\infty}dt\,e^{f(t)}, \tag{58}\] where \[f(t)\equiv\ln\langle e^{iH^{\prime}t}\rangle-it\left(E_{0}+\omega-\frac{q^{2}}{2m}\right). \tag{59}\] In order to calculate this object, we can expand \(\ln\langle e^{iH^{\prime}t}\rangle\) in small \(t\).
The first few terms in this expansion are given by \[f(0) =0\] \[f^{\prime}(0) =i\Big{(}\frac{q^{2}}{2m}-\omega\Big{)}\] \[f^{\prime\prime}(0) =i^{2}\Big{(}\langle H^{\prime 2}\rangle-\langle H^{\prime}\rangle^{2}\Big{)}\] \[=-\frac{q^{2}}{m^{2}}\Big{(}\langle p^{2}\rangle-\langle p\rangle^{2}\Big{)}\] \[f^{(3)}(0) =i^{3}\Big{(}\langle H^{\prime 3}\rangle-3\langle H^{\prime}\rangle\langle H^{\prime 2}\rangle+2\langle H^{\prime}\rangle^{3}\Big{)}\] \[=-i\Big{(}\frac{q^{2}}{m^{2}}\langle p[H,p]\rangle+\frac{q^{3}}{m^{3}}\langle p^{3}\rangle\Big{)}\] \[f^{(4)}(0) =i^{4}\Big{(}-6\langle H^{\prime}\rangle^{4}+12\langle H^{\prime}\rangle^{2}\langle H^{\prime 2}\rangle\] \[\qquad-3\langle H^{\prime 2}\rangle^{2}-4\langle H^{\prime}\rangle\langle H^{\prime 3}\rangle+\langle H^{\prime 4}\rangle\Big{)}\] \[=-\frac{q^{2}}{m^{2}}\langle[p,H]^{2}\rangle+\frac{q^{3}}{m^{3}}\langle[[p,H],p^{2}]\rangle\] \[\qquad+\frac{q^{4}}{m^{4}}\Big{(}\langle p^{4}\rangle-3\langle p^{2}\rangle^{2}\Big{)} \tag{60}\] \[\ldots\]

In the harmonic approximation, only the terms proportional to \(q^{2}\) are nonzero. As a result, only the first few expansion terms are needed as long as \(t\ll\frac{1}{\omega_{0}}\), since \(f^{(n+1)}/f^{(n)}\) is of order \(\omega_{0}\). Then one can solve for the saddle point \(t_{I}\) by solving \(f^{\prime}(t_{I})\approx f^{\prime}(0)+f^{\prime\prime}(0)t_{I}=0,\) which gives \[it_{I}=\frac{m^{2}(\omega-\frac{q^{2}}{2m})}{q^{2}\sigma_{p}^{2}}, \tag{61}\] where \[\sigma_{p}^{2}\equiv\langle p^{2}\rangle-\langle p\rangle^{2}=\langle p^{2}\rangle. \tag{62}\] In the last equality we use the fact that \(\langle p\rangle=0\) for a \(V(x)\) potential since \(\langle p\rangle\propto\langle[x,H]\rangle=0\). Although \(t_{I}\) is formally imaginary, its magnitude is small and it lies close to the origin in the impulse regime. Since there is no pole around this saddle point, we can approximate the time integral by the saddle point and find \[S_{\text{toy},d}(q,\omega) \approx\sqrt{\frac{2\pi}{-f^{\prime\prime}(t_{I})}}\,e^{f(t_{I})}\] \[=\frac{\sqrt{2\pi}m}{q\sigma_{p}}\exp\left(-\frac{m^{2}(\omega-\frac{q^{2}}{2m})^{2}}{2q^{2}\sigma_{p}^{2}}\right). \tag{63}\] For large energy depositions the Gaussian becomes narrowly peaked around \(\omega=q^{2}/2m\), and this reproduces the nuclear recoil limit [25].

In the presence of anharmonic interactions, other powers of \(q\) will be present in the expansion of (60). In general, the \(f^{(n)}\) term will have a \(q^{n}\) term with coefficient of \(\mathcal{O}(\lambda).\) In this case, \(f^{(n+1)}/f^{(n)}\sim q\sqrt{\omega_{0}/m}\). Higher orders will then be important in the expansion of \(f(t)\) for sufficiently large \(q\) or \(t\). For a given \(q\), the higher order corrections become relevant for \(|t|\gtrsim\sqrt{m/\omega_{0}}/q\sim 1/\sqrt{\omega\omega_{0}}\) in the impulse regime. Including these corrections is difficult in general, but we can continue to use the second order expansion giving (63) as long as \(|t|\lesssim 1/\sqrt{\omega\omega_{0}}.\) According to (61), this corresponds to a condition on how close \(\omega\) is to \(q^{2}/(2m)\). Since \(q^{2}\sim 2m\omega\) and \(\sigma_{p}^{2}\sim m\omega_{0}\), this implies that \[|t_{I}|\sim\frac{|\omega-\frac{q^{2}}{2m}|}{\omega\omega_{0}}\quad\to\quad|\omega-\frac{q^{2}}{2m}|\lesssim\sqrt{\omega\omega_{0}}. \tag{64}\] We see that the distance of \(\omega\) from \(\frac{q^{2}}{2m}\) sets the size of \(t_{I}\), which in turn tells us the regime of validity of the approximation (63).
The condition (64) is approximately the same condition that \(\omega\) is within the Gaussian width in (63), and keeping terms in \(f(t)\) only up to \(f^{\prime\prime}(0)\) is self-consistent near \(\omega=\frac{q^{2}}{2m}\). Therefore, in the presence of anharmonic interactions, the above structure factor result (63) remains valid in the impulse regime (52). The only modification is in \(\sigma_{p}^{2}\). Considering perturbations in \(V(x)\) up to \(x^{4}\) and recalling that the expectation value is with respect to the full ground state, we find that \[\sigma_{p}^{2}=\langle p^{2}\rangle=\frac{m\omega_{0}}{2}\Big{(}1-44\lambda_{3}^{2}+12\lambda_{4}+\cdots\Big{)} \tag{65}\] at leading order in \(\lambda_{3},\lambda_{4}.\) The nuclear recoil limit is again reproduced, with a small modification to the width of the Gaussian envelope due to anharmonic couplings. Note that in order to calculate the structure factor far from \(\omega=\frac{q^{2}}{2m}\), we must include additional orders in \(f(t)\) and \(t_{I}\). We do not perform these higher order calculations for the final results in this paper since they have a negligible effect on the integrated rates, but we provide the procedure for completeness in App. C.

Finally, we approximate the effect that introducing the full crystal lattice has on this single atom result. Up until the evaluation of various moments of \(H^{\prime}\), the impulse approximation is fully model-independent. We just have to make an adjustment to the final evaluation of \(\langle p^{2}\rangle\). The states in the full crystal theory are smeared by the phonon density of states, so we calculate \(\langle p^{2}\rangle\) via the following prescription \[\langle p^{2}\rangle =\frac{m\omega_{0}}{2}\Big{(}1+g(\lambda)\Big{)}\] \[\xrightarrow{\text{crystal}}\int d\omega^{\prime}D(\omega^{\prime})\frac{m\omega^{\prime}}{2}\Big{(}1+g(\lambda)\Big{)}, \tag{66}\] where \(g(\lambda)\) is the anharmonic correction calculated in the single-atom potential. Essentially, we have used the average single phonon energy to calculate \(\langle p^{2}\rangle.\) In the harmonic limit, (63) then exactly matches the impulse result from [25].

In summary, in this section we have demonstrated the general behavior of anharmonic effects with \(q\) and \(\omega\). We have shown that they are indeed negligible at high \(q\) and \(\omega\sim q^{2}/2m_{d}\), consistent with the intuition that scattering can be described by elastic recoils of a free nucleus. The effects grow for \(q\ll\sqrt{2m_{d}\omega_{0}}\) and at low \(q\) they may dominate the structure factor. This roughly matches onto the results of Refs. [25; 37], which found that for \(q<2\pi/a\) anharmonic effects can have a large impact on the two-phonon rate.

## IV Numerical results for 1D anharmonic oscillator

Having demonstrated the analytic behavior of the dynamic structure factor in the previous section, we now turn to obtaining numerical results using realistic potentials. We will perform concrete calculations for Si and Ge as representative materials while briefly commenting on others. As discussed in Sec. II.2, we adopt an empirical model of interatomic interactions that encodes the anharmonicity in the potential. We use this empirical model to calculate a single atom potential, which we then use to evaluate the structure factor numerically. As stated in Sec.
II.3, we start by fitting the single atom potential in a particular direction to a 1D potential of the form, \[V_{d}(x) =\frac{1}{2}m_{d}\omega_{0}^{2}x^{2}+\lambda_{3}\omega_{0}(\sqrt{2m_{d}\omega_{0}}x)^{3}\] \[\quad+\lambda_{4}\omega_{0}(\sqrt{2m_{d}\omega_{0}}x)^{4}. \tag{67}\] In the fit, \(\omega_{0},\lambda_{3},\lambda_{4}\) are free parameters, but in order to reproduce the harmonic limit we then make the replacement \(\omega_{0}=1/\overline{\omega^{-1}}\), which is calculated from the phonon density of states and gives a slightly different numerical value. This is motivated by the harmonic case discussed in Sec. II.1. We do not consider anharmonic terms \(\propto x^{k}\) for \(k\geq 5\), as we observe that the anharmonic potential along any direction is dominated by the cubic and the quartic terms.

We find that the maximum anharmonicity is typically along the nearest neighbor direction \((x,y,z)=(1,1,1)\). For computing results, we will consider the potential along this direction, which represents maximum anharmonicity, as well as the potential in an orthogonal direction \((x,y,z)=(1,-2,1)\), which represents an intermediate value for the anharmonicity. Using the aforementioned interatomic models, we find anharmonicity strengths ranging from \(\lambda_{3}\sim 6\times 10^{-3}\) to \(10^{-2}\) and \(\lambda_{4}\sim(2-3)\times 10^{-4}\). For Si and Ge, the results are the same for either atom in the unit cell.

Given the 1D potential in (67), we find exact solutions of the 1D eigenvalue and eigenvector problem using a simple finite difference method (sketched below). We take a first order discretization of the Laplace operator and solve the discretized time-independent Schrödinger equation in a box. The box grid interval size must be small enough to resolve the maximum momentum scales of interest, which in this case depends on the highest excited state needed in the calculation. Also, the minimum box size required depends on the spatial extent of the highest excited state used. As seen in Sec. III.3, the impulse approximation suffices for \(q>\mathcal{O}(\text{few})\times\sqrt{2m_{d}\omega_{0}}\). Beyond this momentum, we no longer need to calculate excited states, since the structure factor in the impulse limit is independent of the details of the highly excited states. The \(n\)th excited state is most relevant at momenta \(q\sim\sqrt{n}\sqrt{2m\omega_{0}}\). Therefore, to complete our calculation below the impulse limit, we include the first 10 excited states. The results for these eigenstates are converged above a box size of \(\sim 10/\sqrt{2m\omega_{0}}\) and grid size of \(\sim 0.1/\sqrt{2m\omega_{0}}\).

We now use these numerical eigenstates and energies to calculate the structure factor in Eq. (27). We apply a prescription for the energy-conserving delta function similar to that used in the harmonic 1D oscillator, Eq. (15).
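A minimal sketch of this eigensolver step is given below, in dimensionless units (lengths in \(1/\sqrt{2m_{d}\omega_{0}}\), energies in \(\omega_{0}\)), where the Hamiltonian (28) with the potential (67) becomes \(H/\omega_{0}=-d^{2}/du^{2}+u^{2}/4+\lambda_{3}u^{3}+\lambda_{4}u^{4}\). The coupling values are representative of our Si fits, the box half-width is an assumption of the sketch, and this is an illustration rather than the production code:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Dimensionless units: u = sqrt(2 m_d w0) x, energies in units of w0,
# so H/w0 = -d^2/du^2 + u^2/4 + lam3 u^3 + lam4 u^4.
lam3, lam4 = 0.01, 2e-4          # representative Si couplings (Sec. IV)
L, h = 10.0, 0.1                 # box half-width and grid spacing
u = np.arange(-L, L + h, h)
V = 0.25 * u**2 + lam3 * u**3 + lam4 * u**4

# Discretizing -d^2/du^2 on the grid gives a tridiagonal matrix.
E, psi = eigh_tridiagonal(2.0 / h**2 + V,
                          -np.ones(u.size - 1) / h**2,
                          select='i', select_range=(0, 10))
psi /= np.sqrt(h)                # grid normalization: sum |psi|^2 h = 1

# Energy ratios f(n) = (E_n - E_0)/w0 entering the smeared delta (30).
print("f(n):", np.round(E[1:] - E[0], 3))

def matrix_element_sq(n_f, q_tilde):
    """|<Phi_f|e^{iqx}|Phi_0>|^2 at q_tilde = q / sqrt(2 m_d w0)."""
    integrand = psi[:, n_f] * np.exp(1j * q_tilde * u) * psi[:, 0]
    return abs(np.sum(integrand) * h) ** 2
```

The eigenvalues come out close to \(f(n)\approx n\), with small shifts from the anharmonic couplings as expected from (38).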
The final result at momenta below the impulse regime (\(q<2\sqrt{2m\omega_{0}}\)) is, \[S(q,\omega)=2\pi\sum_{d}n_{d}\ |f_{d}|^{2}\sum_{f}\big{|}\left\langle\Phi_{f}|e^{iqx}|\Phi_{0}\right\rangle\big{|}^{2}\] \[\times\frac{1}{\sqrt{2\pi f(n)\sigma^{2}}}e^{-\frac{(\omega-f(n)\omega_{0})^{2}}{2f(n)\sigma^{2}}}\times\Theta(\omega_{\text{max}}-\omega), \tag{68}\] where \[\omega_{0} =\Big{(}\int d\omega\,\omega^{-1}D(\omega)\Big{)}^{-1}, \tag{69}\] \[\sigma =\sqrt{\omega_{0}\int d\omega\,\omega D(\omega)-\omega_{0}^{2}}, \tag{70}\] \[\omega_{\text{max}} =f(n)\times\min\left(\omega\,|\,D(\omega)=0\right), \tag{71}\] and \(f(n),|\Phi_{0}\rangle,|\Phi_{f}\rangle\) are given by the numerically solved eigenenergies and eigenstates, respectively. \(D(\omega)\) is the single phonon density of states calculated with DFT [51]. In this work we assume equal couplings of DM with all nucleons so that \(f_{d}=A_{d}\), where \(A_{d}\) is the atomic mass number. In the equations above, we have included a sum over all atoms in the unit cell \(d\) with density \(n_{d}\), and in general the atomic potentials and density of states can also depend on \(d\), although for Si and Ge we do not include this dependence.

In the impulse regime (\(q>2\sqrt{2m\omega_{0}}\)), we have shown in Sec. III.3 that the structure factor for any position-dependent potential is approximated by a Gaussian envelope, \[S(q,\omega)\approx\sum_{d}n_{d}|f_{d}|^{2}\sqrt{\frac{2\pi}{\frac{q^{2}}{m^{2}}\langle p^{2}\rangle}}e^{-\frac{\left(\omega-\frac{q^{2}}{2m}\right)^{2}}{2\frac{q^{2}}{m^{2}}\langle p^{2}\rangle}}\, \tag{72}\] where the expectation values are all computed in the ground state and adjusted to the average single phonon energy via (66). Now we simply use the numerical ground state of the anharmonic potential (67) to calculate \(\langle p^{2}\rangle\) and therefore obtain the structure factor. Note that the anharmonic contribution is essentially negligible in the impulse limit, since corrections to \(\langle p^{2}\rangle\) are \(\propto\lambda_{3}^{2},\lambda_{4}\).

Figure 7: Si, multiphonon structure factor and impulse approximation.

Figs. 7 and 8 show numerical results for the structure factor of Si and Ge, taking the maximum anharmonicity in either case. In Fig. 7, the structure factor as a function of \(q\) is shown. As \(\omega\) (and therefore the minimum phonon number \(n\)) is increased, there is a larger anharmonic correction at small \(q\). This can be understood by looking at the \(q\) scalings discussed in Sec. III.2 and illustrated in Fig. 4 and Fig. 5. At low \(q\) and thus low DM mass, the contributions from the anharmonic structure factor can carry smaller powers of \(\frac{q^{2}}{2m_{d}\omega_{0}}\) compared to the leading harmonic term \(\left(\frac{q^{2}}{2m_{d}\omega_{0}}\right)^{n}\), so the enhancement grows with \(n\). At high \(q\), the results converge to the harmonic result, consistent with our discussion of the impulse regime in Sec. III.3. We see this also in Fig. 8, which shows the structure factor at different \(q\). The impulse approximation becomes better as \(q\gg\sqrt{2m_{d}\omega_{0}}\), and is indistinguishable from the harmonic case.

### Impact on DM scattering rates

We now use the numerical results for the structure factor to compute the DM scattering rates for a range of DM masses and experimental thresholds. Our results are summarized in Figs. 9-10. We consider DM masses in the range \(\sim 1-10\) MeV.
The lower end of the mass range is chosen such that the momentum transfers are large enough to satisfy the condition for the incoherent approximation (i.e. \(q>2\pi/a\)), while at the upper end of masses it is expected that scattering is described by the impulse approximation [25]. It is precisely this mass range where details of multiphonon production are important. We will also consider the two cases of scattering through heavy and light mediators. The goal will be to identify the region of parameter space where the anharmonic effects on the dynamic structure factor affect the scattering rates the most. In the isotropic limit, the observed DM event rate per unit mass is given by [25] \[R=\frac{1}{4\pi\rho_{T}}\frac{\rho_{\chi}}{m_{\chi}}\frac{\sigma_{p}}{\mu_{ \chi}^{2}}\int\!d^{3}\mathbf{v}\,\frac{f(\mathbf{v})}{v}\int\limits_{q_{-}}^{q _{+}}\!\!dq\,\int\limits_{\omega_{\mathrm{th}}}^{\omega_{+}}\!\!d\omega\,q\,| \tilde{F}(q)|^{2}S(q,\omega), \tag{73}\] where \(\rho_{\chi}\) is the DM energy density, \(\rho_{T}\) is the mass density of the target material, \(m_{\chi}\) is the DM mass, \(\mu_{\chi}\) is the DM-nucleon reduced mass, \(\sigma_{p}\) is the DM-nucleon cross section, and \(f(\mathbf{v})\) is the DM velocity distribution. The structure factor \(S(q,\omega)\) is given by our numerical results (68)-(72) and the integration bounds are determined by the kinematically allowed phase space \[q_{\pm} \equiv m_{\chi}v\left(1\pm\sqrt{1-\frac{2\omega_{\mathrm{th}}}{m_ {\chi}v^{2}}}\right), \tag{74}\] \[\omega_{+} \equiv qv-\frac{q^{2}}{2m_{\chi}}, \tag{75}\] where the energy threshold of the experiment is denoted by \(\omega_{\mathrm{th}}\). The \(q\)-dependence of the DM-nucleus interaction can be encapsulated in the DM form factor \(\tilde{F}(q)\), where \(\tilde{F}(q)=1\) indicates an interaction through a heavy mediator, and \(\tilde{F}(q)=q_{0}^{2}/q^{2}\) indicates an interaction through a light mediator for a reference momentum transfer of \(q_{0}\). Note that in general, the strength of the anharmonicity varies with the direction of the recoil of the nucleus, and the structure factor will depend on the direction of the momentum transfer. For simplicity, we are assuming that the anharmonicity strength is uniform in all directions. Our estimate with the maximum anharmonicity thus provides an upper bound on the anharmonic effects on DM scattering. The DM mass sets the typical momentum-transfer scale \(q\) of the scattering, and the experimental energy threshold \(\omega_{\mathrm{th}}\) sets the phonon number \(n\). Hence, to identify the DM masses and experimental thresholds where anharmonic effects start to become important, we first need to understand the \(q\)-values where the anharmonic corrections are large for a particular phonon number \(n\). We can estimate this using the perturbation theory results in Sec. III.2. Note that in our numerical calculation, we find that \(\lambda_{3}\) generally provides the larger anharmonic contribution, so we will focus on a purely cubic perturbation in this discussion. For the analysis of a cubic perturbation discussed in Sec. III.2, we showed that anharmonic effects introduced additional terms to the \(n\)-phonon structure factor of the form \(\propto\lambda_{3}^{\nu(n,i)}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{i}\), see (39). 
Therefore, when \(q\) is lower than the scale \[q\lesssim\sqrt{2m_{d}\omega_{0}}\,\lambda_{3}^{\nu(n,i)/(2(n-i))}, \tag{76}\] terms in the anharmonic structure factor can be of comparable size to the harmonic structure factor. In order to find the largest \(q\)-scale where the anharmonic contribution starts to become relevant, we can evaluate (76) for all positive \(i<n\), and find the minimum possible exponent of \(\lambda_{3}\). For \(n=2\) or \(3\), the minimum exponent is achieved for \(i=1\), for which \(\nu(n,1)=2\). This gives a \(q\)-scaling of \(q\sim\sqrt{2m_{d}\omega_{0}}\lambda_{3}^{1/(n-i)}.\) This tells us that for the 2-phonon case, the anharmonic contribution should begin to become important at \(q\sim\sqrt{2m_{d}\omega_{0}}\lambda_{3}\), while for the 3-phonon case, the anharmonic contribution becomes important at \(q\sim\sqrt{2m_{d}\omega_{0}}\lambda_{3}^{1/2}\). For a larger number of phonons, this scaling is approximately \(q\sim\sqrt{2m_{d}\omega_{0}}\lambda_{3}^{1/3}\). So we see that higher energy excitations have more significant anharmonic contributions at larger momentum transfers. Below the \(q\)-scale identified above, the anharmonic contributions are expected to increase substantially with decreasing \(q\), as terms \(\propto q^{2i}\) for \(i<n\) dominate the harmonic scaling \(\propto q^{2n}\).

We now recast our analysis concretely in terms of DM mass and experimental energy thresholds as follows. For both massive and massless mediators, the event rate for \(n\geq 2\) phonons is always dominated by the large \(q\) portion of phase space and energy depositions near the threshold. Therefore the enhancement in the rate due to the anharmonicity roughly corresponds to the enhancement in the structure factor evaluated at \(S(q=2m_{\chi}v,\omega=\omega_{\rm th})\), where \(v\) is the DM velocity. Inserting \(q=2m_{\chi}v\) into the condition in (76) gives a condition on the DM mass: \[m_{\chi}\lesssim\begin{cases}\frac{\sqrt{2m_{d}\omega_{0}}\,\lambda_{3}}{2\times 10^{-3}}&n=2\\ \frac{\sqrt{2m_{d}\omega_{0}}\,\lambda_{3}^{1/2}}{2\times 10^{-3}}&n=3\\ \frac{\sqrt{2m_{d}\omega_{0}}\,\lambda_{3}^{1/3}}{2\times 10^{-3}}&n>3,\end{cases} \tag{77}\] where \(v\sim 10^{-3}\) is the typical DM velocity. In order to determine the appropriate phonon number \(n\) for a given \(\omega_{\rm th}\), we must take into account the subtlety that each excitation energy is smeared across a width, as discussed in Sec. II.3 and also given in (70). To solve for the smallest \(n\) that contributes appreciably above \(\omega_{\rm th}\), we solve the following equation: \[\omega_{\rm th}=n\omega_{0}+\sqrt{n}\sigma, \tag{78}\] where \(\sigma\) is the single-phonon width as defined in (70) and we have for simplicity taken \(f(n)=n\). Applying (77)-(78) to Si with \(\omega_{0}=31\) meV, \(\sigma=18\) meV, and \(m_{d}=26\) GeV, we find the following results: \[m_{\chi}\lesssim\begin{cases}0.2\ \text{MeV}\ \frac{\lambda_{3}}{10^{-2}}&\omega_{\rm th}=80\ \text{meV}\\ 2.0\ \text{MeV}\Big{(}\frac{\lambda_{3}}{10^{-2}}\Big{)}^{1/2}&\omega_{\rm th}=120\ \text{meV}\\ 4.5\ \text{MeV}\Big{(}\frac{\lambda_{3}}{10^{-2}}\Big{)}^{1/3}&\omega_{\rm th}\geq 160\ \text{meV}\end{cases} \tag{79}\] Below these masses, anharmonic corrections become large. The last line applies for thresholds above 160 meV, which correspond to \(n\geq 4\); these \(n\)-phonon terms all give the same condition on the DM mass.
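As a cross-check of these numbers, the short script below reruns the estimate: it solves (78) for the smallest contributing phonon number and then applies the mass condition (77), using the Si parameters quoted above and a typical DM velocity \(v\sim 10^{-3}\). This implements the heuristic only, not the full rate integral (73):

```python
import numpy as np

# Si parameters from the text (natural units, energies in eV).
w0, sigma = 31e-3, 18e-3            # single phonon energy and width
m_d = 26e9                          # Si atomic mass
lam3, v = 0.01, 1e-3
q0 = np.sqrt(2 * m_d * w0)          # ~ 40 keV, cf. Table 1

def n_min(w_th):
    """Smallest n with n*w0 + sqrt(n)*sigma >= w_th, cf. Eq. (78)."""
    n = 1
    while n * w0 + np.sqrt(n) * sigma < w_th:
        n += 1
    return n

for w_th in [80e-3, 120e-3, 160e-3]:
    n = n_min(w_th)
    exponent = {2: 1.0, 3: 0.5}.get(n, 1 / 3)   # Eq. (77)
    m_chi = q0 * lam3**exponent / (2 * v)       # from q = 2 m_chi v
    print(f"w_th = {1e3 * w_th:.0f} meV -> n = {n}, "
          f"m_chi < {m_chi / 1e6:.1f} MeV")
```

Running this reproduces the mass scales in (79) to within the precision of the rounded inputs.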
Note that this is only a heuristic, which does not include for example the combinatorial pre-factors or cancellations in the perturbation theory calculation. Nonetheless, we see the same qualitative features in the complete numerical result, which is given in Fig. 9. In order to generalize (79) to other materials, we give the necessary energy scales in Tab. 1. Despite large differences in \(\omega_{0}\), the momentum scale \(\sqrt{2m_{d}\omega_{0}}\) ends up being about the same in all crystals. Then the typical DM mass scale for anharmonic effects to become important is also about the same for a fixed phonon number \(n\). However, the differences in \(\omega_{0}\) mean that the threshold corresponding to a given \(n\) can vary significantly. For a given threshold, GaAs and Ge have the largest phonon number. Since anharmonic corrections become more important with larger \(n\), GaAs and Ge will therefore have larger anharmonic contributions compared to Diamond at the same threshold.

\begin{table} \begin{tabular}{|c||c|c|c|} \hline \multicolumn{4}{|c|}{Materials} \\ \hline & \(\omega_{0}\) [meV] & \(\sigma\) [meV] & \(\sqrt{2m_{d}\omega_{0}}\) [keV] \\ \hline GaAs & 16.9 & 9.5 & 48.8 \\ Ge & 18.2 & 10.6 & 49.6 \\ Si & 30.8 & 17.6 & 40.3 \\ Diamond & 109.6 & 35.8 & 49.7 \\ Al\({}_{2}\)O\({}_{3}\) & 51.6 & 20.4 & 51.1 \\ \hline \end{tabular} \end{table}

Table 1: **Single phonon properties for various crystals.** Using these energy scales, for a given experimental threshold we can estimate the DM masses where anharmonic effects become large, (76)-(79). For crystals with non-identical atoms in a unit cell, we show the quantities averaged across atoms. The relative importance of anharmonic effects in the different materials will mainly be governed by the different phonon energies \(\omega_{0}\).

In Fig. 9, we present the ratio of scattering rates in the anharmonic case to the harmonic case in Si and Ge, taking two representative cases for the couplings. We also present the cross-sections corresponding to an observed rate of 3 events per kg-yr in Fig. 10. The bands depict the possible uncertainty that anharmonicity introduces to an experimental reach, with the solid line giving the harmonic result and the dot-dashed the result for maximal anharmonicity. We do not show the effects above cross sections of \(\sigma_{n}\gtrsim 10^{-28}\) cm\({}^{2}\), as for these large interaction strengths the DM is expected to lose significant energy in 1 km of Earth's crust through scattering, rendering DM with such cross sections unobservable in underground direct detection experiments [52].

For \(m_{\chi}>10\) MeV, the typical \(q\) becomes similar to or larger than \(\sqrt{2m_{d}\omega_{0}}\), where there is negligible difference between the anharmonic and harmonic structure factors. The rates will also start to be dominated by the impulse regime \(q\gg\sqrt{2m_{d}\omega_{0}}\). In this case, the structure factor calculated with an anharmonic potential is nearly identical to that calculated in the harmonic case, as discussed in Sec. III.3. We have also seen this behavior with numerical computations in Fig. 8. The anharmonic and harmonic scattering rates are thus essentially identical for DM masses \(m_{\chi}>10\) MeV.

For DM masses \(m_{\chi}<10\) MeV (i.e. \(q<\sqrt{2m_{d}\omega_{0}}\)), the ratio of the anharmonic to harmonic rate begins to grow with decreasing DM mass.
As the typical \(q\) decreases with decreasing DM mass, the leading anharmonic term \(\propto\frac{q^{2}}{2m_{d}\omega_{0}}\) grows faster compared to the harmonic term \(\propto\left(\frac{q^{2}}{2m_{d}\omega_{0}}\right)^{n}\) for \(n\geq 2\). The effect is more pronounced for higher thresholds, or equivalently higher \(n\), since the harmonic term is even more suppressed. Therefore at larger thresholds, the anharmonic effects start becoming important already at larger masses and also grow much more quickly as the DM mass is decreased. For a given DM mass, this also implies that the spectrum of events will have larger anharmonic corrections on the high energy tail of events. However, the rates are also highly suppressed in this tail, and only observable for high scattering cross sections.

At DM masses \(m_{\chi}<1\) MeV, the slope of the ratio of the anharmonic rate to the harmonic rate starts to decrease slightly, which is an artifact of the Brillouin zone momentum cutoff that we apply across all rate calculations. The incoherent and subsequent approximations are not guaranteed to be justified in this regime, so this effect should not be treated as physical. For sub-MeV DM masses, the phonons again should be treated as collective excitations, similar to the calculation of Ref. [37].

Figure 9: **Ratio of anharmonic to harmonic rate.** For each material (Ge and Si) we consider two representative values of the anharmonic couplings. The larger set corresponds to a direction of maximal anharmonicity while the other set corresponds to an orthogonal direction of intermediate anharmonicity. Anharmonic effects become more important for DM masses near the MeV scale and for larger energy thresholds.

Lastly, we note an interesting feature: the anharmonic scattering rate is strictly greater than the harmonic rate in the entire parameter space that we probe. This is a consequence of the sign of the leading \(q\)-scaling term \(\frac{q^{2}}{2m_{d}\omega_{0}}\). For the production of an excited state \(|\Phi_{f}\rangle\) in the crystal, the term in the dynamic structure factor \(\propto q^{2}\) can only come from the term \(|\langle\Phi_{f}|iqx|\Phi_{0}\rangle|^{2}\), as the mixing term \(\propto\langle\Phi_{f}|I|\Phi_{0}\rangle\langle\Phi_{f}|\frac{(iqx)^{2}}{2}|\Phi_{0}\rangle^{*}\) and its conjugate are zero from orthogonality. Thus, the sign of the term \(\propto q^{2}\) in the anharmonic structure factor is strictly positive for producing an excited state, whereas there is no corresponding term \(\propto q^{2}\) in the harmonic case for \(n\geq 2\) phonons. Since we are probing the \(q\ll\sqrt{2m_{d}\omega_{0}}\) regime, this leading term quickly dominates the structure factor. Thus, the anharmonic scattering rate exceeds the harmonic rate in this regime. A consequence of this is that we expect the harmonic crystal result to give a lower bound on the scattering rate.

## V Conclusions

Scattering of DM with nuclei in crystals necessarily goes through production of one or many phonons for DM masses smaller than \(\sim 100\) MeV. Previous work has focused on calculating the multiphonon scattering rates in a harmonic crystal under the incoherent approximation (i.e. \(q>q_{\rm BZ}\) or DM mass \(\gtrsim\) MeV). In this work, we have studied the effects of anharmonicities in the crystal on the scattering rates, while still working within the incoherent approximation.
In order to obtain a tractable calculation of anharmonic effects, we have simplified the problem into a toy model of a single atom in a 1D anharmonic potential. In this toy model, scattering into multiphonons can still be well-approximated by applying a smearing on the spectrum of quantized states to account for the phonon spectrum of a lattice. We extract anharmonic couplings by modeling the interatomic potentials of Si and Ge, which give rise to realistic single atom potentials. This approach allows us to obtain an analytic understanding and a first estimate of the impact of anharmonicity, although the numerical results should not be taken as a definitive rate calculation.

Figure 10: **Cross section uncertainty.** Comparison of the cross section corresponding to 3 events/kg-yr in the harmonic (solid) and anharmonic (dot-dashed) cases. The anharmonic result is shown for maximal anharmonicity, and so the shaded band represents our estimate of the theoretical uncertainty due to anharmonic effects. The effects are primarily important for high thresholds and low DM masses, corresponding to large \(\sigma_{n}\), which is generally in tension with existing astrophysical or terrestrial constraints.

We find that the harmonic crystal results of Ref. [25] can be safely assumed for DM masses down to \(\sim 10\) MeV. Below \(\sim 10\) MeV, this assumption cannot be taken for granted. In this regime, we find that anharmonic effects on the scattering rates increase with decreasing DM mass and increasing experimental thresholds. Anharmonic corrections of up to two orders of magnitude are possible for DM masses of \(\sim\) a few MeV and for experimental thresholds \(\sim\) a few times the typical single phonon energy of the crystal. These findings are consistent with Refs. [37; 25], which studied two-phonon production from sub-MeV DM and found up to an order of magnitude larger rate from anharmonic couplings.

The size of the corrections depends on the material through the anharmonicity strength of that crystal and also, non-trivially, through the typical single phonon energies of the material. For a particular energy threshold, crystals with lower single phonon energies exhibit larger corrections since they require larger phonon numbers to be produced. For example, anharmonic effects in Ge can be larger by almost an order of magnitude than those in Si for similar DM parameter space and thresholds, even though the anharmonic couplings in the two crystals are similar. This is a consequence of the difference in \(q\) scaling of the harmonic and anharmonic contributions, which becomes more pronounced with larger phonon number. Materials with low single-phonon energies, such as GaAs and Ge, therefore have the largest anharmonic effects. The effects will be reduced in Diamond and Al\({}_{2}\)O\({}_{3}\), which have even higher single phonon energies than Si.

The relevance of anharmonic effects to direct detection experiments depends on the DM cross section. The effects are largest for low DM masses and high thresholds, in other words on the tails of the recoil spectrum where the rates are small. For a typical benchmark exposure of 1 kg-yr, the anharmonic corrections become sizeable for DM-nucleon cross sections above \(\sim 10^{-34}\) cm\({}^{2}\). Being agnostic about any terrestrial or astrophysical constraints on the DM model and only requiring the DM to be observable in underground direct detection experiments, the upper bound on the DM cross section is \(\sigma_{n}\lesssim 10^{-28}\) cm\({}^{2}\) [52].
This comes from considering an overburden of \(\sim\) km. On the other hand, these very high DM-nucleon cross sections are typically excluded by terrestrial and astrophysical constraints for the simplest sub-GeV dark matter models [53; 54]. DM-nucleon cross sections \(\sigma_{n}\gtrsim 10^{-41}\) cm\({}^{2}\) (\(\sigma_{n}\gtrsim 10^{-31}\) cm\({}^{2}\)) are constrained for typical models with a heavy mediator (light dark photon mediator) for a DM mass \(\sim\) MeV. With these constraints, we see from Fig. 10 that the anharmonic effects can only impart corrections of at most an order of magnitude for experiments with kg-yr exposure. Experiments with exposures above kg-yr could see larger anharmonic effects, since they would be more sensitive to the events at high phonon number for MeV-scale DM. However, for solid-state direct detection experiments, achieving exposures significantly bigger than a kg-yr is challenging. Thus, for near-future crystal target experiments, we conclude that the anharmonic effects are only important up to \(\mathcal{O}(1)\) factors at masses of \(\sim\) a few MeV for the simplest DM models.

###### Acknowledgements.

We are grateful to Simon Knapen and Xiaochuan Lu for useful discussions, and to Simon Knapen for feedback on the draft. TL and EV were supported by Department of Energy grant DE-SC0022104. EV was also supported by a Sloan Scholar Fellowship. MS and CHS were supported by Department of Energy Grants DE-SC0009919 and DE-SC0022104. CHS was also supported by the Ministry of Education, Taiwan (MOE Yushan Young Scholar grant NTU-112V1039).

## Appendix A Interatomic potentials

In order to produce results for a real crystal, we adopt atomic potentials based on Ref. [49]. The interatomic potentials used here are a combination of various commonly used empirical potentials. We choose to use the Tersoff-Buckingham-Coulomb interatomic potential defined in Ref. [49] using the parameters in the set labeled "TBC-1", though other interatomic potentials may be chosen and give similar estimates for the anharmonicity strengths. This potential includes a three-body Tersoff potential, originally defined in [55], which we restate here for reference:

\[E=\frac{1}{2}\sum_{i}\sum_{j\neq i}V_{ij},\qquad V_{ij}=f_{C}(r_{ij})\big{(}f_{R}(r_{ij})+b_{ij}f_{A}(r_{ij})\big{)}, \tag{A1}\]

where the sum is over nearest neighbors, and \(r_{ij}\) is the distance between neighbors \(i,j\). The function \(f_{C}\) is a cutoff function that keeps the interaction short ranged, \(f_{R}\) and \(f_{A}\) are repulsive and attractive interactions, and \(b_{ij}\) is a three-body term that is a function of the bonding angle of the third body with the atoms \(i,j\). Explicitly, these functions are defined as

\[f_{C}(r)=\left\{\begin{array}{cc}1&r<R-D\\ \frac{1}{2}-\frac{1}{2}\sin\left(\frac{\pi}{2}\frac{r-R}{D}\right)&R-D<r<R+D\\ 0&r>R+D\end{array}\right. \tag{A2}\]

\[f_{R}(r)=A\exp\left(-\lambda_{1}r\right) \tag{A3}\]

\[f_{A}(r)=-B\exp\left(-\lambda_{2}r\right) \tag{A4}\]

\[b_{ij}=\left(1+\beta^{n}\zeta_{ij}{}^{n}\right)^{-\frac{1}{2n}} \tag{A5}\]

\[\zeta_{ij}=\sum_{k\neq i,j}f_{C}\left(r_{ik}\right)g\left[\theta_{ijk}\left(r_{ij},r_{ik}\right)\right]\exp\left[\lambda_{3}{}^{m}\left(r_{ij}-r_{ik}\right)^{m}\right] \tag{A6}\]

\[g(\theta)=1+\frac{c^{2}}{d^{2}}-\frac{c^{2}}{\left[d^{2}+(\cos\theta-\cos\theta_{0})^{2}\right]}, \tag{A7}\]

where \(\theta_{ijk}\) is the angle between the displacement vectors \(r_{ij}\) and \(r_{ik}\).
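For concreteness, the following minimal Python sketch (ours, not from Ref. [49]) encodes the functional forms \(f_{C}\), \(f_{R}\), \(f_{A}\) and \(g\) of (A2)-(A7); all parameter values are illustrative placeholders rather than the fitted "TBC-1" constants.

```python
import numpy as np

# Minimal sketch of the Tersoff building blocks (A2)-(A7). All parameter
# values below are illustrative placeholders, NOT the fitted "TBC-1"
# constants of Ref. [49].
R, D = 3.0, 0.2                 # cutoff center and half-width
A, B = 1800.0, 470.0            # repulsive/attractive prefactors
lam1, lam2 = 2.4, 1.7           # exponential decay constants
c, d, cos_theta0 = 1.0e5, 16.0, -1.0 / 3.0  # angular-term parameters

def f_C(r):
    """Smooth cutoff (A2): 1 inside, 0 outside, sine interpolation between."""
    if r < R - D:
        return 1.0
    if r > R + D:
        return 0.0
    return 0.5 - 0.5 * np.sin(0.5 * np.pi * (r - R) / D)

def f_R(r):   # repulsive term (A3)
    return A * np.exp(-lam1 * r)

def f_A(r):   # attractive term (A4)
    return -B * np.exp(-lam2 * r)

def g(theta):
    """Bond-angle function (A7); cos_theta0 sets the preferred angle."""
    return 1.0 + c**2 / d**2 - c**2 / (d**2 + (np.cos(theta) - cos_theta0)**2)

# Two-body part of V_ij at a pair distance r, with the three-body factor
# b_ij set to 1 for illustration:
r = 2.35
print(f"V_ij(r={r}) = {f_C(r) * (f_R(r) + f_A(r)):.3f}  (b_ij -> 1)")
```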
\(R,D,A,B,\beta,n,c,d,\theta_{0},\lambda_{1},\lambda_{2},\lambda_{3}\) are constants that can be found in Ref. [49]. Note that the notation in this section matches that of Ref. [49] and is standalone from the main text. Specifically, the parameters \(\lambda_{1},\lambda_{2},\lambda_{3}\) are not to be confused with the anharmonicity strengths defined in the main text. In practice, anharmonicity arises from the asymmetry between the repulsive and attractive terms. The directional dependence of the anharmonicity strength is a result of the crystal's zincblende structure and bond angle-dependent potential.

The other components of this interatomic model include a long-range two-body Buckingham term

\[V(r)=Ce^{-r/\rho}-\frac{E}{r^{6}}, \tag{A8}\]

and a screened Coulombic interaction defined by

\[V(r)=q^{2}\Big{[}\frac{\mathrm{erfc}(\alpha r)}{r}-\frac{\mathrm{erfc}(\alpha r_{c})}{r_{c}}+\Big{(}\frac{\mathrm{erfc}(\alpha r_{c})}{r_{c}^{2}}+\frac{2\alpha}{\sqrt{\pi}}\frac{e^{-\alpha^{2}r_{c}^{2}}}{r_{c}}\Big{)}(r-r_{c})\Big{]}\,\Theta(r_{c}-r). \tag{A9}\]

Here \(q\) is the effective atomic charge, \(\alpha\) is a damping parameter, and \(r_{c}\) is a cutoff. As discussed in Ref. [49], the full interatomic potential model is a sum of the three aforementioned interactions. All of the free parameters are fit onto the actual second, third, and fourth order forces calculated from DFT. This gives an analytic interatomic potential that reproduces the correct single-phonon dispersions and also captures the anharmonicity in the potential by fitting onto the higher order interatomic forces from DFT.

## Appendix B Power counting in perturbation theory

In this appendix, we work out the explicit relation between the powers of \(q^{2}\) and \(\lambda_{k}\) in the perturbation theory calculation for the anharmonic Hamiltonian in (36). The primary object we focus on in the dynamic structure factor is the squared matrix element \(|\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle|^{2}\), where \(|\Phi_{n}\rangle\) are the eigenstates of the anharmonic Hamiltonian. With perturbation theory, the eigenstates can be expanded in powers of \(\lambda_{k}\) as in (37). The corrections to the \(n\)th final state up to second order in \(\lambda_{k}\) are given by

\[|\psi_{n}^{(1)}\rangle=\sum_{k\neq n}\frac{V_{kn}}{(n-k)}|k\rangle,\qquad|\psi_{n}^{(2)}\rangle=\sum_{k\neq n}\sum_{l\neq n}\frac{V_{kl}V_{ln}}{(n-k)(n-l)}|k\rangle-\frac{1}{2}|n\rangle\sum_{k\neq n}\frac{|V_{kn}|^{2}}{(n-k)^{2}}, \tag{B1}\]

where \(V_{ij}\equiv\,\langle i|(\sqrt{2m_{d}\omega_{0}}\,x)^{k}|j\rangle\). In terms of the standard ladder operators of the harmonic oscillator, the \(V_{ij}\) are given by

\[V_{ij}=\,\langle i|(a+a^{\dagger})^{k}|j\rangle\,. \tag{B2}\]

This tells us that \(V_{ij}\) can only be non-zero when \(i-j\) is one of the following: \(-k\), \(-k+2\), ..., \(k-2\), \(k\). With these selection rules, the corrections in (B1) can be schematically written as

\[|\psi_{n}^{(1)}\rangle\sim|n-k\rangle+|n-k+2\rangle+...+|n+k-2\rangle+|n+k\rangle \tag{B3}\]

\[|\psi_{n}^{(2)}\rangle\sim|n-2k\rangle+|n-2k+2\rangle+...+|n+2k-2\rangle+|n+2k\rangle \tag{B4}\]

This pattern continues for higher orders in \(\lambda_{k}\), such that at \(\mathcal{O}(\lambda_{k}^{j})\) we have

\[|\psi_{n}^{(j)}\rangle\sim|n-(j\times k)\rangle+|n-(j\times k)+2\rangle+...+|n+(j\times k)-2\rangle+|n+(j\times k)\rangle. \tag{B5}\]
Note that the sum should only include terms for which the integer labelling the state is non-negative. With the knowledge of the unperturbed states appearing in \(|\Phi_{n}\rangle\), the matrix element \(\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle\) can also be expanded in \(\lambda_{k}\),

\[\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle\sim b_{0}+\lambda_{k}b_{1}+\lambda_{k}^{2}b_{2}+..., \tag{B6}\]

where the coefficients \(b_{j}\) are given by

\[b_{0}\sim\langle n|e^{iqx}|0\rangle,\qquad b_{1}\sim\langle\psi_{n}^{(1)}|e^{iqx}|0\rangle+\langle n|e^{iqx}|\psi_{0}^{(1)}\rangle,\qquad b_{2}\sim\langle\psi_{n}^{(2)}|e^{iqx}|0\rangle+\langle\psi_{n}^{(1)}|e^{iqx}|\psi_{0}^{(1)}\rangle+\langle n|e^{iqx}|\psi_{0}^{(2)}\rangle. \tag{B7}\]

In general, the coefficient \(b_{j}\) is schematically given by

\[b_{j}\sim\langle\psi_{n}^{(j)}|e^{iqx}|0\rangle+\langle\psi_{n}^{(j-1)}|e^{iqx}|\psi_{0}^{(1)}\rangle+...+\langle\psi_{n}^{(1)}|e^{iqx}|\psi_{0}^{(j-1)}\rangle+\langle n|e^{iqx}|\psi_{0}^{(j)}\rangle. \tag{B8}\]

To study the powers of \(q\) appearing in \(b_{j}\), we first need to understand the structure of the matrix element \(\langle n_{1}|e^{iqx}|n_{2}\rangle\) for general eigenstates \(|n_{1}\rangle\) and \(|n_{2}\rangle\) of the unperturbed harmonic oscillator. This matrix element is given by the following:

\[\langle n_{1}|e^{iqx}|n_{2}\rangle=\sum_{l=\frac{|n_{1}-n_{2}|+n_{1}-n_{2}}{2}}^{n_{1}}\frac{\sqrt{n_{1}!n_{2}!}}{l!(n_{1}-l)!(n_{2}-n_{1}+l)!}\Big{(}\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n_{2}-n_{1}+2l}e^{-\frac{q^{2}}{4m_{d}\omega_{0}}}. \tag{B9}\]

We learn that the matrix element \(\langle n_{1}|e^{iqx}|n_{2}\rangle\) contains powers of \(iq/\sqrt{2m_{d}\omega_{0}}\) ranging from \(|n_{1}-n_{2}|\) to \(n_{1}+n_{2}\). Note again that the Debye-Waller factor \(e^{-\frac{q^{2}}{4m_{d}\omega_{0}}}\) is not included in this power counting since \(e^{-\frac{q^{2}}{4m_{d}\omega_{0}}}\approx 1\) in the regime of interest. Combining this information with the structure of \(b_{j}\) in (B8) and the structure of \(|\psi_{n}^{(j)}\rangle\) in (B5), the powers of \(q\) in \(b_{j}\) can be identified:

\[b_{j}\sim e^{-\frac{q^{2}}{4m_{d}\omega_{0}}}\Big{\{}\Big{(}\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n-jk}+\Big{(}\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n-jk+2}+...+\Big{(}\frac{iq}{\sqrt{2m_{d}\omega_{0}}}\Big{)}^{n+jk}\Big{\}}. \tag{B10}\]

Note that only those terms with powers of \(q\) larger than or equal to \(1\) are present. Terms \(\propto q^{0}\) have to cancel, as they would otherwise lead to \(q^{0}\) terms in the squared matrix element \(|\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle|^{2}\), which is forbidden by the orthogonality of the eigenstates. As the kinematic regime under consideration is \(q\ll\sqrt{2m_{d}\omega_{0}}\), we will focus on powers of \(q\) less than \(n\), which corresponds to the harmonic case. We see from the equation above that the lowest powers of \(q\) decrease with increasing values of \(j\). Thus, higher order corrections in \(\lambda_{k}\) appear with lower powers in \(q\). Eventually, at a sufficiently high power of \(\lambda_{k}\), we get a coefficient \(b_{j}\) with the minimum power of \(q\) equal to \(1\).
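The selection-rule bookkeeping above is purely combinatorial, so it can be cross-checked mechanically. The following sketch (ours; the function names are hypothetical) enumerates which powers of \(q\) can appear in \(b_{j}\) for given \(n\), \(k\) and \(j\), using only the rules just stated:

```python
# Sketch of the combinatorial bookkeeping behind (B10). For a phonon
# number n, an x^k perturbation, and perturbative order j, enumerate the
# powers of q/sqrt(2 m_d w0) that can appear in b_j (no matrix elements
# are evaluated here).

def states_in_correction(n, k, j):
    """Unperturbed states |m> appearing in |psi_n^(j)>, cf. (B5):
    m = n - j*k, n - j*k + 2, ..., n + j*k, keeping only m >= 0."""
    return {m for m in range(n - j * k, n + j * k + 1, 2) if m >= 0}

def q_powers_in_bj(n, k, j):
    """Possible q-powers in b_j of (B8). A factor <m1|e^{iqx}|m2> carries
    powers |m1 - m2|, |m1 - m2| + 2, ..., m1 + m2, cf. (B9); the q^0
    pieces must cancel by orthogonality, so they are dropped."""
    powers = set()
    for l in range(j + 1):
        for m1 in states_in_correction(n, k, j - l):
            for m2 in states_in_correction(0, k, l):
                powers |= set(range(abs(m1 - m2), m1 + m2 + 1, 2))
    powers.discard(0)
    return sorted(powers)

# Example: n = 4 phonons, cubic perturbation (k = 3).
for j in range(4):
    print(f"b_{j}: q-powers {q_powers_in_bj(4, 3, j)}")
# The minimum power decreases with j, reaching q^1 at odd j for even n,
# in line with the discussion below.
```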
The squared matrix element can then be written in general as

\[|\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle|^{2}=e^{-\frac{q^{2}}{2m_{d}\omega_{0}}}\times\Bigg{[}\frac{1}{n!}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n}+\sum_{i\geq 1}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{i}\Big{(}a_{n,i}\,\lambda_{k}^{\nu(n,i)}+\mathcal{O}\Big{(}\lambda_{k}^{\nu(n,i)+1}\Big{)}\Big{)}\Bigg{]}, \tag{B11}\]

where the first term on the right hand side, \(\propto q^{2n}\), is the harmonic term, and the anharmonic corrections are expanded in powers of \(q^{2}\), denoted by \(i\), with \(i\geq 1\). Every power \(i\) appears with a minimum _allowed_ power \(\nu(n,i)\) of \(\lambda_{k}\).

To study the behavior of \(\nu(n,i)\), we first note that, for even \(k\), the matrix element \(\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle\) is purely real or purely imaginary, depending on whether \(n\) is even or odd, respectively. For instance, if \(n\) is even, then \(b_{0}\) is purely real. Higher orders in \(\lambda_{k}\) lead to insertions of \((a+a^{\dagger})^{k}\) and therefore matrix elements where the difference in the harmonic oscillator states is also even, so that all coefficients \(b_{j}\) are real in this case. But for odd \(k\), the \(b_{j}\) coefficients alternate between being real and imaginary. This changes the structure of the squared matrix element depending on \(k\), as we will see below.

**Odd \(k\):** We first consider odd \(k\). In this case, the squared matrix element can be written as

\[|\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle|^{2}\sim|b_{0}+\lambda_{k}^{2}b_{2}+\lambda_{k}^{4}b_{4}+...|^{2}+|\lambda_{k}b_{1}+\lambda_{k}^{3}b_{3}+...|^{2} \tag{B12}\]

\[\sim|b_{0}|^{2}+\lambda_{k}^{2}\big{(}|b_{1}|^{2}+(b_{0}b_{2}^{*}+b_{0}^{*}b_{2})\big{)}+\lambda_{k}^{4}\big{(}|b_{2}|^{2}+(b_{0}b_{4}^{*}+b_{0}^{*}b_{4})+(b_{1}b_{3}^{*}+b_{1}^{*}b_{3})\big{)}+\mathcal{O}(\lambda_{k}^{6}) \tag{B13}\]

\[\sim e^{-\frac{q^{2}}{2m_{d}\omega_{0}}}\Big{[}\frac{1}{n!}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n}+\lambda_{k}^{2}\Big{\{}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n-k}+\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n-k+1}+...+\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n+k}\Big{\}}+\lambda_{k}^{4}\Big{\{}\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n-2k}+\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n-2k+1}+...+\Big{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Big{)}^{n+2k}\Big{\}}+\mathcal{O}(\lambda_{k}^{6})\Big{]}. \tag{B14}\]

Thus we see that we get corrections at even orders in \(\lambda_{k}\), with the lowest non-zero power being \(\lambda_{k}^{2}\). In general, at \(\mathcal{O}(\lambda_{k}^{j})\) for an even \(j=2j^{\prime}\), the lowest power of \(q^{2}\) is \(n-(j^{\prime}\times k)\), and the highest power is \(n+(j^{\prime}\times k)\). Note that only terms with positive powers of \(q^{2}\) are present. The term \(\propto q^{2}\) can also subtly cancel in some cases, as there is no term \(\propto q^{0}\) in the coefficients \(b_{j}\); we will deal with this case below. But to get a power \(i>1\) of \(q^{2}\), the lowest non-zero \(j^{\prime}\) is \(\lceil\frac{|n-i|}{k}\rceil\), with the lowest \(j\) given by \(2\times\lceil\frac{|n-i|}{k}\rceil\). Thus, in the squared matrix element, the lowest non-zero power \(\nu(n,i)\) required is given by

\[\nu(n,i)=\max\Bigl{(}2\times\lceil\frac{|n-i|}{k}\rceil\;,\;2\Bigr{)}. \tag{B15}\]

To get the lowest power \(i=1\) of \(q^{2}\), i.e.
the term \(\propto q^{2}\), the only possible way is to get the term \(\propto q^{1}\) in the coefficient \(b_{j}\), as there is no term \(\propto q^{0}\). For odd \(n\), the term \(\propto q^{1}\) in \(b_{j}\) can only be generated at an even \(j\), since that is the only way to satisfy \(n-jk=1\). For every even \(j=2j^{\prime}\), the powers of \(q\) in \(b_{j}\) range from \(n-(2k)\times j^{\prime}\) to \(n+(2k)\times j^{\prime}\). The lowest \(j^{\prime}\) to get a term \(\propto q^{1}\) is then given by \(\lceil\frac{|n-1|}{2k}\rceil\), with \(j\) given by \(2\times\lceil\frac{|n-1|}{2k}\rceil\). For an even \(n\), the term \(\propto q^{1}\) in \(b_{j}\) can only be generated for an odd \(j\). For every odd \(j=2j^{\prime}-1\), the lowest power of \(q\) in \(b_{j}\) is \(n+k-(2k)\times j^{\prime}\). The lowest \(j^{\prime}\) to get a term \(\propto q^{1}\) is then given by \(\lceil\frac{|n+k-1|}{2k}\rceil\), with \(j\) given by \(2\times\lceil\frac{|n+k-1|}{2k}\rceil-1\). In the squared matrix element, the lowest non-zero power \(\nu(n,1)\) required is given by

\[\nu(n,1)=\begin{cases}\max\Bigl{(}4\times\lceil\frac{|n-1|}{2k}\rceil\;,\;2\Bigr{)}&\text{for odd $n$}\\ 4\times\lceil\frac{|n+k-1|}{2k}\rceil-2&\text{for even $n$}\end{cases} \tag{B16}\]

**Even \(k\):** Now we consider even \(k\). In this case, the squared matrix element is

\[|\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle|^{2}\sim|b_{0}+\lambda_{k}b_{1}+\lambda_{k}^{2}b_{2}+...|^{2} \tag{B17}\]

\[\sim|b_{0}|^{2}+\lambda_{k}(b_{0}b_{1}^{*}+b_{0}^{*}b_{1})+\lambda_{k}^{2}\big{(}|b_{1}|^{2}+(b_{0}b_{2}^{*}+b_{0}^{*}b_{2})\big{)}+\mathcal{O}(\lambda_{k}^{3}) \tag{B18}\]

\[\sim e^{-\frac{q^{2}}{2m_{d}\omega_{0}}}\Bigl{[}\frac{1}{n!}\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n}+\lambda_{k}\Bigl{\{}\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n-k/2}+\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n-k/2+1}+...+\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n+k/2}\Bigr{\}}+\lambda_{k}^{2}\Bigl{\{}\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n-k}+\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n-k+1}+...+\Bigl{(}\frac{q^{2}}{2m_{d}\omega_{0}}\Bigr{)}^{n+k}\Bigr{\}}+\mathcal{O}(\lambda_{k}^{3})\Bigr{]}. \tag{B19}\]

Thus we see that we get corrections at all orders in \(\lambda_{k}\), with the lowest non-zero power being \(\lambda_{k}\). In general, at \(\mathcal{O}(\lambda_{k}^{j})\), the lowest power of \(q^{2}\) is \(n-(j\times k)/2\), and the highest power is \(n+(j\times k)/2\). Following similar arguments to the case of odd \(k\) discussed earlier, \(\nu(n,i)\) for \(i>1\) is given by

\[\nu(n,i)=\max\Bigl{(}\lceil\frac{|n-i|}{k/2}\rceil\;,\;1\Bigr{)}. \tag{B20}\]

Another difference between the case of even \(k\) considered here and that of odd \(k\) is that we do not get an \(i=1\) term for even \(n\), as all terms in the coefficients \(b_{j}\) contain even powers of \(q\). This means that the leading term will always go as \(q^{4}\), with a \(\lambda_{k}\) power determined by (B20) for \(i=2\). For odd \(n\), the lowest power of \(q\) in \(b_{j}\) is \(n-k\times j\). Thus, in the squared matrix element, the lowest non-zero power \(\nu(n,1)\) required is given by

\[\nu(n,1)=\max\Bigl{(}2\times\lceil\frac{|n-1|}{k}\rceil\;,\;1\Bigr{)}. \tag{B21}\]

The calculations in this appendix up to this point consider the overall scaling behavior of the powers of \(q^{2}\) and \(\lambda_{k}\) in the squared matrix element.
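The case analysis for \(\nu(n,i)\) can be summarized in a few lines of code. The sketch below (ours) simply transcribes (B15), (B16), (B20) and (B21); note that it gives only the naive minimum power, which exact numerical cancellations can raise further, as happens for \(n=4\), \(k=3\) below.

```python
from math import ceil

# Minimum allowed power nu(n, i) of lambda_k multiplying
# (q^2 / 2 m_d w0)^i in the squared matrix element, directly transcribing
# (B15), (B16), (B20) and (B21). Returns None when the term is absent
# (i = 1 with both n and k even).

def nu(n, i, k):
    if k % 2 == 1:                                    # odd k
        if i > 1:
            return max(2 * ceil(abs(n - i) / k), 2)   # (B15)
        if n % 2 == 1:
            return max(4 * ceil(abs(n - 1) / (2 * k)), 2)   # (B16), odd n
        return 4 * ceil((n + k - 1) / (2 * k)) - 2          # (B16), even n
    else:                                             # even k
        if i > 1:
            return max(ceil(abs(n - i) / (k // 2)), 1)      # (B20)
        if n % 2 == 1:
            return max(2 * ceil(abs(n - 1) / k), 1)         # (B21)
        return None                                   # i = 1 term absent

# Cubic perturbation (k = 3): leading lambda_3 powers of the q^2, q^4 terms.
for n in range(1, 6):
    print(f"n={n}: nu(n,1)={nu(n, 1, 3)}, nu(n,2)={nu(n, 2, 3)}")
```

For instance, this reproduces that \(a_{3,1}\) first appears at \(\mathcal{O}(\lambda_{3}^{4})\), as stated below.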
We have neglected combinatorial factors at several steps in the calculations; these enter into the numerical coefficients \(a_{n,i}\) in (B11). Sometimes the numerical coefficients can also cancel each other, so that the naive leading behavior estimated in this section vanishes. In order to give concrete examples of the numerical coefficients, we perform explicit calculations of the squared matrix element using perturbation theory with \(k=3\) (i.e. a cubic perturbation) and phonon numbers \(n=1,\;2,\;3,\;\text{and}\;4\). We perform this explicit calculation only up to \(\mathcal{O}(\lambda_{3}^{2})\). The results for the various numerical coefficients are presented below. For single-phonon production (i.e. \(n=1\)), the coefficients \(a_{n,i}\) are given by

\[a_{1,1}=44,\qquad a_{1,2}=-82,\qquad a_{1,3}=5. \tag{B22}\]

For two-phonon production (i.e. \(n=2\)), the coefficients are given by

\[a_{2,1}=8,\qquad a_{2,2}=59,\qquad a_{2,3}=-56,\qquad a_{2,4}=2.5. \tag{B23}\]

For three-phonon production (i.e. \(n=3\)), the coefficients are

\[a_{3,2}=18,\qquad a_{3,3}=37,\qquad a_{3,4}=-23.04,\qquad a_{3,5}=0.77. \tag{B24}\]

Note that we do not show the coefficient \(a_{3,1}\), as it appears at \(\mathcal{O}(\lambda_{3}^{4})\). Finally, for four-phonon production (i.e. \(n=4\)), the coefficients are evaluated to be

\[a_{4,1}=0,\qquad a_{4,2}=0,\qquad a_{4,3}=0.097,\qquad a_{4,4}=0.05,\qquad a_{4,5}=-0.012,\qquad a_{4,6}=1.81\times 10^{-4}. \tag{B25}\]

Note that the coefficients \(a_{4,1}\) and \(a_{4,2}\) come out to zero because of a numerical cancellation between the two terms in the \(b_{1}\) coefficient in (B7). The leading behavior of the terms proportional to \(q^{2}\) and \(q^{4}\) in the structure factor is instead \(q^{2}\lambda_{3}^{6}\) and \(q^{4}\lambda_{3}^{4}\), respectively. As these numerical coefficients arise through combinations and interferences of several combinatorial factors at various steps of the calculation, it is hard to provide a general expression for them. By looking at the examples above, however, we can make some general observations. Typically, we see that the coefficients follow a pyramid structure, with \(a_{n,i}\) being largest for \(i\) near \(n\) and decreasing as \(i\) moves away from \(n\). We also find that the coefficients can vary by orders of magnitude from each other. The terms with \(i\) near \(n\) receive contributions from several individual matrix elements, and in general seem to be larger. We expect this pattern to continue for higher phonon numbers as well. The exact values of these coefficients play a role in determining where the anharmonic corrections dominate, and so our power counting approach only gives an \(\mathcal{O}(1)\) estimate.

## Appendix C Impulse approximation

In Sec. III.3, we calculated the structure factor via the saddle point approximation in the regime defined by (52). This regime corresponds to values of \(\omega\) near \(\frac{q^{2}}{2m}\) and within the Gaussian width of (63). As discussed in the main text, in order to calculate the tail of the structure factor far from \(\omega=\frac{q^{2}}{2m}\), more expansion terms are needed in \(f\). Here we discuss this extension of the impulse approximation. First, in the special case of a harmonic potential, we can start from the full result in Eq. (35).
After rewriting the energy conservation delta function as a time integral, we find that

\[f(t)=-i\omega t+\frac{q^{2}}{2m_{d}\omega_{0}}(e^{i\omega_{0}t}-1). \tag{C1}\]

Solving \(f^{\prime}(t)=0\) gives the exact result

\[t_{I}=\frac{i}{\omega_{0}}\ln\left(\frac{q^{2}}{2m_{d}\omega}\right). \tag{C2}\]

Using the saddle point approximation for \(\omega\gg\omega_{0}\), we find

\[S_{\text{toy},d}(q,\omega)\sim\frac{1}{\sqrt{\omega\omega_{0}}}\,e^{-2W_{\text{toy}}(q)}\left(\frac{q^{2}}{2m\omega}\right)^{\frac{\omega}{\omega_{0}}}e^{\frac{\omega}{\omega_{0}}}. \tag{C3}\]

The same result can also be derived by approximating the sum over phonon states as an integral in Eq. (35). The saddle point approximation for the harmonic oscillator holds as long as \(\omega\gg\omega_{0}\), and we no longer have a condition on how close \(\omega\) is to \(\frac{q^{2}}{2m}\). In the impulse regime, \(\omega\sim\frac{q^{2}}{2m}\), one can check that it reduces to the previous result in Eq. (63). We see in this exact result that the tail at large \(\omega\) is Poissonian instead of Gaussian.

For general potentials, this exact analytic result is no longer available, but we can still calculate corrections to the tail. We start by giving the exact saddle point equation,

\[0=f^{\prime}(t_{I})=-i\left(E_{0}+\omega-\frac{q^{2}}{2m}\right)+i\frac{\left\langle H^{\prime}e^{iH^{\prime}t_{I}}\right\rangle}{\left\langle e^{iH^{\prime}t_{I}}\right\rangle}, \tag{C4}\]

which is valid at all orders. We begin by noticing that the saddle point equation (C4) is satisfied exactly at \(\omega=\frac{q^{2}}{2m}\) by \(t_{I}=0\). Then, \(\omega\)-derivatives of \(t_{I}\) at \(\omega=\frac{q^{2}}{2m}\) can be found by taking \(\omega\)-derivatives of (C4) and solving for \(t_{I}^{(n)}[\omega=\frac{q^{2}}{2m}]\). This allows us to calculate \(t_{I}[\omega=\frac{q^{2}}{2m}]\) in an iterative fashion. The first few terms are

\[t_{I}[\tfrac{q^{2}}{2m}]=0 \tag{C5}\]

\[t_{I}^{\prime}[\tfrac{q^{2}}{2m}]=\frac{i}{\left\langle H^{\prime}\right\rangle^{2}-\left\langle H^{\prime 2}\right\rangle} \tag{C6}\]

\[t_{I}^{\prime\prime}[\tfrac{q^{2}}{2m}]=i\frac{-2\langle H^{\prime}\rangle^{3}+3\langle H^{\prime}\rangle\langle H^{\prime 2}\rangle-\langle H^{\prime 3}\rangle}{\left(\langle H^{\prime}\rangle^{2}-\langle H^{\prime 2}\rangle\right)^{3}} \tag{C7}\]

\[t_{I}^{(3)}[\tfrac{q^{2}}{2m}]=\frac{i}{\left(\langle H^{\prime}\rangle^{2}-\langle H^{\prime 2}\rangle\right)^{5}}\times\left(6\langle H^{\prime}\rangle^{6}-18\langle H^{\prime}\rangle^{4}\langle H^{\prime 2}\rangle+3\langle H^{\prime 2}\rangle^{3}+8\langle H^{\prime}\rangle^{3}\langle H^{\prime 3}\rangle-14\langle H^{\prime}\rangle\langle H^{\prime 2}\rangle\langle H^{\prime 3}\rangle+3\langle H^{\prime 3}\rangle^{2}-\langle H^{\prime 2}\rangle\langle H^{\prime 4}\rangle+\langle H^{\prime 2}\rangle\big{(}12\langle H^{\prime 2}\rangle^{2}+\langle H^{\prime 4}\rangle\big{)}\right) \tag{C8}\]

where \(t_{I}^{(n)}\) denotes the \(n\)th \(\omega\)-derivative of \(t_{I}\). In the harmonic case, this series resums to (C2). For general potentials, one can then use the expansions (60) and (C5)-(C8) to calculate

\[S_{\text{toy},d}(q,\omega)\approx\sqrt{\frac{2\pi}{-f^{\prime\prime}(t_{I})}}\,e^{f(t_{I})} \tag{C9}\]

to the desired order.

## Appendix D Exact results for Morse potential

The Morse potential is a special case of an anharmonic potential where the structure factor is analytically solvable.
We will use this case to illustrate the behavior of the structure factor discussed in Sec. III.2. We also use it to validate the numerical calculations used in our final results and to check the validity of the impulse approximation in the regime where there are \(n>10\) phonons. The Morse potential is defined as

\[V_{\text{Morse}}=B\Big{(}e^{-2ax}-2e^{-ax}\Big{)}, \tag{D1}\]

where \(a\) is a parameter controlling the width of the potential and \(B\) is the normalization. Expanding this potential in powers of \(x\) gives

\[V_{\text{Morse}}=-B+Ba^{2}x^{2}-Ba^{3}x^{3}+\frac{7}{12}Ba^{4}x^{4}+... \tag{D2}\]

Matching the quadratic and the cubic terms with (26), we find that

\[a=-4\lambda_{3}\sqrt{2m\omega_{0}} \tag{D3}\]

\[B=\frac{\omega_{0}}{64\lambda_{3}^{2}}. \tag{D4}\]

Note that the Morse potential has fewer free parameters than the anharmonic potential up to fourth order in the displacements, so we cannot simultaneously fit \(\lambda_{4}\). Nonetheless, the realistic potentials obtained in App. A are well approximated by this Morse potential, due to the dominance and Morse-like behavior of the \(f_{R}\) and \(f_{A}\) terms in the Tersoff part of the potential. The Morse potential approximation of our anharmonic potential is then given by

\[V_{\text{Morse}}=\frac{\omega_{0}}{64\lambda_{M}^{2}}\Big{(}e^{8\lambda_{M}\sqrt{2m\omega_{0}}\,x}-2e^{4\lambda_{M}\sqrt{2m\omega_{0}}\,x}\Big{)}, \tag{D5}\]

where we take \(\lambda_{M}=\lambda_{3}\) in order to fit up to third order anharmonicities. In this potential, the structure factor (68) is exactly calculable, since the Morse eigenstates and eigenenergies are known analytically. These results [56] give squared matrix elements between the ground state and the \(n\)th excited state of

\[|\langle\Phi_{n}|e^{iqx}|\Phi_{0}\rangle|^{2}=\frac{(2K-2n-1)(2K-1)}{n!\Gamma(2K)\Gamma(2K-n)}\times\Bigg{|}\frac{\Gamma\big{(}n+\frac{i(q/\sqrt{2m\omega_{0}})}{4\lambda_{M}}\big{)}\Gamma\big{(}2K+\frac{i(q/\sqrt{2m\omega_{0}})}{4\lambda_{M}}-n-1\big{)}}{\Gamma\big{(}\frac{i(q/\sqrt{2m\omega_{0}})}{4\lambda_{M}}\big{)}}\Bigg{|}^{2}, \tag{D6}\]

with energy gaps

\[E_{n}-E_{0}=\Big{(}n-\frac{n(1+n)}{2K}\Big{)}\omega_{0}, \tag{D7}\]

where \(K=\frac{1}{32\lambda_{M}^{2}}\). Note that these formulae are only valid for \(n<K-\frac{1}{2}\), since above this excited state the eigenstates are unbound and have a different analytic form. For \(\lambda_{M}\sim 0.01\), this condition requires \(n\lesssim 312\), which corresponds to an energy gap of \(\mathcal{O}(\text{eV})\). Recoil energies at this scale are comparable to the size of a typical lattice potential well, and thus the free nuclear recoil approximation holds. Then, for typical anharmonicity strengths, the discrete states of the Morse potential that we have used in this analysis are sufficient to estimate the anharmonic effect in the multiphonon scattering regime.

Putting together expressions (D3)-(D6), (68), and (72), we can calculate the structure factor in the Morse potential in both the large and small \(q\) regimes. We show these results for \(\lambda_{M}=\lambda_{3}\) in Figs. 11 and 12. Fig. 11 provides a check of our numerical results in Sec. IV. Here we see that the numerical calculations and the corresponding analytic Morse results are almost identical. There is a modified \(q\) scaling of the structure factor compared to the harmonic case, as was already illustrated in Fig. 7. We can also obtain this behavior analytically with the Morse potential.
Figure 11: Comparison of the analytic structure factor in the Morse potential and the numerical calculation for Si as described in Sec. IV. We find that the two methods give almost the same result, due to the fact that the Morse potential well approximates the single-atom potential along the nearest-neighbor direction.

Expanding the expression (D6) to leading order in \(q\) and subsequently in \(\lambda_{M}\), we get explicitly

\[|\langle 2|e^{iqx}|0\rangle|^{2}=8\lambda_{M}^{2}q^{2}+\ldots \tag{D8}\]

\[|\langle 3|e^{iqx}|0\rangle|^{2}=\frac{512}{3}\lambda_{M}^{4}q^{2}+\ldots \tag{D9}\]

\[|\langle 4|e^{iqx}|0\rangle|^{2}=6144\lambda_{M}^{6}q^{2}+\ldots \tag{D10}\]

\[|\langle 5|e^{iqx}|0\rangle|^{2}=\frac{1572864}{5}\lambda_{M}^{8}q^{2}+\ldots \tag{D11}\]

\[|\langle 6|e^{iqx}|0\rangle|^{2}=20971520\lambda_{M}^{10}q^{2}+\ldots \tag{D12}\]

where the ellipses include higher orders in both \(q\) and \(\lambda_{M}\). The leading \(\lambda_{M}\) scalings are consistent with those illustrated in Fig. 4 for \(n=2\) and \(3\). For \(n=4\), the leading \(\lambda_{M}\) scaling differs from the power counting in Fig. 4, but matches the explicit results obtained using perturbation theory as presented in Appendix B: an exact numerical cancellation modifies the leading behavior to \(\lambda_{M}^{6}q^{2}\). We see that the leading behavior in \(q,\lambda_{M}\) for \(n>4\) also differs from the \(x^{3}\)-theory power counting, suggesting a generic presence of cancellations at lower orders of \(\lambda_{3}\) in the \(q^{2}\) dependence.

In Fig. 12, we demonstrate that the impulse approximation remains robust for \(q\gg\sqrt{2m_{d}\omega_{0}}\) and \(n>10\) excited states. Note that we can also calculate corrections to \(\langle p^{2}\rangle\) in the Morse ground state exactly:

\[\langle p^{2}\rangle=\frac{m\omega_{0}}{2}(1-16\lambda_{M}^{2}), \tag{D13}\]

which is used in the impulse regime result (63). The impulse result is almost identical between the Morse and harmonic cases, since the Gaussian width is only corrected at order \(\lambda_{M}^{2}\), which is \(\sim 10^{-4}\). This is also borne out in the full calculation of the structure factor shown in Fig. 12.
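As a cross-check of the quoted leading behaviors, the exact formula (D6) can be evaluated numerically. The following sketch (ours, assuming scipy is available) compares it at small \(q\) against the coefficients quoted in (D8)-(D10); the two columns should approximately agree, up to small corrections in \(q\) and \(\lambda_{M}\).

```python
import numpy as np
from scipy.special import loggamma

# Evaluate the exact Morse matrix element (D6) and compare its small-q
# limit with the leading terms quoted in (D8)-(D10). Here q is
# dimensionless, measured in units of sqrt(2 m omega_0), and
# K = 1/(32 lambda_M^2).

def morse_m2(n, q, lam):
    """|<Phi_n| e^{iqx} |Phi_0>|^2 from (D6), via log-Gamma for stability
    (K ~ 300 for lambda_M ~ 0.01, so Gamma(2K) itself would overflow)."""
    K = 1.0 / (32.0 * lam**2)
    ia = 1j * q / (4.0 * lam)
    log_prefac = (np.log(2*K - 2*n - 1) + np.log(2*K - 1)
                  - loggamma(n + 1.0).real
                  - loggamma(2*K).real - loggamma(2*K - n).real)
    log_ratio = 2.0 * (loggamma(n + ia) + loggamma(2*K - n - 1 + ia)
                       - loggamma(ia)).real
    return np.exp(log_prefac + log_ratio)

lam, q = 0.01, 1e-3
for n, coeff in [(2, 8.0), (3, 512.0 / 3.0), (4, 6144.0)]:
    leading = coeff * lam**(2 * (n - 1)) * q**2    # quoted leading term
    print(f"n={n}: exact = {morse_m2(n, q, lam):.4e}, leading = {leading:.4e}")
```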
2309.06083
On the weighted trigonometric Bojanov-Chebyshev extremal problem
We investigate the weighted Bojanov-Chebyshev extremal problem for trigonometric polynomials, that is, the minimax problem of minimizing $\|T\|_{w,C({\mathbb T})}$, where $w$ is a sufficiently nonvanishing, upper bounded, nonnegative weight function, the norm is the corresponding weighted maximum norm on the torus ${\mathbb T}$, and $T$ is a trigonometric polynomial with prescribed multiplicities $\nu_1,\ldots,\nu_n$ of root factors $|\sin(\pi(t-z_j))|^{\nu_j}$. If the $\nu_j$ are natural numbers and their sum is even, then $T$ is indeed a trigonometric polynomial, and the case when all the $\nu_j$ are 1 covers the Chebyshev extremal problem. Our result will be more general, allowing, in particular, so-called generalized trigonometric polynomials. To reach our goal, we invoke Fenton's sum of translates method. However, differing from the earlier described cases (without weight, or on the interval), here we find different situations, and can state less about the solutions.
Béla Nagy, Szilárd Gy. Révész
2023-09-12T09:26:02Z
http://arxiv.org/abs/2309.06083v1
# On the weighted trigonometric Bojanov-Chebyshev extremal problem

###### Abstract

We investigate the weighted Bojanov-Chebyshev extremal problem for trigonometric polynomials, that is, the minimax problem of minimizing \(\|T\|_{w,C(\mathbb{T})}\), where \(w\) is a sufficiently nonvanishing, upper bounded, nonnegative weight function, the norm is the corresponding weighted maximum norm on the torus \(\mathbb{T}\), and \(T\) is a trigonometric polynomial with prescribed multiplicities \(\nu_{1},\ldots,\nu_{n}\) of root factors \(|\sin(\pi(t-z_{j}))|^{\nu_{j}}\). If the \(\nu_{j}\) are natural numbers and their sum is even, then \(T\) is indeed a trigonometric polynomial, and the case when all the \(\nu_{j}\) are 1 covers the Chebyshev extremal problem. Our result will be more general, allowing, in particular, so-called generalized trigonometric polynomials. To reach our goal, we invoke Fenton's sum of translates method. However, differing from the earlier described cases (without weight, or on the interval), here we find different situations, and can state less about the solutions.

Dedicated to Vitalii Vladimirovich Arestov, a leading mathematician and teacher of generations, on the occasion of his 80th anniversary

Keywords: minimax and maximin problems, kernel function, sum of translates function, vector of local maxima, equioscillation, majorization

2020 Mathematics subject classification: 26A51, 26D07, 49K35

## 1 Introduction

In this paper our aim is to solve the Bojanov-Chebyshev extremal problem in the setting of weighted trigonometric polynomials. As in our earlier papers on related subjects, our approach is the so-called "sum of translates method" of Fenton, which he introduced in [10]. However, here we do not develop the whole theory, for two reasons: first, in the periodic, i.e., torus setup, in [5] we have already developed much of what is possible, and second, much of what would be useful here simply does not hold true. In this regard, our Example 20 is an important part of the study, showing the limits of any proof in this generality.

The analogous problems for the unweighted periodic case and the weighted and unweighted algebraic polynomial cases on the interval were already solved in [5] and in [7] and [9]. The results available to date do not imply, at least not in a direct and easy way, the corresponding result for the weighted trigonometric polynomial Bojanov-Chebyshev problem. In fact, some of them simply do not remain valid. So, the weighted trigonometric polynomial case poses new challenges and requires a careful adaptation of our methods, with the avoidance of certain obstacles (for example, a bagatelle-looking, but in fact serious, dimensionality obstacle in the way of proving a homeomorphism theorem analogous to the earlier cases), and a recombination of our existing knowledge about the torus setup with all that can be saved and reused from the interval case. So, we rely heavily on all our earlier papers [5], [8], [7], [6], [9] on the subject, while these in themselves will not suffice to reach our goals. We will still need to devise new proofs, or at least new versions of various existing arguments.

We note that proving minimax- and equioscillation-type results in certain contexts may be attempted without rebuilding the whole theory, just by transferring some existing results from an already better explored case to the new setting. This has already been done in [5] for the (unweighted) algebraic polynomial case of the interval, deriving it from the (unweighted) trigonometric polynomial case, explored in the major part of [5].
However, the transference was not easy, and it broke down for general weights (even if for even weights it seemed to work). Similarly, in [14] Tatiana Nikiforova succeeded in transferring certain results to the real line and semiaxis cases from the interval case, while leaving unresolved some of the related and still interesting questions. However, in both cases we can expect a more detailed and complete picture when we take the time to build up the method and explore its full strength right in the given context. Therefore, we did not settle for the results which could be transferred from [5], but worked out the interval case fully in [7], [8], [6]; we also think that it would be worthwhile to do so in the cases of the real line and the semiaxis. In particular, the relevant variant of the homeomorphism theorem is very much missing for the real line and the semiaxis. However, as already said and as will be explained in due course, in the current setup that buildup does not seem to be possible, and we must be satisfied with a combination of transferred results and ad hoc arguments.

From our point of view, however, the weighted trigonometric polynomial Bojanov-Chebyshev problem is not a main goal, but more of an application, which testifies to the strength of the method. We try to work in a rather general framework, and prove more general results than that. In particular, the results will be valid also for generalized trigonometric polynomials (GTPs), which are introduced, e.g., in [3] Chapter A4 as follows:

\[\mathcal{T}:=\bigcup_{n=1}^{\infty}\mathcal{T}_{n},\qquad\mathcal{T}_{n}:=\bigg{\{}T(\mathbf{z},t):=c_{0}\prod_{j=1}^{n}|\sin\pi(t-z_{j})|^{\nu_{j}}\ :\ c_{0}>0,\ \nu_{j}>0\:(j=1,\ldots,n),\ \mathbf{z}=(z_{1},\ldots,z_{n})\in\mathbb{C}^{n}\bigg{\}}. \tag{1}\]

By periodicity one can assume \(0\leq\Re z_{j}<1\), and in the extremal problems below it is obvious that replacing \(z_{j}\) by \(\Re z_{j}\) can only decrease the quantity to be minimized, so that we will assume that all the \(z_{j}\)'s are real. However, fixing the ordering of the \(z_{j}\) (or \(\Re z_{j}\)) has a role, with different fixed orderings posing separate extremal problems, and the ordering-specific solution being much stronger than just a "global" minimization. As this issue has already been discussed, e.g., in [5] and [7], we leave the details to the reader, simply addressing the order-specific, stronger question here. One particular result ahead of us will be the following.

**Theorem 1**.: _Let \(n\in\mathbb{N}\) and \(\nu_{1},\ldots,\nu_{n}>0\) be given._
Put \(\nu:=(\nu_{1},\ldots,\nu_{n})\)._

_Further, let \(w:\mathbb{R}\to[0,\infty)\) be an upper bounded, nonnegative, \(1\)-periodic weight function, attaining positive values at more than \(n\) points of \(\mathbb{T}:=\mathbb{R}/\mathbb{Z}\)._

_Denote the weighted sup norm by \(\|.\|_{w}\) and consider the minimax problem_

\[M:=M(w,\nu):=\inf\big{\{}\|T(\mathbf{z},.)\|_{w}:\ T(\mathbf{z};t)=\prod_{j=1}^{n}|\sin\pi(t-z_{j})|^{\nu_{j}}\in\mathcal{T}_{n},\]
\[\exists c\in[0,1)\quad\text{ such that }\quad c\leq\Re z_{1}\leq\cdots\leq\Re z_{n}\leq c+1\big{\}}.\]

_Then there exists a minimax point \(\mathbf{z}^{*}=(z_{1}^{*},\ldots,z_{n}^{*})\in\mathbb{T}^{n}\), with the prescribed cyclic ordering of the nodes, satisfying \(\|T(\mathbf{z}^{*},.)\|_{w}=M(w,\nu)\)._

_Moreover, all the \(z_{j}^{*}\)'s are distinct and real, and their cyclic ordering is strict in the sense that there exists \(c\in\mathbb{R}\) such that \(c<z_{1}^{*}<\cdots<z_{n}^{*}<c+1\) (as in the prescribed order, but with strict inequalities)._

_Furthermore, this extremal point has the equioscillation property, that is, \(\max\{T(\mathbf{z}^{*},t):\ z_{1}^{*}\leq t\leq z_{2}^{*}\}=\cdots=\max\{T(\mathbf{z}^{*},t):\ z_{j}^{*}\leq t\leq z_{j+1}^{*}\}=\cdots=\max\{T(\mathbf{z}^{*},t):\ z_{n}^{*}\leq t\leq z_{1}^{*}+1\}=M(w,\nu)\)._

The occurrence of \(c\) in the description is another simple-looking, yet important difference between the interval and torus setup. Basically, we fix here the ordering of the nodes \(z_{j}^{*}\) only cyclically, that is, as they follow each other when one covers the circle once, moving continuously from some appropriate \(c\in\mathbb{T}\) in the positive (counterclockwise) orientation until returning.

## 2 Basics for the Bojanov-Chebyshev problem

### Trigonometric polynomials and generalized trigonometric polynomials

It is well known that (real) trigonometric polynomials can be factorized as follows. Let

\[T(t):=a_{0}+\sum_{j=1}^{n}a_{j}\sin(2\pi\,j\,t)+b_{j}\cos(2\pi\,j\,t) \tag{2}\]

be a (real) trigonometric polynomial \((a_{0},a_{1},b_{1},a_{2},b_{2},\ldots,a_{n},b_{n}\in\mathbb{R}\), \(a_{n}^{2}+b_{n}^{2}\neq 0)\) of degree \(n\) (with period \(1\)). Then (see, e.g., [3] p. 10) there exist unique \(c_{0}\in\mathbb{R}\), \(c_{0}\neq 0\), and \(z_{1},\ldots,z_{2n}\in\mathbb{C}\) such that nonreal \(z_{j}\)'s occur in conjugate pairs and

\[T(t)=c_{0}\prod_{j=1}^{2n}\sin\left(\pi(t-z_{j})\right). \tag{3}\]

This explains that GTPs are indeed generalizations of trigonometric polynomials\({}^{1}\).

Footnote 1: The somewhat curious fact is that the root factorization of trigonometric polynomials relies on _pairs of factors_ of the form \(\sin(\pi(t-z_{j}))\), where one such factor in itself is not a trigonometric polynomial (as it is only antiperiodic, but not periodic by 1). Considering general products of root factors thus leads to GTPs.

By taking the logarithm of a generalized trigonometric polynomial (1), we have

\[\log T(t)=\log c_{0}+\sum_{j=1}^{n}\nu_{j}\log\big{|}\sin(\pi(t-z_{j}))\big{|}.\]

In this work we assume for normalization that our trigonometric polynomial or GTP is _monic_, i.e., the "leading coefficient" is \(c_{0}=1\). However, we consider weights, which are fixed, but can as well be constants, so that the weighted norm can incorporate any other prescribed leading coefficient as well. Obviously, the weighted minimax problem is equivalent to minimizing \(\log\|T(\mathbf{z},\cdot)\|_{w}=\sup_{t\in\mathbb{T}}\big(\log w(t)+\sum_{j=1}^{n}\nu_{j}\log|\sin(\pi(t-z_{j}))|\big)\).
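To make this logarithmic reformulation concrete, here is a minimal numerical sanity check (ours, not part of the argument), assuming numpy: for real nodes, \(\log T(\mathbf{z},t)\) indeed equals the corresponding sum of translates of \(\log|\sin(\pi\,\cdot\,)|\).

```python
import numpy as np

# Sanity check of the reformulation: for a monic GTP with real nodes,
# log T(z, t) = sum_j nu_j * log|sin(pi (t - z_j))|, i.e. a sum of
# translates of the kernel log|sin(pi t)|.

rng = np.random.default_rng(1)
n = 3
z = np.sort(rng.uniform(0.0, 1.0, n))    # real nodes in [0, 1)
nu = rng.uniform(0.5, 2.0, n)            # prescribed multiplicities nu_j > 0
t = 0.4321                               # any point avoiding the nodes

T = np.prod(np.abs(np.sin(np.pi * (t - z))) ** nu)        # monic GTP, c_0 = 1
sum_translates = np.sum(nu * np.log(np.abs(np.sin(np.pi * (t - z)))))
print(np.log(T), sum_translates)         # the two values coincide
```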
This reformulation leads to considering sums (and positive linear combinations) of translated copies of the basic "kernel function" \(\log|\sin(\pi t)|\), instead of products of root factors. That reformulation, so standard in logarithmic potential theory, will be the starting point of our presentation of the Fenton method for the current setup.

### Basics of Fenton's sum of translates approach

In this paragraph we present the by now standard notation and terminology for how we use Fenton's method. There is particular need for this clarification because we will use it in _two_ setups: we need it for the torus \(\mathbb{T}\), but from time to time we also refer to, and invoke in our arguments, corresponding results for the interval case. Here we start with notation and terminology which can be interpreted equally for the torus and the real line case, so with a slight abuse of notation we do not distinguish between them. However, in the next subsection we set up a separate terminology, with quantities for the periodic case denoted by a star, because these notions involve some essential alterations. With this long, and sometimes doubled, list of definitions, notions and terminology, these paragraphs will be somewhat longish, but in later sections we will need them for precise references. Note that in most of our definitions we will _not_ assume that the considered functions and setups are periodic; we handle the periodic cases only as special cases, explicitly pointing out the periodicity assumption.

A function \(K:(-1,0)\cup(0,1)\to\mathbb{R}\) is called a _kernel function_ if it is concave on \((-1,0)\) and on \((0,1)\), and if it satisfies

\[\lim_{t\downarrow 0}K(t)=\lim_{t\uparrow 0}K(t). \tag{4}\]

By the concavity assumption these limits exist, and a kernel function has one-sided limits also at \(-1\) and \(1\). We set

\[K(0):=\lim_{t\to 0}K(t),\quad K(-1):=\lim_{t\downarrow-1}K(t)\quad\text{and}\quad K(1):=\lim_{t\uparrow 1}K(t).\]

We note explicitly that we thus obtain the extended continuous function \(K:[-1,1]\to\mathbb{R}\cup\{-\infty\}=:\underline{\mathbb{R}}\), and that we have \(\sup K<\infty\). Also note that a kernel function is almost everywhere differentiable.

A kernel function \(K\) is called _singular_ if

\[K(0)=-\infty.\tag{$\infty$}\]

We say that the kernel function \(K\) is _strictly concave_ if it is strictly concave on both of the intervals \((-1,0)\) and \((0,1)\).

In this paper we consider only systems of kernels which are constant multiples of each other, i.e.,

\[K_{j}(t)=\nu_{j}K(t) \tag{5}\]

for some \(\nu_{1},\ldots,\nu_{n}>0\) and some kernel function \(K(t)\). The condition

\[K^{\prime}(t)-K^{\prime}(t-1)\geq c\quad\text{for a.e. }t\in[0,1],\tag{$PM_{c}$}\]

was called "periodized \(c\)-monotonicity" in [8] and [7]. The particular case \(c=0\) deserves special attention. Then we have

\[K^{\prime}(t)-K^{\prime}(t-1)\geq 0\quad\text{for a.e. }t\in[0,1].\tag{$PM_{0}$}\]

Our main objective is the study of kernels which extend to \(\mathbb{R}\) \(1\)-periodically:

\[K(t-1)=K(t),\quad t\in\mathbb{R}, \tag{6}\]

but sometimes we will invoke more general, not necessarily periodic kernels, too. It is straightforward that (6) implies (\(PM_{0}\)). Note that the log-trigonometric kernel

\[K(t):=\log\big{|}\sin(\pi t)\big{|}\qquad(t\in\mathbb{R}), \tag{7}\]

which is in the focus of our analysis, is periodic (6), strictly concave and singular (\(\infty\)) (and in particular \(K(1)=K(-1)=-\infty\), too).
We will call a function \(J:\mathbb{R}\to\underline{\mathbb{R}}\) an _external \(n\)-field function on \(\mathbb{R}\)_ if it is \(1\)-periodic, bounded above, and assumes finite values at more than \(n\) different points from \([0,1)\). Comparing this definition with that of [7], we see that if \(J\) is an external \(n\)-field function on \(\mathbb{R}\), then \(J\) is an external \(n\)-field function on \([0,1]\). In the opposite direction, if \(J\) is an external \(n\)-field function on \([0,1]\) and \(J(0)=J(1)\), then it can be extended \(1\)-periodically to \(\mathbb{R}\), to an external \(n\)-field function on \(\mathbb{R}\). We use external \(n\)-field functions on \(\mathbb{R}\) from now on, and for simplicity we call them field functions. With a slight abuse of notation we will also consider them as external field functions on \(\mathbb{T}:=\mathbb{R}/\mathbb{Z}\): again, on \(\mathbb{T}\) the defining properties are that \(J:\mathbb{T}\to\underline{\mathbb{R}}\), \(J>-\infty\) at more than \(n\) points of \(\mathbb{T}\), and \(J\) is upper bounded.

For a field function \(J\) we define its _singularity set_ and _finiteness domain_ by

\[X:=X_{J}:=J^{-1}(\{-\infty\})\cap[0,1)\quad\text{and}\quad X^{c}:=[0,1)\setminus X=J^{-1}(\mathbb{R})\cap[0,1). \tag{8}\]

Then \(X^{c}\) has cardinality exceeding \(n\); in particular, \(X\neq[0,1)\). Considering \(J\) as defined on \(\mathbb{T}\), we can replace \([0,1)\) with \(\mathbb{T}\) in all the above.

Given \(n\in\mathbb{N}\), \(n\) kernel functions \(K_{1},\ldots,K_{n}\), and an \(n\)-field function \(J\), pure sum of translates and sum of translates functions are defined as

\[f(\mathbf{x},r):=\sum_{j=1}^{n}K_{j}\left(r-x_{j}\right), \tag{9}\]
\[F(\mathbf{x},r):=J(r)+f(\mathbf{x},r), \tag{10}\]

where \(r\in\mathbb{R}\) and \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\in[0,1]^{n}\); or, analogously, we can define \(f(\mathbf{y},t)\) and \(F(\mathbf{y},t)\) also for \(t\in\mathbb{T}\), \(\mathbf{y}\in\mathbb{T}^{n}\).

### Differences between the torus and the interval setting

A useful step in several of our arguments (already in [5]) is the "cutting up" of the torus at an arbitrary point \(c\in\mathbb{T}\). To formalize it, we introduce the mapping \(\pi_{c}:\mathbb{R}\to\mathbb{T}\), \(\pi_{c}(r):=\{r+c\}=r+c\mod 1\). This constitutes a (multiple) covering mapping of \(\mathbb{T}\) (hence in particular it is continuous), and it is bijective on \([0,1)\), so its inverse \(\pi_{c}^{-1}:=(\pi_{c}|_{[0,1)})^{-1}:\mathbb{T}\to[0,1)\) is bijective, too. However, in the inverse direction the mapping ceases to remain continuous: it is continuous at all \(t\neq c\), \(t\in\mathbb{T}\), but at \(c\) it has a jump.

Cutting up is particularly useful when we want to prove local results, like, e.g., continuity of some mappings at \(\mathbf{x}\in\mathbb{T}^{n}\). Choosing \(c\) appropriately, node systems from \(\mathbb{T}^{n}\), subject to some ordering restriction and close to \(\mathbf{x}\), may correspond to node systems in \([0,1)^{n}\) admitting a specific ordering in the interval. However, ordered node systems do not have a global match on \(\mathbb{T}\). We will detail this phenomenon below.
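As an illustration of these objects, the following sketch (ours; the grid, weight and node choices are arbitrary) evaluates the sum of translates function \(F\) of (9)-(10) for the log-trigonometric kernel (7), together with the suprema of \(F\) over the arcs between consecutive nodes (the quantities \(m_{j}^{*}\) introduced below).

```python
import numpy as np

# The log-trigonometric kernel (7), the sum of translates function (10)
# with kernels K_j = nu_j K as in (5), and arc suprema over the torus.

K = lambda t: np.log(np.abs(np.sin(np.pi * t)))     # periodic, singular kernel

def F(y, t, nu, J):
    """F(y, t) = J(t) + sum_j nu_j K(t - y_j); everything is 1-periodic."""
    return J(t) + sum(nu_j * K(t - y_j) for nu_j, y_j in zip(nu, y))

J = lambda t: np.log(2.0 + np.cos(2 * np.pi * t))   # smooth periodic field (log of a weight)
y = np.array([0.1, 0.45, 0.8])                      # cyclically ordered node system
nu = np.array([1.0, 1.0, 2.0])

t = (np.arange(200000) + 0.5) / 200000              # grid avoiding the nodes
vals = F(y, t, nu, J)
print("sup of F over the torus ~", vals.max())

for j in range(len(y)):                             # arc suprema m_j^*(y)
    lo, hi = y[j], y[(j + 1) % len(y)]
    mask = (t >= lo) & (t <= hi) if lo < hi else (t >= lo) | (t <= hi)
    print(f"m*_{j + 1}(y) ~ {vals[mask].max():.4f}")
```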
In [5] we introduced, for any permutation \(\sigma:\{1,\ldots,n\}\to\{1,\ldots,n\}\), the corresponding simplex on \([0,1]\) as

\[S^{(\sigma)}_{[0,1]}:=\{(x_{1},\ldots,x_{n})\in[0,1]^{n}:0<x_{\sigma(1)}<x_{\sigma(2)}<\ldots<x_{\sigma(n)}<1\}\subset\mathbb{R}^{n}.\]

Its closure is

\[\overline{S}^{(\sigma)}_{[0,1]}=\{(x_{1},\ldots,x_{n})\in[0,1]^{n}:\;0\leq x_{\sigma(1)}\leq x_{\sigma(2)}\leq\ldots\leq x_{\sigma(n)}\leq 1\}\subset\mathbb{R}^{n}.\]

An essential difference between the torus and the interval setup is that in \([0,1]\) we cannot perturb the nodes lying at the endpoints in both directions: they can be moved only towards the center of the interval. That restriction could be considered responsible for the need for some monotonicity assumption on the kernels when proving minimax etc. results in the interval case. However, on the torus a monotonicity assumption is in fact impossible: a periodic and monotone kernel function would necessarily be constant. On the other hand, we have already seen in [5] that perturbations of node systems on the torus, in particular when we are free to decide about the direction of change of the nodes, are very useful. As a consequence, we need to consider all node systems which may arise by means of such a perturbation together. So, we need to consider the case when a node passes over \(0\) and reappears at \(1\) as the same ordering. In fact, that is very natural: on the torus there is no strict ordering, but only an orientation, and the "order of nodes" can only be fixed up to rotation. This we may call cyclic ordering. In this sense we may write \(x_{1}\preccurlyeq\ldots\preccurlyeq x_{n}\) if, starting from \(x_{1}\in\mathbb{T}\) and moving in the counterclockwise direction (that is, according to the positive orientation of the circle), we pass the points in the order of their listing, until after a full rotation we arrive back at the initial point \(x_{1}\). Similarly for the strict precedence notation \(x_{1}\prec\ldots\prec x_{n}\). Correspondingly, arcs are defined as the set of points between two endpoints: \([a,b]:=\{x\in\mathbb{T}\ :\ a\preccurlyeq x\preccurlyeq b\}\) etc. Note that for \(n=2\), \(x_{1}\preccurlyeq x_{2}\) and also \(x_{2}\preccurlyeq x_{1}\) hold simultaneously for all points \(x_{1},x_{2}\in\mathbb{T}\), and that a cyclic ordering of \(n\) nodes is possible in \((n-1)!\) different ways.

The "large simplex on the torus" and its closure are defined as

\[L:=\{(y_{1},y_{2},\ldots,y_{n})\in\mathbb{T}^{n}:\ y_{1}\prec y_{2}\prec\ldots\prec y_{n}(\preccurlyeq y_{1})\}, \tag{11}\]
\[\overline{L}:=\{(y_{1},y_{2},\ldots,y_{n})\in\mathbb{T}^{n}:\ y_{1}\preccurlyeq y_{2}\preccurlyeq\ldots\preccurlyeq y_{n}(\preccurlyeq y_{1})\}. \tag{12}\]

One may consider various permutations for the cyclic ordering, but by relabeling we will always assume that the ordering is just this natural cyclic order. Pulling back by any \(\pi_{c}^{-1}\) coordinatewise, we see that

\[\pi_{c}^{-1}(\overline{L})=\cup_{j=0}^{n-1}\overline{S}_{[0,1]}^{(\sigma_{j})},\quad\sigma_{j}(\ell):=\begin{cases}\ell+j,&\text{if }\ell=1,\ldots,n-j\\ \ell+j-n,&\text{if }\ell=n-j+1,\ldots,n.\end{cases} \tag{13}\]

This decomposition is not disjoint, given that \(\overline{L}\) is connected. Similarly, no representation of \(L\) by disjoint \(S_{[0,1]}^{(\sigma)}\) exists, as \(L\) is connected, too.
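The pull-back \(\pi_{c}^{-1}\) and the decomposition (13) are easy to experiment with numerically; the following sketch (ours; the function names are hypothetical) computes \(\pi_{c}^{-1}(\mathbf{y})\) and identifies the simplex \(\overline{S}_{[0,1]}^{(\sigma_{j})}\) it lands in, for a few cut points \(c\).

```python
import numpy as np

# Cutting up the torus at c: pull back a cyclically ordered node system y
# to x = pi_c^{-1}(y) in [0,1)^n; by (13), x lies in S^(sigma_j), where j
# counts the coordinates that wrap past 0.

def pull_back(y, c):
    """pi_c^{-1} applied coordinatewise: x_i = (y_i - c) mod 1, in [0, 1)."""
    return np.mod(np.asarray(y) - c, 1.0)

def simplex_index(y, c):
    """The j with pi_c^{-1}(y) in S^(sigma_j) of (13), assuming the
    representatives satisfy y_1 < ... < y_n in [0, 1)."""
    return int(np.sum(np.asarray(y) < c)) % len(y)

y = [0.15, 0.4, 0.7, 0.9]               # y_1 < y_2 < y_3 < y_4 in [0, 1)
for c in [0.0, 0.5, 0.95]:
    x = pull_back(y, c)
    print(f"c={c}: x={np.round(x, 3)}, lands in S^(sigma_{simplex_index(y, c)})")
```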
For \(\mathbf{x}\in\overline{S}_{[0,1]}^{(\sigma)}\), we have defined the intervals

\[I_{0}(\mathbf{x}):=[0,x_{\sigma(1)}],\quad I_{j}(\mathbf{x}):=[x_{\sigma(j)},x_{\sigma(j+1)}]\quad(1\leq j\leq n-1),\quad I_{n}(\mathbf{x}):=[x_{\sigma(n)},1],\]

and the corresponding "interval maxima" (in fact suprema, not maxima) as

\[m_{j}(\mathbf{x}):=\sup\{F(\mathbf{x},t):\ t\in I_{j}(\mathbf{x})\},\quad j=0,1,\ldots,n.\]

Analogously, for an arbitrary \(\mathbf{y}\in\overline{L}\) we define

\[I_{j}^{*}(\mathbf{y}):=\{t\in\mathbb{T}:\ y_{j}\preccurlyeq t\preccurlyeq y_{j+1}\},\ j=1,2,\ldots,n-1, \tag{14}\]
\[I_{n}^{*}(\mathbf{y}):=\{t\in\mathbb{T}:\ y_{n}\preccurlyeq t\preccurlyeq y_{1}\}, \tag{15}\]

and the corresponding "arc maxima"

\[m^{*}_{j}(\mathbf{y}):=\sup\{F(\mathbf{y},t):\ t\in I^{*}_{j}(\mathbf{y})\},\quad j=1,2,\ldots,n.\]

The correspondence between \(I^{*}_{k}(\mathbf{y})\) and \(I_{j}(\mathbf{x})\), and also between \(m^{*}_{k}(\mathbf{y})\) and \(m_{j}(\mathbf{x})\), can be described easily. Fix \(c\in\mathbb{T}\) arbitrarily and let \(\mathbf{y}\in\overline{L}\). Then \(\pi^{-1}_{c}(y_{1}),\ldots,\pi^{-1}_{c}(y_{n})\in[0,1)\); moreover, setting \(x_{j}:=\pi^{-1}_{c}(y_{j})\), the coordinates of the vector \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) follow according to the cyclic ordering of the coordinates of \(\mathbf{y}\), that is, if \(y_{j}\prec c\preccurlyeq y_{j+1}\), then \(\mathbf{x}\in\overline{S}^{(\sigma_{j})}_{[0,1]}\). Extending the definition of \(\pi_{c}\) and its inverse to vectors, we may write \(\pi^{-1}_{c}(\mathbf{y})=\mathbf{x}\in\overline{S}^{(\sigma_{j})}_{[0,1]}\). Then we find \(\pi_{c}(I_{k}(\mathbf{x}))=\pi_{c}([x_{\sigma_{j}(k)},x_{\sigma_{j}(k+1)}])=I^{*}_{k}(\mathbf{y})\) for \(k=1,\ldots,n-1\), while \(\pi_{c}(I_{0}(\mathbf{x})\cup I_{n}(\mathbf{x}))=I^{*}_{n}(\mathbf{y})\). Accordingly, the interval and arc maxima correspond to each other as follows:

\[m^{*}_{k}(\mathbf{y})=m_{k}(\mathbf{x})\quad(1\leq k\leq n-1),\qquad m^{*}_{n}(\mathbf{y})=\max(m_{0}(\mathbf{x}),m_{n}(\mathbf{x})). \tag{16}\]

Note that this representation does depend on the ordering of the \(x_{k}\), because \(I_{0}(\mathbf{x})=[0,x_{\sigma_{j}(1)}]=[0,\pi^{-1}_{c}(y_{j+1})]=\pi^{-1}_{c}([c,y_{j+1}])\), and \(I_{n}(\mathbf{x})=[x_{\sigma_{j}(n)},1]=[\pi^{-1}_{c}(y_{j}),1]=\pi^{-1}_{c}([y_{j},c])\).

In [7, 9, 6] we have already investigated the following minimax and maximin problems on the interval \([0,1]\):

\[\overline{m}(\mathbf{x}):=\max_{j=0,1,\ldots,n}m_{j}(\mathbf{x})=\sup\{F(\mathbf{x},r):\ r\in[0,1]\},\quad\underline{m}(\mathbf{x}):=\min_{j=0,1,\ldots,n}m_{j}(\mathbf{x}),\]
\[M(\overline{S}_{[0,1]}):=\inf\{\overline{m}(\mathbf{x}):\ \mathbf{x}\in\overline{S}_{[0,1]}\},\qquad m(\overline{S}_{[0,1]}):=\sup\{\underline{m}(\mathbf{x}):\ \mathbf{x}\in\overline{S}_{[0,1]}\}.\]

The analogous quantities on the torus are

\[\overline{m}^{*}(\mathbf{y}):=\max_{j=1,\ldots,n}m^{*}_{j}(\mathbf{y})=\sup\{F(\mathbf{y},t):\ t\in\mathbb{T}\},\quad\underline{m}^{*}(\mathbf{y}):=\min_{j=1,\ldots,n}m^{*}_{j}(\mathbf{y}),\]
\[M^{*}(\overline{L}):=\inf\{\overline{m}^{*}(\mathbf{y}):\ \mathbf{y}\in\overline{L}\},\quad m^{*}(\overline{L}):=\sup\{\underline{m}^{*}(\mathbf{y}):\ \mathbf{y}\in\overline{L}\}.\]

Note that \(m^{*}_{n}(\pi_{c}(\mathbf{x}))=\max\,(m_{n}(\mathbf{x}),m_{0}(\mathbf{x}))\),

\[\overline{m}^{*}(\pi_{c}(\mathbf{x}))=\overline{m}(\mathbf{x}), \tag{17}\]

and in view of (13) we also have\({}^{2}\) \(M^{*}(\overline{L})=\min_{j=1,\ldots,n}M(\overline{S}^{(\sigma_{j})})\). However, there is no similar easy formula for \(m^{*}(\overline{L})\).

Footnote 2: In fact, this formula provides an alternative way to prove Theorem 12, but not Theorem 13, so we have decided to follow the forthcoming approach.

## 3 Continuity results

In this section, we collect the continuity properties of maximum functions. First, we recall Lemma 3.3 from [9]:

**Lemma 2**.: _Let \(n\in\mathbb{N}\), \(\nu_{j}>0\) (\(j=1,\ldots,n\)), let \(J\) be an \(n\)-field function on \([0,1]\), and let \(K\) be a kernel function on \([-1,1]\)._

_Then \(\overline{m}:[0,1]^{n}\to\mathbb{R}\) is continuous._

From this we easily deduce the following.

**Proposition 3**.: _Let \(n\in\mathbb{N}\), \(\nu_{j}>0\) (\(j=1,\ldots,n\)), let \(J\) be a field function and let \(K\) be a kernel function._

_Then \(\overline{m}^{*}:\mathbb{T}^{n}\to\mathbb{R}\) is continuous._

Proof.: Let \(\mathbf{a}\in\mathbb{T}^{n}\), \(\mathbf{a}=(a_{1},\ldots,a_{n})\) be fixed and \(c\in\mathbb{T}\setminus\{a_{1},\ldots,a_{n}\}\). We show that \(\overline{m}^{*}\) is continuous in a small neighborhood of \(\mathbf{a}\). Let \(\delta_{0}<\min_{j=1,\ldots,n}\operatorname{dist}_{\mathbb{T}}\left(c,a_{j}\right)\).
Footnote 2: In fact, this formula provides an alternative way to prove Theorem 12, but not of Theorem 13, so we have decided to follow the forthcoming approach. ## 3 Continuity results In this section, we collect the continuity properties of maximum functions. First, we recall Lemma 3.3 from [9]: **Lemma 2**.: _Let \(n\in\mathbb{N}\), \(\nu_{j}>0\) (\(j=1,\ldots,n\)), let \(J\) be an \(n\)-field function on \([0,1]\), and let \(K\) be a kernel function on \([-1,1]\)._ _Then \(\overline{m}:[0,1]^{n}\to\mathbb{R}\) is continuous._ From this we easily deduce the following. **Proposition 3**.: _Let \(n\in\mathbb{N}\), \(\nu_{j}>0\) (\(j=1,\ldots,n\)), let \(J\) be a field function and let \(K\) be a kernel function._ _Then \(\overline{m}^{*}:\mathbb{T}^{n}\to\mathbb{R}\) is continuous._ Proof.: Let \(\mathbf{a}\in\mathbb{T}^{n}\), \(\mathbf{a}=(a_{1},\ldots,a_{n})\) be fixed and \(c\in\mathbb{T}\setminus\{a_{1},\ldots,a_{n}\}\). We show that \(\overline{m}^{*}\) is continuous in a small neighborhood of \(\mathbf{a}\). Let \(\delta_{0}<\min_{j=1,\ldots,n}\operatorname{dist}_{\mathbb{T}}\left(c,a_{j}\right)\). We pull back \(\mathbf{y}\) to \([0,1]^{n}\) coordinatewise: \(x_{j}:=\pi_{c}^{-1}(y_{j})\), \(j=1,\ldots,n\). Then \(x_{j}\in(0,1)\) and \(x_{j}=x_{j}(y_{j})\) is continuous (since \(\operatorname{dist}_{\mathbb{T}}\left(y_{j},a_{j}\right)<\delta_{0}\), and \(\pi_{c}^{-1}\) is continuous save at \(c\)), so \(\mathbf{x}:=\mathbf{x}(\mathbf{y}):=\pi_{c}^{-1}(\mathbf{y})\) is changing continuously in the given neighborhood of \(\mathbf{a}\). We may write that \(F(\mathbf{y},t)=F(\pi_{c}(\mathbf{x}(\mathbf{y})),\pi_{c}(r))\) where \(\pi_{c}(r)=t\) and after simplifying the notation, we simply write \(F(\mathbf{y},t)=F(\mathbf{x}(\mathbf{y}),r)\), with \(t\in\mathbb{T}\) corresponding to \(r:=\pi_{c}^{-1}(t)\in[0,1)\). Therefore, with (17), we see that \(\overline{m}^{*}(\mathbf{y})=\overline{m}(\mathbf{x}(\mathbf{y}))\). The continuity of \(\overline{m}^{*}\) at \(\mathbf{a}\) follows from the continuity of \(\mathbf{x}(\mathbf{y})\) at \(\mathbf{a}\) and the continuity of \(\overline{m}\) at \(\mathbf{x}\), the latter coming from Lemma 2. We show continuity of the arc maxima functions \(m_{j}^{*}\) in some important cases. **Proposition 4**.: _Let \(n\in\mathbb{N}\) and \(k\in\{1,2,\ldots,n\}\) be fixed and let \(K_{1},\ldots,K_{n}\) be arbitrary kernel functions._ 1. _Suppose that_ \(J\) _is an arbitrary_ \(n\)_-field function and all_ \(K_{j}\)_,_ \(j=1,2,\ldots,n\) _satisfy_ \((\infty)\)_. Then_ \(m_{k}^{*}\) _is extended continuous on_ \(\overline{L}\)_._ 2. _Suppose that_ \(J\) _is an extended continuous field function. Then_ \(m_{k}^{*}\) _is extended continuous on_ \(\overline{L}\)_._ 3. _If_ \(J\) _is an upper semicontinuous_ \(n\)_-field function, then_ \(m_{k}^{*}\) _is upper semicontinuous on_ \(\overline{L}\)_._ Proof.: To see (a), let \(\mathbf{a}\in\overline{L}\) be fixed. If \(I_{k}^{*}(\mathbf{a})\neq\mathbb{T}\), then let \(c\in\mathbb{T}\setminus I_{k}^{*}(\mathbf{a})\), \(c\not\in\{a_{1},\ldots,a_{n}\}\). Then there is a \(j\in\{0,1,\ldots,n-1\}\) (see (13)) such that \(\pi_{c}^{-1}(\mathbf{a})\in\overline{S}_{[0,1]}^{(\sigma_{j})}\). Moreover, \(0<\pi_{c}^{-1}(a_{1}),\ldots,\pi_{c}^{-1}(a_{n})<1\). So, with (16) we can write \(m_{k}^{*}(\mathbf{y})=m_{k}(\mathbf{x})\) when \(\mathbf{y}\in\overline{L}\) is close to \(\mathbf{a}\) and \(\mathbf{x}=\pi_{c}^{-1}(\mathbf{y})\), \(\mathbf{x}\in\overline{S}_{[0,1]}^{(\sigma_{j})}\). 
Since \(\pi_{c}^{-1}:\overline{L}\to\overline{S}_{[0,1]}\) is continuous near \(\mathbf{a}\), and, by Lemma 3.1 from [9], \(m_{j}:\overline{S}_{[0,1]}\to\underline{\mathbb{R}}\) is continuous, we obtain the assertion of this part.

If \(I_{k}^{*}(\mathbf{a})=\mathbb{T}\), then we follow the same steps with \(c\in\mathbb{T}\setminus\{a_{k}\}\), but the arc \(I_{k}^{*}(\mathbf{a})\), and all arcs \(I_{k}^{*}(\mathbf{y})\) with \(\mathbf{y}\) close to \(\mathbf{a}\), necessarily split into two intervals via \(\pi_{c}^{-1}(\cdot)\), so we use the second half of (16), and we get \(m_{k}^{*}(\mathbf{y})=\max\left(m_{0}(\mathbf{x}),m_{n}(\mathbf{x})\right)\) when \(\mathbf{y}\in\overline{L}\) is close to \(\mathbf{a}\). Continuing with the same steps, we obtain the assertion.

The proof of (b) is straightforward.

The proof of (c) follows the same steps as that of (a), using Proposition 3.6 (a) from [9] and the fact that the maximum of two upper semicontinuous functions is again upper semicontinuous.

**Example 5**.: _If \(J\) is not continuous and \(K_{j}\) does not satisfy \((\infty)\), then \(m_{j}^{*}\) is not continuous on \(\overline{L}\)._

_To see this, take \(y^{*}\in\mathbb{T}\) where \(J\) is not continuous; we may assume \(y^{*}=1/2\). Let \(n=2\) and consider \(\mathbf{x}=(x,x)\) where \(x\approx 1/2\). Then \(m_{1}^{*}(\mathbf{x})=\sup_{I_{1}^{*}(\mathbf{x})}F(\mathbf{x},\cdot)=F(\mathbf{x},x)=J(x)+2K(0)\) and \(m_{1}^{*}((1/2,1/2))=F((1/2,1/2),1/2)=J(1/2)+2K(0)\). Hence \(m_{1}^{*}(\cdot)\) is not continuous at \((1/2,1/2)\)._

Let us remark that Berge proved a maximum theorem about partial maxima of bivariate functions, see, e.g., [1]; but that result is not applicable here, since his approach requires bivariate continuity. In our case, \(J\) may be discontinuous, and the continuity of \(m_{j}\) or \(\overline{m}\) is thus nontrivial.

## 4 Perturbation lemmas

The first perturbation lemma describes the behavior of sum of translates functions when two nodes are pulled apart. It appeared in several forms, e.g., in [5] (see Lemma 11.5), in [12] (Lemma 10 on p. 1069), or in [7] (Lemma 3.1). A similar form can be found in [10] (see around formula (15)).

**Lemma 6** (**Perturbation lemma**).: _Let \(K\) be a kernel function which is periodic, see (6). Let \(0\leq\alpha<a<b<\beta\leq 1\) and \(p,q>0\). Set_ \[\mu:=\frac{p(a-\alpha)}{q(\beta-b)}. \tag{18}\]

1. _If_ \(\mu=1\)_, then_ \[pK(t-\alpha)+qK(t-\beta)\leq pK(t-a)+qK(t-b) \tag{19}\] _holds for every_ \(t\in[0,\alpha]\cup[\beta,1]\)_._
2. _Additionally, if_ \(K\) _is strictly concave, then (19) holds with strict inequality._
3. _If_ \(\mu=1\)_, then_ \[pK(t-\alpha)+qK(t-\beta)\geq pK(t-a)+qK(t-b) \tag{20}\] _holds for every_ \(t\in[a,b]\)_._
4. _Additionally, if_ \(K\) _is strictly concave, then (20) holds with strict inequality._

**Lemma 7** (**Trivial Lemma**).: _Let \(f,g,h:D\to\underline{\mathbb{R}}\) be functions on some Hausdorff topological space \(D\) and assume that_

(i) _either_ \(f,g,h\) _are all upper semicontinuous,_

(ii) _or_ \(f,g\) _are extended continuous and_ \(h\) _is locally upper bounded, but otherwise arbitrary._

_Let \(\emptyset\neq A\subseteq B\subseteq D\) be arbitrary. Assume_ \[f(t)<g(t)\quad\text{ for all }t\in A. \tag{21}\] _If \(A\) is compact, then_ \[\sup_{A}(f+h)<\sup_{B}(g+h)\qquad\text{ unless }\qquad h\equiv-\infty\quad\text{on }\quad A. \tag{22}\]

Proof.: The straightforward proof of (i) was given in [7] as Lemma 3.2. The proof of (ii) is similar, so we leave it to the reader.
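Although not needed for the formal development, a quick numerical spot-check can make Lemma 6 tangible. The following sketch is our own ad hoc illustration: the kernel is the log-sine kernel used later in Section 7, and the data \(\alpha,a,b,\beta,p,q\) are arbitrary choices satisfying \(\mu=1\); the grid resolutions are likewise ad hoc.

```python
import numpy as np

# Spot-check of Lemma 6 for the log-sine kernel K(t) = log|sin(pi t)|.
# All numbers (alpha, a, b, beta, p, q) are ad hoc choices with mu = 1.
K = lambda t: np.log(np.abs(np.sin(np.pi * t)))

alpha, beta, p, q = 0.1, 0.8, 2.0, 1.0
a, b = 0.15, 0.7
assert np.isclose(p * (a - alpha), q * (beta - b))  # mu = p(a-alpha)/(q(beta-b)) = 1

old = lambda t: p * K(t - alpha) + q * K(t - beta)  # left-hand side of (19)/(20)
new = lambda t: p * K(t - a) + q * K(t - b)         # right-hand side of (19)/(20)

t_out = np.concatenate([np.linspace(1e-4, alpha - 1e-4, 300),
                        np.linspace(beta + 1e-4, 1 - 1e-4, 300)])
t_in = np.linspace(a + 1e-4, b - 1e-4, 600)

assert np.all(old(t_out) <= new(t_out))  # (19); strict, as K is strictly concave
assert np.all(old(t_in) >= new(t_in))    # (20)
print("Lemma 6 inequalities verified on the sampled grids.")
```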
The following lemma is rather similar to Lemma 4.1 of [7]. However, there are several differences, in which the version below is stronger than the former one. First, here we do not assume upper semicontinuity of the field function; this is made possible by the observation that there is a version of the Trivial Lemma which relaxes that condition on \(h\) (even if it uses somewhat stronger assumptions on the continuity of \(f\) and \(g\), which, on the other hand, are clearly available here). Second, we assume the non-degeneracy \(w_{i+1}>w_{i}\) only for indices \(i\in\mathcal{I}\), again a delicate novelty of the current version. Third, we drop the condition that \(K\) be monotone, an essentially necessary assumption for the interval \([0,1]\) but, as was already mentioned in the Introduction, not required in the periodic case. In view of all these differences, as well as of the slightly different setup of having only \(n\) arcs determined by the \(n\) node points (and not \(n+1\) intervals), we present the full proof of the lemma, even if its basic idea and a large part of the details repeat the former argument.

**Lemma 8** (**General maximum perturbation lemma on the torus**).: _Let \(n\in\mathbb{N}\), and let \(\nu_{1},\ldots,\nu_{n}>0\) be given positive coefficients. Let \(K\) be a kernel function on \(\mathbb{T}\), and let \(J\) be an arbitrary \(n\)-field function._

_Let \(\mathbf{w}\in\overline{L}\) and \(\mathcal{I}\cup\mathcal{J}=\{1,\ldots,n\}\) be a non-trivial partition, and assume that for all \(i\in\mathcal{I}\), we have \(w_{i}<w_{i+1}\) (which holds in particular, independently of \(\mathcal{I}\), if \(\mathbf{w}\in L\))._

_Then, arbitrarily close to \(\mathbf{w}\), there exists \(\mathbf{w}^{\prime}\in\overline{L}\setminus\{\mathbf{w}\}\), essentially different from and less degenerate than \(\mathbf{w}\) in the sense that_ \[w^{\prime}_{\ell}\neq w_{\ell}\quad\text{ unless }\quad\{\ell-1,\ell\}\subset\mathcal{I}\quad\text{ or }\quad\{\ell-1,\ell\}\subset\mathcal{J} \tag{23}\] _and_\({}^{3}\)

Footnote 3: Note that here \(w_{\ell}=w_{\ell+1}\) excludes \(\ell\in\mathcal{I}\), so only \(\mathcal{J}\) can contain all three indices listed.

\[w^{\prime}_{\ell}\neq w^{\prime}_{\ell+1}\quad\text{unless }\quad\{\ell-1,\ell,\ell+1\}\subset\mathcal{J}\quad\text{and}\quad w_{\ell}=w_{\ell+1}, \tag{24}\] _(in particular, if \(\mathbf{w}\in L\) then necessarily \(\mathbf{w}^{\prime}\in L\)), and such that it satisfies_ \[F(\mathbf{w}^{\prime},t)\leq F(\mathbf{w},t)\text{ for all }t\in I_{i}^{*}(\mathbf{w}^{\prime})\quad\text{and}\quad I_{i}^{*}(\mathbf{w}^{\prime})\subseteq I_{i}^{*}(\mathbf{w})\text{ for all }i\in\mathcal{I}; \tag{25}\] \[F(\mathbf{w}^{\prime},t)\geq F(\mathbf{w},t)\text{ for all }t\in I_{j}^{*}(\mathbf{w})\quad\text{and}\quad I_{j}^{*}(\mathbf{w}^{\prime})\supseteq I_{j}^{*}(\mathbf{w})\text{ for all }j\in\mathcal{J}. \tag{26}\]

_As a result, we also have_ \[m_{i}^{*}(\mathbf{w}^{\prime})\leq m_{i}^{*}(\mathbf{w})\text{ for }i\in\mathcal{I}\quad\text{and}\quad m_{j}^{*}(\mathbf{w}^{\prime})\geq m_{j}^{*}(\mathbf{w})\text{ for }j\in\mathcal{J} \tag{27}\] _for the corresponding torus maxima._

_Moreover, if \(K\) is strictly concave, then the inequalities in (25) and (26) are strict for all points in the respective arcs where \(J(t)\neq-\infty\)._

_Furthermore, for strictly concave \(K\) the inequalities in (27) are also strict for all indices \(k\) with non-singular \(I_{k}^{*}(\mathbf{w})\)._

Proof.: Before the main argument, we observe that the assertion in (27) is indeed a trivial consequence of the previous inequalities (26) and (25), so we need not give a separate proof for that. Second, the inequalities (25) and (26) follow from \[f(\mathbf{w}^{\prime},t)\leq f(\mathbf{w},t)\ (\forall t\in I_{i}^{*}(\mathbf{w}^{\prime}))\quad\text{and}\quad I_{i}^{*}(\mathbf{w}^{\prime})\subseteq I_{i}^{*}(\mathbf{w})\quad\text{for all }i\in\mathcal{I}; \tag{28}\] \[f(\mathbf{w}^{\prime},t)\geq f(\mathbf{w},t)\ (\forall t\in I_{j}^{*}(\mathbf{w}))\quad\text{and}\quad I_{j}^{*}(\mathbf{w}^{\prime})\supseteq I_{j}^{*}(\mathbf{w})\quad\text{for all }j\in\mathcal{J}. \tag{29}\]

Moreover, strict inequalities in (25) and (26) for all points \(t\) with \(J(t)\neq-\infty\) will follow if we can prove strict inequalities in (28) and (29) _for all values of \(t\)_ in the said compact arcs. Furthermore, in case we have strict inequalities in (28) and (29) for all points \(t\), then for non-singular \(I_{k}^{*}(\mathbf{w})\) this entails strict inequalities also in (27) (for the corresponding \(k\)). To see this, one may refer back to the Trivial Lemma 7 (ii) with \(\{f,g\}=\{f(\mathbf{w},\cdot),f(\mathbf{w}^{\prime},\cdot)\}\), \(h=J\), \(\{A,B\}=\{I_{k}^{*}(\mathbf{w}),I_{k}^{*}(\mathbf{w}^{\prime})\}\). Here in the case \(k=i\in\mathcal{I}\) we need to use that the arc \(I_{i}^{*}(\mathbf{w})\) is not degenerate for \(i\in\mathcal{I}\); hence if it is nonsingular, too, then either \(J|_{I_{i}^{*}(\mathbf{w}^{\prime})}\equiv-\infty\), and then we have the strict inequality \(m_{i}^{*}(\mathbf{w})>-\infty=m_{i}^{*}(\mathbf{w}^{\prime})\), or \(J(t)>-\infty\) at some points of \(I_{i}^{*}(\mathbf{w}^{\prime})\), and then an application of Lemma 7 furnishes the required strict inequality. (The other case \(k=j\in\mathcal{J}\) is easier, because nonsingularity of \(I_{j}^{*}(\mathbf{w})\) implies nonsingularity of the larger arc \(I_{j}^{*}(\mathbf{w}^{\prime})\), too, and then the application of the lemma need not be coupled with considerations of an identically \(-\infty\) field.)

So the proof hinges upon showing (23), (24), (28) and (29), for any \(n\)-field function and any kernel function, coupled with the strict inequality assertion in (28) and (29) for all \(t\) belonging to the said compact arcs, in case \(K\) is strictly concave.

For \(n=0\) or \(n=1\) there is no nontrivial partition of the index set \(\{1,\ldots,n\}\), hence the assertion is void and true. For \(n=2\) there is essentially only one way to split the index set in a nontrivial way, so the statement will be part of the following, more general setup in Case \(0\), which we prove directly. Actually, the \(n=2\) case can be proved directly from Lemma 6, but we will need the more general Case \(0\) anyway.

**Case \(\mathbf{0}\)**.
We prove directly the assertion when \(\mathcal{I},\mathcal{J}\) contain no neighboring indices, so \(n\) must be even and \(\mathcal{I}\) and \(\mathcal{J}\) partition \(\{1,\ldots,n\}\) into the subsets of odd and even natural numbers from \(1\) to \(n\). We can assume that \(\mathcal{I}=(2\mathbb{N}+1)\cap\{1,\ldots,n\}\) and \(\mathcal{J}=2\mathbb{N}\cap\{1,\ldots,n\}\) (the other case being a simple change of the cut, i.e., starting the listing of the cyclic ordering of nodes from one node later). Note that whenever \(j\in\mathcal{J}\) (i.e., when \(j\) is even), we necessarily have \(j-1,j+1\in\mathcal{I}\), and \(w_{j-1}<w_{j}\leq w_{j+1}<w_{j+2}\).

Denote \(\delta:=\min_{i\in\mathcal{I}}|I_{i}^{*}(\mathbf{w})|\) (where \(|I_{i}^{*}(\mathbf{w})|\) is the length of the arc \([w_{i},w_{i+1}]\)), which is positive by assumption. Our new perturbed node system \(\mathbf{w}^{\prime}\) will be, with an arbitrary \(0<h<\frac{1}{2}\delta\cdot\min\{\nu_{1},\ldots,\nu_{n}\}\), the system \[\mathbf{w}^{\prime}:=(w_{1}^{\prime},\ldots,w_{n}^{\prime})\quad\text{with}\quad w_{\ell}^{\prime}:=w_{\ell}-(-1)^{\ell}\frac{1}{\nu_{\ell}}h,\quad\ell=1,2,\ldots,n. \tag{30}\]

The definition guarantees that \(w_{\ell}^{\prime}=w_{\ell}\) happens for no index \(\ell\), furnishing (23). It is easy to see that, by the choice of the perturbation lengths, no two consecutive nodes will change ordering or reach each other: for \(j\in\mathcal{J}\) the nodes \(w_{j},w_{j+1}\) are moved farther apart, while for \(i\in\mathcal{I}\) the nodes \(w_{i},w_{i+1}\) are moved closer, but only by \(h/\nu_{i}+h/\nu_{i+1}<\delta\leq w_{i+1}-w_{i}\). It follows that the node system keeps its cyclic ordering, i.e., it remains in \(\overline{L}\). (In a minute we will see that, moreover, there remains no degenerate arc, so \(\mathbf{w}^{\prime}\in L\setminus\{\mathbf{w}\}\).) Obviously, \(I_{j}^{*}(\mathbf{w}^{\prime})\supset I_{j}^{*}(\mathbf{w})\) holds for all \(j\in\mathcal{J}\), and \(I_{i}^{*}(\mathbf{w}^{\prime})\subset I_{i}^{*}(\mathbf{w})\) holds for all \(i\in\mathcal{I}\), with the inclusions strict, furnishing the second parts of (25) and (26) (matching the second parts of (28) and (29), too). In particular, even if \(I_{j}^{*}(\mathbf{w})\) may be degenerate for some \(j\in\mathcal{J}\), i.e., for some even \(j\), the \(j\)th arc of the perturbed system will not be such: \(w_{j}^{\prime}<w_{j}\leq w_{j+1}<w_{j+1}^{\prime}\) for any even \(j\in\mathcal{J}\). We get \(\mathbf{w}^{\prime}\in L\), entailing (24).

Take now an even indexed arc \(I_{2k}^{*}(\mathbf{w})=[w_{2k},w_{2k+1}]\), so \(2k\in\mathcal{J}\). (When \(2k=n\), then we must read \(2k+1=n+1\equiv 1\), i.e., \(w_{2k+1}=w_{1}\).) Our perturbation of nodes in (30) can now be grouped as _pairs of changing nodes_ \(w_{2\ell-1},w_{2\ell}\) among \(w_{1},\ldots,w_{2k}\), and then again among \(w_{2k+1},\ldots,w_{n}\), recalling that \(n\) is even. Now, _the pairs_ are always changed so that the arcs in between shrink, and shrink exactly as described in Lemma 6. We apply this lemma for each pair of such nodes with the choices \(a=w_{2\ell-1}^{\prime}\), \(b=w_{2\ell}^{\prime}\), \(\alpha=w_{2\ell-1}\), \(\beta=w_{2\ell}\), \(p=\nu_{2\ell-1}\), \(q=\nu_{2\ell}\). This gives that for each such pair of changes, for \(t\) _outside of the enclosed arc_ \((w_{2\ell-1},w_{2\ell})=I_{2\ell-1}^{*}(\mathbf{w})\) we have \[\nu_{2\ell-1}K(t-w_{2\ell-1}^{\prime})+\nu_{2\ell}K(t-w_{2\ell}^{\prime})\geq\nu_{2\ell-1}K(t-w_{2\ell-1})+\nu_{2\ell}K(t-w_{2\ell}). \tag{31}\]

Note that \(I_{2k}^{*}(\mathbf{w})\), hence any \(t\in I_{2k}^{*}(\mathbf{w})\), is _always outside_ of the arcs \(I_{2\ell-1}^{*}(\mathbf{w})\); therefore (31) holds for the given, fixed \(t\in I_{2k}^{*}({\bf w})\) and for all \(\ell\). So we find \[f({\bf w},t) =\sum_{\ell=1}^{n/2}\left(\nu_{2\ell-1}K(t-w_{2\ell-1})+\nu_{2\ell}K(t-w_{2\ell})\right)\] \[\leq\sum_{\ell=1}^{n/2}\left(\nu_{2\ell-1}K(t-w_{2\ell-1}^{\prime})+\nu_{2\ell}K(t-w_{2\ell}^{\prime})\right)=f({\bf w}^{\prime},t). \tag{32}\] Furthermore, all the appearing inequalities are strict in case \(K\) is strictly concave. We have proved (29), even with strict inequality under the appropriate assumptions.

The proof of (28) runs analogously, by grouping the change of nodes as a change of pairs \(w_{2\ell},w_{2\ell+1}\) for \(\ell=1,\ldots,n/2\), writing \(w_{2(n/2)+1}=w_{n+1}=w_{1}\) according to periodicity. For these pairs the enclosed arcs \(I_{2\ell}^{*}\), \(2\ell\in{\cal J}\), are getting larger after the perturbation, so outside these enlarged arcs, that is, for all points belonging to \(I_{2k-1}^{*}({\bf w}^{\prime})\) for some fixed \(k\), the changed value \(f({\bf w}^{\prime},t)\) will not exceed (and in case of strict concavity, will be strictly smaller than) \(f({\bf w},t)\). This means a nonincreasing (decreasing) change for all \(I_{i}^{*}({\bf w}^{\prime})\), entailing (28) together with the respective strict inequality statement. The proof of Case \(0\) is thus completed. Therefore, the case \(n=2\) is also proved.

From here we continue our argumentation by induction. Let now \(n>2\) and assume, as inductive hypothesis, the validity of the assertions for all \(n^{*}\leq\widetilde{n}:=n-1\) and for any choice of kernel and \(n^{*}\)-field functions. In view of Case \(0\) above, there remains the case when there are neighboring indices \(k-1,k\) belonging to the same index set \({\cal I}\) or \({\cal J}\). In view of the cyclic ordering and to avoid indexing complications, assume that we also have \(1<k<n\), which is a possibility for any \(n\geq 3\). We separate two cases.

**Case 1**. Assume first that \(w_{k-1}<w_{k}<w_{k+1}\) holds. Then we consider the kernel function \(\widetilde{K}:=K\), and the \(\widetilde{n}\)-field function \(\widetilde{J}:=\nu_{k}K(\cdot-w_{k})\) (which is indeed an \(\widetilde{n}\)-field function, because it attains \(-\infty\) at most at one point, namely \(w_{k}\), in case \(K\) is singular). Correspondingly, now the sum of translates function \(\widetilde{F}\) is formed by using \(\widetilde{n}=n-1\) translates with coefficients \(\nu_{1},\ldots,\nu_{k-1},\nu_{k+1},\ldots,\nu_{n}\) and with respect to the node system \[\widetilde{{\bf w}}:=(w_{1},w_{2},\ldots,w_{k-1},w_{k+1},\ldots,w_{n}).\] Formally, the indices change: \(\widetilde{w}_{\ell}=w_{\ell}\) for \(\ell=1,\ldots,k-1\), but \(\widetilde{w}_{\ell}=w_{\ell+1}\) for \(\ell=k,\ldots,n-1\), the \(k\)th coordinate being left out.
We apply the same change of indices in the partition: \(k\) is dropped (but the corresponding index set \({\cal I}\) or \({\cal J}\) will not become empty, for it contains \(k-1\)); and then we shift the indices one left for \(\ell>k\): so \[\widetilde{{\cal I}} :=\{i\in{\cal I}:i<k\}\cup\{i-1:i\in{\cal I},i>k\}\mbox{ and }\] \[\widetilde{{\cal J}} :=\{j\in{\cal J}:j<k\}\cup\{j-1:j\in{\cal J},j>k\}.\]

Observe that \(\widetilde{F}(\widetilde{\mathbf{w}},t)=f(\mathbf{w},t)\) for all \(t\in\mathbb{T}\), while \[I^{*}_{\ell}(\widetilde{\mathbf{w}})=\begin{cases}I^{*}_{\ell}(\mathbf{w}),&\text{if}\quad\ell<k-1,\\ I^{*}_{k-1}(\mathbf{w})\cup I^{*}_{k}(\mathbf{w}),&\text{if}\quad\ell=k-1,\\ I^{*}_{\ell+1}(\mathbf{w}),&\text{if}\quad\ell\geq k.\end{cases}\] (Here we make a little use of the choice \(1<k<n\), so we need not bother too much with the cyclic renumbering etc.)

Note that \(I^{*}_{i}(\widetilde{\mathbf{w}})=[\widetilde{w}_{i},\widetilde{w}_{i+1}]\) is still nondegenerate whenever \(i\in\widetilde{\mathcal{I}}\), for the arc is either a former arc belonging to some \(i\in\mathcal{I}\), or the union of two such arcs. Also, the ordering of nodes is kept intact, so \(\widetilde{\mathbf{w}}\in\overline{L}^{(\widetilde{n})}\) (where \(\overline{L}^{(n)}\) and \(\overline{L}^{(\widetilde{n})}\) denote the cyclic simplices of the corresponding dimension).

Now we apply the inductive hypothesis to the new configuration. This yields a perturbed node system \(\widetilde{\mathbf{w}}^{\prime}\in\overline{L}^{(\widetilde{n})}\setminus\{\widetilde{\mathbf{w}}\}\), arbitrarily close to \(\widetilde{\mathbf{w}}\), with the asserted properties. It is important that here the ordering of the nodes remains the order fixed in \(\overline{L}^{(\widetilde{n})}\); so if \(\widetilde{\mathbf{w}}^{\prime}\) is closer to \(\widetilde{\mathbf{w}}\) than the distance \(\delta\) of \(w_{k}\) from \(\{w_{k-1},w_{k+1}\}\), then the \(n\)-term node system \(\mathbf{w}^{\prime}\), obtained by keeping the nodes from \(\widetilde{\mathbf{w}}^{\prime}\) and inserting back \(w_{k}\) to the \(k\)th place (and shifting the following indices by one), will again be ordered as \(\mathbf{w}\) was, i.e., \(\mathbf{w}^{\prime}\in\overline{L}^{(n)}\) (and of course \(\mathbf{w}^{\prime}\neq\mathbf{w}\), as already \(\widetilde{\mathbf{w}}^{\prime}\neq\widetilde{\mathbf{w}}\)). Moreover, \(w^{\prime}_{k}=w_{k}\) is still not equal to any of the nodes \(w^{\prime}_{k-1},w^{\prime}_{k+1}\), because \(\operatorname{dist}_{\mathbb{T}^{n}}(\mathbf{w}^{\prime},\mathbf{w})<\delta\).

We now set out to prove (23). First, if for some \(\ell<k\) we have \(w^{\prime}_{\ell}=w_{\ell}\), then \(\widetilde{w}^{\prime}_{\ell}=w^{\prime}_{\ell}=w_{\ell}=\widetilde{w}_{\ell}\), and then by the inductive hypothesis \(\{\ell-1,\ell\}\subset\widetilde{\mathcal{I}}\) or \(\widetilde{\mathcal{J}}\). This gives (23) in case \(\ell<k\), because below \(k\) the partition sets \(\mathcal{I}\) and \(\widetilde{\mathcal{I}}\) (and \(\mathcal{J}\) and \(\widetilde{\mathcal{J}}\), respectively) consist of the same indices.

Take now some \(\ell>k\) with \(w^{\prime}_{\ell}=w_{\ell}\). Then \(\widetilde{w}^{\prime}_{\ell-1}=w^{\prime}_{\ell}=w_{\ell}=\widetilde{w}_{\ell-1}\), and \(\{\ell-2,\ell-1\}\subset\widetilde{\mathcal{I}}\) or \(\widetilde{\mathcal{J}}\) by the inductive hypothesis. If \(\ell>k+1\), too, then this means \(\{\ell-1,\ell\}\subset\mathcal{I}\) or \(\mathcal{J}\), that is, (23).
If, however, \(\ell=k+1\), then \(\ell-2=k-1\) and \(\ell-1=k\), and by construction we get that \(\{k-1,k+1\}\subset\mathcal{I}\) or \(\mathcal{J}\). Given that we already know from the outset that \(k\) belongs to the same index set as \(k-1\), this altogether gives \(\{k-1,k,k+1\}\subset\mathcal{I}\) or \(\mathcal{J}\), which is more than needed for (23). Finally, if \(\ell=k\), then \(w^{\prime}_{k}=w_{k}\) by construction, but then we had by assumption that \(k-1\) and \(k\) belonged to the same set \(\mathcal{I}\) or \(\mathcal{J}\), so that (23) is satisfied.

Consider now the assertions of (24). As above, there is no problem if the respective index sets all remain the same or are all shifted by one, i.e., if either \(\ell<k-1\) or \(\ell>k+1\). Recall that we assumed \(w_{k-1}<w_{k}<w_{k+1}\) at the outset, and chose the perturbation small enough to keep this strict ordering. Therefore, \(w^{\prime}_{k-1}=w^{\prime}_{k}\) and \(w^{\prime}_{k}=w^{\prime}_{k+1}\) are excluded, and only the case \(\ell=k+1\) with \(w^{\prime}_{k+1}=w^{\prime}_{k+2}\) remains to be dealt with. Now, \(\widetilde{w}^{\prime}_{k}=w^{\prime}_{k+1}=w^{\prime}_{k+2}=\widetilde{w}^{\prime}_{k+1}\), so \(\{k-1,k,k+1\}\subset\widetilde{\mathcal{J}}\) and \(\widetilde{w}_{k}=\widetilde{w}_{k+1}\) according to the inductive hypothesis. The latter means \(w_{k+1}=w_{k+2}\), while for the indices we obtain \(\{k-1,k+1,k+2\}\subset\mathcal{J}\). But \(k-1\) and \(k\) belong to the same index set, so that also \(k\) must belong to \(\mathcal{J}\), and therefore \(\{k-1,k,k+1,k+2\}\subset\mathcal{J}\), entailing (24).

Using that \(k-1\) and \(k\) belong to the same index set \(\mathcal{I}\) or \(\mathcal{J}\), it is easy to check that \(I_{i}^{*}(\widetilde{\mathbf{w}}^{\prime})\subseteq I_{i}^{*}(\widetilde{\mathbf{w}})\) for all \(i\in\widetilde{\mathcal{I}}\) is equivalent to \(I_{i}^{*}(\mathbf{w}^{\prime})\subseteq I_{i}^{*}(\mathbf{w})\) for all \(i\in\mathcal{I}\), and \(I_{j}^{*}(\widetilde{\mathbf{w}}^{\prime})\supseteq I_{j}^{*}(\widetilde{\mathbf{w}})\) for all \(j\in\widetilde{\mathcal{J}}\) is equivalent to \(I_{j}^{*}(\mathbf{w}^{\prime})\supseteq I_{j}^{*}(\mathbf{w})\) for all \(j\in\mathcal{J}\). Further, the assertions (25), (26) from the inductive hypothesis lead to the assertions (28), (29) for the original case. Therefore, by the preliminary observations, also (25), (26) follow. Moreover, the assertion regarding strict inequalities for all \(t\) in case of a strictly concave \(K\) follows from the respective strict inequalities for the inductive hypothesis, noting that \(I_{k-1}^{*}(\widetilde{\mathbf{w}})=I_{k-1}^{*}(\mathbf{w})\cup I_{k}^{*}(\mathbf{w})\) can handle the necessary inequalities for both indices \(k-1\) and \(k\), because these belong to the same index set \(\mathcal{I}\) or \(\mathcal{J}\), and hence invoke inequalities in the same direction.

**Case 2**. Consider now the case when some of the partition sets \(\mathcal{I},\mathcal{J}\) contain some neighboring indices \(k-1,k\) such that \(w_{k-1}=w_{k}\) or \(w_{k}=w_{k+1}\) holds, too. Then this index set cannot be \(\mathcal{I}\), since for indices in \(\mathcal{I}\) the respective arcs are assumed to be nondegenerate. So, \(k-1,k\in\mathcal{J}\). Repeating the above argument of Case 1 then works. Let us detail why.
The main point where we needed \(w_{k-1}<w_{k}<w_{k+1}\) was in showing that the new node system \(\widetilde{\mathbf{w}}^{\prime}\), provided by the induction hypothesis, not only preserves the cyclic ordering of the nodes of \(\widetilde{\mathbf{w}}\), but that re-inserting \(w_{k}\), thus manufacturing \(\mathbf{w}^{\prime}\), still results in a point belonging to \(\overline{L}^{(n)}\). Observe that the inductive hypothesis, in view of \(k-1,k\in\mathcal{J}\), furnishes that \(I_{k-1}^{*}(\widetilde{\mathbf{w}})\) is subject to growth, so it will still contain the point \(w_{k}\); that is, \(w_{k-1}^{\prime}:=\widetilde{w}_{k-1}^{\prime}\leq\widetilde{w}_{k-1}:=w_{k-1}\leq w_{k}\leq w_{k+1}=:\widetilde{w}_{k}\leq\widetilde{w}_{k}^{\prime}=:w_{k+1}^{\prime}\). It follows immediately that we thus have \(\mathbf{w}^{\prime}\in\overline{L}\) at least. Moreover, as above, equality of other perturbed and original nodes \(w_{\ell}^{\prime}\) and \(w_{\ell}\) (\(\ell\neq k\)) can occur only when it occurred also in \(\widetilde{\mathbf{w}}\), hence in \(\mathbf{w}\), too. Checking the arising conditions for the indices can be done mutatis mutandis as in the above case, proving (23), on noting that for \(\ell=k\) we already have \(k-1,k\in\mathcal{J}\) by assumption.

To prove (24), assume now the identity \(w_{\ell}^{\prime}=w_{\ell+1}^{\prime}\). Again, we separate cases according to the size of \(\ell\), and start with the case \(\ell<k-1\). This implies \(\widetilde{w}_{\ell}^{\prime}=w_{\ell}^{\prime}=w_{\ell+1}^{\prime}=\widetilde{w}_{\ell+1}^{\prime}\), hence also \(\widetilde{w}_{\ell}=\widetilde{w}_{\ell+1}\) and \(\ell-1,\ell,\ell+1\in\widetilde{\mathcal{J}}\) in view of the inductive hypothesis, so \(w_{\ell}=w_{\ell+1}\) and \(\ell-1,\ell,\ell+1\in\mathcal{J}\), too. Similarly, if \(\ell>k\), then the identity \(w_{\ell}^{\prime}=w_{\ell+1}^{\prime}\) implies \(\widetilde{w}_{\ell-1}^{\prime}=w_{\ell}^{\prime}=w_{\ell+1}^{\prime}=\widetilde{w}_{\ell}^{\prime}\), hence also \(\widetilde{w}_{\ell-1}=\widetilde{w}_{\ell}\) and \(\ell-2,\ell-1,\ell\in\widetilde{\mathcal{J}}\) in view of the inductive hypothesis, so \(w_{\ell}=w_{\ell+1}\); and if \(\ell>k+1\), then \(\ell-1,\ell,\ell+1\in\mathcal{J}\), too, while if \(\ell=k+1\), then we only get \(k-1,k+1,k+2\in\mathcal{J}\), but then we recall that \(k\in\mathcal{J}\) and get \(\{k-1,k,k+1,k+2\}\subset\mathcal{J}\), proving (24).

It remains to deal with \(\ell=k-1\) and \(\ell=k\), which did not occur in Case 1 above. Now, \(\ell=k-1\) means that we have the identity \(w_{k-1}^{\prime}=w_{k}^{\prime}\). Since \(k-1,k\in\mathcal{J}\) by assumption, we also have \(k-1\in\widetilde{\mathcal{J}}\), hence \(I_{k-1}^{*}(\widetilde{\mathbf{w}})\) cannot shrink, and therefore \(\widetilde{w}_{k-1}^{\prime}\leq\widetilde{w}_{k-1}\). Moreover, if we had strict inequality here, then we would have \(w_{k-1}^{\prime}:=\widetilde{w}_{k-1}^{\prime}<\widetilde{w}_{k-1}:=w_{k-1}\leq w_{k}=:w_{k}^{\prime}\), contradicting our present assumption. So, we must have \(\widetilde{w}_{k-1}^{\prime}=\widetilde{w}_{k-1}\), the inductive hypothesis applies, and we derive that both \(k-2\) and \(k-1\) belong to the same index set, that is, because of \(k-1\), to \(\mathcal{J}\). However, we already know by assumption that also \(k\in\mathcal{J}\), so altogether \(\{k-2,k-1,k\}\subset\mathcal{J}\), as needed. The case \(\ell=k\) is similar.
If \(w_{k}^{\prime}=w_{k+1}^{\prime}\), then, taking into account that \(I_{k-1}^{*}(\widetilde{\mathbf{w}})\) cannot shrink (as \(k-1\in\widetilde{\mathcal{J}}\)), we must have \(w_{k+1}^{\prime}:=\widetilde{w}_{k}^{\prime}\geq\widetilde{w}_{k}:=w_{k+1}\geq w_{k}=w_{k}^{\prime}\), entailing that all inequalities are in fact equalities, and in particular both \(w_{k+1}=w_{k}\) and \(\widetilde{w}_{k}^{\prime}=\widetilde{w}_{k}\). Referring to the inductive hypothesis, this furnishes \(k-1,k\in\widetilde{\mathcal{J}}\), that is, \(k-1,k+1\in\mathcal{J}\), whilst \(k\in\mathcal{J}\) by the original condition, altogether yielding \(\{k-1,k,k+1\}\subset\mathcal{J}\), as needed.

So, we have proved (23) and (24) for this case, too. The proof of the remaining inequalities, and of their strictness in case of a strictly concave kernel, is identical to the argument in Case 1. We thus conclude the proof of Case 2, and hence of the whole lemma.

**Remark 9**.: Although the formulation of the lemma is a bit complicated, one may note that assuming \(w_{i}<w_{i+1}\) for all \(i\in\mathcal{I}\) is absolutely natural and minimal. Natural, because if we want to decrease the arcs \(I_{i}^{*}(\mathbf{w})\) for all \(i\in\mathcal{I}\), then these arcs must shrink, hence we cannot perform this change when they are already degenerate, one-point arcs. Minimal, because we do not assume similar conditions for any \(I_{j}^{*}(\mathbf{w})\) with \(j\in\mathcal{J}\), let alone that \(\mathbf{w}\in L\).

**Remark 10**.: If we had \(\mathbf{w}\in L\) from the outset, then we also have \(\mathbf{w}^{\prime}\in L\).

**Corollary 11**.: _If we only want the respective inequalities for the \(m_{j}^{*}\), without requiring strict inclusions regarding the underlying arcs, then we can apply a further perturbation, now leading to \(\mathbf{w}^{\prime\prime}\in L\) (the point being that now all endpoint nodes are distinct) and still satisfying the required strict inequalities between the \(m_{j}^{*}\), provided that we had strict inequalities (so, e.g., \(K\) was strictly concave) and provided the \(m_{j}^{*}\) change continuously._

Note that this latter condition of continuity of the \(m_{j}^{*}\) is satisfied if \(K\) is singular or if \(J\) is continuous, see Proposition 4 (a) and (b).

## 5 Minimax and maximin theorems

### Minimax for strictly concave kernels

The following theorem contains, as a rather special case with the choice of the log-sine kernel \(K(t):=\log|\sin(\pi t)|\), the above stated Theorem 1.

**Theorem 12**.: _Let \(n\in\mathbb{N}\) and \(\nu_{1},\ldots,\nu_{n}>0\), let \(K\) be a singular, strictly concave periodic kernel function, and let \(J\) be an arbitrary periodic \(n\)-field function._

_Then there exists a minimax point \(\mathbf{w}\) on \(\overline{L}\), it belongs to the open "cyclic simplex" \(L\), and it is an equioscillation point._

Proof.: We already know that \(\overline{m}^{*}\) is continuous, thus it attains its infimum at a minimum point (where, in view of \(\overline{m}^{*}:\mathbb{T}^{n}\to\mathbb{R}\), a finite minimax value is attained). Now assume for a contradiction that the obtained minimax node system \(\mathbf{w}\) is not an equioscillating system. Then there are indices with \(m_{i}^{*}(\mathbf{w})=\overline{m}^{*}(\mathbf{w})\), but not all indices are such. So, take \(\mathcal{I}:=\{i\ :\ m_{i}^{*}(\mathbf{w})=\overline{m}^{*}(\mathbf{w})\}\) and \(\mathcal{J}:=\{1,2,\ldots,n\}\setminus\mathcal{I}=\{j\ :\ m_{j}^{*}(\mathbf{w})<\overline{m}^{*}(\mathbf{w})\}\).
These index sets define a nontrivial partition of the full set of indices from \(1\) to \(n\). In order to apply the Perturbation Lemma we need that for \(i\in\mathcal{I}\) the endpoint nodes are different: \(w_{i}<w_{i+1}\). Given that \(m_{i}^{*}(\mathbf{w})=\overline{m}^{*}(\mathbf{w})\), this is certainly so in case \(K\) is singular, for then \(m_{\ell}^{*}(\mathbf{w})=-\infty<\overline{m}^{*}(\mathbf{w})\) for any degenerate arc \(I_{\ell}^{*}(\mathbf{w})\).

Let us now apply Lemma 8, which results in a new node system \(\mathbf{w}^{\prime}\), admitting the same cyclic ordering and lying arbitrarily close to \(\mathbf{w}\), and with \(m_{i}^{*}(\mathbf{w}^{\prime})<\overline{m}^{*}(\mathbf{w})\) for all \(i\in\mathcal{I}\). On the other hand, the \(m_{j}^{*}(\mathbf{w}^{\prime})\) may exceed \(m_{j}^{*}(\mathbf{w})\) for \(j\in\mathcal{J}\), but only by arbitrarily little, because \(\mathbf{w}^{\prime}\) is sufficiently close to \(\mathbf{w}\) and the \(m_{j}^{*}\) are continuous on \(\overline{L}\) in view of the singularity of \(K\), see Proposition 4 (a). So, in all, we will have \(m_{k}^{*}(\mathbf{w}^{\prime})<\overline{m}^{*}(\mathbf{w})\) for all \(k=1,\ldots,n\), hence \(\overline{m}^{*}(\mathbf{w}^{\prime})<\overline{m}^{*}(\mathbf{w})\), contradicting the minimality of \(\mathbf{w}\). Noting that an equioscillation node system is necessarily nondegenerate for singular kernels, we conclude the proof.

### Maximin for strictly concave kernels

**Theorem 13**.: _Let \(n\in\mathbb{N}\) and \(\nu_{1},\ldots,\nu_{n}>0\), let \(K\) be a singular, strictly concave periodic kernel function, and let \(J\) be an arbitrary periodic \(n\)-field function._

_Then there exists a maximin point \(\mathbf{z}\) on \(\overline{L}\), it belongs to the open "cyclic simplex" \(L\), and it is an equioscillation point._

Proof.: Again, as all the \(m_{k}^{*}\) are continuous, so is their minimum. The maximum of that minimum is finite, because there are points with all \(m_{k}^{*}\) finite: e.g., take the above found minimax point \(\mathbf{w}\). Given that \(K\) is assumed to be singular, any point \(\mathbf{z}\) with \(\underline{m}^{*}(\mathbf{z})>-\infty\) cannot be degenerate, i.e., such a \(\mathbf{z}\) necessarily belongs to \(L\). Also, by the continuity of the \(m_{k}^{*}\), and hence of \(\underline{m}^{*}\), there exists a maximin point \(\mathbf{z}\in L\).

Assume for a contradiction that this point is not an equioscillation point. Consider \(\mathcal{I}:=\{i\ :\ m_{i}^{*}(\mathbf{z})>\underline{m}^{*}(\mathbf{z})\}\) and \(\mathcal{J}:=\{1,\ldots,n\}\setminus\mathcal{I}=\{j\ :\ m_{j}^{*}(\mathbf{z})=\underline{m}^{*}(\mathbf{z})\}\). This is a nontrivial partition of the index set \(\{1,\ldots,n\}\), while \(\mathbf{z}\in L\), hence the Perturbation Lemma 8 can be applied, and we are led to a new system \(\mathbf{z}^{\prime}\in L\), with \(m_{j}^{*}(\mathbf{z}^{\prime})>m_{j}^{*}(\mathbf{z})=\underline{m}^{*}(\mathbf{z})\) for all \(j\in\mathcal{J}\). However, we also have \(m_{i}^{*}(\mathbf{z}^{\prime})>m_{i}^{*}(\mathbf{z})-\varepsilon>\underline{m}^{*}(\mathbf{z})\) for all \(i\in\mathcal{I}\), if \(\varepsilon\) was chosen small enough and \(\mathbf{z}^{\prime}\) close enough to \(\mathbf{z}\), because \(m_{i}^{*}\) is continuous. So, in all, we find \(\underline{m}^{*}(\mathbf{z}^{\prime})>\underline{m}^{*}(\mathbf{z})\), and \(\mathbf{z}\) could not be a maximin point. The obtained contradiction proves the assertion.
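To visualize the equioscillation property asserted in Theorems 12 and 13, the following toy computation (our own ad hoc sketch, with \(K(t)=\log|\sin(\pi t)|\), \(J\equiv 0\), unit weights \(\nu_{j}=1\), \(n=3\), and an arbitrary grid resolution) approximates the arc maxima on a grid: for the rotation-symmetric node system they coincide, while a perturbed system fails to equioscillate.

```python
import numpy as np

# Toy illustration of Theorems 12-13 (assumptions: log-sine kernel, J = 0, nu_j = 1).
K = lambda t: np.log(np.abs(np.sin(np.pi * t)) + 1e-300)  # guard against log(0)

def arc_maxima(nodes, grid=300_000):
    """Grid approximation of the arc maxima m_j^* of F(y, t) = sum_j K(t - y_j)."""
    y = np.sort(np.asarray(nodes, dtype=float) % 1.0)
    t = np.arange(grid) / grid
    F = sum(K(t - yj) for yj in y)
    maxima = []
    for j in range(len(y)):
        lo = y[j]
        hi = y[j + 1] if j + 1 < len(y) else y[0] + 1.0   # last arc wraps around
        on_arc = ((t >= lo) & (t <= hi)) | ((t + 1.0 >= lo) & (t + 1.0 <= hi))
        maxima.append(F[on_arc].max())
    return np.array(maxima)

print(arc_maxima([0.0, 1/3, 2/3]))   # equal arc maxima: an equioscillating system
print(arc_maxima([0.0, 0.30, 2/3]))  # perturbed system: the arc maxima differ
```

For the equally spaced nodes the three printed values agree (all close to \(-2\log 2\)), as forced by rotational symmetry; any off-symmetric choice breaks the equality.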
### Extension to concave kernel functions

To extend Theorem 12 to general concave kernels, we apply limiting arguments similar to [9], p. 18.

**Theorem 14**.: _Let \(n\in\mathbb{N}\) and \(\nu_{1},\ldots,\nu_{n}>0\), let \(K\) be a singular periodic kernel function, and let \(J\) be an arbitrary periodic \(n\)-field function._

_Then there exists a minimax point \(\mathbf{w}^{*}\) on \(\overline{L}\), it belongs to the open "cyclic simplex" \(L\), and it is an equioscillation point._

Proof.: Let \(K\) be a singular, \(1\)-periodic kernel function which is not necessarily strictly concave. Let \(K^{(\eta)}(t):=K(t)+\eta|\sin\pi t|\), where \(\eta>0\). Then \(K^{(\eta)}\) is a strictly concave kernel function, which is also singular and \(1\)-periodic. We will denote the corresponding maximum functions and minimax quantity by \(m^{*}_{j}(\eta,\mathbf{y})\), \(\overline{m}^{*}(\eta,\mathbf{y})\), and \(M^{*}(\eta,\overline{L})\). Taking into account that \(K^{(\eta)}\) converges to \(K\) uniformly, we find the same for \(F(\eta,\cdot)\searrow F\), and hence even \(\overline{m}^{*}(\eta,\mathbf{y})\searrow\overline{m}^{*}(\mathbf{y})\) uniformly. Hence \(M^{*}(\eta,\overline{L})\searrow M^{*}(\overline{L})\), too.

Let \(\mathbf{e}(\eta)\in\overline{L}\) be a node system such that \(\overline{m}^{*}(\eta,\mathbf{e}(\eta))=M^{*}(\eta,\overline{L})\). By Theorem 12, \(\mathbf{e}(\eta)\in L\) and it is an equioscillating node system: \(m^{*}_{1}(\eta,\mathbf{e}(\eta))=\ldots=m^{*}_{n}(\eta,\mathbf{e}(\eta))\). Since \(\overline{L}\) is compact, there exists \(\eta_{k}\searrow 0\) such that \(\mathbf{e}(\eta_{k})\to\mathbf{e}\) for some \(\mathbf{e}\in\overline{L}\) as \(k\to\infty\).

Then \(\overline{m}^{*}(\mathbf{e})=M^{*}(\overline{L})\). Indeed, \(\overline{m}^{*}(\mathbf{e})\geq M^{*}(\overline{L})\), and for the other direction, let \(a>M^{*}(\overline{L})\). Then for all sufficiently large \(k\) we have \(a\geq M^{*}(\eta_{k},\overline{L})\), so we can conclude \[a\geq M^{*}(\eta_{k},\overline{L})=\overline{m}^{*}(\eta_{k},\mathbf{e}(\eta_{k}))\geq\overline{m}^{*}(\mathbf{e}(\eta_{k})),\] where we used \(\overline{m}^{*}(\eta,\cdot)\geq\overline{m}^{*}(\cdot)\). Letting \(k\to\infty\) we conclude, by the continuity of \(\overline{m}^{*}\) and by \(\mathbf{e}(\eta_{k})\to\mathbf{e}\), that \(a\geq\overline{m}^{*}(\mathbf{e})\), and \(M^{*}(\overline{L})\geq\overline{m}^{*}(\mathbf{e})\) follows. The claim is proved.

Next we claim that \(\mathbf{e}\) is an equioscillation point. Indeed, assume for a contradiction that for some \(j\in\{1,\ldots,n\}\) we have \(m^{*}_{j}(\mathbf{e})<\overline{m}^{*}(\mathbf{e})\). Then there is \(k_{0}\in\mathbb{N}\) such that \(m^{*}_{j}(\eta_{k_{0}},\mathbf{e})<\overline{m}^{*}(\mathbf{e})\). Since \(m^{*}_{j}(\eta_{k_{0}},\cdot)\) is continuous (the kernel functions are singular; see Proposition 4 (a)), there is \(\delta>0\) such that for every \(\mathbf{y}\in\overline{L}\) with \(\operatorname{dist}_{\mathbb{T}^{n}}(\mathbf{y},\mathbf{e})<\delta\) one has \(m^{*}_{j}(\eta_{k_{0}},\mathbf{y})<\overline{m}^{*}(\mathbf{e})\), too. There is \(n_{0}\in\mathbb{N}\) such that for every \(k\geq n_{0}\) we have \(\operatorname{dist}_{\mathbb{T}^{n}}(\mathbf{e}(\eta_{k}),\mathbf{e})<\delta\).
So for \(k\geq\max\{k_{0},n_{0}\}\) we can write \[m^{*}_{j}(\eta_{k},\mathbf{e}(\eta_{k}))\leq m^{*}_{j}(\eta_{k_{0}},\mathbf{e}(\eta_{k}))<\overline{m}^{*}(\mathbf{e})=M^{*}(\overline{L})\leq\overline{m}^{*}(\mathbf{e}(\eta_{k}))\leq\overline{m}^{*}(\eta_{k},\mathbf{e}(\eta_{k})).\] This is a contradiction, since \(m^{*}_{i}(\eta_{k},\mathbf{e}(\eta_{k}))=\overline{m}^{*}(\eta_{k},\mathbf{e}(\eta_{k}))\) for each \(i\in\{1,\ldots,n\}\). Therefore, \(\mathbf{e}\) is necessarily an equioscillation point. As such, it cannot have any degenerate subarcs, for any degenerate subarc \(I^{*}_{k}(\mathbf{e})=\{e_{k}\}\) would yield a singular value \(m^{*}_{k}(\mathbf{e})=-\infty\) according to the singularity of \(K\). Hence \(\mathbf{e}\in L\). Choosing \(\mathbf{w}^{*}:=\mathbf{e}\) concludes the proof.

Now we extend Theorem 13 to general concave kernels, but this time we approximate the kernel from below.

**Theorem 15**.: _Let \(n\in\mathbb{N}\) and \(\nu_{1},\ldots,\nu_{n}>0\), let \(K\) be a singular periodic kernel function, and let \(J\) be an arbitrary periodic \(n\)-field function._

_Then there exists a maximin point \(\mathbf{z}^{*}\) on \(\overline{L}\), it belongs to the open "cyclic simplex" \(L\), and it is an equioscillation point._

Proof.: Let \(K\) be a singular, \(1\)-periodic kernel function which is not necessarily strictly concave. Let \(K^{(\eta)}(t):=K(t)+\eta\big{(}|\sin\pi t|-1\big{)}\), where \(\eta>0\). Then \(K^{(\eta)}\) is a strictly concave kernel function, which is also singular and \(1\)-periodic, and \(K^{(\eta)}\nearrow K\) as \(\eta\searrow 0\); moreover, this convergence is uniform. Again, we denote the corresponding maximum functions and maximin quantity by \(m^{*}_{j}(\eta,\mathbf{y})\), \(\underline{m}^{*}(\eta,\mathbf{y})\) and \(m^{*}(\eta,\overline{L})\). Due to the uniform convergence of the \(K^{(\eta)}\), we also have \(\underline{m}^{*}(\eta,\mathbf{y})\nearrow\underline{m}^{*}(\mathbf{y})\) uniformly, in the extended sense. Therefore \(m^{*}(\eta,\overline{L})\nearrow m^{*}(\overline{L})\), too.

Let \(\mathbf{e}(\eta)\in\overline{L}\) be a node system such that \(\underline{m}^{*}(\eta,\mathbf{e}(\eta))=m^{*}(\eta,\overline{L})\). By Theorem 13, \(\mathbf{e}(\eta)\in L\) and it is an equioscillating node system: \(m^{*}_{1}(\eta,\mathbf{e}(\eta))=\ldots=m^{*}_{n}(\eta,\mathbf{e}(\eta))\). Since \(\overline{L}\) is compact, there exists \(\eta_{k}\searrow 0\) such that \(\mathbf{e}(\eta_{k})\to\mathbf{e}\) for some \(\mathbf{e}\in\overline{L}\) as \(k\to\infty\).

Then \(\underline{m}^{*}(\mathbf{e})=m^{*}(\overline{L})\). Indeed, \(\underline{m}^{*}(\mathbf{e})\leq m^{*}(\overline{L})\), and for the other direction, let \(b<m^{*}(\overline{L})\). Then for all sufficiently large \(k\) we have \(b\leq m^{*}(\eta_{k},\overline{L})\), so we can conclude \[b\leq m^{*}(\eta_{k},\overline{L})=\underline{m}^{*}(\eta_{k},\mathbf{e}(\eta_{k}))\leq\underline{m}^{*}(\mathbf{e}(\eta_{k})),\] where we used \(\underline{m}^{*}(\eta,\cdot)\leq\underline{m}^{*}(\cdot)\). Letting \(k\to\infty\) we conclude, by the extended continuity of \(\underline{m}^{*}\) and by \(\mathbf{e}(\eta_{k})\to\mathbf{e}\), that \(b\leq\underline{m}^{*}(\mathbf{e})\), and \(m^{*}(\overline{L})\leq\underline{m}^{*}(\mathbf{e})\) follows. The claim is proved.

Next we claim that \(\mathbf{e}\) is an equioscillation point. Assume for a contradiction that for some \(j\in\{1,\ldots,n\}\) we have \(m^{*}_{j}(\mathbf{e})>\underline{m}^{*}(\mathbf{e})\).
Then there is \(k_{0}\in\mathbb{N}\) such that \(m^{*}_{j}(\eta_{k_{0}},\mathbf{e})>\underline{m}^{*}(\mathbf{e})\). Since \(m^{*}_{j}(\eta_{k_{0}},\cdot)\) is continuous (the kernel functions are singular; see Proposition 4 (a)), there is \(\delta>0\) such that for every \(\mathbf{y}\in\overline{L}\) with \(\operatorname{dist}_{\mathbb{T}^{n}}(\mathbf{y},\mathbf{e})<\delta\) one has \(m^{*}_{j}(\eta_{k_{0}},\mathbf{y})>\underline{m}^{*}(\mathbf{e})\), too. There is \(n_{0}\in\mathbb{N}\) such that for every \(k\geq n_{0}\) we have \(\operatorname{dist}_{\mathbb{T}^{n}}(\mathbf{e}(\eta_{k}),\mathbf{e})<\delta\). So for \(k\geq\max\{k_{0},n_{0}\}\) we can write \[m^{*}_{j}(\eta_{k},\mathbf{e}(\eta_{k}))\geq m^{*}_{j}(\eta_{k_{0}},\mathbf{e}(\eta_{k}))>\underline{m}^{*}(\mathbf{e})=m^{*}(\overline{L})\geq\underline{m}^{*}(\mathbf{e}(\eta_{k}))\geq\underline{m}^{*}(\eta_{k},\mathbf{e}(\eta_{k})).\] This is a contradiction, since \(m^{*}_{i}(\eta_{k},\mathbf{e}(\eta_{k}))=\underline{m}^{*}(\eta_{k},\mathbf{e}(\eta_{k}))\) for each \(i\in\{1,\ldots,n\}\).

Finally, if \(\mathbf{e}\) is an equioscillating node system and \(K\) is singular, then necessarily \(\mathbf{e}\in L\), as before. So we have proved that there is a node system \(\mathbf{z}^{*}:=\mathbf{e}\), \(\mathbf{z}^{*}\in L\), for which \(\underline{m}^{*}(\mathbf{z}^{*})=m^{*}(\overline{L})\) and \(\mathbf{z}^{*}\) is an equioscillating node system.

### A counterexample for nonsingular kernels

**Example 16**.: _If \(K\) is not a singular kernel, then it can happen that there is no equioscillating node system and no maximin node system, while, at the same time, every node system is a solution of the minimax problem._

Let \(K\equiv 0\), and \(J(1/\ell):=1-1/\ell\ (\ell\in\mathbb{N})\), and \(J(t):=0\) otherwise. Let \(n:=2\). Then there is no equioscillation. We consider the following cases. If \(y_{1}=y_{2}=0\), then \(m_{1}^{*}(\mathbf{y})=0\) and \(m_{2}^{*}(\mathbf{y})=1\). If \(y_{1}=0\), \(0<y_{2}<1\), then \(m_{1}^{*}(\mathbf{y})=1\) and \(m_{2}^{*}(\mathbf{y})=1-1/\left\lfloor 1/y_{2}\right\rfloor\). If \(y_{1}>0\) and \(y_{2}<1\), then \(m_{1}^{*}(\mathbf{y})\leq 1-1/\left\lfloor 1/y_{1}\right\rfloor<1\) and \(m_{2}^{*}(\mathbf{y})=1\). It is easy to see that there is no equioscillating node system.

To verify the assertions about minimax node systems, note that \(\overline{m}^{*}(\mathbf{y})=\max(m_{1}^{*}(\mathbf{y}),m_{2}^{*}(\mathbf{y}))\equiv 1\), hence every node system is a node system with the minimax value. By considering node systems \(\mathbf{y}=(0,1/k)\) and letting \(k\to\infty\), we get that \(m^{*}(\overline{L})=1\), but there is no node system with \(\underline{m}^{*}(\mathbf{y})=1\).

## 6 A partial homeomorphism result

In our earlier papers [5, 8, 7] on the subject, an outstanding role was played by a new finding, not seen in the earlier works of Bojanov [2] and Fenton [10]. We established that, in the case of singular and strictly concave kernels, a certain homeomorphism exists between admissible node systems and the differences of the interval or arc maxima. Since the differences all being zero is equivalent to equioscillation, such a result immediately gives the existence and uniqueness of an equioscillating node system. This entails that the statement that the minimax point is an equioscillating system can be strengthened: the equioscillation property then _characterizes_ the minimax node system.
Similarly, if we further prove that a maximin node system is necessarily equioscillating, then it follows that the minimax equals the maximin, and is attained at that unique point of equioscillation. Furthermore, from the homeomorphism result further consequences could be proved, most importantly about the intertwining of the \(m_{j}\), see [7].

Above we have proved that minimax and maximin node systems exist and that they are necessarily equioscillating node systems. Therefore it is most natural to try to complete the picture by a corresponding homeomorphism theorem. In fact, even for the torus setup such a homeomorphism theorem was already proved in [5], where no weights were allowed, and where we imposed a few technical assumptions, too. In our present notation, Corollary 9.3 of [5] runs as follows.

**Theorem 17**.: _Suppose that for each \(j=0,\ldots,n\) the kernel \(K_{j}\) belongs to \(C^{2}(0,1)\) with \(K_{j}^{\prime\prime}<0\) and satisfies \((\infty)\). Let \(S:=\{\mathbf{y}\in\mathbb{T}^{n}\ :\ 0<y_{1}<\cdots<y_{n}<1\}\) be the open simplex, while \(y_{0}\) is understood as fixed at \(y_{0}=0\)._

_Then the difference mapping \(\Phi(\mathbf{y}):=(m_{1}^{*}(\mathbf{y})-m_{0}^{*}(\mathbf{y}),\ldots,m_{n}^{*}(\mathbf{y})-m_{n-1}^{*}(\mathbf{y}))\) is a homeomorphism of \(S\) onto \(\mathbb{R}^{n}\)._

Observe that here the \(n+1\) "to be translated" kernels admit a symmetry: if we rotate a node system \(\mathbf{y}:=(y_{0},y_{1},\ldots,y_{n})\in\mathbb{T}^{n+1}\) by, say, \(t\in\mathbb{T}\), then for the new system \(\mathbf{y}_{t}:=(y_{0}+t,y_{1}+t,\ldots,y_{n}+t)\in\mathbb{T}^{n+1}\) we get exactly the same vector of arc maxima \(\mathbf{m}^{*}(\mathbf{y}_{t}):=(m_{0}^{*}(\mathbf{y}_{t}),m_{1}^{*}(\mathbf{y}_{t}),\ldots,m_{n}^{*}(\mathbf{y}_{t}))\in\mathbb{R}^{n+1}\), hence in particular also the differences of these maxima remain the same as before. Therefore it was natural to select one copy of these identical systems by fixing the value of \(y_{0}\); indeed, this was the only way the repetitions could be discarded and a homeomorphism could hold.

In our current setting, however, there is an outer field \(J\), too. Once the weight, i.e., the field, is not constant, we no longer have this rotational symmetry. The situation can be compared to the previous case by regarding \(K_{0}(\cdot-y_{0})\) not as a kernel with fixed \(y_{0}\), but just as an outer field. Theorem 17 just says that we have a homeomorphism if the field is strictly concave and singular, and all the kernels satisfy the extra assumptions on differentiability etc. However, in this interpretation one thing constitutes a major difference: if we take only the \(n\)-element node systems \((y_{1},\ldots,y_{n})\in\mathbb{T}^{n}\), accompanied by a field (like, e.g., \(J:=K_{0}(\cdot-y_{0})\)), then there will be only \(n\) arcs, determined by the nodes, so that the arc \(I_{n}^{*}((y_{1},\ldots,y_{n}))=[y_{n},y_{1}]\) will be the union of the former two arcs \(I_{n}^{*}(\mathbf{y})=[y_{n},y_{0}]\) and \(I_{0}^{*}(\mathbf{y})=[y_{0},y_{1}]\). Similarly, the maxima will form an \(n\)-dimensional vector, with \(m_{n}^{*}(y_{1},\ldots,y_{n})\) becoming the maximum of the former two maxima \(m_{n}^{*}(\mathbf{y})\) and \(m_{0}^{*}(\mathbf{y})\). The maximum differences then form an \((n-1)\)-dimensional manifold, and we can no longer hope for a homeomorphism from the domain of our \(n\)-dimensional node systems to this manifold of differences.
To cure this, we may consider \(K_{0}(\cdot-y_{0})\) _both_ as a fixed kernel and as a field (say, writing \(\widetilde{K}_{0}:=\frac{1}{2}K_{0}\) and \(J:=\frac{1}{2}K_{0}(\cdot-y_{0})\)). Then the result of Theorem 17 will refer to the original \(n+1\) arc maxima and their \(n\) differences, with a valid homeomorphism result.

With this in mind, we prove that an analogous "partial" homeomorphism theorem remains in effect even if there is an arbitrary weight, i.e., field. In addition, we will dispense with all the other technical conditions of Theorem 17 by the more advanced technology we have developed in [8], capable of handling even non-differentiable kernels. Instead of repeating the technical steps of that proof in the torus context, we directly reduce the statement to results of [8]. In fact, there we have made a substantial effort to formulate and prove results which can potentially be used even in the periodic case, and that investment pays off here, enabling us to refer back to them. Actually, we will use the following, proved for the case of the interval setup as Theorem 18 in [8].

**Theorem 18**.: _Let \(K_{1},\ldots,K_{n}\) be strictly concave, singular kernel functions fulfilling condition (PM\({}_{0}\)) and let \(J\) be a field function satisfying either \(J(0)=\lim_{t\downarrow 0}J(t)=-\infty\) or \(J(1)=\lim_{t\uparrow 1}J(t)=-\infty\) (or both)._

_Then the difference function \(\Phi(\mathbf{x}):=\left(m_{1}(\mathbf{x})-m_{0}(\mathbf{x}),\ldots,m_{n}(\mathbf{x})-m_{n-1}(\mathbf{x})\right)\) is a locally bi-Lipschitz homeomorphism between \(Y:=\{(x_{1},\ldots,x_{n})\in\overline{S}_{[0,1]}:m_{k}(\mathbf{x})>-\infty\ (k=0,1,\ldots,n)\}\subset[0,1]^{n}\) and \(\mathbb{R}^{n}\)._

Note that here the ordering of nodes is fixed according to the simplex \(\overline{S}_{[0,1]}\); and given the singularity condition, all degenerate node systems contain a degenerate arc with \(m_{k}(\mathbf{x})=-\infty\), so in fact the admissible set satisfies \(Y\subset S_{[0,1]}\). Also note that a non-admissible node system from \(\overline{S}_{[0,1]}\setminus Y\) can never be an equioscillating node system, for \(\overline{m}(\mathbf{x})>-\infty\) excludes equioscillation at the \(-\infty\) level. Thus in particular this entails existence and unicity of an equioscillating node system.

**Theorem 19**.: _Let \(n\in\mathbb{N}\), let \(K_{0},K_{1},\ldots,K_{n}\) be \(n+1\) strictly concave, singular \(1\)-periodic kernel functions and let \(J\) be a \(1\)-periodic, otherwise arbitrary \((n+1)\)-field function._

_For any value \(a\in\mathbb{T}\) denote \(Y^{*}:=Y^{*}(a):=\{{\bf y}=(y_{0},y_{1},\ldots,y_{n})\in\overline{L}\subset\mathbb{T}^{n+1}\ :\ y_{0}=a,\ m_{k}^{*}({\bf y})>-\infty\ (k=0,1,\ldots,n)\}\)._

_Then the difference function \(\Phi^{*}({\bf y}):=\big{(}m_{1}^{*}({\bf y})-m_{0}^{*}({\bf y}),\ldots,m_{n}^{*}({\bf y})-m_{n-1}^{*}({\bf y})\big{)}\) is a locally bi-Lipschitz homeomorphism between \(Y^{*}(a)\) and \(\mathbb{R}^{n}\)._

_In particular, for each fixed value \(y_{0}=a\) there exists one unique equioscillating node system of the form \({\bf y}(a)=(a,y_{1}(a),\ldots,y_{n}(a))\in\overline{L}\)._

Proof.: First we introduce a new field function \(J^{*}:=J+K_{0}(\cdot-a)\). Obviously \(J^{*}\) will then be a \(1\)-periodic, upper bounded function which is nonsingular at more than \(n\) points mod \(1\) (while one finite value, if \(J(a)\) was finite, could be "killed" by \(K_{0}(\cdot-a)\)).
So, \(J^{*}\) is an \(n\)-field function, and in view of the singularity condition on \(K_{0}\) it also satisfies the extra singularity equation \(\lim_{t\to a}J^{*}(t)=-\infty\). Interpreting the \(K_{j}\) and \(J^{*}\) as defined on the torus, we now transfer them to the interval \([0,1]\) via the covering mapping \(\pi_{a}\). We put \(\widetilde{J}(r):=J^{*}(\pi_{a}(r))\), which then becomes an \(n\)-field function on \([0,1]\) satisfying \(\lim_{r\downarrow 0}\widetilde{J}=\lim_{r\uparrow 1}\widetilde{J}=-\infty\). Also, we put \(\widetilde{K_{j}}(r):=K_{j}(\pi_{a}(r)-a)\) for all \(j=1,\ldots,n\). Obviously these are singular kernel functions, which are strictly concave, too.

We apply the above Theorem 18 to this new system. Let us see what the arising node systems \(\mathbf{x}\in Y\) are. First, \(\mathbf{x}\in\overline{S}\) transfers to the cyclic ordering \(a\preccurlyeq\pi_{a}(x_{1})\preccurlyeq\ldots\preccurlyeq\pi_{a}(x_{n})\), so that, writing \(\mathbf{y}:=(y_{0},y_{1},\ldots,y_{n})\) with \(y_{0}:=a\) and \(y_{j}:=\pi_{a}(x_{j})\ (j=1,\ldots,n)\), the ordering condition becomes \(\mathbf{y}\in\overline{L}\). Second, the non-singularity conditions for \(\mathbf{x}\in Y\) translate to those for \(\mathbf{y}\in Y^{*}(a)\), because \(\pi_{a}:I_{k}(\mathbf{x})\leftrightarrow I_{k}^{*}(\mathbf{y})\) and \[\widetilde{F}({\bf x},r)=\widetilde{J}(r)+\sum_{k=1}^{n}\widetilde{K_{k}}(r-x_{k})=J(t)+K_{0}(t-a)+\sum_{k=1}^{n}K_{k}(t-y_{k})\qquad(t:=\pi_{a}(r))\] makes a one-to-one correspondence between the sum of translates function values at corresponding points of any \(I_{k}({\bf x})\) resp. \(I_{k}^{*}({\bf y})\); in particular, we find \(m_{k}({\bf x})=m_{k}^{*}({\bf y})\) \((k=0,\ldots,n)\), and an arc \(I_{k}^{*}({\bf y})\) will be singular if and only if the corresponding interval \(I_{k}({\bf x})\) was.

So we have seen that when \({\bf y}\) runs over \(Y^{*}(a)\), then \({\bf x}\) runs over \(Y\), and the correspondence is one-to-one. Moreover, degenerate points do not satisfy the nonsingularity condition, hence \(Y\subset S_{[0,1]}\) and \(Y^{*}(a)\subset L\). This means that between node systems \({\bf x}\) and \({\bf y}\), the mapping \(\pi_{a}\) and even its inverse \(\pi_{a}^{-1}\) act continuously, given that \(\pi_{a}^{-1}\) is applied only to \(y_{1},\ldots,y_{n}\), all lying off \(a\). Summing up, \(\Phi^{*}({\bf y})=\Phi(\pi_{a}^{-1}({\bf y}))\), and even this composite mapping is continuous; moreover, it maps \(Y^{*}(a)\) one-to-one onto \(\mathbb{R}^{n}\). Obviously its inverse \(\Phi^{-1}\circ\pi_{a}\) is continuous, too, since \(\Phi^{-1}\) is, so the mapping is a homeomorphism between \(Y^{*}(a)\) and \(\mathbb{R}^{n}\). We also get the bi-Lipschitz property from that of \(\Phi\).

Finally, uniqueness of an equioscillating node system with given fixed \(y_{0}=a\) follows from the homeomorphism result for \(Y^{*}(a)\) (where we get exactly one system with all differences zero), and from the fact that outside \(Y^{*}(a)\) all node systems provide some singular value \(m_{k}^{*}(\mathbf{y})=-\infty\), while equioscillation cannot take place on that level, for \(\overline{m}^{*}(\mathbf{y})\in\mathbb{R}\) always.

Having proved the above, we may try to progress towards a Bojanov-type characterization result. We already know that for the global minimax point there is equioscillation; and we have now established that for each fixed value of \(y_{0}\) there is exactly one equioscillating node system.
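Before proceeding, here is a toy numerical illustration of Theorem 19 (our own ad hoc sketch under the stated assumptions, not a computation from the paper): take \(n=1\), \(K_{0}=K_{1}\) the log-sine kernel, \(J\equiv 0\), and fix \(y_{0}=a=0\); then \(\Phi^{*}\) reduces to the single difference \(m_{1}^{*}-m_{0}^{*}\), which by the theorem vanishes at exactly one node position.

```python
import numpy as np

# Toy illustration of Theorem 19 for n = 1 (assumptions: K_0 = K_1 = log-sine,
# J = 0, y_0 = a = 0 fixed; grid resolution is ad hoc).
K = lambda t: np.log(np.abs(np.sin(np.pi * t)) + 1e-300)

def phi_star(y1, grid=100_000):
    """Difference m_1^*(y) - m_0^*(y) for the node system y = (0, y1)."""
    t = np.arange(1, grid) / grid
    F = K(t) + K(t - y1)    # F = J + K_0(. - 0) + K_1(. - y1), with J = 0
    m0 = F[t <= y1].max()   # arc maximum on [0, y1]
    m1 = F[t >= y1].max()   # arc maximum on [y1, 1]
    return m1 - m0

for y1 in np.linspace(0.1, 0.9, 9):
    print(f"y1 = {y1:.1f}:  Phi* = {phi_star(y1):+.4f}")
# By Theorem 19 the map y1 -> Phi* is one-to-one onto R, so it changes sign
# exactly once; here it vanishes at the symmetric configuration y1 = 1/2.
```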
So, writing \(\varphi(a)=(a,y_{1}(a),\ldots,y_{n}(a))\) for this unique equioscillation point with \(y_{0}=a\), it suffices to look for the global minimum of \(\mu(a):=\overline{m}^{*}(\varphi(a))\) over \(a\in\mathbb{T}\). The question is whether these equioscillation values are always the same (as we have seen in [5] when \(J\equiv 0\)), or whether \(\mu(a)\) is non-constant. Unicity of equioscillating _node systems_ cannot hold (there is one for each fixed value \(a\) of the first coordinate \(y_{0}\)), but one may hope for unicity of the _equioscillation value_. In Section 7, however, we will see that even this modified hope fails.

## 7 Counterexamples: equioscillation does not characterize minimax or maximin, majorization occurs, minimax can be smaller than maximin

Bojanov's Theorem [2] in the classical algebraic polynomial setting included an important characterization statement, too: the extremal minimax system was _characterized_ by the equioscillation property. That is, there was exactly one equioscillating node system, which was necessarily the minimax point. Fenton's classical theorem added another statement to the theory (in his context): he also proved that the unique maximin point equals the (unique) minimax point (and hence the maximin and minimax values are equal, too). In the weighted algebraic setting we proved similarly strong results in [7]. Also, we found the same for the unweighted trigonometric setting in [5], save the trivial free rotation (which was discarded by indexing from \(0\) and fixing the value of \(y_{0}\) in \(\mathbb{T}\)).

In this section we explore examples which show that we cannot expect as many results as for the interval case or for the torus without weights. Basically, we will show that \(m^{*}(\overline{L})>M^{*}(\overline{L})\) does occur. Our examples will use a singular kernel, so by our above results minimax and maximin points exist; moreover, they are equioscillation points. Therefore, the examples also mean that there are different equioscillation values, hence strict majorization occurs even between equioscillating node systems. Furthermore, we will choose the kernel to be the standard log-sine kernel \(K(t):=\log|\sin(\pi t)|\). This means that even for the classical trigonometric polynomial case one cannot expect any better results for general weights.

### A counterexample with majorization

**Example 20**.: _Set \(K(t):=\log|\sin(\pi t)|\), \(n=2\), \(\nu_{1}=\nu_{2}=1\), and \(J(t)=0\) on \(\{0\}\cup[1/2,1)\) and \(J(t)=-\infty\) on \((0,1/2)\)._

_Then the minimax and maximin values on the "cyclic simplex" \(\overline{L}\) are \(M^{*}(\overline{L})=-2\log(2)\) and \(m^{*}(\overline{L})=-\log(2)\), respectively._

_Moreover, for any \(\lambda\in[-2\log(2),-\log(2)]\) there is an equioscillating node system \({\bf y}\in L\) with \(\lambda=m^{*}_{1}({\bf y})=m^{*}_{2}({\bf y})\)._

In the following we will determine all equioscillation values. We are to minimize \(\overline{m}^{*}({\bf y})\) (and maximize \(\underline{m}^{*}({\bf y})\)) on \({\bf y}\in\mathbb{T}^{2}\). By relabeling, if necessary, we can assume \({\bf y}=(y_{1},y_{2})\) with \(0\leq y_{1}\leq y_{2}<1\). First, we make the following simple observations about the behavior of the pure sum of translates function. From the evenness of \(K\) it follows that \(f({\bf y},\cdot)\) behaves symmetrically on the two intervals before and after the midpoint \(\frac{y_{1}+y_{2}}{2}\) on \(I^{*}_{1}({\bf y})\sim[y_{1},y_{2}]\), and similarly for the other arc \(I^{*}_{2}({\bf y})\).
In particular, \(f({\bf y},\cdot)\) is strictly monotone increasing on \((y_{1},\frac{y_{1}+y_{2}}{2})\) and decreasing on \((\frac{y_{1}+y_{2}}{2},y_{2})\). Similarly, it is strictly increasing on \((y_{2},\frac{y_{1}+y_{2}}{2}+\frac{1}{2})\) and strictly decreasing on \((\frac{y_{1}+y_{2}}{2}+\frac{1}{2},1+y_{1})\). Moreover, \(f\) is maximal on \(I^{*}_{i}({\bf y})\) at the midpoint, and if \(\ell:=|I^{*}_{i}({\bf y})|\) denotes the length of this arc, then the maximal value is \(2K(\ell/2)\).

As for \(F({\bf y},\cdot)\), it follows that on any of the arcs \(I^{*}_{i}({\bf y})\) it attains its maximum at the point(s) of the arc which have \(J(t)=0\) and are closest to the midpoint among those. Also note that among the two midpoints, which are exactly of distance \(1/2\), only one can belong to the singular set \(X_{J}=(0,1/2)\).

In the following we use the variables \(x=y_{1}+y_{2}\), \(z=y_{2}-y_{1}\), by which we can express \(y_{1}=\frac{x-z}{2}\), \(y_{2}=\frac{x+z}{2}\). Note that we cannot have \(0\leq y_{1}\leq y_{2}\leq 1/2\), for then \(m^{*}_{1}({\bf y})=-\infty\), which cannot be an equioscillation value, given that \(\overline{m}^{*}({\bf y})>-\infty\). Thus \(y_{2}\geq 1/2\), and we have \(x\geq y_{2}\geq 1/2\), too.

So, let the first case be \(1/2\leq x<1\). Then, the midpoint \(x/2\) of the arc \(I^{*}_{1}({\bf y})\) lies in \([1/4,1/2)\), which is in the singular set \((0,1/2)\), hence the maximum will be attained at the closest nonsingular point of the arc, which is \(1/2\). That is, \(m^{*}_{1}({\bf y})=F({\bf y},1/2)=f({\bf y},1/2)\). Further, \(m^{*}_{2}({\bf y})=F({\bf y},x/2+1/2)=f({\bf y},x/2+1/2)\). We will use the identity

\[f({\bf y},t)=f\left(\left(\frac{x-z}{2},\frac{x+z}{2}\right),t\right)=\log\left|\sin\pi\left(t-\frac{x-z}{2}\right)\,\sin\pi\left(t-\frac{x+z}{2}\right)\right|=-\log(2)+\log\left|\cos(\pi z)-\cos\pi(2t-x)\right|\qquad(t\in\mathbb{R}). \tag{33}\]

With this, the equation \(f({\bf y},1/2)=f({\bf y},x/2+1/2)\) can be rewritten as

\[\left|\cos(\pi z)-\cos\pi\left(2\cdot\tfrac{1}{2}-x\right)\right|=\left|\cos(\pi z)-\cos\pi\left(2\left(\tfrac{x}{2}+\tfrac{1}{2}\right)-x\right)\right|.\]

The right hand side is \(1+\cos(\pi z)\). For the sign of the left hand side we observe \(x+z=2y_{2}\geq 1\), hence \(1-x\leq z\leq x\leq 1\), so by monotonicity of \(\cos(\pi s)\) for \(0\leq s\leq 1\), we get \(\left|\cos(\pi z)-\cos\pi\left(2\cdot\tfrac{1}{2}-x\right)\right|=\cos(\pi(1-x))-\cos(\pi z)\). So we are led to the equation

\[\cos\pi(1-x)-\cos(\pi z)=\cos(\pi z)+1,\]

and solving it for \(\cos(\pi z)\) yields

\[\cos(\pi z)=\frac{\cos\pi(1-x)-1}{2}.\]

The condition \(1-x\leq z\) is satisfied, since \(\cos\pi(1-x)\geq(\cos\pi(1-x)-1)/2\) in general. By simple steps, the condition \(z\leq x\) is equivalent to \(\cos(\pi x)\leq-1/3\). So if \(1/2\leq x<\beta_{0}:=\arccos(-1/3)/\pi\approx 0.608\), then \(z\) does not satisfy \(z\leq x\). The arising equioscillation value is

\[m_{2}^{*}\left(\left(\frac{x-z}{2},\frac{x+z}{2}\right)\right)=f(\mathbf{y},x/2+1/2)=-\log(2)+\log\left|\frac{\cos\pi(1-x)-1}{2}+1\right|=-2\log(2)+\log\left(1-\cos(\pi x)\right)\]

for \(\beta_{0}\leq x\leq 1\).

The next case is \(1\leq x<3/2\). Since \(x+z=2y_{2}\leq 2\), we have \(0\leq z\leq 2-x\), too. Now \(x/2\notin X_{J}\), while \(x/2+1/2\in X_{J}\) (except for \(x=1\) and \(x/2+1/2=1\)), with the closest nonsingular point from \(I_{2}^{*}(\mathbf{y})\) being \(1\) (remaining valid even in case \(x=1\)).
Hence, \(m_{1}^{*}(\mathbf{y})=f(\mathbf{y},x/2)\) and \(m_{2}^{*}(\mathbf{y})=f(\mathbf{y},1)\). Now, again by (33), \(f(\mathbf{y},x/2)=f(\mathbf{y},1)\) is equivalent to

\[\left|\cos(\pi z)-\cos\pi\left(2\cdot\tfrac{x}{2}-x\right)\right|=\left|\cos(\pi z)-\cos\pi\left(2\cdot 1-x\right)\right|. \tag{34}\]

Here the left hand side is \(|\cos(\pi z)-1|=1-\cos(\pi z)\). Also, \(0\leq z\leq 2-x\leq 1\), so \(\cos(\pi z)\geq\cos\pi(2-x)\), and (34) can be written as

\[1-\cos(\pi z)=\cos(\pi z)-\cos\pi(2-x).\]

Solving it for \(\cos(\pi z)\) we are led to

\[\cos(\pi z)=\frac{1+\cos\pi x}{2}.\]

The condition \(0\leq z\leq 2-x\leq 1\) on \(z\) is equivalent to \(1\geq\cos(\pi z)\geq\cos\pi(2-x)=\cos(\pi x)\) and hence is obviously satisfied. The equioscillation value, again depending only on \(x\), is found again to be

\[m_{2}^{*}\left(\left(\frac{x-z}{2},\frac{x+z}{2}\right)\right)=f(\mathbf{y},x/2)=-\log(2)+\log\left|1-\frac{1+\cos(\pi x)}{2}\right|=-2\log(2)+\log\left(1-\cos(\pi x)\right).\]

Finally, let \(3/2\leq x<2\). This means that \(y_{1}\geq 1/2\), for \(x=y_{1}+y_{2}\leq y_{1}+1\). Then again, \(m_{1}^{*}(\mathbf{y})=f(\mathbf{y},x/2)\), for \(x/2\in[1/2,1]\). However, in \(I_{2}^{*}(\mathbf{y})\) the closest non-singular point to the midpoint \(x/2+1/2\equiv x/2-1/2\) mod \(1\) will be \(1/2\), and we will get \(m_{2}^{*}({\bf y})=f({\bf y},1/2)\). Therefore, the equioscillation equation becomes \(f({\bf y},x/2)=f({\bf y},1/2)\) and, by (33), it is

\[|\cos(\pi z)-\cos\pi\left(2\cdot\tfrac{x}{2}-x\right)|=|\cos(\pi z)-\cos\pi\left(2\cdot\tfrac{1}{2}-x\right)|.\]

The left hand side is \(1-\cos(\pi z)\) and the right hand side is \(|\cos(\pi z)+\cos(\pi x)|\), since \(\cos\pi(1-x)=-\cos(\pi x)\). Since \(3/2\leq x<2\) and \(z+x=2y_{2}\leq 2\) implies \(0\leq z\leq 2-x<1/2\), here both terms are nonnegative, and the equation becomes

\[1-\cos(\pi z)=\cos(\pi z)+\cos\pi x.\]

Solving it for \(\cos(\pi z)\) we get

\[\cos(\pi z)=\frac{1-\cos(\pi x)}{2}.\]

Again, we verify that \(0\leq z\leq 2-x\), which is equivalent to \(1\geq\cos(\pi z)\geq\cos\pi(2-x)\). The second inequality is equivalent to \(\cos(\pi x)\leq 1/3\), and it holds for \(x\in[3/2,2]\) if and only if \(x\leq 2-\arccos(1/3)/\pi=1+\beta_{0}\), \(1+\beta_{0}\approx 1.608\). So \(z\leq 2-x\) if and only if \(x\in[3/2,1+\beta_{0}]\). The equioscillation value is

\[m_{2}^{*}\left(\left(\frac{x-z}{2},\frac{x+z}{2}\right)\right)=f({\bf y},x/2)=-\log(2)+\log\left|1-\frac{1-\cos(\pi x)}{2}\right|=-2\log(2)+\log\left(1+\cos(\pi x)\right).\]

Now we can collect the obtained equioscillation values coming from all the three cases:

\[m_{1}^{*}({\bf y})=m_{2}^{*}({\bf y})=-2\log 2+\begin{cases}\log(1-\cos(\pi x))&(\beta_{0}\leq x<3/2)\\ \log(1+\cos(\pi x))&(3/2\leq x<1+\beta_{0}).\end{cases}\]

This value is minimal when \(x=3/2\) (in this case \(z=1/3\)) and it is maximal when \(x=1\) (in this case \(z=1/2\)). The corresponding values are \(-2\log(2)\approx-1.386\) and \(-\log(2)\approx-0.693\). Hence we get \(m^{*}(\overline{L})=-2\log(2)\) and \(M^{*}(\overline{L})=-\log(2)\), as stated.
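The case analysis above is elementary but fiddly. As an independent sanity check - not part of the original argument, and with all function names our own - the following Python sketch evaluates the two arc maxima numerically for the two extremal node systems, directly from the definitions of \(K\), \(J\) and \(F\):

```python
import numpy as np

def f(y, t):
    # Pure sum of translates for the log-sine kernel K(t) = log|sin(pi t)|.
    return sum(np.log(np.abs(np.sin(np.pi * (t - yk)))) for yk in y)

def F(y, t):
    # Weighted version: J = 0 on {0} u [1/2, 1), J = -inf on (0, 1/2).
    frac = np.mod(t, 1.0)
    singular = (frac > 0) & (frac < 0.5)
    return np.where(singular, -np.inf, f(y, t))

def arc_maxima(y):
    # m*_1 over the arc (y1, y2) and m*_2 over the arc (y2, 1 + y1);
    # node points themselves are excluded from the grids.
    y1, y2 = y
    t1 = np.linspace(y1, y2, 60001)[1:-1]
    t2 = np.linspace(y2, 1.0 + y1, 60001)[1:-1]
    return F(y, t1).max(), F(y, t2).max()

for label, y in [("minimax, x=3/2, z=1/3", (7/12, 11/12)),
                 ("maximin, x=1,   z=1/2", (1/4, 3/4))]:
    m1, m2 = arc_maxima(y)
    print(f"{label}: m*_1 = {m1:.4f}, m*_2 = {m2:.4f}")
# Expected: both maxima close to -2 log 2 = -1.3863 for the first system,
# and close to -log 2 = -0.6931 for the second.
```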
### A modified counterexample with continuous field function

In this subsection we sketch a counterexample which is a modification of the previous one, but with a continuous field function. First, we set a new external field function \(\widetilde{J}\), and then we compute the arc maxima for the two extremal node systems from the previous counterexample. We will find that the arc maxima \(m_{i}^{*}({\bf y})\) of those two node systems will not change when we replace \(J\) by \(\widetilde{J}\), whence they will still be equioscillating with the same equioscillation values \(-\log 2\) and \(-2\log 2\), respectively.

Let \(\alpha>4\pi\) be fixed and let \(\widetilde{J}\) be \(0\) on \([1/2,1]\), \(-\alpha t\) on \([0,1/4)\) and \(\alpha(t-1/2)\) on \([1/4,1/2)\); further, let \(\widetilde{J}\) be extended \(1\)-periodically to \(\mathbb{R}\).

Regarding the \(x=1\), \(z=1/2\) maximin configuration, we have \(y_{1}=1/4\), \(y_{2}=3/4\). It is easy to check that \(f({\bf y},t)\) is strictly monotone decreasing on \((0,1/4]\) and on \([1/2,3/4)\) and it is strictly monotone increasing on \((1/4,1/2]\) and on \((3/4,1]\). Hence, \(\widetilde{F}({\bf y},t)=\widetilde{J}(t)+f({\bf y},t)\) has the same monotonicity on these intervals. Therefore \(\widetilde{m}_{1}^{*}({\bf y})=\widetilde{F}({\bf y},1/2)=F({\bf y},1/2)=f({\bf y},1/2)=-\log(2)\) and \(\widetilde{m}_{2}^{*}({\bf y})=\widetilde{F}({\bf y},1)=F({\bf y},1)=f({\bf y},1)=-\log(2)\).

Regarding the \(x=3/2\), \(z=1/3\) minimax configuration, we have \(y_{1}=7/12\), \(y_{2}=11/12\). It is easy to check that \(f({\bf y},t)\) is strictly monotone increasing (and concave) on \((-1/12,1/4]\) and \(f^{\prime}({\bf y},0)=4\pi\). Adding \(\widetilde{J}\) to it, we see that \(\widetilde{F}({\bf y},\cdot)\) is strictly monotone increasing on \((-1/12,0]\) and strictly monotone decreasing on \([0,1/4)\). Using the symmetry of \(\widetilde{F}({\bf y},\cdot)\) with respect to \(1/4\), \(\widetilde{F}({\bf y},\cdot)\) is strictly monotone increasing on \((1/4,1/2]\) and strictly monotone decreasing on \([1/2,7/12)\). Moreover, \(\widetilde{m}_{2}^{*}({\bf y})=\widetilde{F}({\bf y},0)=\widetilde{F}({\bf y},1/2)=f({\bf y},0)=-2\log(2)\). Computing \(\widetilde{m}_{1}^{*}({\bf y})\) is simpler: for \(t\in I_{1}^{*}({\bf y})=(7/12,11/12)\subset[1/2,1]\), \(\widetilde{F}({\bf y},t)=f({\bf y},t)=F({\bf y},t)\), and \(\sup f({\bf y},\cdot)\) remains the same as before. Therefore, \(\widetilde{m}_{1}^{*}({\bf y})=\widetilde{F}({\bf y},3/4)=f({\bf y},3/4)=-2\log(2)\).

Summing up, we obtained that \(\widetilde{m}(\overline{L})\leq-2\log(2)\) and \(\widetilde{M}(\overline{L})\geq-\log(2)\). Also, taking into account that \(\widetilde{J}\geq J\), hence \(\widetilde{F}\geq F\), we also know that \(\widetilde{m}(\overline{L})\geq m(\overline{L})\) and \(\widetilde{M}(\overline{L})\geq M(\overline{L})\). Therefore, \(\widetilde{m}(\overline{L})=-2\log(2)\), too. Even if we do not proceed to compute the exact maximin value, too, it is clear from what has already been done that in this example \(\widetilde{M}(\overline{L})\geq-\log(2)>\widetilde{m}(\overline{L})=-2\log 2\), hence the same phenomenon takes place for the continuous, finite external field \(\widetilde{J}\) as before for the field \(J\).

## Acknowledgment

This research was supported by project TKP2021-NVA-09. Project no. TKP2021-NVA-09 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme. The work of Sz. Gy. Revesz was supported in part by Hungarian National Research, Development and Innovation Fund project # K-119528.
2310.20669
Modeling multi-legged robot locomotion with slipping and its experimental validation
Multi-legged robots with six or more legs are not in common use, despite designs with superior stability, maneuverability, and a low number of actuators being available for over 20 years. This may be in part due to the difficulty in modeling multi-legged motion with slipping and producing reliable predictions of body velocity. Here we present a detailed measurement of the foot contact forces in a hexapedal robot with multiple sliding contacts, and provide an algorithm for predicting these contact forces and the body velocity. The algorithm relies on the recently published observation that even while slipping, multi-legged robots are principally kinematic, and employ a friction law ansatz that allows us to compute the shape-change to body-velocity connection and the foot contact forces. This results in the ability to simulate motion plans for a large number of potentially slipping legs. In homogeneous environments, this can run in (parallel) logarithmic time of the planning horizon.
Ziyou Wu, Dan Zhao, Shai Revzen
2023-10-31T17:32:07Z
http://arxiv.org/abs/2310.20669v2
# Modeling multi-legged robot locomotion with slipping and its experimental validation

###### Abstract

Multi-legged robots with six or more legs are not in common use, despite designs with superior stability, maneuverability, and a low number of actuators being available for over 20 years. This may be in part due to the difficulty in modeling multi-legged motion with slipping and producing reliable predictions of body velocity. Here we present a detailed measurement of the foot contact forces in a hexapedal robot with multiple sliding contacts, and provide an algorithm for predicting these contact forces and the body velocity. The algorithm relies on the recently published observation that even while slipping, multi-legged robots are principally kinematic, and employs a friction law ansatz that allows us to compute the shape-change to body-velocity connection and the foot contact forces. This results in the ability to simulate motion plans for a large number of potentially slipping legs. In homogeneous environments this can run in (parallel) logarithmic time of the planning horizon.

**Key words:** Legged locomotion, friction, modeling, ground contact force

## 1 Introduction

Most recent research in legged robots has focused on bipedal or quadrupedal robots, yet the vast majority of legged animal species use six or more legs, are smaller, and therefore navigate a relatively much rougher terrain. Such "multi-legged" systems - a term we refrain from using for quadrupeds and bipeds - can exhibit complex tradeoffs between loads on the legs, and move with substantial slipping at the feet. We are not aware of any multi-legged robot which: **(1)** simultaneously measured all the ground contact forces produced while moving and possibly slipping; **(2)** accurately modeled these forces. Here we present such a robot, together with a fast algorithm for modeling multi-legged systems that predicts foot forces, and also provides **(3)** the body velocity needed for producing motion plans.

Recent work [1] demonstrated that multi-legged systems, even while slipping, move in a "geometric" or "principally kinematic" way, where the body velocity arises through shape change rather than momentum buildup. That work showed that there exists a "local connection", in the sense of [2, 3], describing a shape-dependent linear relationship between the instantaneous time derivative of shape and the instantaneous body velocity. While conventional Coulomb-friction based modeling approaches cannot produce such a linear relationship, our algorithm gives a local connection by construction. Furthermore, it scales very favorably with the number of legs, and admits easy parallelization.

### Multi-legged contact forces

Much of the work on multi-legged robots has focused on hexapedal robots, starting with [4, 5]. Hexapods are appealing because they can have a tripod of supporting legs while moving another tripod into place for support on the next step. This static stability allows for the possibility of easier control, and usable motion even when the robot is slipping. Significant prior work was done on the hexapedal robots of the RHex family [6, 7], many variations of which have been built over the past 20 years.
Additional families of multi-legged robots include the RoACH robots [8], the Sprawl robots [9], various multi-legged robots that used "whegs" [10], and several studies of multi-legged robots with large numbers of legs [11, 12, 13].

Unlike more simply supported robots, in a multi-legged system every closed kinematic loop between body and ground can support an internal force that produces no net body acceleration. This implies a sizable space of contact forces which are unobservable from motion tracking, and cannot be measured using the commonly used approach of placing a force plate for the robot to move over [14]. This is because such a force plate only measures the total wrench applied to it - not the interplay of all individual foot contact forces, which may trade off in various ways step-to-step. The authors of [15] measured the individual ground contact forces of RHex by equipping each individual leg with a 3D force sensor.

We set out to understand the results of [1] - why do multiple legs with (presumably) Coulomb friction interactions with the substrate produce a local connection between shape velocity and body velocity which is, algebraically at least, incompatible with such a model. To address the question through experiment we assembled a hexapedal robot with 6-degree-of-freedom (DoF) force-torque sensors at the base of each leg, enabling the contact forces to be measured directly [16]. To address the question through theory, we constructed a highly simplified quasi-static model of body-ground interaction, and replaced the Coulomb friction term, which is linear in normal force but non-smooth and non-linear in contact velocity, with a friction ansatz that is bilinear in normal force and contact velocity [17, Chapter 4]. Here we present a refined version of this model and show that it correctly predicts the interplay of forces measured with the robot. We resolve the seeming contradiction of having the "wrong" (ansatz) friction model produce the "correct" forces, by showing that Coulomb friction and our ansatz produce very similar motion predictions, for reasons we partially demonstrate in section 3 and in [18]. From a computational perspective, we present a numerical study demonstrating that our computation time is almost independent of the number of legs, unlike the behavior of popular state-of-the-art robot simulation tools.

## 2 Results

Our algorithm takes as inputs: (1) the positions \(q_{j}\) and velocities \(\dot{q}_{j}\) of the robot's feet; (2) the spring stiffness \(k_{j}\) of each leg; (3) the friction coefficients \(\mu_{j}\) and friction anisotropy \(w_{xy,j}\). As outputs it provides: (1) body height \(p_{z,0}\), pitch \(\alpha_{x}\), and roll \(\alpha_{y}\) slopes; (2) body velocities \(\dot{p}_{xy,0}\) and \(\dot{\theta}\); (3) 3D forces at the feet \(F_{j}\).

### BigANT with force-torque sensors

To verify our algorithm's accuracy, we built a version of the BigANT robot with a 6-DoF force-torque sensor attached to each leg, and calibrated the sensors to report the contact forces at the feet (see [16]). BigANT is a hexapedal robot which has only one motor per leg. The leg drivetrain is a four-bar linkage designed to provide advantageous foot clearance and gearing [17, Chapter 2.2]. We recorded the robot motion using a marker-based motion tracking system running at 100 frames per second (fps). In particular, we measured the BigANT's foot positions (\(q_{j}\)) in body frame, and we estimated its body position and orientation from the markers attached to the robot chassis. We obtained foot velocities \(\dot{q}_{j}\) by differentiating \(q_{j}\) using a Savitzky-Golay filter.
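A minimal sketch of this differentiation step, assuming the tracked foot positions are stored as a NumPy array sampled at 100 fps; the file name, window length, and polynomial order are illustrative assumptions, not values reported in the paper:

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 100.0                                # motion capture frame rate
q = np.load("bigant_foot_positions.npy")   # hypothetical file: (T, 6, 3) array

# Smoothed first derivative along the time axis; window_length and
# polyorder here are illustrative choices.
q_dot = savgol_filter(q, window_length=11, polyorder=3,
                      deriv=1, delta=1.0 / fps, axis=0)
```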
The remaining inputs to the algorithm were not so easy to determine. Because our model is quasi-static, the mass plays no direct role, except for its appearance in \(Mg\) as the sum-total of normal forces at the feet. The force and moment balance equations remain unchanged regardless of the units selected for force, and these affect only \(Mg\), the stiffnesses \(K\), and the friction coefficients \(\mathrm{H}\). We therefore chose \(Mg=1\). Using marker positions, we estimated the robot body's height, pitch and roll according to §4.1. We estimated the spring constants \(K\) and two anisotropic friction model coefficients per leg (\(\mu_{k}\) and \(w_{xy,k}\) of eqn. 15) using least-squares minimization of a suitable penalty function. In total, we fitted 18 constant model parameters to predict a time series of six 3-dimensional leg forces and 6-DoF body velocity measurements, thus there is little risk of over-fitting.

#### 2.1.1 BigANT: tripod gait

We first ran the robot through an alternating tripod gait driven with a "Buehler clock" [6], with the steering strategy described in [19]. We collected 21 tripod gait trials [20], with 4-5 cycles in each trial, and a total of 102082 frames consisting of \(84\pm 1\) cycles. The motions of the shaft angles were scheduled to have a slow ground contact phase and a fast leg reset phase. We show in figure 1 a comparison of forces and kinematics modeled by our multi-contact algorithm with viscous-Coulomb friction, our algorithm with classical Coulomb friction, and the experimental measurements. We integrated body velocity and showed the robot trajectory in figure 1.

Because our physical modeling assumptions only define contact forces up to a positive scale factor, we chose, for every time-step, a single positive scalar

\[\sigma:=\arg\min_{c}\sum_{k}(c|\hat{F}_{k}|-|F_{k}|)^{2},\]

minimizing the mismatch between the 12-dimensional prediction \(\hat{F}\) and the measured horizontal forces \(F\). We reported prediction error statistics in figure 2.
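Since the objective defining \(\sigma\) is quadratic in \(c\), it has a closed-form minimizer. A sketch (our own helper, not the authors' code):

```python
import numpy as np

def force_scale(F_pred, F_meas):
    """Closed-form minimizer of sum_k (c*|F_pred_k| - |F_meas_k|)^2 over c.

    F_pred, F_meas: (K, 2) arrays of predicted and measured horizontal
    foot forces for one time-step; layout is our assumption."""
    a = np.linalg.norm(F_pred, axis=1)   # |F_pred_k|
    b = np.linalg.norm(F_meas, axis=1)   # |F_meas_k|
    return float(a @ b) / float(a @ a)   # stationary point of the quadratic
```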
The mean and the first and third quartiles of the run time per frame for viscous-Coulomb friction are 0.19 ms (0.18, 0.24). When running a single approximation with the choice of \(\epsilon=10^{-5}\), the Coulomb friction solver takes 3.7 ms (3.1, 3.9) per frame. For a full set of iterations to convergence, Coulomb friction took 10.4 ms (3.25, 15.0) per frame.

#### 2.1.2 BigANT: metachronal gait

We wanted to further study why the viscous-Coulomb friction model gave similar body velocity and force predictions as the classic Coulomb friction model. Since non-slip motions provide little insight into the question of which friction force model to use, we developed a metachronal gait with exacerbated multi-legged slipping events. Each foot contacted the ground for \(2/3\) of a cycle, leading to four feet, two from each side, being in contact with the ground at any time. To ensure that feet slip, we needed to ensure that the distances between contacting feet change while in stance. We facilitated this by ensuring that the contacting feet have vastly incompatible velocities by making the shaft angle a cubic function of time during stance. We collected 12 metachronal slipping gait trials, with the robot moving forward 4-6 cycles in each. In total, the data consisted of 43934 frames and \(60\pm 1\) cycles. The resulting gait produced much more slipping than the tripod gait, with slipping velocities ranging in \((-51.8,111.8)\) mm/s (5% quantile, 95% quantile). We showed the gait in figure 3.

Figure 1: A tripod gait trajectory of BigANT. We plotted the trajectory measured from motion capture (red), allowing it to be compared with the integrated body velocity prediction from our simulation with classical Coulomb friction (blue), and with our viscous-Coulomb friction law (orange). We indicated the body location and orientation (rectangles) and indicated the location of the robot body frame origin (crosses) at the beginning, half-way point and the end. We plotted three Euler angles, three body frame velocities. We calculated roll, pitch, supporting force and body velocity, and integrated turning velocity to get yaw angle (viscous-Coulomb: orange, Coulomb friction: blue). We used motion capture data as ground truth for kinematics (red). We plotted the ground contact force in \(x\), \(y\)-axis, and supporting force ratio in \(z\)-axis of each leg (leg names are: HL hind left, ML mid left, FL front left, HR hind right, MR mid right, FR front right). We used force torque sensor measurements as ground truth for forces (red). We plotted the estimated \(z\)-axis force ratio (purple), estimated \(xy\)-axis friction forces (viscous-Coulomb: orange, Coulomb friction: blue).

Figure 2: Prediction error distributions for the BigANT robot with tripod gait. We plotted the distribution of measurement residuals from the mean (red) to compare with residuals of the predictions of the spring supported model (purple), or residuals of predictions from viscous-Coulomb (orange) and Coulomb friction (blue).

To determine whether viscous-Coulomb or classical Coulomb friction was indicated in these data, we examined the force measurements from the slipping gait. Plotting \(F_{x,j}/F_{z,j}\) against \(v_{x,j}\) (see figure 3(B)) shows the expected structure for classical Coulomb friction, namely a step function.

Figure 3: We plotted: (A) Metachronal gait phase vs. motor shaft angle for all six legs. (B) Magnitude of slipping velocity vs. magnitude of planar force divided by normal force, overlaying points from all six feet. (C) Ground contact forces. (D) Planar forces with mean subtracted (red) and model prediction errors for planar forces (viscous-Coulomb: orange, Coulomb: blue).

### Demonstration with other legged systems

To test whether the proposed model generalizes to other legged systems, we further tested our model on Multipod robots with 6-12 leaf-spring legs and an undulation gait, and on the commercially available quadruped Ghost Robotics Spirit 40.

#### 2.2.1 Multipods with 6, 8, 10, and 12 legs

We used the publicly available Multipod dataset [21, 22, 23] used in [1]. Each contralateral pair of legs in a multipod has two DoF - yaw and roll - and the roll (vertical) motion is further compounded with the spring flexing of the leg itself (see figure 4). Here we used a slice of these data with the robot running at frequency 0.3 Hz and phase offset \(1.35\pi\) to demonstrate our algorithm. We assumed the mass of the robot is linear in the number of legs - an explicit design feature of these robots - and set \(mg=N\). We used \(K=1\) as the spring constant and an isotropic friction model \(\mu_{k}=1\) on all legs.

Figure 4: Modeling of Multipod with undulation gait at frequency 0.3 Hz and phase offset \(1.35\pi\). We showed Multipods with 6-12 legs. We plotted the estimated velocity and trajectory (blue), compared with motion tracking (red). Side velocity plots have the same unit and scale as the heading velocity plots.

#### 2.2.2 Ghost Robotics Spirit

Our physical modeling approach built upon the assumption that friction dissipates the robot's body momentum quickly in comparison to the time-scale of gait motions.
We intentionally selected a commercial quadruped, the Ghost Robotics Spirit, where this assumption breaks down, to test how well the connection-based model could approximate the motion of such a quadruped. We collected 921 frames comprising about 9 cycles of motion (see figure 5). Because our model has no inertia, it tends to produce spurious high-frequency changes in its predictions. To obtain a more realistic time series, we added a simple model of robot inertia in the form of a first-order IIR lowpass filter

\[y_{n}=\gamma y_{n-1}+(1-\gamma)x_{n},\]

where \(x_{n}\) is our raw model prediction and \(y_{n}\) is the filtered prediction. We manually selected \(\gamma=0.15\) to bring the power spectral density (PSD, computed using scipy.signal.welch) of the estimated body velocities close to that of the motion tracking derived velocities.
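In code, this filter is a two-line recursion; a sketch with our own initialization choice (the paper does not state how \(y_{0}\) is seeded):

```python
import numpy as np

def iir_lowpass(x, gamma=0.15):
    """First-order IIR lowpass y_n = gamma*y_{n-1} + (1-gamma)*x_n.

    x: (T, ...) array of raw quasi-static predictions; gamma = 0.15 is the
    value selected in the text. Seeding y[0] = x[0] is our own choice."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = gamma * y[n - 1] + (1.0 - gamma) * x[n]
    return y
```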
### Analysis of running time

We compared the computation speed between our algorithm using the viscous-Coulomb friction model and a widely-used physics simulation engine, MuJoCo [24, v2.2.1], using robot models with 3 to 50 legs. Since our focus is on multi-legged contacts, our models consisted of a round disk with 3 to 50 legs equally spaced on its circumference. We gave each leg two rotational DoF, a vertical translation DoF, and limited leg motions so that their workspaces did not overlap. We tested the execution time of both MuJoCo and our algorithm at 1000 randomly chosen poses and velocities for each number of legs, and re-normalized the running time by dividing by the median execution time of the 3-legged case, to reveal how each simulation approach scaled with the number of legs (see figure 6). While both algorithms reveal an increase in execution times, our algorithm slows down by less than a factor of 3 with 50 legs, compared with a factor of 13 for MuJoCo. This suggests that an optimized implementation of our algorithm could be used for multi-legged motion planning for any practical number of contacts.

Because we are using an inertia-free model of physics in the form of a local connection, the body velocity at any instant is only a function of the shape change and shape velocity at that instant. Hence, in a homogeneous environment all time-steps of a motion plan can be computed in parallel. To demonstrate the performance gains, we simulated 10,000 random poses and velocities of a hexapod robot. We used \(P=1,\cdots,4\) processors to compute the body velocity matrices in parallel, then integrated them in a single linear process (note: this over-estimates the parallelization overhead, since the product of \(N\) matrices can be parallelized to take \(\log_{2}N\) time, but was linear here). In figure 6 we show that the algorithm parallelizes well, with the overhead at four processors falling below 1.5, i.e. a net speedup of \(4/1.5\).

## 3 Discussion

Multi-legged robots (with six or more legs) are not widely studied in the robotics community. One reason might be that the complexity of modeling the multi-contact ground interaction constrains both motion planning and simulations for design. Motivated by a previous discovery [1] - that multi-legged robots move as if they are governed by a local connection, i.e. quasi-statically - we developed a simplified ground-interaction model and validated it experimentally.

Our algorithm consists of simulating a spring supported body in a small-angle approximation for pitch and roll to obtain the vertical foot loadings. We then introduced the viscous-Coulomb ansatz to replace classical Coulomb friction in generating the horizontal forces to produce a linear set of equations which can be solved to give rise to the local connection. Our experimental verification demonstrated that while the actual contact forces were, as expected, governed by classical Coulomb friction, our viscous-Coulomb friction model gave equally good predictions of both contact forces and body velocities, while computing 50 times faster for a hexapod. Our algorithm scales to large numbers of contacts with virtually no change in execution time, and parallelizes with very low overhead.

To understand how a system governed by sliding Coulomb friction can be modeled by a viscous-Coulomb friction model, one may compare the relative error of a viscous friction model to that of the "true" Coulomb friction. Because both models are isotropic, we can assume without loss of generality that the velocity is in the \(x\) direction. Because both models are homogeneous, we can assume without loss of generality that the speed is \(1\). What remains is to study the relative error of predicting the Coulomb friction force for contact velocities close to \((1,0)\) and the prediction obtained by using a viscous drag model instead. In figure 7 we present the contours of relative error when using a single viscous friction model instead of Coulomb friction over the specified range of velocities. The plot demonstrates that with \(|\delta v|<0.2|v|\), the viscous-Coulomb force prediction for velocity \(v+\delta v\) will be within \(2\%\) of the classical Coulomb friction force prediction. The linearity between slipping velocity and friction forces was also observed as an average relationship in numerical simulation [18] and experiments [12].

Figure 5: We showed the commercial quadrupedal robot, Spirit, used in the experiment. We plotted the estimated velocity and trajectory (orange), filtered estimation (blue) and motion tracking (red).

We are thus left with the conclusion that a viscous-Coulomb ansatz model for friction produces very similar predictions to those produced by the classical, tribologically accurate Coulomb friction model. Comparing the motion predictions obtained from both models, they are far more similar to each other than either is to the measured motion, suggesting that the dominant error in these models was not the use of an incorrect friction model. However, the viscous-Coulomb model, in the context of our multi-contact algorithm, provides a significant performance boost. It is faster to compute; it scales better with the number of contacts; and it is easier to parallelize. From the perspective of physics, that our ansatz produces motion plans as accurate as those produced by Coulomb friction, but also provably produces a local connection and principally kinematic motion in the geometric mechanics sense, provides further justification for the observation of [1] that local connection models provide a framework that includes multi-legged locomotion. While the local connection models of [1] were data-driven, here we have shown that such models can be obtained using a principled modeling approach.
The algorithm we presented here provides merely a starting point - it is a means for rapidly and accurately estimating multi-contact robot-environment interactions. Such estimates are building blocks for motion planning, model predictive control, design optimization, and many other potential applications. The algorithm itself can be extended to include contacts with non-flat ground, and the various quantities we estimated by fitting could be converted to online, adaptively tuned quantities. We hope that such advances will stimulate the adoption of multi-legged robots in field robotics, and provide reliable and adaptable bio-inspired locomotion platforms.

Figure 6: (a) Plot of normalized run time of the multi-contact algorithm and MuJoCo simulation versus number of legs. We plotted the distribution of time-step computation times on \(1000\) randomly initialized configurations for each number of legs from \(3\) to \(50\). Plot indicates distribution percentiles \(2.5\) to \(97.5\) (lightly shaded); \(25\) to \(75\) (shaded); and median (dotted line). The execution times are normalized relative to the median execution time of each simulation on the \(3\) leg case. The robot configurations consisted of a disk with \(N\) equally spaced legs on the rim as illustrated by examples with \(N=3\), \(21\) and \(42\). (b) Plot of parallelization overhead splitting the algorithm over \(M\) threads. The overhead is execution time times \(M\) the number of threads, in units of the median execution time on a single thread. Perfectly parallelizable workloads give \(1\) whereas unparallelizable workloads give \(M\). We plotted the workload distributions at \(M=1\ldots 4\) for a hexapod, running \(100\) randomized trajectories each \(10000\) time-steps long (ribbon with same quantiles as in (a)).

Figure 7: Contour of error between the viscous-Coulomb approximation and Coulomb friction around the equilibrium velocity.

### Acknowledgements

We would like to thank Andy Ruina for his key insight in explaining the relationship between our viscous-Coulomb ansatz and Coulomb friction. We also thank the many students who have worked in the BIRDS Lab at the University of Michigan on collecting the large robot motion datasets used herein.

## 4 Materials and methods : algorithm

We propose an algorithm to estimate world frame body velocity from the body shape and shape changing velocity at the current time frame. The algorithm is composed of two steps: (A) find which feet are in contact with the ground and estimate their gravity induced loading using a spring support model; (B) construct an instantaneous local connection model to estimate the planar body velocity.

The inputs to the spring support model (A) are: (1) the 3D positions of the feet \(q_{j}\) in the body frame; (2) the spring stiffness \(k_{j}\) of each leg. The outputs of the spring support model are: (1) body height \(p_{z,0}\), pitch \(\alpha_{x}\), and roll \(\alpha_{y}\) slopes; (2) gravity induced loading on each foot \(F_{z,j}\) and, implicit in that, which feet are in contact with the ground. Once the contacting feet are known, we solve for force and moment equilibrium using a viscous-Coulomb friction ansatz which is bi-linear in \(F_{z,j}\) and the foot sliding velocities in the world frame \(\dot{p}_{xy,j}\), providing a local connection model (B).
The inputs to the connection model are: (1) the 2D positions \(q_{xy,j}\) and velocities \(\dot{q}_{xy,j}\) of the feet in the body frame; (2) the friction coefficients \(\mu_{j}\) and friction anisotropy \(w_{xy,j}\); (3) the gravity induced loading \(F_{z,j}\) computed in (A). The outputs of this local connection model are: (1) body velocities \(\dot{p}_{xy,0}\) and \(\dot{\theta}\); (2) 2D traction forces at the feet \(F_{xy,j}\).

Suppose we are given a system with \(N\) legs (or other contacts), indexed by \(j=1\ldots N\). The time varying foot positions in the body frame of reference are given by \(q_{j}\in\mathbb{R}^{3}\), \(j=1\ldots N\). We assume the transformation from body frame to world frame is given by a time varying rigid body transformation \(\Omega\in SE(3)\). The world frame foot positions \(p_{j}\) and velocities \(\dot{p}_{j}\) are

\[p_{j}:=\Omega q_{j} \tag{1}\]

\[\dot{p}_{j}=\dot{\Omega}q_{j}+\Omega\dot{q}_{j}=\Omega\left[\Omega^{-1}\dot{\Omega}q_{j}+\dot{q}_{j}\right] \tag{2}\]

Let \(p_{0}\) represent the origin of the body frame. We assume a simplified form for the rigid body transformation approximation \(\Omega^{\prime}\), where pitch \(\alpha_{y}\) and roll \(\alpha_{x}\) angles are small, so they can be approximated by their first order Taylor approximation. We also assume the rigid body motion is only time varying in the horizontal plane, i.e. \(\alpha_{x}\), \(\alpha_{y}\) and \(p_{z,0}\) vary so slowly that their derivatives can be approximated by \(0\). The detailed construction of \(\Omega^{\prime}\) and \(\Omega^{\prime-1}\dot{\Omega}^{\prime}\) is given in (3).

\[\Omega^{\prime}:=\begin{bmatrix}\cos\theta&-\sin\theta&\alpha_{x}&p_{x,0}\\ \sin\theta&\cos\theta&-\alpha_{y}&p_{y,0}\\ -\alpha_{x}&\alpha_{y}&1&p_{z,0}\\ 0&0&0&1\end{bmatrix}\qquad\Omega^{\prime-1}\dot{\Omega}^{\prime}:=\begin{bmatrix}0&-\dot{\theta}&0&\dot{p}_{x,0}\\ \dot{\theta}&0&0&\dot{p}_{y,0}\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix} \tag{3}\]

Because of these simplifying assumptions, we can decouple the movements in the \(xy\) plane, and the physical units of vertical and horizontal length are decoupled. We use the planar rotation \(\mathrm{R}_{\theta}:=\left[\begin{smallmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{smallmatrix}\right]\), and \(\mathrm{S}:=\left[\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right]\) to represent foot position and velocity in the world frame \(xy\) plane with:

\[p_{xy,j}=\mathrm{R}_{\theta}q_{xy,j}+p_{xy,0} \tag{4}\]

\[\dot{p}_{xy,j}=\mathrm{R}_{\theta}\left(\dot{\theta}\mathrm{R}_{\theta}^{-1}\mathrm{S}\mathrm{R}_{\theta}q_{xy,j}+\dot{q}_{xy,j}\right)+\dot{p}_{xy,0}+[\alpha_{y},-\alpha_{x}]^{T}q_{z,j} \tag{5}\]
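A direct NumPy transcription of (4)-(5) may make the bookkeeping concrete; this is a sketch with our own helper names, not the authors' code:

```python
import numpy as np

S = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation generator

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def foot_world_kinematics(theta, thetadot, p0_xy, p0dot_xy,
                          alpha_x, alpha_y, q, qdot):
    """Planar world-frame foot position/velocity per eqns (4)-(5).

    q, qdot: (3,) body-frame foot position and velocity; pitch and roll
    (alpha_x, alpha_y) are treated as constants per the quasi-static
    assumption, which is why only the cross term of (5) involves them."""
    Rt = rot(theta)
    p_xy = Rt @ q[:2] + p0_xy                                    # eqn (4)
    p_xy_dot = (Rt @ (thetadot * Rt.T @ S @ Rt @ q[:2] + qdot[:2])
                + p0dot_xy
                + np.array([alpha_y, -alpha_x]) * q[2])          # eqn (5)
    return p_xy, p_xy_dot
```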
### Spring Support Model : finding the contacts

In this section, we show how to decouple the roll, pitch and vertical (z-axis) motion of the robot, and determine which legs are in contact and what supporting force each leg generates. We model the robot as a "body plane", with each leg assumed to be a vertical spring attached to this plane. We assume the system is at force and moment balance. A simplified version of this model, without accounting for roll and pitch, can be found in [25, 17]. A similar spring-leg model was used to study legged animals and robots [26, 13], but they did not specify how to determine which legs are in contact.

Consider a pitch, roll, and height state \(\mathbf{s}=(\alpha_{x},\alpha_{y},p_{z,0})\). From (1), we have

\[p_{z,j}=-\alpha_{x}q_{x,j}+\alpha_{y}q_{y,j}+q_{z,j}+p_{z,0}=p_{z,j}(\mathbf{s}). \tag{6}\]

Taking \(0\) to be the ground level, and up being the positive z-axis direction, those \(p_{z,j}<0\) are in contact with the ground. Assuming the normal supporting force \(F_{z,j}(\mathbf{s})\) is linearly dependent on \(p_{z,j}\), we define the individual leg normal force and the resulting planar moment functions by

\[F_{z,j}(\mathbf{s}):=\left\{\begin{array}{cl}-K_{j}\:p_{z,j}(\mathbf{s})&\text{if }p_{z,j}(\mathbf{s})<0\\ 0&\text{otherwise}\end{array}\right.\qquad M_{x,j}(\mathbf{s}):=-q_{y,j}F_{z,j}(\mathbf{s})\qquad M_{y,j}(\mathbf{s}):=q_{x,j}F_{z,j}(\mathbf{s}), \tag{7}\]

and we denote the total force and moment by

\[F_{z}(\mathbf{s})=\sum_{j=1}^{N}F_{z,j}(\mathbf{s}),\qquad M_{x}(\mathbf{s})=\sum_{j=1}^{N}M_{x,j}(\mathbf{s}),\qquad M_{y}(\mathbf{s})=\sum_{j=1}^{N}M_{y,j}(\mathbf{s}). \tag{8}\]

When \(\alpha_{x}=\alpha_{y}=0\), the total normal force at height \(z\) such that \(p_{z,N_{k}}\leq-z<p_{z,N_{k}+1}\) is \(F_{z}([0,0,z])=-\sum_{j=1}^{N_{k}}K_{j}(z+p_{z,j})\). WLOG, we can sort \(p_{z,j}\) in non-decreasing order, and we let \(z=-p_{z,N_{k}}\), starting with \(N_{k}=1\), where only the lowest foot is in contact. We increase \(N_{k}\) until \(F_{z}([0,0,-p_{z,N_{k}}])\leq Mg<F_{z}([0,0,-p_{z,N_{k}+1}])\), and then linearly interpolate (the total force being linear in \(-p_{z,0}\) with slope \(\sum_{j=1}^{N_{k}}K_{j}\) on this interval) to find \(z^{*}\) such that force balance is achieved. For that \(z^{*}\), legs \(k=1\ldots N_{k}\) are in contact with the ground. Throughout the paper, we use the index \(k\) to vary only over legs which are in contact with the ground based on this criterion, and denote by \(F_{z,k}\) the normal force of those legs.

Next, we solve for the full state, \(\mathbf{s}\), containing the small pitch and roll angles, and the body height, maintaining vertical force balance, and moment balance of the moments generated by the normal forces, i.e. \(F_{z}-Mg=M_{x}=M_{y}=0\). We start with an initial condition \(\mathbf{s}_{0}=(0,0,z^{*})\), with \(F_{z}=Mg\). Taking \(\alpha_{x}\), \(\alpha_{y}\) and \(z^{*}\) as unknowns and holding the legs in contact constant, these values are a solution \(\mathbf{s}_{0}^{*}\) of a 3-dimensional linear system. We check whether the legs in contact at \(\mathbf{s}_{0}^{*}\) are the same as in \(\mathbf{s}_{0}\); if so, then \(\mathbf{s}_{0}^{*}\) is the result from our model.
If not, then we search along the line segment starting at \(\mathbf{s}_{0}\) and ending at \(\mathbf{s}_{0}^{*}\) for the first change in contacts, which must therefore occur on a plane describing the contact condition for the first leg which would change contact state going along this line segment. This transition point is taken as \(\mathbf{s}_{1}\), and the process repeats for the new legs in contact. Because contact forces are zero on the corresponding contact condition plane, \(F_{z}\), \(M_{x}\) and \(M_{y}\) are continuous through the change in contacting legs. The detailed expression of the equations is in §6.1.1.

As the search iterates, we may encounter a state where only one or two legs are in contact, and the linear force torque balance equation becomes under-determined. To resolve these states we include an additional assumption - that the origin of the body plane is the center of mass of the robot body. Under this assumption, when there are fewer than three legs in contact, the COM generates a moment around the contact point(s) and we tilt the body plane, i.e. change \(\alpha_{x}\) and \(\alpha_{y}\), approximating the rotation this moment would induce, until an additional leg contacts the ground. We proceed to describe the tilting directions as if they were rotations with an axis. However, the actual linear map they describe is a shearing operation whose neutral plane intersects the \(xy\) plane on a line containing the rotation axis. When only one leg is in contact, the rotation is in the plane containing the leg and the COM, around the contact point. When two legs are in contact, the rotation is around the line connecting their contact points, and in the direction of the moment the net \(F_{z}\) generates around this line. The detailed solution is in §6.1.2 and §6.1.3.

We used a 2D "robot" in the \(xz\)-plane to visually illustrate our search algorithm in figure 8. In this 2D case, the algorithm searches for robot height and pitch using foot positions in \(x,z\) coordinates. The 3D model extends the contact switching lines in the 2D search space to planes in 3D, and its visualization can be found in figure 9.

Figure 8: Visualization of search for contact state in a 2D "robot". We indicated the height and pitch (\(p_{z,0}\), \(\alpha\)) states searched (labels 0-3 of (a)), and visualized the pose and contacts of the "robot" in each of these states (corresponding labels of plots in (b)). Each "robot" leg (zigzag lines in (b)) defines a corresponding codimension 1 plane (line here) in (\(z_{0}\), \(\alpha\)) at which it contacts the ground (colored lines in (a) with color same as the leg in (b), (c)). At a \(p_{z,0}\) above the plane, the leg is in the air; below it the leg will be in contact and generate normal forces. With each state being searched (number label in (a)), there is a closed-form solution of the force equilibrium, which we connect to that state with a line interval (black in (a)). If the equilibrium lies in the same contact state the algorithm terminates (star; step 3). Otherwise the portion of the line segment in another contact state is counter-factual (black dashed in (a)). Instead, we switch to the new contact state and solve again. Each such transition between contact states lies on a plane corresponding to the leg that made contact (black dot in (a); circled leg in (b)).
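For concreteness, here is a simplified sketch of the spring support step under two stated simplifications: it replaces the paper's line-segment walk with a naive fixed-point alternation between the 3-by-3 balance solve and the contact-set update, and it omits the under-determined one- and two-leg tilting cases (it assumes at least three legs end up in contact). Names and tolerances are ours:

```python
import numpy as np

def heights(s, q):
    """Foot heights p_z,j(s) per eqn (6); q is (N, 3), s = (ax, ay, pz0)."""
    ax, ay, pz0 = s
    return -ax * q[:, 0] + ay * q[:, 1] + q[:, 2] + pz0

def solve_balance(q, K, in_contact, Mg):
    """3x3 linear solve of F_z = Mg, M_x = M_y = 0 for a fixed contact set.
    F_z,j(s) = c_j . s + d_j with c_j = (K_j qx_j, -K_j qy_j, -K_j)."""
    c = np.stack([K * q[:, 0], -K * q[:, 1], -K], axis=1)
    d = -K * q[:, 2]
    c, d, qc = c[in_contact], d[in_contact], q[in_contact]
    A = np.stack([c.sum(axis=0),                   # total vertical force row
                  -(qc[:, 1:2] * c).sum(axis=0),   # M_x row
                  (qc[:, 0:1] * c).sum(axis=0)])   # M_y row
    b = np.array([Mg - d.sum(), (qc[:, 1] * d).sum(), -(qc[:, 0] * d).sum()])
    return np.linalg.solve(A, b)

def spring_support(q, K, Mg=1.0, max_iter=100):
    """Bisection for the level (alpha = 0) pose, then naive fixed-point
    alternation between balance solve and contact update (a sketch;
    the paper walks the segment to the first contact change instead)."""
    total = lambda pz0: np.sum(K * np.maximum(0.0, -(q[:, 2] + pz0)))
    lo = -q[:, 2].max() - Mg / K.min() - 1.0  # deep enough: force > Mg
    hi = -q[:, 2].min()                       # shallow: no contact, zero force
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > Mg else (lo, mid)
    s = np.array([0.0, 0.0, 0.5 * (lo + hi)])
    for _ in range(max_iter):
        mask = heights(s, q) < 0
        s = solve_balance(q, K, mask, Mg)
        if np.array_equal(heights(s, q) < 0, mask):
            break                             # contact set is self-consistent
    Fz = np.where(heights(s, q) < 0, -K * heights(s, q), 0.0)
    return s, Fz
```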
### Local connection model : traction forces

After knowing which legs are in contact and their gravity loading, we solve for the body planar velocity \((\dot{p}_{xy,0},\dot{\theta})\), obtained by imposing force and moment balance. While classical approaches suggest Coulomb friction is the correct tribological model for sliding dry contacts, we show that a viscous-Coulomb ansatz which is bilinear in both loading force and sliding velocity makes for a linear system of equations that leads to a local connection model.

#### 4.2.1 Friction forces

The classical approaches to mechanics suggest that the contact between foot and ground should be modeled by Coulomb friction (middle term below):

\[F_{xy,k}=-\frac{\dot{p}_{xy,k}}{\|\dot{p}_{xy,k}\|}\mu_{k}F_{z,k}=\mathrm{H}_{k}\dot{p}_{xy,k} \tag{9}\]

The choice of \(\mathrm{H}_{k}=-\mu_{k}F_{z,k}/\|\dot{p}_{xy,k}\|\) would provide equality, but this would produce the well-known problem of singularity at \(\dot{p}_{xy,k}=0\). Define \(v_{k}:=\|\dot{p}_{xy,k}\|\). We explore the tractability of alternative friction models using

\[\mathrm{H}_{k}:=-\mu_{k}F_{z,k}\frac{\varepsilon+v_{k}}{\varepsilon+v_{k}^{2}}. \tag{10}\]

When \(\varepsilon\to 0\), \(\mathrm{H}_{k}\) recovers the Coulomb friction model; when \(\varepsilon\rightarrow\infty\), \(\mathrm{H}_{k}\rightarrow-\mu_{k}F_{z,k}\), and the friction force becomes \(F_{xy,k}=-\mu_{k}F_{z,k}\dot{p}_{xy,k}\), a combination of viscous and Coulomb friction, depending on both slipping rate and normal force. We further deconstruct (9) in terms of \(\dot{\theta}\), \(\dot{p}_{x,0}\), and \(\dot{p}_{y,0}\):

\[F_{xy,k}=\mathrm{H}_{k}\left(\mathrm{R}_{\theta}\left[\dot{\theta}\mathrm{R}_{\theta}^{-1}\mathrm{S}\mathrm{R}_{\theta}q_{xy,k}+\dot{q}_{xy,k}\right]+\dot{p}_{xy,0}\right)=\left(\mathrm{H}_{k}\mathrm{S}\mathrm{R}_{\theta}q_{xy,k}\right)\dot{\theta}+\mathrm{H}_{k}\,\dot{p}_{xy,0}+\left(\mathrm{H}_{k}\mathrm{R}_{\theta}\dot{q}_{xy,k}\right) \tag{11}\]

### Solving for planar body velocity

From our quasi-static assumption, we have horizontal plane force and moment balance, i.e. \(\sum F_{x,k}=\sum F_{y,k}=\sum M_{z,k}=0\). From horizontal force balance, using (11), we obtain two equations in \(\dot{\theta}\), \(\dot{p}_{x,0}\), and \(\dot{p}_{y,0}\):

\[0=\sum_{k=1}^{N_{k}}F_{xy,k}=\left(\sum_{k=1}^{N_{k}}\mathrm{H}_{k}\mathrm{S}\mathrm{R}_{\theta}q_{xy,k}\right)\dot{\theta}+\left(\sum_{k=1}^{N_{k}}\mathrm{H}_{k}\right)\dot{p}_{xy,0}+\left(\sum_{k=1}^{N_{k}}\mathrm{H}_{k}\mathrm{R}_{\theta}\dot{q}_{xy,k}\right) \tag{12}\]

The moment exerted by a leg is given by:

\[M_{z,k}=p_{xy,k}^{\mathsf{T}}\mathrm{S}F_{xy,k}=\left(p_{xy,k}^{\mathsf{T}}\mathrm{S}\mathrm{H}_{k}\mathrm{S}\mathrm{R}_{\theta}q_{xy,k}\right)\dot{\theta}+\left(p_{xy,k}^{\mathsf{T}}\mathrm{S}\mathrm{H}_{k}\right)\dot{p}_{xy,0}+\left(p_{xy,k}^{\mathsf{T}}\mathrm{S}\mathrm{H}_{k}\mathrm{R}_{\theta}\dot{q}_{xy,k}\right) \tag{13}\]

This gives the third and final equation:

\[0=\sum_{k=1}^{N_{k}}M_{z,k}=\left(\sum_{k=1}^{N_{k}}p_{xy,k}^{\mathsf{T}}\mathrm{S}\mathrm{H}_{k}\mathrm{S}\mathrm{R}_{\theta}q_{xy,k}\right)\dot{\theta}+\left(\sum_{k=1}^{N_{k}}p_{xy,k}^{\mathsf{T}}\mathrm{S}\mathrm{H}_{k}\right)\dot{p}_{xy,0}+\left(\sum_{k=1}^{N_{k}}p_{xy,k}^{\mathsf{T}}\mathrm{S}\mathrm{H}_{k}\mathrm{R}_{\theta}\dot{q}_{xy,k}\right) \tag{14}\]

In the case when \(\varepsilon\rightarrow\infty\), \(\mathrm{H}_{k}\) is independent of the slip rate \(\dot{p}_{xy,k}\), and the three force and moment balance equations are linear in the body velocity \(\dot{p}_{xy,0}\), \(\dot{\theta}\) and the foot velocities in the body frame \(\dot{q}_{xy,k}\). One could solve the system by a \(3\times 3\) matrix inversion. The detailed expression of the solution is derived in §6.2.
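Assembling (12) and (14) into a single 3-by-3 linear system makes the \(\varepsilon\rightarrow\infty\) solve explicit. The sketch below uses the isotropic ansatz \(\mathrm{H}_{k}=-\mu_{k}F_{z,k}\mathrm{I}_{2}\); since \(\sum_{k}F_{xy,k}=0\), moments may be taken about the body origin, so \(p_{xy,k}=\mathrm{R}_{\theta}q_{xy,k}\) without loss of generality. Function names are ours:

```python
import numpy as np

S = np.array([[0.0, -1.0], [1.0, 0.0]])

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def planar_body_velocity(theta, q_xy, qdot_xy, Fz, mu):
    """Solve eqns (12) and (14) for (pdot_x0, pdot_y0, thetadot).

    q_xy, qdot_xy: (K, 2) positions/velocities of the K contacting feet
    (body frame); Fz, mu: (K,) normal loads and friction coefficients.
    Returns the body velocity and the per-foot planar forces F_xy,k."""
    Rt = rot(theta)
    A, b = np.zeros((3, 3)), np.zeros(3)
    for qk, qdk, fz, mk in zip(q_xy, qdot_xy, Fz, mu):
        H = -mk * fz * np.eye(2)      # the anisotropic (15) would drop in here
        p = Rt @ qk                   # foot offset used in the moment terms
        cth = H @ S @ Rt @ qk         # thetadot coefficient, from (11)
        cq = H @ Rt @ qdk             # known shape-velocity term
        A[:2, :2] += H;  A[:2, 2] += cth;  b[:2] -= cq
        pS = p @ S                    # row vector p^T S
        A[2, :2] += pS @ H;  A[2, 2] += pS @ cth;  b[2] -= pS @ cq
    sol = np.linalg.solve(A, b)
    pdot0, thetadot = sol[:2], sol[2]
    F_xy = np.array([(-mk * fz) * (thetadot * S @ Rt @ qk + Rt @ qdk + pdot0)
                     for qk, qdk, fz, mk in zip(q_xy, qdot_xy, Fz, mu)])
    return pdot0, thetadot, F_xy
```

Because the solve is linear in \(\dot{q}_{xy,k}\), reading off the columns of the solution operator yields the local connection directly.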
In addition to classic Coulomb friction and viscous friction, we consider the possibility that \(\mathrm{H}_{k}\) can be dependent on slipping direction, modeling forces generated by a wheel, skate, claw, or otherwise non-isotropic frictional contact. We consider an anisotropic viscous friction model, where \(\mathrm{H}_{k}\) is a symmetric positive semidefinite matrix, \(\mathrm{H}_{k}(q):=\mathrm{R}_{\theta}\mathrm{H}_{q,k}(q)\mathrm{R}_{\theta}^{-1}\), taken to be independent of \(\dot{p}_{xy,k}\), but (possibly non-linearly) dependent on all elements of \(q\). We assume that each contact is associated with an enhanced traction direction and associated magnitude, expressed in body coordinates as a vector \(w_{xy,k}\), defined as:

\[\mathrm{H}_{k}:=-\mu_{k}F_{z,k}\mathrm{R}_{\theta}(\mathrm{I}_{2}+w_{xy,k}w_{xy,k}^{\mathsf{T}})\mathrm{R}_{\theta}^{-1} \tag{15}\]

This changes the circular cross-section of the friction cone into an ellipsoidal one. Even with this dependence, the equations (14) and (12) are still linear in the velocities \(\dot{p}_{xy,0}\), \(\dot{\theta}\) and \(\dot{q}_{xy,k}\). Similar to §4.3, the body velocity \(\dot{p}_{xy,0}\), \(\dot{\theta}\) can still be solved for linearly with respect to the shape changing velocity \(\dot{q}_{xy,k}\), giving a general form:

\[\mathrm{R}_{\theta}^{-1}\dot{p}_{xy,0}=:\sum_{k}A_{xy,k}(q)\dot{q}_{xy,k},\qquad\dot{\theta}=:-\sum_{k}A_{\theta,k}(q)\dot{q}_{xy,k},\]

where the \(A_{\cdot,k}(q)\) matrices form the kinematic term in the reconstruction equation of geometric mechanics.

## 5 Materials and methods : robot experiments

### Motion capture

We used 10 Qualisys Oqus-310+ cameras as a marker-based tracking system, running at 100 fps with software QTM 2.17 build 4000.

#### 5.1.1 Force/Torque measuring BigANT

The BigANT robot has six 1-DoF legs, each actuated by a servo motor (Robotis Dynamixel MX106) via a 4-bar mechanism we chose for both foot clearance and advantageous instantaneous gearing ratios. The detailed design and characteristics can be found in [17, Chapter 2.2]. We manufactured the legs from foamcore (Elmer's Products Inc. 3/8" foam board) and fiber reinforced tape (3M Scotch #8959) using the "plates and reinforced flexures" (PARF) technique of [27]. We laser-cut the base plate for the BigANT chassis from a 1/4" ABS plastic plate. We then installed a 6-DoF force/torque sensor (ATI Gamma F/T sensor) underneath each leg, and used the vendor's wireless F/T sensor system to communicate the measurements to the controlling host computer at 100 Hz. We calibrated the F/T sensors according to [16].

#### 5.1.2 Multipod robots

We constructed the Multipod robots from 3 to 6 pairs of 2-DoF segments with spring steel cantilevers as legs. The mechanical design and characteristics can be found in [17, Chapter 2.3]. We provide detailed documentation of the motion capture data in [1], and the dataset itself can be found in [21, 22, 23]. To model the body motion of unloaded spring legs, we computed the location of each foot relative to the rigid segment to which its spring was connected using motion tracking frames in which the leg was unloaded. We then extrapolated this unloaded position to the frames where the leg was loaded.

#### 5.1.3 Ghost Robotics Spirit

We used a Spirit 40 v1.0 Proto Q-UGV Robot from Ghost Robotics, operated through a Samsung Galaxy A50 with on-board firmware version 0.12.6.

### Parameter fitting of \(K\), \(\mu\) and \(\mathrm{H}\)

We estimated the spring constants \(K\) by minimizing the L2-norm difference between the modeled and measured \(F_{z}\) distribution among contacting legs, while adding the coefficient of variation of \(K\) between legs as a regularization penalty. We assumed anisotropic friction coefficients \(\mathrm{H}_{k}\) (see eqn. 15), and inferred the parameters \(\mu_{k}\) and \(w_{xy,k}\) for each leg by minimizing the L2-norm error between measured forces and forces calculated from the slipping velocity measured by motion capture. We used scipy.optimize.least_squares for both of these parameter estimation minimizations.
We solved for classical Coulomb friction with scipy.optimize.root using the LM algorithm, with the solution of the previous time-step as an initial guess. Because Coulomb friction is non-smooth, we approximated the solution starting from \(\varepsilon=10^{-5}\) (from eqn. 10), using each solution as an initial condition for another solution with smaller \(\varepsilon\), until the relative change in the L2-norm of two consecutive solutions was less than \(10^{-3}\) - a threshold less than \(5\%\) of the median ground speed measured. In the very rare cases (0.12% of the BigANT tripod gait dataset) where the Coulomb friction solver failed to converge, we used the ground truth velocity as the initial condition to obtain the Coulomb friction solution, and these converged.
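A sketch of this continuation toward Coulomb friction, using eqn. (10) inside the force/moment residual; the shrink factor and helper names are our assumptions, and only library calls named in the text (scipy.optimize.root with the LM method) are used:

```python
import numpy as np
from scipy.optimize import root

S = np.array([[0.0, -1.0], [1.0, 0.0]])

def residual(u, Rt, q_xy, qdot_xy, Fz, mu, eps):
    """Net planar force and moment for body velocity u = (px, py, thetadot)."""
    pdot0, thetadot = u[:2], u[2]
    out = np.zeros(3)
    for qk, qdk, fz, mk in zip(q_xy, qdot_xy, Fz, mu):
        pdot = thetadot * S @ Rt @ qk + Rt @ qdk + pdot0  # foot slip velocity
        v = np.linalg.norm(pdot)
        H = -mk * fz * (eps + v) / (eps + v * v)          # eqn (10)
        F = H * pdot
        out[:2] += F
        out[2] += (Rt @ qk) @ S @ F                       # moment about origin
    return out

def coulomb_velocity(Rt, q_xy, qdot_xy, Fz, mu, u0, eps=1e-5, tol=1e-3):
    """Shrink eps toward 0, warm-starting each solve from the previous one,
    until consecutive solutions agree to the stated relative tolerance."""
    u_prev = np.asarray(u0, dtype=float)
    while True:
        sol = root(residual, u_prev,
                   args=(Rt, q_xy, qdot_xy, Fz, mu, eps), method='lm')
        if np.linalg.norm(sol.x - u_prev) <= tol * max(np.linalg.norm(u_prev), 1.0):
            return sol.x
        u_prev, eps = sol.x, 0.1 * eps                    # shrink factor: ours
```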
2309.15969
Optical tuning of the diamond Fermi level measured by correlated scanning probe microscopy and quantum defect spectroscopy
Quantum technologies based on quantum point defects in crystals require control over the defect charge state. Here we tune the charge state of shallow nitrogen-vacancy and silicon-vacancy centers by locally oxidizing a hydrogenated surface with moderate optical excitation and simultaneous spectral monitoring. The loss of conductivity and change in work function due to oxidation are measured in atmosphere using conductive atomic force microscopy (C-AFM) and Kelvin probe force microscopy (KPFM). We correlate these scanning probe measurements with optical spectroscopy of the nitrogen-vacancy and silicon-vacancy centers created via implantation and annealing 15-25 nm beneath the diamond surface. The observed charge state of the defects as a function of optical exposure demonstrates that laser oxidation provides a way to precisely tune the Fermi level over a range of at least 2.00 eV. We also observe a significantly larger oxidation rate for implanted surfaces compared to unimplanted surfaces under ambient conditions. Combined with knowledge of the electron affinity of a surface, these results suggest KPFM is a powerful, high-spatial resolution technique to advance surface Fermi level engineering for charge stabilization of quantum defects.
Christian Pederson, Rajiv Giridharagopal, Fang Zhao, Scott T. Dunham, Yevgeny Raitses, David S. Ginger, Kai-Mei C. Fu
2023-09-27T19:41:23Z
http://arxiv.org/abs/2309.15969v1
# Optical tuning of the diamond Fermi level measured by correlated scanning probe microscopy and quantum defect spectroscopy

###### Abstract

Quantum technologies based on quantum point defects in crystals require control over the defect charge state. Here we tune the charge state of shallow nitrogen-vacancy and silicon-vacancy centers by locally oxidizing a hydrogenated surface with moderate optical excitation and simultaneous spectral monitoring. The loss of conductivity and change in work function due to oxidation are measured in atmosphere using conductive atomic force microscopy (C-AFM) and Kelvin probe force microscopy (KPFM). We correlate these scanning probe measurements with optical spectroscopy of the nitrogen-vacancy and silicon-vacancy centers created via implantation and annealing 15-25 nm beneath the diamond surface. The observed charge state of the defects as a function of optical exposure demonstrates that laser oxidation provides a way to precisely tune the Fermi level over a range of at least 2.00 eV. We also observe a significantly larger oxidation rate for implanted surfaces compared to unimplanted surfaces under ambient conditions. Combined with knowledge of the electron affinity of a surface, these results suggest KPFM is a powerful, high-spatial-resolution technique to advance surface Fermi level engineering for charge stabilization of quantum defects.

## I Introduction

Quantum point defects in wide band gap semiconductors are utilized for quantum sensing and quantum information applications due to their long spin coherence times and spin-selective optical transitions [1; 2]. For a given application, a particular charge state of the defect is required. This charge state can be passively controlled by Fermi level engineering of the host crystal, typically through bulk impurity doping [3; 4; 5]. However, control over the Fermi level can be challenging in wide band gap semiconductors due to limits in dopability [6]. For quantum sensors and integrated quantum photonics, the defect must be close (nanometers to hundreds of nanometers) to the surface. This opens up an additional control knob, surface Fermi level engineering. Control over the surface Fermi level can be realized via active techniques such as gating [7; 8] or passive techniques such as chemical surface functionalization [9; 10; 11; 12]. Control over the surface termination is additionally attractive because it not only affects the defect charge state, it also affects defect spin [13] and optical properties [14]. This surface sensitivity indicates a pressing need for engineering atomically ordered surfaces with the desired electronic structure.

A large body of existing work in diamond surface science has focused on electronic applications [15]. Now, the impact of these surfaces on quantum point defect properties in diamond is receiving attention. Optical measurements have directly linked the ratio of fluorescence of near-surface neutral nitrogen-vacancy centers NV\({}^{0}\) and negatively charged NV\({}^{-}\) centers to the degree of oxidation and hydrogenation [9; 10]. Later work has shown similar switching between the neutral silicon-vacancy SiV\({}^{0}\) and negatively charged SiV\({}^{-}\) [11]. In both cases, the surface termination was switched in a binary fashion between an oxidized surface and a hydrogenated surface. However, a desired defect may require an intermediate termination.
For example, in the case of SiV\({}^{0}\), theoretical calculations of the formation energies suggest that near-surface SiV may not be in the desired neutral charge state for strongly hydrogenated or strongly oxidized surfaces [16]. To date, the primary method for determining the effectiveness of the surface treatment is direct optical probing of the quantum defects, in particular when the device size is small (microns) [8] compared to standard large-area surface characterization techniques such as ultraviolet photoelectron spectroscopy (millimeters). In this work, we combine optical probing with conductive atomic force microscopy (C-AFM) and Kelvin probe force microscopy (KPFM) techniques. The scanning probe techniques work under ambient conditions, high vacuum, on fabricated electronic devices, and even under optical excitation, enabling measurements under practical experimental conditions. We find a strong correlation between the scanning probe measurements and the charge state and emission intensity of the quantum defects. We further utilize the method to study laser-assisted oxidation, which provides controlled surface modification with optical spatial resolution. Finally, correlated measurements show a significant acceleration of oxidation of hydrogen-passivated surfaces in ambient conditions in the presence of implantation damage. ## II Sample preparation and analysis techniques Two chemical vapor deposition diamond substrates (Element Six, ELSC grade), named A and B, were implanted with Si and N. The nominal nitrogen and boron substrate concentrations are specified as less than 5 ppb and 1 ppb, respectively. Prior to implantation, the top 5 microns of the surface were etched in an Ar/Cl RIE plasma and oxygen plasma [14] to remove surface damage from polishing and prior implantation. We co-implant both samples with nitrogen and silicon at fluences of \(10^{10}\,\mathrm{cm}^{-2}\) and \(10^{11}\,\mathrm{cm}^{-2}\), respectively. Sample A (B) has a N\({}^{+}\) and Si\({}^{+}\) acceleration energy of 10 keV and 19 keV (18 keV and 34 keV), respectively. Implantation is performed at an angle of 7 degrees from normal through a TEM mask, resulting in implantation squares (Fig. 1(a)). The mean implantation depth for both impurities is 15 nm in sample A and 25 nm in sample B, as simulated by SRIM [17]. The implantation depth is chosen such that the quantum defects lie in the band-bending region of the surface, and thus are sensitive to surface terminations [9]. The implanted N was used for forming NV centers, whose charge state is very sensitive to band-bending and relatively well understood, whereas Si was used for forming SiV in order to further our understanding of SiV charge state stabilization. Sample A (B) is annealed at 800 \({}^{\circ}\)C (1200 \({}^{\circ}\)C) for 2 hours under vacuum (\(10^{-7}\) mbar) to mobilize the vacancies produced by ion implantation and form both NV and SiV centers, and to anneal out residual implantation damage. Following the vacuum anneal, the samples are boiled in a 1:1:1 mixture of sulfuric, nitric and perchloric acid for 1 hour and subsequently annealed at ambient pressure in oxygen at 465 \({}^{\circ}\)C for 4 hours to form a starting oxidized surface [10]. Next, the sample surface was exposed to a cold hydrogen plasma treatment which was developed to achieve record hydrogen termination on graphene [18]. 
In conventional direct-current (DC) and radio-frequency (RF) plasma processing reactors, high energy hydrogen ions generated at the plasma-substrate interface (the so-called sheath) and energetic atoms generated by charge-exchange collisions between ions and atoms can induce substantial damage by irreversible etching and sputtering. In contrast, the cold hydrogen plasma is generated by non-thermal electrons in crossed electric and magnetic fields, a so-called cross-field or ExB plasma discharge (Fig. 1(b)). The sample is held beneath the confined plasma to expose the surface to hydrogen radicals (rather than ions). The electric field is directed inward, _i.e._ away from the substrate, preventing energetic ions from impinging on the substrate. AFM measurements confirm a similar RMS roughness before and after H treatment (Fig. 1(d)). Additionally, the sample surface becomes hydrophobic, which is expected for hydrogen passivation [15]. The sample mount exposes a circular window of the sample to the hydrogen plasma, with the sample edges masked (Fig. 1(c)). As we see further below, this masking allows us to probe the continuous region between the oxidized and hydrogenated surfaces. Confocal photoluminescence imaging and spectroscopy are performed at STP to monitor the luminescence from the NV\({}^{0}\), NV\({}^{-}\), and SiV\({}^{-}\) centers. Excitation and oxidation are both performed with a 532 nm laser focused to a 500 nm diameter spot. The ratio of the NV\({}^{-}\) to NV\({}^{0}\) luminescence serves as a proxy for the Fermi level in the low-excitation-power limit [9]. For KPFM and C-AFM measurements, the ground electrode is patterned onto the face of the diamond using e-beam evaporation through a shadow mask, and wire bonded to a chip-carrier. The 140-nm-thick Au electrode is deposited on a 10-nm-thick titanium adhesion layer.

Figure 1: (a) Schematic of implantation geometry. The diamond substrate is implanted through a TEM grid resulting in squares of implanted N and Si. The squares are 28 μm (90 μm) wide in sample A (B). (b) Schematic of cold plasma reactor. (c) Schematic of the sample holder. A large circular window on the diamond surface is exposed to the hydrogen radicals, while the edges are masked. (d) AFM topography measurements before and after the hydrogenation show similar surface roughness. (e) Water wetting angle measurements before and after hydrogenation show the expected increase in hydrophobicity.

This lateral geometry allows both KPFM measurements and C-AFM measurements on the same area; the use of a backing plane electrode on a thick insulating substrate would prevent C-AFM measurements. We performed KPFM using both amplitude-modulated (AM-KPFM) and frequency-modulated (FM-KPFM) methods. FM-KPFM is sensitive to the electrostatic force gradient, rather than the electrostatic force as in AM-KPFM, and therefore yields better spatial resolution and a more accurate contact potential difference (CPD), the difference between the work function of the sample and the tip. AM-KPFM is more efficient at scanning large areas quickly [19]. The absolute value of the contact potential difference measured in the KPFM signal is dependent upon the material and geometry of the tip. All AFM measurements were taken with an MFP3D AFM (Oxford Instruments) using metal-coated cantilevers (KPFM: BudgetSensors ElectriTap190-G cantilevers; C-AFM: BudgetSensors CONTG-B contact-mode cantilevers). FM-KPFM was performed using custom code written in Igor Pro, as in previous work [20]. 
KPFM data were taken with lift heights of typically 10 nm. ## III Correlated PL, KPFM and C-AFM imaging We perform correlated C-AFM, KPFM and PL spectroscopy around a narrow transition region between the oxygen and hydrogen terminations (Fig. 2(a)). C-AFM measurements (Fig. 2(b)) confirm that the hydrogen-terminated surface on the right is conductive while the left oxygen-terminated surface is insulating. This provides further verification that the hydrogen termination has pushed the Fermi level into the valence band. Additionally, a dramatic increase in resistance is observed above the implantation squares, suggesting either a loss of hydrogenation in these regions or increased scattering due to interactions between the holes and the residual implantation damage. Further measurements below support the former conclusion. Frequency-modulated KPFM in Fig. 2(c) in the same region shows a 250-320 mV potential increase from the oxidized to hydrogenated surface. The size and sign of the change in surface potential are similar to prior reports, which range between 100-300 mV [21; 22]. Similar to C-AFM, one of the most striking features of the KPFM image is the ability to image the implantation squares. Additionally, we note that the CPD does not monotonically increase from the oxygen-terminated to hydrogen-terminated side, but rather has a maximum between the two regions. This behavior is observed even in the region that has not been implanted and will be discussed further below. In photoluminescence imaging, a spectral band of 653-815 nm was collected which includes PL contributions for NV\({}^{0}\), NV\({}^{-}\) and SiV\({}^{-}\). The PL image in Fig. 2(d) shows bright PL emission under the oxidized surface, with very low fluorescence under the hydrogenated surface. Spatially resolved spectra across the square which lies on the termination boundary show that not only does the total intensity change (black spectra), but the relative contributions of the three defects, NV\({}^{-}\), NV\({}^{0}\) and SiV\({}^{-}\), vary as well. These relative contributions are extracted by fitting the total spectrum to a weighted sum of the individual defect spectra (Fig. 2(e-i)). Fig. 2(j) depicts the change in PL intensity for the three defects across the termination. The negatively charged defects monotonically quench with hydrogenation, while the neutral NV\({}^{0}\) PL increases but eventually quenches at the highest exposure. These results are consistent with prior reports of NV luminescence after hydrogen plasma treatment which show NV centers can be left in the neutral or even optically dark positively-charged NV\({}^{+}\) state [9], dependent on the level of hydrogenation.

Figure 2: (a) Schematic of sample. (b) C-AFM image with a 5 V sample bias. (c) Stitched FM-KPFM image taken with a 10 nm lift height. A constant -70 mV offset is applied to the right scan to match the CPD in the overlapping region. (d) Confocal photoluminescence image obtained using 1 mW of 532 nm excitation. (e)-(i) Photoluminescence spectra at the spots indicated in (d) obtained with 0.8 mW of 532 nm excitation. (j) Total PL intensity for the three defects for the spots indicated in (d).

## IV Optical control over the surface Fermi level In this section, we utilize KPFM to confirm the oxidation of the hydrogen-terminated surface via laser irradiation. Fig. 3(a) depicts a confocal image of a hydrogen-terminated region of Sample A. 
Prior to imaging, a "W" was patterned on an implantation square using a 20 mW 532 nm laser in air focused to a 500 nm-diameter spot. The exposed implantation square is visibly brighter, similar to the oxygen-terminated surface. AM-KPFM of the same region (Fig. 3(b)) reveals that the bright PL corresponds to a decrease in the work function. In AM-KPFM, due to contributions of the work function far from the sharp tip, the magnitude of this decrease is not quantitative. These correlated measurements suggest that the level of hydrogen termination, and thus the surface Fermi level, can be tuned with laser irradiation. This is perhaps not too surprising given that the etching of diamond via laser-induced local oxidation has been reported at much higher laser powers [23]. At the laser intensities used in this work, no etching is detectable via atomic-force microscopy. We tested the hypothesis that the oxidation can be continuously tuned on sample B. In Fig. 3(c), a time trace of the PL intensity from SiV\({}^{-}\) at a single excitation spot is plotted under 30 mW excitation. The increase in intensity is observed to saturate after around 15 minutes. This relatively slow process and the optical spatial resolution enable very precise control over the Fermi level. In contrast to sample A, the spectra taken on sample B after exposure show that SiV and not NV contribute the majority of the signal. The lower formation yield of sample B NV centers is attributed to the higher annealing temperature [24]. We then exposed three 5×5 micron squares using the 30 mW excitation, varying the exposure time to obtain 3 different photon doses. These squares overlap the edge of an implantation square, allowing us to study differences in behavior between implanted and non-implanted regions. As expected, the PL intensity only increases in the part of the square over the implanted defects (Fig. 3(d)), while the entire exposed square is modified in the KPFM image (Fig. 3(e)). The nonlinear response in the SiV\({}^{-}\) PL intensity with laser exposure is used to estimate the exposure from the tuning curve (Fig. 3(c)). Similar to the "W", we observe an increase in PL intensity and a decrease in surface potential. Spectra in Fig. 3(f) on the exposed squares confirm that the PL enhancement is coming from the SiV\({}^{-}\).

Figure 3: (a) Confocal PL image after laser-assisted oxidation. Laser-assisted oxidation is performed with 20 mW of 532 nm, while confocal imaging utilizes 1 mW of 532 nm excitation. The entire time of exposure was 10 hours. (b) AM-KPFM image after laser-assisted oxidation. (c) Time dependence of the SiV\({}^{-}\) PL intensity using 30 mW of excitation. Colored data points correspond to the SiV\({}^{-}\) intensity from the 3 laser-assisted exposed regions. _inset:_ Representative PL spectra taken at the start and end of the laser exposure used to determine total PL intensity. (d) Confocal image of three laser-assisted exposed squares using 1 mW of 532 nm excitation. The dashed white line denotes the implantation square boundary. PL background from deeper native NV is observed throughout the implantation square (lower half). (e) FM-KPFM image of the laser-assisted oxidized squares. (f) Photoluminescence spectra corresponding to the four marked regions in (d) and the colored data points in (c) obtained using 1 mW of 532 nm excitation. 
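The spectral analysis in Sections III and IV rests on fitting each measured spectrum to a weighted sum of individual defect spectra. Below is a minimal sketch of one way to perform such a decomposition; the Gaussian reference line shapes are invented placeholders (in practice the references would be measured single-defect spectra), and non-negative least squares is our choice of solver, since the paper does not specify its fitting routine.

```python
# Decompose a measured PL spectrum into NV0, NV- and SiV- contributions
# by non-negative least squares. Reference spectra here are synthetic
# Gaussian placeholders, not the paper's measured references.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(653, 815, 400)          # collection band, nm

def line(center_nm, width_nm):
    return np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)

ref = {
    "NV0":  line(600, 40) + 0.6 * line(660, 50),  # sideband tail in band
    "NV-":  line(637, 5) + 0.7 * line(690, 55),   # ZPL + phonon sideband
    "SiV-": line(737, 5),                         # narrow ZPL
}
A = np.column_stack(list(ref.values()))           # basis matrix

# Synthetic "measured" spectrum for demonstration only.
true_w = np.array([0.2, 1.0, 0.5])
measured = A @ true_w + 0.01 * np.random.default_rng(0).normal(size=wavelengths.size)

weights, _ = nnls(A, measured)                    # enforce non-negative weights
for name, w in zip(ref, weights):
    print(f"{name}: fitted relative contribution {w:.2f}")
```

The non-negativity constraint keeps the fitted contributions physically interpretable as emission intensities from each defect species.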
## V Discussion and Outlook In these experiments, KPFM, correlated with photoluminescence measurements, enabled us to link the brightening of quantum defect luminescence to surface properties (rather than bulk effects). Here we discuss two additional insights. First, the work function of diamond as it transitions from hydrogenated to oxidized is non-monotonic. Second, this non-monotonic feature can be used to explain a seemingly complex spatial dependence of the work function across implantation squares. In Fig. 4(a), a horizontal line scan of the work function is shown across the hydrogenated-oxidized boundary. For simplicity, this scan is taken in a region with no implantation. An initial increase in the work function is observed as the diamond becomes oxidized, before finally dropping 250-320 meV below the hydrogenated level. This behavior can be understood by noting that the CPD has two contributions, the electron affinity \(\chi_{e}\) and the Fermi level \(E_{F}\). Specifically, the change in CPD over a material with a constant band-gap is given by \(\Delta W=\Delta\chi_{e}-\Delta E_{F}\), where W is the work function, \(\chi_{e}\) is the electron affinity, and \(E_{F}\) is the Fermi level. The electron affinity should monotonically increase as oxygen replaces hydrogen due to the opposite polarity of the two surface dipole moments. This increase can happen spatially across the surface due to masking, or over time by our laser-induced oxidation. The Fermi level also increases as the surface is oxidized; on a hydrogenated surface, it can lie as deep as 0.7 eV below the valence band [25], while oxidized diamond is pinned around 2.0-2.4 eV [26; 27] above the valence band. The dependence of the Fermi level on the surface termination is more complicated than the electron affinity dependence, as it also depends on electron transfer between the diamond and the surface wetting layer. We hypothesize that the initial increase in the work function is caused by a Fermi level that remains pinned near the valence band despite the increasing electron affinity, while the eventual overall decrease suggests that the Fermi level eventually rises by more than the total increase in electron affinity. The non-monotonic behavior of the CPD with oxidation results in some unique features in the KPFM images of laser-exposed and defect-implanted surfaces. In Fig. 4(b) a KPFM scan of the square with the largest laser exposure is shown. At the edge of the exposed square, the increase in work function is again observed, presumably because the edge is less oxidized than inside the square. We can also explain two different characteristics of implantation squares that never received the laser oxidation treatment, illustrated in Fig. 4(c). In some squares, we observe an increase in work function over the entire surface of the square, while in others we observe an increase only in a halo around the square. For the halo squares, the interior may exhibit a work function ranging from slightly higher to significantly lower than outside the square. These results can be explained by accelerated oxidation of the diamond surface at ambient temperature and atmosphere due to implantation damage, even at the low implantation fluence in this study corresponding to one ion per (30 nm)\({}^{2}\). Squares that are early in the oxidation process will have an all-positive surface, with rings forming later in the process when the interior of the square becomes more oxidized. 
This implantation-induced acceleration is significant; only the implanted regions in sample A show signs of oxidation after 2 years in ambient conditions. More work is needed to elucidate the mechanism for accelerated oxidation, given the importance of near-surface defects for quantum technologies.

Figure 4: (a) _Left:_ FM-KPFM image depicting the boundary between the masked oxidized and hydrogenated diamond surface. _Right:_ Line scan across the unimplanted region, averaged over 2.5 microns. (b) _Left:_ FM-KPFM image depicting the square created by the longest exposure to the laser. _Right:_ Line scan across the square’s boundary, averaged over 1.5 microns. (c) _Left:_ FM-KPFM image of two implantation squares after hydrogen passivation. _Right:_ Line scan across the two implantation squares, averaged over 10 microns. The implantation squares are indicated by the shaded regions.

In summary, we utilize correlated scanning-probe microscopy and photoluminescence to study surface oxidation of hydrogenated diamond surfaces. We continuously vary the hydrogenation/oxidation in two ways: by shadow masking an oxygen-terminated surface in a cold hydrogen plasma and by controlled laser-assisted oxidation. In both cases, we observe the non-monotonic increase in work function with oxidation. KPFM measurements are consistent with the observed charge state of quantum defects beneath these functionalized surfaces. Correlated measurements further uncover that the surfaces over implanted defects oxidize more quickly than pristine surfaces. Combined with knowledge of the electron affinity of a surface, these results indicate KPFM can be a powerful, high-spatial-resolution technique to advance surface Fermi level engineering for charge stabilization of quantum defects. ###### Acknowledgements. The initial collaborative work for this proposal was supported by the NSF through the University of Washington Materials Research Science and Engineering Center, DMR-1719797. CP and KMF acknowledge DMR-2308979 supporting the final work. CP was partially supported by DOE Office of Science award DE-SC0022210. DSG and RG acknowledge the U.S. Department of Energy ESPM (DOE-SC0013957) for supporting the scanning probe microscopy work. Plasma treatment was supported by the Princeton Collaborative Research Facility (PCRF) and funded by the U.S. Department of Energy (DOE), Office of Fusion Energy Sciences under Contract No. DE-AC02-09CH11466. The gold electrodes were fabricated at the Washington Nanofabrication Facility / Molecular Analysis Facility, a National Nanotechnology Coordinated Infrastructure (NNCI) site at the University of Washington with partial support from the National Science Foundation via awards NNCI-1542101 and NNCI-2025489. We thank Nathalie de Leon for helpful discussions and confirmation of the SiV charge state measurement.
2307.00176
Random Discrete Probability Measures Based on Negative Binomial Process
An important functional of Poisson random measure is the negative binomial process (NBP). We use NBP to introduce a generalized Poisson-Kingman distribution and its corresponding random discrete probability measure. This random discrete probability measure provides a new set of priors with more flexibility in nonparametric Bayesian models. It is shown how this random discrete probability measure relates to the non-parametric Bayesian priors such as Dirichlet process, normalized positive α-stable process, Poisson-Dirichlet process (PDP), and others. An extension of the DP with its almost sure approximation is presented. Using our representation for NBP, we derive a new series representation for the PDP.
Sadegh Chegini, Mahmoud Zarepour
2023-06-30T23:59:07Z
http://arxiv.org/abs/2307.00176v1
# Random Discrete Probability Measures Based on Negative Binomial Process ###### Abstract An important functional of Poisson random measure is the negative binomial process (NBP). We use NBP to introduce a generalized Poisson-Kingman distribution and its corresponding random discrete probability measure. This random discrete probability measure provides a new set of priors with more flexibility in nonparametric Bayesian models. It is shown how this random discrete probability measure relates to the nonparametric Bayesian priors such as Dirichlet process, normalized positive \(\alpha\)-stable process, Poisson-Dirichlet process (PDP), and others. An extension of the DP with its almost sure approximation is presented. Using our representation for NBP, we derive a new series representation for the PDP. 
## 1 Introduction Let \(\xi\sim\mathrm{PRM}(\mu)\) denote a Poisson random measure with mean measure \(\mu\) on a space \(\mathbb{E}\). For a nonnegative function \(f\), the Laplace functional of a PRM(\(\mu\)) is given by \[\Psi_{\xi}(f)=E(e^{-\xi(f)})=\exp\left\{-\int_{\mathbb{E}}(1-e^{-f(x)})\mu(\mathrm{d}x)\right\}. \tag{1.1}\] The following straightforward proposition derives a representation for the Poisson random measure with Lebesgue mean measure. There are many different ways to show this result. Since the recursive technique introduced in Banjevic et al. (2002) is helpful in other similar situations, we present it here. **Proposition 1.1**.: _Let \(\xi\sim\mathrm{PRM}(\lambda)\) where \(\lambda\) is the Lebesgue measure on \([0,\infty)\). Then \(\xi\) can be written as follows_ \[\xi=\sum_{i=1}^{\infty}\delta_{\Gamma_{i}}, \tag{1.2}\] _where_ \[\Gamma_{i}=E_{1}+\cdots+E_{i},\] _and \((E_{i})_{i\geq 1}\) is a sequence of independent and identically distributed (i.i.d.) random variables with an exponential distribution of mean 1. Throughout this paper, \(\delta_{X}\) denotes the Dirac measure at \(X\), i.e. \(\delta_{X}(B)=1\) if \(X\in B\) and \(0\) otherwise._ Proof.: For any \(t\geq 0\), define \[\xi_{t}=\sum_{i=1}^{\infty}\delta_{\Gamma_{i}+t}\] such that \(\xi_{0}=\xi\). Now, for any nonnegative function \(f\), \[\Psi_{\xi_{t}}(f) =E(e^{-\xi_{t}(f)})=E(e^{-\sum_{i=1}^{\infty}f(\Gamma_{i}+t)})\] \[=E(E(e^{-\sum_{i=1}^{\infty}f(\Gamma_{i}+t)}|\Gamma_{1}=s))\] \[=\int_{0}^{\infty}e^{-f(s+t)}E(e^{-\sum_{i=1}^{\infty}f(\Gamma_{i}+s+t)})e^{-s}\mathrm{d}s\] \[=\int_{0}^{\infty}e^{-f(s+t)}\Psi_{\xi_{s+t}}(f)e^{-s}\mathrm{d}s.\] Using the change of variable \(s+t=v\) and multiplying both sides by \(e^{-t}\), we get \[e^{-t}\Psi_{\xi_{t}}(f)=\int_{t}^{\infty}e^{-f(v)}\Psi_{\xi_{v}}(f)e^{-v}\mathrm{d}v.\] Differentiating both sides with respect to \(t\), we get \[-e^{-t}\Psi_{\xi_{t}}(f)+e^{-t}\frac{\partial\Psi_{\xi_{t}}(f)}{\partial t} =-e^{-f(t)}\Psi_{\xi_{t}}(f)e^{-t}\] \[\frac{\partial\Psi_{\xi_{t}}(f)}{\partial t}\frac{1}{\Psi_{\xi_{t}}(f)} =1-e^{-f(t)}\] \[\Psi_{\xi_{t}}(f) =\exp\left(-\int_{t}^{\infty}(1-e^{-f(s)})\mathrm{d}s\right).\] Now, take \(t=0\) to get \[\Psi_{\xi_{0}}(f)=\exp\left(-\int_{0}^{\infty}(1-e^{-f(s)})\mathrm{d}s\right)\] which equals (1.1) with \(\mu(\mathrm{d}s)=\lambda(\mathrm{d}s)=\mathrm{d}s\). Applying Propositions 2.1 and 2.2 of Resnick (1986) to the \(\mathrm{PRM}(\lambda)\) defined in (1.2), we can derive useful PRMs which in turn lead to other processes with applications in nonparametric Bayesian inference. First, take \(T(x)=L^{-1}(x)\) where \(L:(0,\infty)\to(0,\infty)\) is a decreasing bijection such that \(\sum_{i=1}^{\infty}L^{-1}(\Gamma_{i})<\infty\), and \[L^{-1}(y)=\inf\{x>0:L(x)\leq y\}.\] Also, let \((\zeta_{i})_{i\geq 1}\) be a sequence of i.i.d. random elements in a Polish space \(\mathbb{E}\) with a probability measure \(H\) independent from \((\Gamma_{i})_{i\geq 1}\). Then we simply find that \[\sum_{i=1}^{\infty}\delta_{L^{-1}(\Gamma_{i})} \sim\mathrm{PRM}(L), \tag{1.3}\] \[\sum_{i=1}^{\infty}\delta_{(\zeta_{i},L^{-1}(\Gamma_{i}))} \sim\mathrm{PRM}(H\times L). \tag{1.4}\] 
For example, \(\sum_{i=1}^{\infty}\delta_{\Gamma_{i}^{-1/\alpha}}\) follows a \(\mathrm{PRM}(L)\) with \[L(x)=x^{-\alpha}=\int_{x}^{\infty}\alpha u^{-\alpha-1}\mathrm{d}u,\ x>0,\ \alpha\in(0,1). \tag{1.5}\] Throughout this paper, we use the function \(L\) also as a measure with notation \(L(\mathrm{d}x)=\mathrm{d}L(x)\). In fact, (1.5) denotes the Levy measure of the \(\alpha\)-stable random variable \(S_{\alpha}=\sum_{i=1}^{\infty}\Gamma_{i}^{-1/\alpha}\). Notice that since \(\Gamma_{i}/i\xrightarrow{a.s}1\), \(S_{\alpha}\) converges for \(\alpha\in(0,1)\). As another example, for \(\theta>0\) and \(x>0\), take \[L(x)=\theta\int_{x}^{\infty}u^{-1}e^{-u}\mathrm{d}u. \tag{1.6}\] Then a functional of the random measure (1.4) given by \(Q=\sum_{i=1}^{\infty}L^{-1}(\Gamma_{i})\delta_{\zeta_{i}}\) is a gamma process denoted by \(\mathrm{GaP}(\theta,H)\). This means that for disjoint sets \(A_{1},\ldots,A_{k}\), the random variables \(\{Q(A_{i})\}_{1\leq i\leq k}\) are independent and \(Q(A_{i})\) has a gamma distribution with shape parameter \(\theta H(A_{i})\) and scale parameter 1. Independence follows since \(Q\) is a pure jump Levy process. See Ishwaran and Zarepour (2002) for more details. This finite random measure is self-normalized as \[P_{\theta,H}(\cdot)=\sum_{i=1}^{\infty}\frac{L^{-1}(\Gamma_{i})}{\sum_{i=1}^{\infty}L^{-1}(\Gamma_{i})}\delta_{\zeta_{i}}(\cdot) \tag{1.7}\] which Ferguson (1973) used to define the Dirichlet process \(\mathrm{DP}(\theta,H)\) as a prior on the space of all probability measures on \(\mathbb{E}\). The Dirichlet process is known as the cornerstone of nonparametric Bayesian analysis. There has been an extensive effort to provide generalizations and alternatives for this process; see, for example, Pitman and Yor (1997) and Lijoi et al. (2005). Also, see Ishwaran and Zarepour (2002) and Zarepour and Al-Labadi (2012) for some alternative representations and approximations of this process. Another important distribution which results from a PRM is the Poisson-Kingman distribution. Consult Kingman (1975) and Pitman (2003) for properties and applications of this distribution. The vector of the normalized points of the PRM defined in equation (1.3), which we call Poisson-Dirichlet weights, will follow a Poisson-Kingman distribution denoted by \(\mathrm{PK}(L)\), i.e. \[\left(\frac{L^{-1}(\Gamma_{1})}{\sum_{i=1}^{\infty}L^{-1}(\Gamma_{i})},\frac{L^{-1}(\Gamma_{2})}{\sum_{i=1}^{\infty}L^{-1}(\Gamma_{i})},\ldots\right)\sim\mathrm{PK}(L) \tag{1.8}\] defines a random discrete distribution on the infinite dimensional simplex \(\nabla_{\infty}:=\{(x_{1},x_{2},\ldots):x_{i}\geq 0,i=1,2,\ldots,\sum_{i=1}^{\infty}x_{i}=1\}\). As a particular case, if in (1.8) we take \(L\) as the gamma Levy measure given in (1.6), then the Poisson-Dirichlet weights (1.8) are said to have the Poisson-Dirichlet distribution with parameter \(\theta\), which we denote by \(\mathrm{PD}(0,\theta)\). Also, as another special case, if in (1.8) one takes \(L\) as the \(\alpha\)-stable Levy measure given in (1.5), then the corresponding Poisson-Dirichlet weights (1.8) are said to have the Poisson-Dirichlet distribution with parameter \(\alpha\), which we denote by \(\mathrm{PD}(\alpha,0)\). Additionally, the random probability measure (1.7) is called a normalized \(\alpha\)-stable process if we employ the latter Poisson-Dirichlet weights. See Ishwaran and James (2001). 
Equivalently, a Poisson-Kingman distribution can be constructed using subordinators. Let \((X_{t})_{t\geq 0}\) be a subordinator with Levy measure \(L\) and write \((\Delta X_{t}:=X_{t}-X_{t-})_{t>0}\) for the jump process of \(X_{t}\), and \(\Delta X_{t}^{(1)}\geq\Delta X_{t}^{(2)}\geq\cdots\) for the ordered jumps up to time \(t>0\). Then \[\left(\frac{\Delta X_{t}^{(1)}}{X_{t}},\frac{\Delta X_{t}^{(2)}}{X_{t}},\ldots\right)\sim\mathrm{PK}(tL).\] We have seen how the Poisson-Kingman distribution (1.8) is obtained from a PRM and, consequently, how the random discrete probability measure (1.7) can be constructed simply from the Poisson-Dirichlet weights of the Poisson-Kingman distribution. In the rest of the paper, we will generalize the Poisson-Kingman distribution and the resulting random discrete probability measure by utilizing the negative binomial process instead of the PRM. The negative binomial process representation which we use here is itself constructed directly from a PRM, unlike the representation in Ipsen and Maller (2017) where the negative binomial process is constructed from a trimmed subordinator. The paper is organized as follows. In section 2, we derive the negative binomial process as a functional of a PRM and then, using this process, we generalize the random discrete probability measure (1.7) and, equivalently, the Poisson-Kingman distribution by adding a new parameter. As a special member of the family of the newly defined random discrete probability measure, an extension of the Dirichlet process with its almost sure approximation is presented in section 3. Then, we present the general structures of the posterior and predictive processes in section 4. A justification of the role of the parameter \(r\) in the clustering problem is given in section 5. In section 6, we derive a new series representation for the Poisson-Dirichlet process (Carlton, 1999), which is based on our new representation of the negative binomial process. In section 7, we provide a simulation study to compare the efficiency of our suggested approximation of the new series representation of the Poisson-Dirichlet process with other representations of this process that exist in the literature. Finally, a summary of the conclusions is given in the last section. ## 2 Negative Binomial Process In this section, we will see how the negative binomial process (NBP) is derived directly as a functional of a PRM. Later, we use this process to define a more general form of the Poisson-Kingman distribution and its corresponding random discrete probability measure. First, we note that for any constant \(c>0\), a simple use of Proposition 2.1 of Resnick (1986) shows that the process \(\sum_{i=1}^{\infty}\delta_{\Gamma_{i}+c}\) is a PRM(\(\lambda\)) on \(\mathbb{E}=[c,\infty)\), and \(\sum_{i=1}^{\infty}\delta_{\Gamma_{i}/c}\) follows PRM(\(c\lambda\)) on \(\mathbb{E}=[0,\infty)\). Now, for any nonnegative integer \(r\) and setting \(\Gamma_{0}=1\), consider the random measure \[\eta=\sum_{i=r+1}^{\infty}\delta_{\Gamma_{i}/\Gamma_{r}}.\] Note that conditional on \(\{\Gamma_{r}=u\}\), the process \(\eta\) follows PRM(\(u\lambda\)) on \(\mathbb{E}=(1,\infty)\). 
So, the Laplace functional of \(\eta\) is \[E(e^{-\eta(f)})=E\left[E(e^{-\eta(f)}|\Gamma_{r}=u)\right]\] \[=\int_{0}^{\infty}E(e^{-\eta(f)}|\Gamma_{r}=u)P(\Gamma_{r}\in\mathrm{d}u)\] \[=\int_{0}^{\infty}\exp\left\{-\int_{1}^{\infty}(1-e^{-f(x)})u\lambda(\mathrm{d}x)\right\}P(\Gamma_{r}\in\mathrm{d}u)\] \[=\int_{0}^{\infty}\exp\left\{-u\int_{1}^{\infty}(1-e^{-f(x)})\lambda(\mathrm{d}x)\right\}\frac{u^{r-1}e^{-u}}{\Gamma(r)}\mathrm{d}u\] \[=\left(1+\int_{1}^{\infty}(1-e^{-f(x)})\lambda(\mathrm{d}x)\right)^{-r}.\] This is in fact the Laplace functional of the negative binomial process defined in Gregoire (1984). We denote this process by NBP(\(r,\lambda\)) and write \(\eta\sim\mathrm{NBP}(r,\lambda)\) on \(\mathbb{E}=(1,\infty)\). Following up on this example, we state the following theorem. **Theorem 2.1**.: _With a decreasing bijection \(L:(0,\infty)\to(0,\infty)\) such that \(\sum_{i=1}^{\infty}L^{-1}(\Gamma_{i})<\infty\), the following point process_ \[\kappa=\sum_{i=r+1}^{\infty}\delta_{L^{-1}(\Gamma_{i}/\Gamma_{r})} \tag{2.1}\] _follows an \(\mathrm{NBP}(r,L)\) on \(\mathbb{E}=(0,L^{-1}(1))\)._ The proof of the theorem is similar to what was presented above for the process \(\eta\). Three important examples of \(L\) are the positive \(\alpha\)-stable, gamma, and inverse-Gaussian Levy measures (Lijoi et al., 2005; Al-Labadi and Zarepour, 2013, 2014). The NBP was defined in Gregoire (1984) only through its Laplace functional, and no point process or subordinator representation was provided. As shown above, the point process representation of \(\mathrm{NBP}(r,L)\) in (2.1) was derived directly as a functional of a PRM. In Ipsen and Maller (2017), a point process representation of the NBP, which equals (2.1) in distribution, was derived using ordered jumps of a trimmed subordinator. If \((X_{t})_{t\geq 0}\) is a subordinator with Levy measure \(L\), writing \((\Delta X_{t}:=X_{t}-X_{t-})_{t>0}\) for the jump process of \(X_{t}\), and \(\Delta X_{t}^{(1)}\geq\Delta X_{t}^{(2)}\geq\cdots\) for the ordered jumps at \(t>0\), the point process \(\mathbb{B}^{(r)}=\sum_{i=1}^{\infty}\delta_{J_{r}(i)}\) follows \(\mathrm{NBP}(r,L)\) where \(J_{r}(i)=\frac{\Delta X_{1}^{(r+i)}}{\Delta X_{1}^{(r)}},i=1,2,\ldots\). **Remark 2.1**.: In the literature, the terminology "negative binomial process" is used for mathematically distinct concepts. Therefore, it seems necessary to clarify these concepts to avoid confusion. As stated before, in this paper, Ipsen and Maller (2017), and Ipsen et al. (2020, 2021, 2018), the negative binomial process is the one defined in Gregoire (1984). However, in the engineering and computer science literature, some other definitions of the negative binomial process can be found. For example, the negative binomial process defined in Zhou and Carin (2015) is different from the one defined in Zhou et al. (2012) and Broderick et al. (2015), all of which are different from Gregoire's definition that we are considering in this paper. Following Definition 2.1 of Ipsen and Maller (2017), by normalizing the points of the NBP defined in (2.1), the following sequence \[\left(\frac{L^{-1}(\Gamma_{r+1}/\Gamma_{r})}{\sum_{i=r+1}^{\infty}L^{-1}(\Gamma_{i}/\Gamma_{r})},\frac{L^{-1}(\Gamma_{r+2}/\Gamma_{r})}{\sum_{i=r+1}^{\infty}L^{-1}(\Gamma_{i}/\Gamma_{r})},\ldots\right) \tag{2.2}\] defines a 2-parameter random discrete distribution on the infinite dimensional simplex \(\nabla_{\infty}\). 
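As a quick numerical illustration, the sketch below simulates a truncation of the normalized NBP points (2.2) in the special case of the \(\alpha\)-stable Levy measure (1.5), where \(L^{-1}(y)=y^{-1/\alpha}\) in closed form; the truncation level and helper names are our own choices, not part of the paper.

```python
# Truncated simulation of the normalized points (2.2) of NBP(r, L) for
# the alpha-stable Levy measure (1.5), using L^{-1}(y) = y**(-1/alpha).
import numpy as np

def nbp_weights(alpha, r, N=10_000, seed=None):
    rng = np.random.default_rng(seed)
    gamma = np.cumsum(rng.exponential(size=N))     # Gamma_1 < Gamma_2 < ...
    gamma_r = 1.0 if r == 0 else gamma[r - 1]      # Gamma_0 = 1 by convention
    pts = (gamma[r:] / gamma_r) ** (-1.0 / alpha)  # L^{-1}(Gamma_i / Gamma_r)
    return pts / pts.sum()                         # normalized (truncated) weights

for r in (0, 5):
    w = nbp_weights(alpha=0.5, r=r, seed=1)
    print(f"r={r}: five largest weights {w[:5].round(4)}")
```

Because the points \(\Gamma_{i}/\Gamma_{r}\) are increasing and \(L^{-1}\) is decreasing, the resulting weights are automatically ordered from largest to smallest, which makes truncation straightforward.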
In Ipsen and Maller (2017), this distribution is called a Poisson-Kingman distribution generated by \(\mathrm{NBP}(r,L)\) and is denoted by \(\mathrm{PK}^{(r)}(L)\). In the particular case of the \(\alpha\)-stable Levy measure, this distribution is denoted by \(\mathrm{PD}_{\alpha}^{(r)}\). Clearly, the random sequence (2.2) equals \[\left(\frac{J_{r}(1)}{\sum_{i=1}^{\infty}J_{r}(i)},\frac{J_{r}(2)}{\sum_{i=1}^{\infty}J_{r}(i)},\ldots\right)\] in distribution. Also, as pointed out in Ipsen and Maller (2017), an \(\text{NBP}(r,L)\) can be characterized as a PRM with randomized intensity measure \(\Gamma_{r}L\) where \(\Gamma_{r}\) is a Gamma\((r,1)\) random variable, i.e. \(\text{PRM}(\Gamma_{r}L)\stackrel{{ d}}{{=}}\text{NBP}(r,L)\). In other words, a gamma subordinated Levy process \(X_{\sigma_{r}}\) will follow an \(\text{NBP}(r,L)\) where \((\sigma_{r})_{r>0}\) is an independent gamma subordinator having Levy measure (1.6) with \(\theta=1\). Then, by the definition of the Poisson-Kingman distribution, the random sequence (2.2) will also be equal in distribution to \[\left(\frac{\Delta X_{\sigma_{r}}^{(1)}}{X_{\sigma_{r}}},\frac{\Delta X_{\sigma_{r}}^{(2)}}{X_{\sigma_{r}}},\ldots\right). \tag{2.3}\] **Definition 2.1**.: Let \((\zeta_{i})_{i\geq 1}\) be i.i.d. random variables with values in \(\mathbb{E}\) and common distribution \(H\); then we may introduce the following random discrete probability measure on \(\mathbb{E}\) as a functional of (2.1), using the sequence (2.2), as follows \[P_{r,L,H}(\cdot)=\sum_{i=r+1}^{\infty}\frac{L^{-1}(\Gamma_{i}/\Gamma_{r})}{\sum_{i=r+1}^{\infty}L^{-1}(\Gamma_{i}/\Gamma_{r})}\delta_{\zeta_{i}}(\cdot). \tag{2.4}\] We employ the notation \(\text{PKP}^{(r)}(H;L)\) for the distribution of the random discrete probability measure defined in (2.4) and we write \(P_{r,L,H}\sim\text{PKP}^{(r)}(H;L)\). ## 3 Extended Dirichlet Process and its Approximation In the case where \(L\) is the \(\alpha\)-stable Levy measure given in (1.5), \(\text{PK}^{(r)}(L)\) has been investigated thoroughly. For example, it is shown how this distribution relates to other Poisson-Dirichlet models by letting \(r\to\infty\) in Ipsen et al. (2020). Also, in Ipsen et al. (2018), this distribution is fitted to gene and species sampling data, demonstrating the utility of allowing the extra parameter \(r\) in data analysis. We may now take the probability measure (2.4) with \(L\) as the gamma Levy measure defined in (1.6) to develop a prior distribution on the space of all probability distributions. This prior would be a natural extension of the Dirichlet process (the Dirichlet process is recovered when \(r=0\)). In the following theorem, we provide an efficient approximation for our extended Dirichlet process. **Theorem 3.1**.: _Let \(W_{n}\) be a random variable with distribution Gamma\((\theta/n,1)\). Define_ \[G_{n}(x)=\Pr(W_{n}>x)=\int_{x}^{\infty}\frac{1}{\Gamma(\theta/n)}e^{-t}t^{\theta/n-1}\mathrm{d}t\] _and_ \[G_{n}^{-1}(y)=\inf\{x:G_{n}(x)\leq y\},\ \ 0<y<1.\] _Let \(L\) be the gamma Levy measure (1.6) and \((\zeta_{i})_{i\geq 1}\) be a sequence of i.i.d. 
random variables with values in \(\mathbb{E}\) and common distribution \(H\), independent of \((\Gamma_{i})_{i\geq 1}\); then as \(n\to\infty\)_ \[P_{n,r,H}=\sum_{i=r+1}^{n}\frac{G_{n}^{-1}\left(\frac{\Gamma_{i}}{\Gamma_{r}\Gamma_{n+1}}\right)}{\sum_{i=r+1}^{n}G_{n}^{-1}\left(\frac{\Gamma_{i}}{\Gamma_{r}\Gamma_{n+1}}\right)}\delta_{\zeta_{i}}\stackrel{{ a.s.}}{{\longrightarrow}}P_{r,L,H}=\sum_{i=r+1}^{\infty}\frac{L^{-1}\left(\frac{\Gamma_{i}}{\Gamma_{r}}\right)}{\sum_{i=r+1}^{\infty}L^{-1}\left(\frac{\Gamma_{i}}{\Gamma_{r}}\right)}\delta_{\zeta_{i}}\] _on \(\mathbb{E}\) with respect to the weak topology._ Proof.: The proof is similar to that of Theorem 1 in Zarepour and Al-Labadi (2012), together with the fact that \(G_{n}^{-1}\left(\frac{x}{cn}\right)\stackrel{{ a.s.}}{{\longrightarrow}}L^{-1}\left(\frac{x}{c}\right)\). Consequently, \(G_{n}^{-1}\left(\frac{\Gamma_{i}}{\Gamma_{r}\Gamma_{n+1}}\right)\stackrel{{ a.s.}}{{\longrightarrow}}L^{-1}\left(\frac{\Gamma_{i}}{\Gamma_{r}}\right)\) by taking the constant \(c=\Gamma_{r}\), \(x=\Gamma_{i}\), and replacing \(n\) with \(\Gamma_{n+1}\), since \(\Gamma_{n+1}/n\stackrel{{ a.s.}}{{\longrightarrow}}1\) as \(n\to\infty\). Our proposed approximation has several advantages. For example, our representation avoids the use of an infinite sum, and instead our finitely many weights are simply quantile functions of the Gamma\((\theta/n,1)\) distribution evaluated at \(1-\Gamma_{i}/(\Gamma_{r}\Gamma_{n+1})\). In the previous representation, it is necessary to calculate \(L^{-1}\), which cannot be written in closed form. In addition, our introduced weights are stochastically decreasing, in contrast to the stick-breaking weights in Ipsen and Maller (2017). A similar proposal for the Dirichlet process can be found in Zarepour and Al-Labadi (2012). ## 4 Posterior and Predictive Distribution To develop a full Bayesian analysis, we can generalize \(P_{r,L,H}\) defined in (2.4) by assuming that \(r\) is a realization of a random variable \(R\) on the set of non-negative integers with an arbitrary probability mass function \(\pi(r)\). Also, for simplicity of notation, denote the weights of \(P_{R,L,H}\) by \(p_{i}\). Therefore, we can write \[P_{R,L,H}=\sum_{i=R+1}^{\infty}p_{i}\delta_{\zeta_{i}}.\] For given observations from \(P_{R,L,H}\), the posterior and predictive distribution of the prior \(P_{R,L,H}\) can be obtained from Ongaro and Cattaneo (2004) using a recursive method. This method obtains the posterior distribution for a general random discrete probability measure of the form \[P=\sum_{i=1}^{M}p_{i}\delta_{\zeta_{i}},\] where \(M\) is an extended integer valued random variable with an arbitrary probability distribution \(p(m)\). Moreover, conditionally on \(M\), \((p_{1},\ldots,p_{M})\) has an arbitrary distribution \(Q_{M}\) on the simplex \(\nabla_{M}:=\{(x_{1},\ldots,x_{M}):x_{i}\geq 0,\ i=1,\ldots,M,\ \sum_{i=1}^{M}x_{i}=1\}\). The random positions \(\zeta_{i}\)'s are i.i.d. from a diffuse probability measure \(H\) and are independent of all other random elements. In our case, for \(P_{R,L,H}\), we only need to replace the role of \(M\) with \(R\), and similar results of Ongaro and Cattaneo (2004) follow easily. Following their procedure, we need to find the posterior distribution of the random elements of \(P_{R,L,H}\), i.e. \((R,\mathbf{p},\mathbf{\zeta})\) where \(\mathbf{p}=(p_{R+1},p_{R+2},\ldots)\) and \(\mathbf{\zeta}=(\zeta_{R+1},\zeta_{R+2},\ldots)\). 
The calculations for finding the posterior of \((R,\mathbf{p},\mathbf{\zeta})\) remain similar to the ones in Propositions 4 and 5 in Ongaro and Cattaneo (2004). The final summary is provided in the following theorem. **Theorem 4.1**.: _Let \(\mathbf{X}=(X_{1},\ldots,X_{n})\) be a random sample of \(n\) observations from \(P_{R,L,H}\). The posterior process \(P_{R,L,H}|\mathbf{X}\) can be represented as_ \[(P_{R,L,H}|\mathbf{X})=\sum_{i=1}^{k}\gamma_{i}^{\mathbf{X}}\delta_{X_{i}^{*}}+\sum_{i=R^{\mathbf{X}}+1}^{\infty}p_{i}^{\mathbf{X}}\delta_{\zeta_{i}}, \tag{4.1}\] _where the \(X_{i}^{*}\)'s are the distinct values among the observations \(\mathbf{X}\), and \(R^{\mathbf{X}}\) and \(\mathbf{p}^{\mathbf{X}}=(\gamma_{1}^{\mathbf{X}},\ldots,\gamma_{k}^{\mathbf{X}},p_{R^{\mathbf{X}}+1}^{\mathbf{X}},p_{R^{\mathbf{X}}+2}^{\mathbf{X}},\ldots)\) denote the posteriors of \(R\) and \(\mathbf{p}\), respectively. The distributions of \(R^{\mathbf{X}}\) and \(\mathbf{p}^{\mathbf{X}}\) are obtained using a recursive method similar to Corollaries 3 and 4 in Ongaro and Cattaneo (2004)._ To calculate the predictive distribution, take the expectation of (4.1) to get \[\Pr\left\{X_{n+1}\in A|\mathbf{X}\right\}=\sum_{i=1}^{k}c_{i}^{\mathbf{X}}\delta_{X_{i}^{*}}(A)+(1-c_{1}^{\mathbf{X}}-\cdots-c_{k}^{\mathbf{X}})H(A),\] where \(c_{i}^{\mathbf{X}}=E(\gamma_{i}^{\mathbf{X}})\) for \(i=1,\ldots,k\). ## 5 Applications A sufficiently large sample from a random discrete probability measure like (1.7) or (2.4) always includes ties with positive probability. Let \(K_{n}=k\in\{1,\ldots,n\}\) be the number of distinct values among \(n\) observations. Denote \(X_{1}^{*},\ldots,X_{k}^{*}\) as the distinct values among the observations \(X_{1},\ldots,X_{n}\). Moreover, take \(n_{j}=\sum_{i=1}^{n}I(X_{i}=X_{j}^{*})\) for \(j=1,2,\ldots,k\). Obviously, \(\sum_{j=1}^{k}n_{j}=n\). Note that we already saw how \(k\) appeared in the posterior process (4.1). For the Dirichlet process, \(k\) grows slowly as \(n\to\infty\). As shown in Korwar and Hollander (1973) and Pitman (2006), if \[(X_{1},\ldots,X_{n})|P\sim P,\ \ P\sim\mathrm{DP}(\theta,H)\] then \[K_{n}/\log(n)\xrightarrow{a.s.}\theta\ \ \text{as}\ \ n\to\infty.\] This means that the random number of distinct values \(K_{n}\) grows only in a logarithmic fashion. From another perspective, the Dirichlet process prior assigns most of the largest weights to its initial points. This property causes inflexibility in the use of the Dirichlet process as a prior mixing measure in nonparametric Bayesian hierarchical mixture models (the so-called density estimation problem) when it is fitted to over-dispersed data (Lo, 1984; Escobar and West, 1995; Ishwaran and James, 2001; Lijoi et al., 2005, 2007). Adding the new parameter \(r\) and working with (2.4) instead of (1.7) removes those initial large probability weights and produces smaller ones instead. For the Dirichlet case, Table 1 shows how choosing larger values of \(r\) leads to smoother probability weights in (2.4); a numerical sketch of this computation follows below. Especially in density estimation problems similar to Lo (1984), this flexibility plays a crucial role, but we do not address this here. 
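For the gamma Levy measure (1.6), \(L^{-1}\) has no closed form, so producing Table-1-style weights requires a numerical inversion. Below is a minimal sketch, assuming \(\theta=3\) as in Table 1 and our own choice of truncation at \(N=50\) points; it uses the identity \(L(x)=\theta E_{1}(x)\), with \(E_{1}\) the exponential integral.

```python
# First normalized weights of (2.4) for the gamma Levy measure (1.6),
# inverting L(x) = theta * E1(x) numerically by root finding.
import numpy as np
from scipy.special import exp1
from scipy.optimize import brentq

def L_inv(y, theta):
    # Solve theta * E1(x) = y for x; the wide bracket covers the y values
    # encountered below.
    return brentq(lambda x: theta * exp1(x) - y, 1e-30, 100.0)

def gamma_levy_weights(theta, r, N=50, seed=None):
    rng = np.random.default_rng(seed)
    gamma = np.cumsum(rng.exponential(size=N))
    gamma_r = 1.0 if r == 0 else gamma[r - 1]        # Gamma_0 = 1
    pts = np.array([L_inv(g / gamma_r, theta) for g in gamma[r:]])
    return pts / pts.sum()

for r in (0, 3, 5, 10):
    w = gamma_levy_weights(theta=3.0, r=r, seed=0)
    print(f"r={r:2d}: first three weights {w[:3].round(3)}")
```

Running this for increasing \(r\) reproduces the qualitative pattern of Table 1: the leading weights shrink and the weight sequence flattens.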
If we use the \(\alpha\)-stable Levy measure (1.5) in (1.7), this property is even more obvious. For the \(\alpha\)-stable random variable \(S_{\alpha}=\sum_{i=1}^{\infty}\Gamma_{i}^{-1/\alpha},\ \alpha\in(0,1)\), taking for example \(\alpha=0.5\), the first four terms of \(S_{1/2}\) have infinite variance, which means that there is a huge fluctuation among the initial terms. To present an alternative interpretation, notice that the negative binomial distribution is preferred to the Poisson distribution when the data are over-dispersed. The variance of the negative binomial distribution is larger than its mean, while for the Poisson distribution both mean and variance are equal. Therefore, we expect that the random discrete probability measure (2.4) outperforms (1.7) as a prior mixing measure in nonparametric Bayesian hierarchical mixture models fitted to over-dispersed data. Recall that the weights of (2.4) are the normalized points of the negative binomial process (2.1), whereas the weights of (1.7) are the normalized points of the Poisson process (1.3) with a certain mean measure. We can observe that the weights in (2.4) decrease much more slowly than those of (1.7). See Table 1 for the case where \(L\) is the gamma Levy measure (1.6). This exhibits the mechanism by which the random discrete probability measure (2.4) can naturally capture the over-dispersion better.

\begin{table}
\begin{tabular}{c c c c} \hline \(r=0\) & \(r=3\) & \(r=5\) & \(r=10\) \\ \hline 0.367597022 & 0.24369485 & 0.16427045 & 0.06353087 \\ 0.168573239 & 0.23947841 & 0.14319002 & 0.05886303 \\ 0.165457071 & 0.10281577 & 0.12842541 & 0.05418117 \\ 0.149080111 & 0.08716432 & 0.10524699 & 0.05112053 \\ 0.058821776 & 0.07639968 & 0.07391248 & 0.04858739 \\ 0.056183134 & 0.05990201 & 0.06339345 & 0.04626107 \\ 0.012551887 & 0.03862790 & 0.05279059 & 0.04349017 \\ 0.007812625 & 0.03184211 & 0.04298598 & 0.03795044 \\ 0.003792634 & 0.02524151 & 0.03705701 & 0.03606621 \\ 0.001971704 & 0.01939928 & 0.03245281 & 0.03021665 \\ \hline \end{tabular}
\end{table}
Table 1: The first ten probability weights in (2.4) when \(L\) is the gamma Lévy measure (1.6) with \(\theta=3\).

In Ipsen et al. (2021, Theorem 2.1), the growth rate of \(K_{n}(\alpha,r)=K_{n}\) is given rigorously for \(\text{PK}^{(r)}(L)\) when \(L\) is the \(\alpha\)-stable Levy measure. In other words, as \(n\to\infty\), \[K_{n}/n^{\alpha}\stackrel{{ d}}{{\longrightarrow}}Y_{\alpha,r}. \tag{5.1}\] See equation 2.8 in Ipsen et al. (2021) for the distribution of \(Y_{\alpha,r}\). The growth rate \(n^{\alpha}\) in (5.1) is equal to that of the Poisson-Dirichlet process (Pitman, 2006, Theorem 3.8) and the normalized generalized gamma process (Lijoi et al., 2007, Proposition 3). It is not surprising that the growth rates of \(K_{n}\) for these processes are equal, as all these processes belong to the greater family of the random discrete probability measure (2.4). We notice that the Poisson-Dirichlet process \(\mathrm{PDP}(H;\alpha,\theta)\) is a particular case of the random discrete probability measure (2.4) with \(r=\theta/\alpha\) and \(L\) given in (6.2), and the normalized generalized gamma process is a particular case of the random discrete probability measure (2.4) with \(r=0\) and \(L\) given in (6.2). The Poisson-Dirichlet process and the normalized generalized gamma process have already been recommended in the literature as mixing measures in nonparametric Bayesian hierarchical mixture models in order to allow the number of distinct values (the number of clusters) to increase at a rate faster than that of the Dirichlet process. Therefore, the proposed random discrete probability measure (2.4) can be considered as a general alternative for the mixing measure in nonparametric Bayesian hierarchical mixture models fitted to over-dispersed data. 
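A small simulation consistent with (5.1): sampling from a truncation of the normalized \(\alpha\)-stable weights in (2.4) and tracking the number of distinct values. The truncation level, seed, and sample sizes are our own choices, and since \(Y_{\alpha,r}\) is a random limit, different runs settle at different values.

```python
# Empirical growth of K_n when sampling from the normalized alpha-stable
# weights of (2.4): K_n / n**alpha should stabilize, as in (5.1).
import numpy as np

rng = np.random.default_rng(0)
alpha, r, N = 0.5, 5, 200_000                  # N atoms must dwarf K_n
gamma = np.cumsum(rng.exponential(size=N))
pts = (gamma[r:] / gamma[r - 1]) ** (-1.0 / alpha)   # L^{-1}(Gamma_i/Gamma_r)
w = pts / pts.sum()

for n in (1_000, 10_000, 100_000):
    draws = rng.choice(w.size, size=n, p=w)
    print(n, np.unique(draws).size / n**alpha)       # K_n / n**alpha
```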
## 6 A New Alternative Series Representation for the Poisson-Dirichlet Process Using the NBP, we can find a series representation other than the stick-breaking representation for the Poisson-Dirichlet process. See Carlton (1999) for properties and applications of this process in nonparametric Bayesian analysis. For \(0\leq\alpha<1,\ \theta>-\alpha\), let \((\beta_{k})_{k\geq 1}\) be a sequence of independent random variables, where \(\beta_{k}\) has the \(\mathrm{Beta}(1-\alpha,\theta+k\alpha)\) distribution. If we define \[p_{1}^{\prime}=\beta_{1},\ p_{i}^{\prime}=\beta_{i}\prod_{k=1}^{i-1}(1-\beta_{k}),\ i\geq 2,\] then the ranked sequence of \((p_{i}^{\prime})_{i\geq 1}\), denoted by \(p_{1}\geq p_{2}\geq\ldots\), is said to have a Poisson-Dirichlet distribution with parameters \(\alpha\) and \(\theta\), denoted by \(\mathrm{PD}(\alpha,\theta)\), and we write \((p_{1},p_{2},\ldots)\sim\mathrm{PD}(\alpha,\theta)\). Also, note that \((p_{1}^{\prime},p_{2}^{\prime},\ldots)\sim\mathrm{GEM}(\alpha,\theta)\) with the notation used in Carlton (1999). Moreover, let \((\zeta_{i})_{i\geq 1}\) be i.i.d. random variables with values in \(\mathbb{E}\) and common distribution \(H\). Then the random probability measure \[P_{\alpha,\theta,H}(\cdot)=\sum_{i=1}^{\infty}p_{i}\delta_{\zeta_{i}} \tag{6.1}\] is called the Poisson-Dirichlet process with parameters \(\alpha,\theta\), \(H\), and denoted by \(\mathrm{PDP}(H;\alpha,\theta)\). As shown in Al-Labadi and Zarepour (2014, Lemma 2.1), the \(p_{i}^{\prime}\)'s are not strictly decreasing almost surely. Therefore, the stick-breaking representation (6.1) is inefficient for simulating this process due to the failure of proper stopping rules. Another approach, based on Proposition 22 of Pitman and Yor (1997), is proposed in Al-Labadi and Zarepour (2014) for simulating this process. This approach is more accurate than the method based on the stick-breaking representation; however, it involves more complex steps in its algorithm. In this section, we apply our new representation of the NBP (2.1) to Proposition 21 in Pitman and Yor (1997) to give a new representation for the Poisson-Dirichlet process. In section 7, we will show that simulating the Poisson-Dirichlet process using this representation is much more efficient, while it avoids the shortcoming of the stick-breaking representation for simulation purposes. Moreover, our approach is less complex than Algorithm A in Al-Labadi and Zarepour (2014). Now, following Proposition 21 in Pitman and Yor (1997), for \(\theta>0\) and \(0<\alpha<1\), let \((X_{t})_{t\geq 0}\) be a subordinator having Levy measure \[L(x)=\frac{\alpha}{\Gamma(1-\alpha)}\int_{x}^{\infty}u^{-\alpha-1}e^{-u}\mathrm{d}u,\ x>0 \tag{6.2}\] and let \((\sigma_{r})_{r>0}\) be an independent gamma subordinator. Then \[\left(\frac{\Delta X_{T}^{(1)}}{X_{T}},\frac{\Delta X_{T}^{(2)}}{X_{T}},\ldots\right)\sim\mathrm{PD}(\alpha,\theta)\ \ \text{if}\ \ T=\sigma_{\theta/\alpha}. \tag{6.3}\] Comparing (6.3) with (2.3), we see that \(\mathrm{PD}(\alpha,\theta)\overset{d}{=}\mathrm{PK}^{(\theta/\alpha)}(L)\) with \(L\) given in (6.2). 
Since \(\mathrm{PD}(\alpha,\theta)\overset{d}{=}\mathrm{PK}^{(\theta/\alpha)}(L)\) and (2.3) is equal to (2.2) in distribution, we can conclude \(\mathrm{PDP}(H;\alpha,\theta)\overset{d}{=}\mathrm{PKP}^{(\theta/\alpha)}(H;L)\) for \(\theta>0,0<\alpha<1\) and \(L\) in (6.2). In other words, the random probability measure \[P_{r,L,H}(\cdot)=\sum_{i=r+1}^{\infty}\frac{L^{-1}(\Gamma_{i}/\Gamma_{r})}{\sum_{i=r+1}^{\infty}L^{-1}(\Gamma_{i}/\Gamma_{r})}\delta_{\zeta_{i}}(\cdot) \tag{6.4}\] with \(L\) given in (6.2) and \(r=\theta/\alpha\) is distributed as either \(\mathrm{PDP}(H;\alpha,\theta)\) or \(\mathrm{PKP}^{(\theta/\alpha)}(H;L)\). Therefore, (6.4) provides another series representation for the Poisson-Dirichlet process for the case \(\theta>0\) and \(0<\alpha<1\) through a negative binomial process. Note that \(\mathrm{PK}^{(0)}(L)\overset{d}{=}\mathrm{PD}(\alpha,0)\) for \(0<\alpha<1\) and \(L\) given in (1.5), and also \(\mathrm{PK}^{(0)}(L)\overset{d}{=}\mathrm{PD}(0,\theta)\) for \(\theta>0\) and \(L\) given in (1.6). See Ipsen and Maller (2017) for stick-breaking representations of \(\mathrm{PK}^{(r)}(L)\) with \(L\) given in (1.5) and (1.6) when \(r>0\). ## 7 Simulating a New Approximation of the Poisson-Dirichlet Process By applying a truncation method to the new series representation of the Poisson-Dirichlet process \(\mathrm{PDP}(H;\alpha,\theta)\) given in (6.4), we can approximate this process by \[P_{n,r,L,H}(\cdot)=\sum_{i=r+1}^{n}\frac{L^{-1}(\Gamma_{i}/\Gamma_{r})}{\sum_{i=r+1}^{n}L^{-1}(\Gamma_{i}/\Gamma_{r})}\delta_{\zeta_{i}}(\cdot) \tag{7.1}\] for \(0<\alpha<1\), \(\theta>0\), \(r=\theta/\alpha\), and \(L\) given in (6.2). We can suggest a stopping rule for choosing \(n=n(\epsilon)\) as follows: \[n=\inf\left\{i:\frac{L^{-1}(\Gamma_{i}/\Gamma_{r})}{\sum_{j=r+1}^{i}L^{-1}(\Gamma_{j}/\Gamma_{r})}<\epsilon\right\}\ \ \text{for}\ \epsilon\in(0,1).\] Here, we compare the approximation given in (7.1) with Algorithm A in Al-Labadi and Zarepour (2014), which is based on Proposition 22 of Pitman and Yor (1997). The superiority of this approximation over the corresponding stick-breaking approximation is presented, particularly for the cases when \(\alpha\) is close to \(1\). Since the weights \[\left(\frac{L^{-1}(\Gamma_{i}/\Gamma_{r})}{\sum_{i=r+1}^{n}L^{-1}(\Gamma_{i}/\Gamma_{r})}\right)_{r+1\leq i\leq n}\] are strictly decreasing, simulating the Poisson-Dirichlet process through representation (7.1) is very efficient. Figures 1, 2, and 3 show sample paths for the approximate Poisson-Dirichlet process with different values of \(\alpha\) and \(\theta\) (\(r=\theta/\alpha\)). Clearly, the approximation (7.1) outperforms the approximation given in Algorithm A of Al-Labadi and Zarepour (2014) in all cases, as the sample paths for this approximation stay closer to the base measure \(H\). This behavior agrees with Chebyshev's inequality, from which a sample path is expected to approach the base measure faster when either \(\alpha\) or \(\theta\) gets larger.

Figure 1: Sample paths of the two-parameter Poisson-Dirichlet process, where \(H\) is the uniform distribution on \([0,1]\), \(\alpha=0.1\), and \(\theta=1,10,100\). The red line denotes the cumulative distribution function of \(H\).

Figure 2: Sample paths of the two-parameter Poisson-Dirichlet process, where \(H\) is the uniform distribution on \([0,1]\), \(\alpha=0.5\), and \(\theta=1,10,100\). The red line denotes the cumulative distribution function of \(H\). 
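A minimal sketch of drawing the weights in (7.1) together with the stopping rule \(n(\epsilon)\), assuming \(\theta/\alpha\) is a positive integer. Evaluating \(L\) in (6.2) through the recurrence \(\Gamma(-\alpha,x)=(x^{-\alpha}e^{-x}-\Gamma(1-\alpha,x))/\alpha\) and inverting it by root finding are our own implementation choices.

```python
# Weights of the truncated approximation (7.1) of PDP(H; alpha, theta),
# with the stopping rule n(eps). L is the Levy measure (6.2).
import numpy as np
from scipy.special import gamma as gamma_fn, gammaincc
from scipy.optimize import brentq

def L(x, a):
    # (alpha / Gamma(1-alpha)) * Gamma(-alpha, x) via the recurrence above;
    # gammaincc is the regularized upper incomplete gamma function.
    return x ** (-a) * np.exp(-x) / gamma_fn(1 - a) - gammaincc(1 - a, x)

def L_inv(y, a):
    return brentq(lambda x: L(x, a) - y, 1e-12, 60.0)

def pdp_weights(alpha, theta, eps=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    r = int(round(theta / alpha))             # (7.1) assumes r = theta/alpha
    gamma_r = np.sum(rng.exponential(size=r)) # Gamma_r
    gamma_i, pts, total = gamma_r, [], 0.0
    while True:
        gamma_i += rng.exponential()          # next arrival Gamma_i, i > r
        p = L_inv(gamma_i / gamma_r, alpha)
        pts.append(p)
        total += p
        if p / total < eps:                   # stopping rule for n(eps)
            break
    return np.array(pts) / total

w = pdp_weights(alpha=0.5, theta=1.0, seed=0)
print(f"kept {w.size} weights; largest: {w[:3].round(4)}")
```

Because the generated weights decrease monotonically, the stopping rule is guaranteed to trigger, which is exactly the property the stick-breaking representation lacks.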
In the simulation, we set \(n=20\), \(m=20\) in Algorithm A, and \(n=20\times 20=400\) in (7.1). Throughout this section, we take the base measure \(H\) to be the uniform distribution on \([0,1]\). We also compute the Kolmogorov distance between the Poisson-Dirichlet process and \(H\) for different values of \(\alpha\) and \(\theta\) (\(r=\theta/\alpha\)). The Kolmogorov distance between \(P_{n,r,L,H}\) and \(H\), denoted by \(d(P_{n,r,L,H},H)\), is defined by \[d(P_{n,r,L,H},H)=\sup_{x\in\mathbb{R}}\left|P_{n,r,L,H}((-\infty,x])-H((-\infty,x])\right|=\sup_{x\in\mathbb{R}}\left|P_{n,r,L,H}(x)-H(x)\right|,\] where \(P_{n,r,L,H}(x)\) and \(H(x)\) denote the corresponding cumulative distribution functions. For different values of \(\alpha\) and \(\theta\), we have obtained \(500\) Kolmogorov distances and report the average of these values in Table 2. From the simulation results in Table 2, we can conclude that the new approach outperforms Algorithm A in all cases.

Figure 3: Sample paths of the two-parameter Poisson-Dirichlet process, where \(H\) is the uniform distribution on \([0,1]\), \(\alpha=0.9\), and \(\theta=1,10,100\). The red line denotes the cumulative distribution function of \(H\).

## 8 Concluding Remarks

We derive the negative binomial process directly as a functional of the Poisson random measure. Then, using this derivation of the negative binomial process, we provide a generalized Poisson-Kingman distribution and also a random discrete probability measure which contains many well-known priors in nonparametric Bayesian analysis, such as the Dirichlet process, the Poisson-Dirichlet process, and the normalized generalized gamma process. A natural extension of the Dirichlet process as a functional of our proposed series representation for the negative binomial process is obtained. We also provide an almost surely convergent approximation for this extended Dirichlet process. Then the general structures of the posterior and predictive processes are given. We also justify the role of the parameter \(r\) in the clustering problem. Another by-product of our proposed series representation for the negative binomial process is a new series representation for the Poisson-Dirichlet process. It is shown that an approximation based on this new representation for the Poisson-Dirichlet process is very efficient, as illustrated in a simulation study.
2306.17723
FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis
Neural Radiance Field (NeRF) has been a mainstream approach in novel view synthesis thanks to its remarkable quality of rendered images and simple architecture. Although NeRF has been developed in various directions, continuously improving its performance, the necessity of a dense set of multi-view images remains a stumbling block to its practical application. In this work, we propose FlipNeRF, a novel regularization method for few-shot novel view synthesis that utilizes our proposed flipped reflection rays. The flipped reflection rays are explicitly derived from the input ray directions and estimated normal vectors, and serve as effective additional training rays while enabling the model to estimate more accurate surface normals and learn the 3D geometry effectively. Since the surface normal and the scene depth are both derived from the estimated densities along a ray, an accurate surface normal leads to more exact depth estimation, which is a key factor for few-shot novel view synthesis. Furthermore, with our proposed Uncertainty-aware Emptiness Loss and Bottleneck Feature Consistency Loss, FlipNeRF is able to produce more reliable outputs, reducing floating artifacts effectively across different scene structures, and to enhance the feature-level consistency between pairs of rays cast toward photo-consistent pixels without any additional feature extractor, respectively. Our FlipNeRF achieves SOTA performance on multiple benchmarks across all scenarios.
Seunghyeon Seo, Yeonjin Chang, Nojun Kwak
2023-06-30T15:11:00Z
http://arxiv.org/abs/2306.17723v4
# FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis ###### Abstract Neural Radiance Field (NeRF) has been a mainstream approach in novel view synthesis thanks to its remarkable quality of rendered images and simple architecture. Although NeRF has been developed in various directions, continuously improving its performance, the necessity of a dense set of multi-view images remains a stumbling block to its practical application. In this work, we propose FlipNeRF, a novel regularization method for few-shot novel view synthesis that utilizes our proposed flipped reflection rays. The flipped reflection rays are explicitly derived from the input ray directions and estimated normal vectors, and serve as effective additional training rays while enabling the model to estimate more accurate surface normals and learn the 3D geometry effectively. Since the surface normal and the scene depth are both derived from the estimated densities along a ray, an accurate surface normal leads to more exact depth estimation, which is a key factor for few-shot novel view synthesis. Furthermore, with our proposed Uncertainty-aware Emptiness Loss and Bottleneck Feature Consistency Loss, FlipNeRF is able to produce more reliable outputs, reducing floating artifacts effectively across different scene structures, and to enhance the feature-level consistency between pairs of rays cast toward photo-consistent pixels without any additional feature extractor, respectively. Our FlipNeRF achieves SOTA performance on multiple benchmarks across all scenarios. The code and more qualitative results are available on our project page: [https://shawn615.github.io/flipnerf/](https://shawn615.github.io/flipnerf/).

## 1 Introduction

Neural Radiance Field (NeRF) [24] has achieved great success in rendering photo-realistic images from novel viewpoints. However, the necessity of a dense set of training images remains a practical bottleneck, since NeRF suffers from significant performance degradation when trained with sparse views. There are two mainstreams for few-shot novel view synthesis: _pre-training_ and _regularization_ methods, both of which focus on learning the 3D geometry efficiently from sparse inputs. The pre-training methods [43, 5, 6, 39, 20, 12, 19, 29, 35, 14] require large-scale datasets consisting of different scenes with multi-view images for injecting prior knowledge during pre-training, while the regularization methods [25, 31, 15, 11, 30, 8, 17] are optimized per scene, exploiting additional training resources, _e.g_. unseen viewpoints [25, 15, 17], depth map generation [8, 30], off-the-shelf models [11, 25], and so on, for an effective regularization to alleviate overfitting. Although the prior arts achieved promising results in novel view synthesis from sparse inputs, there still exist hurdles to overcome.

Figure 1: **Synthesis results from sparse inputs. Our FlipNeRF significantly improves rendering quality compared to other baselines. Compared to the vanilla mip-NeRF [1] and MixNeRF [31], which is the state-of-the-art regularization method, ours reduces noise and floating artifacts noticeably with superior surface normal estimation. Although Ref-NeRF [36] estimates smooth normal vectors, it shows much inferior rendering results to ours, with large chunks of noise, under the few-shot setting.**
The large-scale datasets, which are used for the pre-training methods, are expensive to collect, and the NeRF model is prone to performance degradation on out-of-distribution datasets. On the other hand, the regularization methods heavily rely on additional training resources which might not always be available, and they require many heuristic factors, _e.g_. the choice of off-the-shelf models, the hyperparameters for sampling unseen viewpoints, and so on. In this paper, we propose _FlipNeRF_, an effective regularization method exploiting the flipped reflection rays1 as additional training resources while filtering out the ineffective newly generated rays. We derive a batch of flipped reflection rays from the original ray directions and estimated surface normals so that they are cast toward the same target pixels as the original input rays. Compared to the existing regularization methods, which have mainly focused on accurate depth estimation from limited input views [31, 25, 15, 30, 8], our FlipNeRF is trained to reconstruct surface normals accurately by learning to generate effective reflection rays to be used in training. Since both estimated surface normals and depths are derived from the volume densities representing the underlying 3D geometry, accurately estimating the surface normals of an object naturally leads to more accurate depth maps. Footnote 1: The term ‘flipped’ is used because the reflected ray has an opposite direction (from an object to a camera). Furthermore, we propose an effective regularization loss, _Uncertainty-aware Emptiness Loss (UE Loss)_, to reduce the floating artifacts effectively while considering the uncertainty of the model's outputs by using the estimated scale parameters of mixture models. Since our FlipNeRF is built upon MixNeRF [31], a regularization method achieving promising results by modeling input rays with mixture density models [2], we are able to apply our proposed loss without any modification of the architecture by using the estimated scale parameters of each sample along a ray, which stand for the uncertainty of the samples' estimated probability density distributions. Additionally, inspired by [6, 11, 15], which address the feature-level consistency of targets under the sparse-input setting, we encourage consistency for the pairs of bottleneck features between the original input rays and flipped reflection rays. We leverage the Jensen-Shannon Divergence, which is based on the similarity between probability distributions, to make the pairs of bottleneck feature distributions of original and flipped reflection rays more similar to each other, improving feature consistency. We demonstrate the effectiveness of our proposed FlipNeRF through experiments on multiple benchmarks, _e.g_. Realistic Synthetic 360\({}^{\circ}\)[24], DTU [13], and LLFF [23]. Our method achieves state-of-the-art (SOTA) performance compared to other baselines. Especially, ours outperforms other baselines by a large margin with more accurate surface normals under extremely sparse settings such as the 3- and 4-view settings, which are the most challenging ones. Our contributions are summarized as follows: * We propose an effective training framework for NeRF with sparse training views, called _FlipNeRF_. It leverages flipped reflection rays to provide additional training resources, resulting in more precise surface normals with our proposed masking strategy to filter out the ineffective rays.
* We also propose an effective regularization loss, _Uncertainty-aware Emptiness Loss (UE Loss)_, which reduces floating artifacts while considering the uncertainty of outputs, leading to more reliable estimation. * We enhance the consistency of bottleneck features between the original input rays and flipped reflection rays by the Jensen-Shannon Divergence, coined as _Bottleneck Feature Consistency Loss (BFC Loss)_, improving the robustness for rendering from unseen viewpoints. * Our FlipNeRF achieves SOTA performance on multiple benchmarks. Especially, ours outperforms other baselines by a large margin in more challenging scenarios, _e.g_. 3/4-view.

## 2 Related Works

### Neural Radiance Field

Recently, Neural Radiance Field (NeRF) [24] has shown impressive performance and potential in the novel view synthesis task. NeRF represents a scene with an MLP, mapping coordinates and viewing directions to colors and volume densities, and then creates a novel view through volume rendering. Subsequent studies have developed NeRF in several directions, _e.g_. using conical frustums instead of rays [1], reparameterizing an input viewing direction as its reflection direction [36], and so on. These works have made significant progress by addressing various issues in novel view synthesis, but there still exists a limitation in that NeRF requires a dense set of training images and a lengthy training time. Many studies have addressed these issues [10, 28, 5], including the utilization of various data structures for faster training and inference [9, 42] and attempts to train NeRF with only a few training images. Our work focuses on enhancing the performance of NeRF when a sparse set of views is provided as training images.

### Few-Shot Novel View Synthesis

There are two main approaches for few-shot novel view synthesis: the _pre-training_ and the _regularization_ methods. The pre-training methods require a large dataset of multi-view scenes to provide prior knowledge of 3D geometry to a NeRF model and then optionally fine-tune on the target scene [5, 6, 12, 19, 29, 35, 39, 43]. On the contrary, the regularization methods [31, 15, 11, 25, 8, 30, 17] are optimized per scene without a pre-training process by exploiting additional training resources, _e.g_. depth maps [8, 30] and semantic consistency [11], as extra supervision. Among them, [25, 15, 17] adopt an unseen viewpoint sampling strategy to make up for insufficient training views. However, these sampling processes require many hand-designed factors such as the ranges of rotation, translation, jittering, and so on, which can introduce artificial biases. Our work proposes a novel regularization approach that derives a set of flipped reflection rays from estimated surface normals and utilizes these for regularization, which does not require a heuristic factor to be fine-tuned, resulting in a more effective training strategy with limited inputs.

### Surface Normal Reconstruction

There is a line of research on recovering accurate textures and lighting conditions of objects with NeRF [26, 41, 38, 7, 34, 44, 3]. Although these earlier studies successfully reconstruct high-quality isosurfaces derived from the scene representations, their rendering quality for novel views is still inferior to that of NeRF-like models. Meanwhile, Ref-NeRF [36] achieved superior performance with remarkable quality of surface normals compared to the existing NeRF models.
Since the normal vectors utilized in the NeRF framework are derived from the negative normalized density gradients [34, 3, 36], which represent the underlying geometry of 3D scenes, learning an accurate density distribution along a ray is a key factor for surface normal reconstruction. However, for few-shot novel view synthesis, the prior works mostly focus on accurate depth estimation without attention to the surface normals, both of which are derived from the estimated volume densities. In this work, we approach the few-shot novel view synthesis problem by focusing on the surface normals, which are another critical factor for effective learning of 3D scene geometry. To the best of our knowledge, our work is the first attempt to focus on surface normal estimation for few-shot novel view synthesis.

## 3 Method

In this work, we propose an effective regularization method for few-shot novel view synthesis with flipped reflection rays. Our FlipNeRF is built upon MixNeRF [31], which leverages a mixture model framework (Sec. 3.1). We derive a batch of flipped reflection rays and cast them toward the identical target pixels as additional training rays (Sec. 3.2). Furthermore, we propose the _Uncertainty-aware Emptiness Loss_ and _Bottleneck Feature Consistency Loss_ to alleviate the floating artifacts adaptively based on the uncertainty and to enhance the consistency between the bottleneck feature distributions of the original and flipped reflection rays, respectively (Sec. 3.3 and Sec. 3.4). Finally, our FlipNeRF is trained to minimize the MSE and NLL losses as well as the proposed regularization loss terms with their corresponding balancing weights (Sec. 3.5). Fig. 2 shows an overview of our FlipNeRF.

Figure 2: **Overall framework of FlipNeRF. Our FlipNeRF utilizes the newly generated flipped reflection rays with our proposed UE Loss and BFC Loss as well as the existing MSE, NLL and Orientation losses. See Sec. 3 and Fig. 3 for more details about the generation process of flipped reflection rays and the loss terms.**

### Preliminaries

NeRF.NeRF [24], an MLP-based neural network, represents a 3D scene as a continuous radiance field of RGB color and volume density. For every point sampled along a ray, the 3D coordinates \(\mathbf{x}=(x,y,z)\) and viewing directions \((\theta,\phi)\) are mapped to the colors \(\mathbf{c}=(r,g,b)\) and densities \(\sigma\): \[F(\gamma(\mathbf{x}),\gamma(\hat{\mathbf{d}}))\rightarrow(\mathbf{c},\sigma), \tag{1}\] where \(F(\cdot)\), \(\gamma(\cdot)\), and \(\hat{\mathbf{d}}\) indicate an MLP, the positional encoding for the inputs, and the 3D Cartesian unit vector used as the input viewing direction in practice, respectively. The volumetric radiance field is rendered by alpha compositing the RGB values along an input ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\)[22], where \(\mathbf{o}\) and \(\mathbf{d}\) denote the camera origin and un-normalized direction vector, _i.e_. \(\mathbf{d}=\|\mathbf{d}\|_{2}\cdot\hat{\mathbf{d}}\), respectively. The volume rendering integrals are denoted as follows: \[\hat{\mathbf{c}}(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t),\hat{\mathbf{d}})\,dt, \tag{2}\] where \(T(t)=\exp(-\int_{t_{n}}^{t}\sigma(s)\,ds)\) indicates the degree of transparency. In practice, it is approximated with numerical quadrature [24] by sampling points along a ray.
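For concreteness, the quadrature can be written in a few lines. The sketch below is our NumPy illustration of the standard discretization (the paper's own implementation is in JAX on the mip-NeRF codebase); it composites per-sample densities and colors into a pixel color using the blending weights \(w_{i}=T_{i}(1-\exp(-\sigma_{i}\delta_{i}))\) that also appear in Eq. (5) below. The choice of bin edges for the samples is our assumption.

```python
import numpy as np

def composite_ray(sigmas, colors, t_edges):
    """Discrete approximation of Eq. (2) along one ray.
    sigmas: (M,) densities; colors: (M, 3) RGB; t_edges: (M+1,) sample bin edges."""
    deltas = np.diff(t_edges)                       # sample intervals delta_i
    alphas = 1.0 - np.exp(-sigmas * deltas)         # per-sample opacity
    # T_i = exp(-sum_{j<i} sigma_j * delta_j): accumulated transmittance
    trans = np.exp(-np.concatenate(([0.0], np.cumsum(sigmas[:-1] * deltas[:-1]))))
    weights = trans * alphas                        # w_i = T_i (1 - e^{-sigma_i delta_i})
    return weights @ colors, weights                # rendered RGB and weights for reuse

# toy example: 64 samples between near plane t=2 and far plane t=6
rng = np.random.default_rng(0)
c_hat, w = composite_ray(rng.exponential(size=64), rng.random((64, 3)),
                         np.linspace(2.0, 6.0, 65))
```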
The radiance field is trained to minimize the mean squared error (MSE) between rendered and GT pixels: \[\mathcal{L}_{\text{MSE}}=\sum_{\mathbf{r}\in\mathcal{R}}||\hat{\mathbf{c}}(\mathbf{r})-\mathbf{c}^{\text{GT}}(\mathbf{r})||_{2}^{2}\,, \tag{3}\] where \(\mathcal{R}\) is denoted as a set of rays. MixNeRF.Built upon mip-NeRF [1], which leveraged a cone tracing method and proposed an integrated positional encoding to address the aliasing problem, MixNeRF [31] estimates the joint probability distribution of color values and models a ray with a mixture of densities: \[p(\mathbf{c}|\mathbf{r})=\sum_{i=1}^{M}\pi_{i}\mathcal{F}(\mathbf{c};\mu_{i}^{\mathbf{c}},\beta_{i}), \tag{4}\] where \(M\) is the number of sampled points, \(\mathcal{F}(\mathbf{c};\mu_{i}^{\mathbf{c}},\beta_{i})\) denotes the Laplacian distribution of RGB \(\mathbf{c}\) with location parameter \(\mu_{i}^{\mathbf{c}}\in\{\mu_{i}^{r},\mu_{i}^{g},\mu_{i}^{b}\}\), _i.e_. the estimated RGB values of a sample, and scale parameter \(\beta_{i}\in\{\beta_{i}^{r},\beta_{i}^{g},\beta_{i}^{b}\}\). The mixture coefficient \(\pi_{i}\) is derived from the estimated volume density \(\sigma_{i}\) as follows: \[\pi_{i}=\frac{w_{i}}{\sum_{m=1}^{M}w_{m}}=\frac{T_{i}(1-\exp(-\sigma_{i}\delta_{i}))}{\sum_{m=1}^{M}T_{m}(1-\exp(-\sigma_{m}\delta_{m}))}, \tag{5}\] where \(w_{i}\) and \(\delta_{i}\) indicate the alpha blending weight and sample interval, respectively. Thanks to the mixture model's capacity for representing complex distributions, MixNeRF learns the density distribution effectively with sparse inputs by minimizing the negative log-likelihood (NLL) in Eq. 4. Our FlipNeRF is built upon MixNeRF, leveraging the mixture modeling framework while achieving superior rendering quality, with noticeably fewer artifacts and more accurate surface normals than MixNeRF.

### Auxiliary Flipped Reflection Ray

As shown in Fig. 3, we exploit a batch of flipped reflection rays \(\mathbf{r}^{\prime}\in\mathcal{R}^{\prime}\) as extra training resources, which are derived from the original input ray directions \(\mathbf{d}\) and estimated surface normals \(\hat{\mathbf{n}}\). First, we derive a flipped reflection direction \(\mathbf{d}^{\prime}\) from \(\mathbf{d}\) and \(\hat{\mathbf{n}}\): \[\mathbf{d}^{\prime}=2(\mathbf{d}\cdot\hat{\mathbf{n}})\hat{\mathbf{n}}-\mathbf{d}, \tag{6}\] where \(\hat{\mathbf{n}}\) denotes the weighted sum of blending weights and estimated normal vectors along a ray, _i.e_. \(\hat{\mathbf{n}}=\sum_{i=1}^{M}w_{i}\mathbf{n}_{i}\).2 Note that we use the gradient of volume density as the estimated surface normals, following [3, 34, 36]. Footnote 2: Technically, \(\hat{\mathbf{n}}\) is not guaranteed to be a unit vector without an explicit normalization process. However, we empirically found that the normalization rather destabilizes the training and leads to performance degradation. Kindly refer to our supplementary material for related experiments. To generate the additional training rays based on \(\mathbf{d}^{\prime}\), we need a set of imaginary ray origins \(\mathbf{o}^{\prime}\) located in a suitable space considering the hitting point and the original input ray origins \(\mathbf{o}\). The vanilla NeRF models, which are trained with a dense set of images, tend to have a blending weight distribution whose peak is located at the point around the object surface, \(\mathbf{p}_{s}=\mathbf{o}+t_{s}\mathbf{d}\) [8, 31], _i.e_. the \(s\)-th sample whose blending weight is the highest along the ray.
Therefore, we place \(\mathbf{o}^{\prime}\) so that the \(s\)-th sample of \(\mathbf{r}^{\prime}\) is \(\mathbf{p}_{s}\): \[\mathbf{o}^{\prime}=\mathbf{p}_{s}-t_{s}\mathbf{d}^{\prime}, \tag{7}\] resulting in our proposed flipped reflection ray, \(\mathbf{r}^{\prime}(t)=\mathbf{o}^{\prime}+t\mathbf{d}^{\prime}\). Compared to the previous unseen viewpoint sampling strategies [15, 25, 17], our proposed strategy does not rely on the randomness of unseen viewpoint sampling and reduces the heuristic factors of sampling schemes, _e.g_. the range of rotation, translation, and so on. Furthermore, since our newly generated \(\mathbf{r}^{\prime}\) are cast on the identical object surfaces where the original input rays \(\mathbf{r}\) are cast, _i.e_. the target pixels are photo-consistent for the pair of \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\) without any sophisticated viewpoint sampling process, we are able to train \(\mathbf{r}^{\prime}\) effectively with the same GT pixels as \(\mathbf{r}\). However, since the \(\hat{\mathbf{n}}\), which are used to derive \(\mathbf{d}^{\prime}\), are not the ground truth but estimates, there exists a concern that even miscreated \(\mathbf{r}^{\prime}\), which do not satisfy photo-consistency, can be used for training. As a result, this might lead to performance degradation by providing misleading training cues. To address this problem, we mask the ineffective \(\mathbf{r}^{\prime}\) by considering the angle \(\theta\) between \(\hat{\mathbf{n}}\) and \(-\hat{\mathbf{d}}\) as follows: \[M(\mathbf{r}^{\prime})=\begin{cases}1&\text{if }\arccos{(-(\hat{\mathbf{d}}\cdot\hat{\mathbf{n}}))}<\tau\\ 0&\text{otherwise}\end{cases}, \tag{8}\] where \(-(\hat{\mathbf{d}}\cdot\hat{\mathbf{n}})\) amounts to \(\cos\theta\) between the original input ray and the normal vector, and \(\tau\) indicates the threshold for filtering the invalid rays, which we set as \(90^{\circ}\) unless otherwise specified. Through this masking process, only the \(\mathbf{r}^{\prime}\) which are cast toward photo-consistent points remain, as we intend. Finally, our proposed flipped reflection rays are modeled by a mixture density, like the original input rays: \[p(\mathbf{c}|\mathbf{r}^{\prime})=\sum_{i=1}^{M}\pi_{i}^{\prime}\mathcal{F}(\mathbf{c};\mu_{i}^{\mathbf{c}\prime},\beta_{i}^{\prime}). \tag{9}\] Additionally, we leverage the _Orientation Loss_ proposed in Ref-NeRF [36] to penalize the backward-facing normal vectors for learning accurate surface normals: \[l_{\text{ori.}}(\mathbf{r})=\sum_{i=1}^{M}w_{i}\max(0,\mathbf{n}_{i}\cdot\hat{\mathbf{d}})^{2}. \tag{10}\] Unlike Ref-NeRF, we penalize the underlying density gradient normals \(\mathbf{n}_{i}\) instead of predicted normals.

Figure 3: **Flipped reflection ray generation. Our FlipNeRF generates the flipped reflection ray \(\mathbf{r}^{\prime}\) from the estimated normal vector \(\hat{\mathbf{n}}\) and the original input ray direction \(\mathbf{d}\). With our masking strategy, \(\mathbf{r}^{\prime}_{2}\) is filtered out since it does not satisfy the photo-consistency condition, _i.e_. \(\theta_{2}\) is bigger than 90\({}^{\circ}\). The smaller \(\theta\) is, the more photo-consistent the target pixel is, where the pair of \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\) are cast.**
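In code, the ray generation and masking of Eqs. (6)-(8) reduce to a few lines of vector algebra. Below is our NumPy paraphrase of the math (not the authors' JAX implementation; the batch shapes are our assumptions). Note the clamp before the arccos: as mentioned in the footnote, \(\hat{\mathbf{n}}\) is not normalized, so \(-(\hat{\mathbf{d}}\cdot\hat{\mathbf{n}})\) can fall slightly outside \([-1,1]\).

```python
import numpy as np

def flipped_reflection_rays(o, d, normals, weights, t_vals, tau_deg=90.0):
    """Sketch of Eqs. (6)-(8): build r'(t) = o' + t d' and the validity mask.
    o, d: (B, 3) origins / unnormalized directions; normals: (B, M, 3) per-sample
    density-gradient normals; weights: (B, M) blending weights; t_vals: (B, M)."""
    n_hat = (weights[..., None] * normals).sum(axis=1)               # n_hat = sum_i w_i n_i
    d_prime = 2.0 * (d * n_hat).sum(-1, keepdims=True) * n_hat - d   # Eq. (6)
    s = weights.argmax(axis=1)                                       # peak blending weight
    t_s = np.take_along_axis(t_vals, s[:, None], axis=1)             # (B, 1)
    p_s = o + t_s * d                                                # surface point p_s
    o_prime = p_s - t_s * d_prime                                    # Eq. (7)
    d_unit = d / np.linalg.norm(d, axis=-1, keepdims=True)
    cos_theta = -(d_unit * n_hat).sum(-1)                            # -(d_hat . n_hat)
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))     # clamp: n_hat not unit
    return o_prime, d_prime, theta < tau_deg                         # Eq. (8) mask
```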
Note that our FlipNeRF is fundamentally different from Ref-NeRF, since ours generates additional training rays through the derivation of the reflection direction without modifying the original representations, while Ref-NeRF replaced the input viewing direction with its reflection direction, reparameterizing the outgoing radiance.

### Uncertainty-aware Regularization

Several regularization techniques have been proposed to reduce the floating artifacts present in synthesized images, one of the major problems of NeRF. Among them, we leverage the _Emptiness Loss_[37], which penalizes the small blending weights along a ray as follows: \[l_{\text{Emp.}}(\mathbf{r})=\frac{1}{M}\sum_{i=1}^{M}\log(1+\eta\cdot w_{i}), \tag{11}\] where the bigger \(\eta\) is, the steeper the loss function becomes around \(0\). However, the naive application of existing regularization techniques with limited training views might not be consistently helpful across different scenes due to scene-by-scene structural differences, resulting in overall performance degradation. To address this problem, we propose the _Uncertainty-aware Emptiness Loss (UE Loss)_, developed upon the Emptiness Loss, which reduces the floating artifacts consistently over different scenes by considering the output uncertainty: \[l_{\text{UE}}(\mathbf{r})=\frac{1}{M}\sum_{i=1}^{M}\log(1+\rho\cdot\eta\cdot w_{i}), \tag{12}\] \[\text{where}\quad\rho=\frac{1}{3}\sum_{c}^{\{r,g,b\}}\sum_{i=1}^{M}\beta_{i}^{c}.\] \(\rho\) amounts to the average of the summation of the estimated scale parameters of the RGB color distributions over all samples along a ray, which we use as the uncertainty of the ray. With our proposed UE Loss, we are able to regularize the blending weights adaptively, _i.e_. the more uncertain a ray is, the more penalized the blending weights along the ray are. It reduces floating artifacts consistently across scenes with different structures and enables synthesizing more reliable outputs by considering uncertainty.

### Bottleneck Feature Consistency

Motivated by previous works addressing the feature-level consistency of multiple views for few-shot novel view synthesis [6, 11, 15], we encourage the consistency of the bottleneck feature distributions between \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\), which are intermediate feature vectors, _i.e_. outputs of the spatial MLP of NeRF, by the Jensen-Shannon Divergence (JSD): \[l_{\text{BFC}}(\mathbf{r},\mathbf{r}^{\prime})=JSD(\psi(\mathbf{b}),\psi(\mathbf{b}^{\prime})), \tag{13}\] where \(\psi(\cdot)\), \(\mathbf{b}\) and \(\mathbf{b}^{\prime}\) denote the softmax function, and the bottleneck features of \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\), respectively. While the existing methods [6, 11] rely on off-the-shelf feature extractors like a 2D CNN or CLIP [27] to address high-level feature consistency, we regulate the pair of features effectively by enhancing the consistency between bottleneck features without depending on additional feature extractors.

### Total Loss

Our FlipNeRF is trained to maximize the log-likelihood of the target pixel \(\mathbf{c}_{\text{GT}}\) not only for the set of original input rays \(\mathcal{R}\), but also for the flipped reflection rays \(\mathcal{R}^{\prime}_{M}\), where the ineffective rays are excluded from the total flipped reflection rays \(\mathcal{R}^{\prime}\) by our masking strategy in Eq. 8. Likewise, the UE Losses are applied for both \(\mathcal{R}\) and \(\mathcal{R}^{\prime}_{M}\).
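Before aggregating the total loss below, here is a compact sketch of the two proposed terms, Eq. (12) and Eq. (13), written by us in NumPy for a single ray; \(\eta=100\) is an arbitrary placeholder, since the actual hyperparameter values are deferred to the supplementary material.

```python
import numpy as np

def ue_loss(weights, betas, eta=100.0):
    """Eq. (12) for one ray. weights: (M,) blending weights;
    betas: (M, 3) per-sample Laplacian scales for (r, g, b)."""
    rho = betas.sum() / 3.0                       # rho = (1/3) sum_c sum_i beta_i^c
    return np.log1p(rho * eta * weights).mean()   # (1/M) sum_i log(1 + rho*eta*w_i)

def bfc_loss(b, b_prime, eps=1e-12):
    """Eq. (13): Jensen-Shannon divergence between softmaxed bottleneck features."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    def kl(p, q):
        return (p * np.log((p + eps) / (q + eps))).sum()
    p, q = softmax(b), softmax(b_prime)
    m = 0.5 * (p + q)                             # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```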
Aggregating all, our total loss over a batch is as follows: \[\begin{split}\mathcal{L}_{\text{Total}}&=\mathcal{L}_{\text{MSE}}+\lambda_{1}\mathcal{L}_{\text{NLL}}+\lambda_{2}\mathcal{L}^{\prime}_{\text{NLL}}\\ &+\lambda_{3}\mathcal{L}_{\text{UE}}+\lambda_{4}\mathcal{L}^{\prime}_{\text{UE}}+\lambda_{5}\mathcal{L}_{\text{BFC}}+\lambda_{6}\mathcal{L}_{\text{Ori.}},\end{split} \tag{14}\] where the \(\lambda\)'s are balancing weight terms for the losses.

## 4 Experiments

### Experimental Settings

Implementation details.Our FlipNeRF is built upon MixNeRF [31] based on the JAX codebase [4], which is developed upon mip-NeRF [1]. The overall training scheme follows [25, 31]. We adopt a scene space annealing strategy for the early training phase. Also, we apply exponential decay and warm-up for the learning rate. The Adam optimizer [16] and gradient clipping by value at 0.1 and by norm at 0.1 are used. We train our FlipNeRF for 500 pixel epochs with a batch size of 4,096 on 4 NVIDIA RTX 3090 GPUs. Additionally, since the LLFF dataset [23] consists of scenes with much more static viewpoint movement compared to the other datasets, we set the threshold (\(\tau\)) of our masking strategy as 30\({}^{\circ}\) for the experiments on LLFF. More detailed hyperparameters and our loss balancing terms for the different datasets and numbers of training views are provided in the supplementary material. Datasets and metrics.We evaluate the performance of our FlipNeRF and the baselines on the representative benchmarks: Realistic Synthetic 360\({}^{\circ}\)[24], DTU [13], and LLFF [23]. Realistic Synthetic 360\({}^{\circ}\) contains 8 synthetic scenes, each consisting of 400 multi-view rendered images with a white background. To compare against other representative baselines [15, 11, 31, 25, 1], we evaluate our FlipNeRF under the scenarios of 4 and 8 views. For a fair comparison, we sample the first \(n\) images of the training set for the \(n\)-view scenario so that identical images are provided for training the different methods, following [31]. We use 200 images of the test set for evaluation. For DTU, which provides various scenes including objects placed on a white table against a black background, we conduct experiments on the 15 specific scenes under the scenarios of 3, 6, and 9 views, following the experimental protocol of [43]. We also conduct a series of experiments for the analysis of FlipNeRF under the 3-view setting as well as a comparison against other baselines. Additionally, we compare our FlipNeRF against other baselines on LLFF, which consists of real forward-facing scenes and is often tested as an out-of-distribution dataset for pre-training methods. Following [24], every 8th image of each scene is used for a held-out test set and the training views are evenly selected from the remaining images. Like DTU, we report our results for the 3-, 6-, and 9-view scenarios, following [43]. For the quantitative evaluation of rendered images, we adopt the mean of PSNR, SSIM [40], LPIPS [45], and the geometric average [1]. Furthermore, we also adopt the mean angular error (MAE\({}^{\circ}\)) [36] and NLL [33, 32, 21, 18] for evaluating the surface normals and uncertainty, respectively. Specifically, following [33], we compute the NLL by deriving the probability of the GT pixel values given a Gaussian distribution with the estimated RGB values as the mean and the uncertainty as the variance, where the uncertainty is derived from the weighted sum of the blending weights and estimated scale parameters along a ray in our FlipNeRF.
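For reference, the evaluation metrics above can be sketched as follows (our code, not the paper's): PSNR from the MSE; the geometric average, which, as we understand mip-NeRF's definition, is the geometric mean of \(10^{-\mathrm{PSNR}/10}\), \(\sqrt{1-\mathrm{SSIM}}\), and LPIPS; and the per-pixel Gaussian NLL just described.

```python
import numpy as np

def psnr(pred, gt):
    """Peak signal-to-noise ratio for images with values in [0, 1]."""
    return -10.0 * np.log10(np.mean((pred - gt) ** 2))

def average_error(psnr_val, ssim_val, lpips_val):
    """Geometric average of [1]: geometric mean of the three transformed metrics."""
    terms = np.array([10.0 ** (-psnr_val / 10.0), np.sqrt(1.0 - ssim_val), lpips_val])
    return float(np.exp(np.log(terms).mean()))

def pixel_nll(pred_rgb, gt_rgb, var, eps=1e-9):
    """NLL of GT pixels under a Gaussian with predicted RGB as mean and the
    ray uncertainty as variance, as described above (following [33])."""
    var = np.maximum(var, eps)
    return float(np.mean(0.5 * np.log(2.0 * np.pi * var)
                         + (gt_rgb - pred_rgb) ** 2 / (2.0 * var)))
```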
For DTU, we report the results evaluated by masked metrics to prevent background bias, following [25, 31]. Baselines.We compare our FlipNeRF against the SOTA regularization methods [11, 15, 25, 31] on Realistic Synthetic 360\({}^{\circ}\) as well as the vanilla mip-NeRF [1] and Ref-NeRF [36], which is known for achieving promising results with accurate surface normals. Furthermore, we compare ours against the representative pre-training methods [5, 6, 43] as well as the regularization methods on DTU and LLFF. The pre-training baselines exploit DTU and LLFF as the pre-training dataset and the out-of-distribution test set, respectively, while the regularization methods, mip-NeRF, and Ref-NeRF are optimized per scene. Note that we report the quantitative results of the other baselines on DTU and LLFF from [25], which achieved better results than the original papers thanks to its modified training scheme, and those on Realistic Synthetic 360\({}^{\circ}\) from [31], which trained the baselines with identical training views for a fair comparison.

### Analysis of FlipNeRF

Analysis of flipped reflection rays.As shown in Tab. 1, our masking strategy of filtering out the ineffective flipped reflection rays with a threshold (\(\tau\)) of 90\({}^{\circ}\) achieves the best performance among the different options. With \(\tau\) of 30\({}^{\circ}\) and 60\({}^{\circ}\), the rendering quality is rather degraded since the newly generated rays are overly filtered and do not provide enough additional supervision, as demonstrated in Fig. 4a. On the other hand, when we exploit all the flipped reflection rays without masking, there is a little improvement in performance compared to the 30\({}^{\circ}\) and 60\({}^{\circ}\) masking, but it is still much inferior to 90\({}^{\circ}\) due to the negative impact of the ineffective rays. Additionally, Fig. 4b shows the distribution of \(\theta\), _i.e_. the angles between the input viewing directions and normal vectors. The smaller \(\theta\) is, the more photo-consistent the target pixel is, _i.e_. the more effective the newly generated flipped reflection rays are for training. Since our FlipNeRF is trained to estimate accurate normal vectors, we are able to exploit a set of more effective flipped reflection rays through the training, which are cast on more photo-consistent target pixels. Furthermore, at the initial training phase, the invalid additional rays are filtered effectively by our masking strategy, leading to high-quality supervision and stabilizing the training.

| \(\tau\) | PSNR \(\uparrow\) | SSIM \(\uparrow\) | LPIPS \(\downarrow\) | Average Err. \(\downarrow\) |
| --- | --- | --- | --- | --- |
| 30\({}^{\circ}\) | 18.62 | 0.747 | 0.206 | 0.121 |
| 60\({}^{\circ}\) | 18.12 | 0.723 | 0.237 | 0.126 |
| 90\({}^{\circ}\) | **19.55** | **0.767** | **0.180** | **0.101** |
| 180\({}^{\circ}\) (No masking) | 18.76 | 0.755 | 0.190 | 0.111 |

Table 1: **Comparison of masking conditions. Our masking strategy with \(\tau\) of 90\({}^{\circ}\) achieves the best results, filtering out the ineffective flipped reflection rays successfully.**

Figure 4: **Analysis of the masking strategy.**

Uncertainty-aware regularization.Tab. 2 shows the effectiveness of our proposed UE Loss. Compared to MixNeRF [31], which models a ray with a mixture of distributions as our FlipNeRF does, ours consistently achieves more reliable rendering results with a much lower NLL.
Without \(\mathcal{L}_{\text{Emp.}}\)[37] or our proposed \(\mathcal{L}_{\text{UE}}\), ours already outperforms MixNeRF by a large margin. However, ours with a naively leveraged \(\mathcal{L}_{\text{Emp.}}\) rather shows inferior results to MixNeRF. This shows that the naive application of an existing regularization technique for reducing artifacts under the few-shot setting can lead to overall performance degradation due to the varying scene-by-scene structures. As illustrated in Fig. 5, with our proposed \(\mathcal{L}_{\text{UE}}\), ours improves both the rendering quality and the reliability of the model outputs by a large margin compared to MixNeRF.

### Comparison with other SOTA Methods

Realistic Synthetic 360\({}^{\circ}\).As demonstrated in Tab. 5, our FlipNeRF achieves the SOTA performance across all the evaluation metrics. Compared to MixNeRF, which leverages a mixture model framework as ours does, our FlipNeRF improves the performance by a large margin. Noticeably, our FlipNeRF estimates more accurate surface normals than the other baselines, leading to a performance gain with better reconstructed fine details from limited input views, as shown in Fig. 5(a). Additionally, the vanilla Ref-NeRF, which shows great performance with accurate normal vectors, achieves comparable or even better performance than the other regularization methods except ours. From this result, we can expect that estimating accurate surface normals is one of the key factors for learning 3D geometry with sparse inputs. Note that the comparable MAE\({}^{\circ}\) of RegNeRF results from its overly-smoothed depth estimation and does not indicate high-quality rendering results, as shown in Fig. 5(a).

| Method | PSNR \(\uparrow\) 4-view | PSNR \(\uparrow\) 8-view | SSIM \(\uparrow\) 4-view | SSIM \(\uparrow\) 8-view | LPIPS \(\downarrow\) 4-view | LPIPS \(\downarrow\) 8-view | Average Err. \(\downarrow\) 4-view | Average Err. \(\downarrow\) 8-view | MAE\({}^{\circ}\) \(\downarrow\) 4-view | MAE\({}^{\circ}\) \(\downarrow\) 8-view |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mip-NeRF [1] | 14.12 | 18.74 | 0.722 | 0.828 | 0.382 | 0.238 | 0.221 | 0.121 | 96.05 | 101.21 |
| Ref-NeRF [36] | 18.09 | 24.00 | 0.764 | 0.879 | 0.269 | 0.106 | 0.150 | 0.058 | 65.62 | 57.93 |
| DietNeRF [11] | 15.42 | 21.31 | 0.730 | 0.847 | 0.314 | 0.153 | 0.201 | 0.086 | - | - |
| InfoNeRF [15] | 18.44 | 22.01 | 0.792 | 0.852 | 0.223 | 0.133 | 0.119 | 0.073 | - | - |
| RegNeRF [25] | 13.71 | 19.11 | 0.786 | 0.841 | 0.346 | 0.200 | 0.210 | 0.122 | 62.78 | 60.37 |
| MixNeRF [31] | 18.99 | 23.84 | 0.807 | 0.878 | 0.199 | 0.103 | 0.113 | 0.060 | 70.90 | 62.04 |
| **FlipNeRF (Ours)** | **20.60** | **24.38** | **0.822** | **0.883** | **0.159** | **0.095** | **0.091** | **0.055** | **58.72** | **57.17** |

Table 5: **Comparison with baselines on Realistic Synthetic 360\({}^{\circ}\).**

Dtu.Our FlipNeRF achieves the best results across all the scenarios and most of the evaluation metrics on DTU, as shown in Tab. 6. Remarkably, ours trained with 6 views outperforms all the pre-training methods trained with 9 views in every metric except SSIM. Furthermore, ours trained with 3 views still outperforms PixelNeRF [43] and SRF [6] trained with 6 views. Similar to the results on Realistic Synthetic 360\({}^{\circ}\), ours outperforms other baselines by a large margin, especially under the 3-view setting, which is the most challenging scenario, while reducing the floating artifacts successfully, as shown in Fig. 5(b). Since the flipped reflection rays are effective training resources for unseen views, the fewer the training views provided, the more performance gain is expected.

| Method | PSNR \(\uparrow\) 3-view | PSNR \(\uparrow\) 6-view | PSNR \(\uparrow\) 9-view | SSIM \(\uparrow\) 3-view | SSIM \(\uparrow\) 6-view | SSIM \(\uparrow\) 9-view | LPIPS \(\downarrow\) 3-view | LPIPS \(\downarrow\) 6-view | LPIPS \(\downarrow\) 9-view | Average Err. \(\downarrow\) 3-view | Average Err. \(\downarrow\) 6-view | Average Err. \(\downarrow\) 9-view |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mip-NeRF [1] | 8.68 | 16.54 | 23.58 | 0.571 | 0.741 | 0.879 | 0.353 | 0.198 | 0.092 | 0.323 | 0.148 | 0.056 |
| _Pre-training:_ | | | | | | | | | | | | |
| PixelNeRF [43] | 16.82 | 19.11 | 20.40 | 0.695 | 0.745 | 0.768 | 0.270 | 0.232 | 0.220 | 0.147 | 0.115 | 0.100 |
| PixelNeRF\({}^{\dagger}\) [43] | 18.95 | 20.56 | 21.83 | 0.710 | 0.753 | 0.781 | 0.269 | 0.223 | 0.203 | 0.125 | 0.104 | 0.090 |
| SRF [6] | 15.32 | 17.54 | 18.35 | 0.671 | 0.730 | 0.752 | 0.304 | 0.250 | 0.232 | 0.171 | 0.132 | 0.120 |
| SRF\({}^{\dagger}\) [6] | 15.68 | 18.87 | 20.75 | 0.698 | 0.757 | 0.785 | 0.281 | 0.225 | 0.205 | 0.162 | 0.114 | 0.093 |
| MVSNeRF [5] | 18.63 | 20.70 | 22.40 | **0.769** | 0.823 | 0.853 | 0.197 | 0.156 | 0.135 | 0.113 | 0.088 | 0.068 |
| MVSNeRF\({}^{\dagger}\) [5] | 18.54 | 20.49 | 22.22 | **0.769** | 0.822 | 0.853 | 0.197 | 0.155 | 0.135 | 0.113 | 0.089 | 0.069 |
| _Regularization:_ | | | | | | | | | | | | |
| DietNeRF [11] | 18.85 | 20.63 | 23.83 | 0.633 | 0.778 | 0.823 | 0.314 | 0.201 | 0.173 | 0.243 | 0.101 | 0.068 |
| RegNeRF [25] | 18.89 | 22.20 | 24.93 | 0.745 | **0.841** | **0.884** | 0.190 | 0.117 | 0.089 | 0.112 | 0.071 | 0.047 |
| MixNeRF [31] | 18.95 | 22.30 | 25.03 | 0.744 | 0.835 | 0.879 | 0.203 | 0.102 | 0.065 | 0.113 | 0.066 | 0.042 |
| **FlipNeRF (Ours)** | **19.55** | **22.45** | **25.12** | **0.767** | **0.839** | **0.882** | **0.180** | **0.098** | **0.062** | **0.101** | **0.064** | **0.041** |

Table 6: **Comparison with baselines on DTU. Our FlipNeRF outperforms all the pre-training and regularization methods in every scenario, especially by a large margin under the 3-view setting. \(\dagger\) indicates fine-tuning.**

| Method | PSNR \(\uparrow\) | SSIM \(\uparrow\) | LPIPS \(\downarrow\) | Average Err. \(\downarrow\) |
| --- | --- | --- | --- | --- |
| _Pre-training:_ | | | | |
| PixelNeRF [43] | 7.93 | 0.272 | 0.682 | 0.461 |
| PixelNeRF\({}^{\dagger}\) [43] | 16.17 | 0.438 | 0.512 | 0.217 |
| SRF [6] | 12.34 | 0.250 | 0.591 | 0.313 |
| SRF\({}^{\dagger}\) [6] | 17.07 | 0.436 | 0.529 | 0.203 |
| MVSNeRF [5] | 17.25 | 0.557 | 0.356 | 0.171 |
| MVSNeRF\({}^{\dagger}\) [5] | 17.88 | 0.584 | 0.327 | 0.157 |
| _Regularization:_ | | | | |
| DietNeRF [11] | 14.94 | 0.370 | 0.496 | 0.240 |
| RegNeRF [25] | 19.08 | 0.587 | 0.336 | 0.146 |
| MixNeRF [31] | 19.27 | 0.629 | 0.236 | 0.124 |
| **FlipNeRF (Ours)** | **19.34** | **0.631** | **0.235** | **0.123** |

Table 7: **Comparison with baselines on LLFF 3-view. \(\dagger\) indicates fine-tuning.**

LLFF.Table 7 compares our FlipNeRF against other baselines on LLFF, which is a real forward-facing dataset.
Although ours achieves the SOTA performance among the baselines, the improvement is much more marginal than on Realistic Synthetic 360\({}^{\circ}\) and DTU. We conjecture that the reason for the marginal improvement of our proposed method is that the flipped reflection rays, which are able to widely cover the unseen views, are not very useful for the scenes in LLFF, where the camera poses are much less dynamic than in the other datasets. In other words, our FlipNeRF is not only able to achieve competitive performance for scenes consisting of a set of simple camera poses, but it also renders the novel views in much higher quality for more dynamically captured scenes with only a few shots. The rendered images are provided in the supp. material.

## 5 Conclusion

In this work, we have focused on accurate surface normals, which are another key factor for few-shot novel view synthesis. Our proposed FlipNeRF utilizes a set of flipped reflection rays as additional training resources, which are simply derived from the estimated normal vectors and the input ray directions. Since it does not require any heuristic factor for unseen view generation, we are able to exploit these additional training resources with much less burden. Furthermore, with our proposed UE Loss, FlipNeRF reduces the floating artifacts consistently across different scene structures while considering the output uncertainty, leading to more reliable outputs. Also, our proposed BFC Loss enhances the bottleneck feature consistency between the rays cast on photo-consistent pixels without leveraging an off-the-shelf feature extractor, leading to performance improvement under the few-shot setting. Our FlipNeRF achieves the SOTA performance with limited input views among the other few-shot baselines and vanilla NeRF-like models. We expect that our work can open another meaningful direction for the research of few-shot novel view synthesis. Limitations and future work.Our FlipNeRF exploits the flipped reflection rays as a set of additional training rays, leading to more accurate surface normal estimation. Although Ref-NeRF achieved promising results with high-quality surface reconstruction, we use the mip-NeRF representation instead of Ref-NeRF for a fair comparison with the other methods, which are based on mip-NeRF. However, as shown in Tab. 5 and Fig. 5(a), Ref-NeRF shows promising results without any additional consideration for the few-shot setting compared to the vanilla mip-NeRF. Like our FlipNeRF, accurate surface normal estimation leads to the competitive performance of Ref-NeRF even in the few-shot scenarios. Therefore, a combination of our FlipNeRF training framework with the Ref-NeRF representation, or further exploration with regard to the view-dependent appearance for few-shot novel view synthesis, can be interesting directions for future research. Acknowledgements.This work was supported by NRF (2021R1A2C3006659) and IITP (2021-0-01343), both funded by the Korean Government. It was also supported by Samsung Electronics (IO201223-08260-01).

Figure 6: **Qualitative results on Realistic Synthetic 360\({}^{\circ}\) and DTU. More results are provided in the supp. material.**
2308.00095
Sums of squares on hypersurfaces
We show that the Pythagoras number of rings of type $\mathbb{R}[x,y, \sqrt{f(x,y)}]$ is infinite, provided that the polynomial $f(x,y)$ satisfies some mild conditions.
Kacper Błachut, Tomasz Kowalczyk
2023-07-31T19:15:31Z
http://arxiv.org/abs/2308.00095v2
# Sums of squares on hypersurfaces ###### Abstract. We show that the Pythagoras number of rings of type \(\mathbb{R}[x,y,\sqrt{f(x,y)}]\) is infinite, provided that the polynomial \(f(x,y)\) satisfies some mild conditions. Key words and phrases:Pythagoras number, sums of squares, algebraic surface, coordinate ring 2020 Mathematics Subject Classification: 14P05, 26C99

## Introduction

One of the important problems of real algebraic geometry is the question of sums of squares. Let \(f\) be a nonnegative function on some topological space \(X\): is \(f\) a sum of squares of functions? If so, how many squares are needed to represent \(f\)? This problem appears in various settings and instances, cf. [2, 4, 5, 8, 10, 11, 12, 13]. Here we will be interested in the latter problem in the case of polynomial functions on an algebraic hypersurface in \(\mathbb{R}^{3}\). **Definition 0.1**.: _For a commutative ring \(R\) with identity, its Pythagoras number \(p(R)\) is the smallest positive integer \(g\) such that any element which is a sum of squares can be written as a sum of at most \(g\) squares. If such a number does not exist, we put \(p(R)=\infty\)._ One of the most famous problems in this vein was Hilbert's 17th problem: is every nonnegative polynomial on \(\mathbb{R}^{n}\) a sum of squares of rational functions? It was solved in the affirmative by Artin [1] in 1927. Then came the first explicit example of a nonnegative polynomial that is a sum of squares of rational functions, but not of polynomials, namely \(M(x,y)=x^{2}y^{4}+x^{4}y^{2}-3x^{2}y^{2}+1\), given by Motzkin in 1967 [17]. Not much was known about the possible number of summands until the work of Pfister [18], who showed that \(p(\mathbb{R}(x_{1},\ldots,x_{n}))\leq 2^{n}\). Currently, the only known general bounds for the Pythagoras number of the field of rational functions \(\mathbb{R}(x_{1},x_{2},\ldots,x_{n})\) are of the form \[n+2\leq p(\mathbb{R}(x_{1},x_{2},\ldots,x_{n}))\leq 2^{n}\] for \(n\geq 2\) (see [11]), which in particular gives \(p(\mathbb{R}(x,y))=4\). Nothing more is known for \(n>2\). For an arbitrary field \(K\), the Cassels Theorem [7] shows \(p(K(x))=p(K[x])\). It is known that \(p(\mathbb{R}(x))=p(\mathbb{R}[x])=2\) and \(p(\mathbb{Q}(x))=5\) (see [19]). Hoffman proved that for any positive integer \(s\) there exists a real field \(K\) such that \(p(K)=s\) (see [14]). Currently, it is still not known whether \(p(K)<\infty\) implies \(p(K(x))<\infty\); it does hold for some special classes of fields, but the general answer is still elusive. The Pythagoras number of a ring is a much more subtle problem. Affine \(k\)-algebras of transcendence degree \(1\), where \(k\) is a real closed field, always have finite Pythagoras number. It is also known that \(p(\mathbb{Z}[x])=\infty\) as well as \(p(\mathbb{R}[x_{1},\ldots,x_{n}])=\infty\) for \(n\geq 2\), but \(p(\mathbb{R}[[x,y]])=2\) and \(p(\mathbb{Z}[[x]])=5\). On the other hand, any formally real affine \(k\)-algebra of dimension at least three has infinite Pythagoras number, for a field \(k\). These results were proven in [10]. Quite a lot is known for local rings, especially for local factorial rings.

## 1. Preliminaries
**Definition 1.1**.: _Let \(R\) be a commutative ring with identity. We define the length of an element \(a\in R\) to be the smallest positive integer \(g\) such that \(a\) can be written as a sum of at most \(g\) squares._ If an element is not a sum of any number of squares, then its length is infinite. In general, only elements of finite length will be interesting. **Definition 1.2**.: _Let \(f(x,y)\in\mathbb{R}[x,y]\) be a non-constant polynomial such that \(f(0,0)=0\). Let \(d=\deg_{y}f(x,y)\) and let \(\alpha x^{b}y^{d}\) be the monomial with the largest possible \(b\) and nonzero coefficient \(\alpha\) among all monomials in \(f(x,y)\). We say that \(f(x,y)\) is strictly admissible if:_ * \(\alpha>0\) _and_ \(b,d\) _are even, or_ * \(b\) _or_ \(d\) _is odd._ **Definition 1.3**.: _We say that a polynomial \(f(x,y)\) is admissible if there exists an invertible matrix \(M\in\text{GL}_{2}(\mathbb{R})\) such that \(f\circ M\) is a strictly admissible polynomial._ **Example 1.4**.: _Among the polynomials \(f_{1}=x^{3}y\), \(f_{2}=x^{2}-y^{2}\), \(f_{3}=-y^{2}-x^{7}\), \(f_{4}=-2x^{2}-3x^{4}y^{2}\), only \(f_{1}\) is strictly admissible, while \(f_{2}\) and \(f_{3}\) are admissible._ **Example 1.5**.: _Consider the polynomial \(f(x,y)=-x^{2}y^{4}-x^{4}y^{2}+3x^{3}y^{3}\); clearly, it is not strictly admissible. However, after applying the linear change of variables given by the matrix \(\begin{pmatrix}-1&1\\ 1&1\end{pmatrix}\) we obtain \(f\circ M(x,y)=f(y-x,x+y)=(y-x)^{2}(x+y)^{2}(y^{2}-5x^{2})\), which is strictly admissible._ From the very definition of admissibility, we see that an admissible polynomial is strictly positive on a sufficiently large subset of \(\mathbb{R}^{2}\). One can ask whether a polynomial which is positive on an unbounded set is admissible. This is not the case. **Example 1.6**.: _Let \(g(x,y)=x^{2}(1-x^{2})\). Such a polynomial is strictly positive on an infinite strip of width 2. After an invertible linear change of coordinates we obtain \(g(ax+by,cx+dy)=(ax+by)^{2}(1-(ax+by)^{2})\). For such a polynomial, the coefficient at \(y^{4}\) is either zero or negative, hence this polynomial cannot be strictly admissible._ Let us now recall some facts concerning length and the Pythagoras number. Let \(R_{1},R_{2}\) be commutative rings with identity. We recall a property of the Pythagoras number that follows immediately from the definition. **Proposition 1.7**.: _Let \(\varphi:R_{1}\to R_{2}\) be a homomorphism of rings. Then for any \(x\in R_{1}\), the length of \(x\) is greater than or equal to the length of \(\varphi(x)\) in \(R_{2}\). If \(\varphi\) is an epimorphism, then \(p(R_{1})\geq p(R_{2})\)._ **Theorem 1.8**.: _[_10_, Proposition 4.5']_ _Let \(g(x,y)\in\mathbb{R}[x,y]\) be a polynomial of length \(m\). Take a positive integer \(r\) such that \(2r>\deg g\). Then the polynomial \(G(x,y)=g(x,y)(y-x^{r})^{2}+1\) has length \(m+1\)._ This theorem is a specialization of [10, Theorem 4.10] to the case \(\mathbb{R}[x,y]\) (the much more general version can be applied to various rings of type \(A[x]\) for a commutative ring \(A\); however, we will not use it here). An immediate corollary is the following. **Corollary 1.9**.: _The Pythagoras number of \(\mathbb{R}[x,y]\) is infinite, i.e._ \[p(\mathbb{R}[x,y])=\infty.\] Note that the condition on \(r\) above is not very restrictive, and we can inductively construct a sequence of polynomials \((F_{m})_{m\geq 1}\) such that the length of \(F_{m}\) is \(m\) in the ring \(\mathbb{R}[x,y]\).
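The definitions above are easy to machine-check. The following SymPy sketch (ours; the helper `strictly_admissible` is a hypothetical name, not from the paper) verifies the change of variables in Example 1.5 and builds the first few polynomials \(F_{m}\) of Corollary 1.9.

```python
import sympy as sp

x, y = sp.symbols("x y")

def strictly_admissible(f):
    """Definition 1.2: among monomials a*x^b*y^d with d = deg_y f, take the one
    with the largest b; need a > 0 with b, d even, or b or d odd."""
    p = sp.Poly(f, x, y)
    d = p.degree(y)
    cands = [(b, a) for (b, dd), a in zip(p.monoms(), p.coeffs()) if dd == d]
    b, a = max(cands)          # monomial with the largest power of x attached to y^d
    return (a > 0 and b % 2 == 0 and d % 2 == 0) or (b % 2 == 1) or (d % 2 == 1)

# Example 1.5: f composed with M: (x, y) -> (y - x, x + y)
f = -x**2 * y**4 - x**4 * y**2 + 3 * x**3 * y**3
fM = sp.expand(f.subs({x: y - x, y: x + y}, simultaneous=True))
assert fM == sp.expand((y - x)**2 * (x + y)**2 * (y**2 - 5 * x**2))
print(strictly_admissible(f), strictly_admissible(fM))   # False, True

# Theorem 1.8 / Corollary 1.9: inductively build polynomials of length 1, 2, 3, 4
F = sp.Integer(1)
for _ in range(3):
    r = sp.Poly(F, x, y).total_degree() // 2 + 1          # any r with 2r > deg F
    F = sp.expand(F * (y - x**r)**2 + 1)
```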
**Definition 1.10**.: _Let \(f(x,y)\) be a strictly admissible polynomial. Consider a sequence of positive integers \((r_{m})_{m\geq 1}\) such that_ * \(r_{1}>\deg f\) _and_ \(r_{i}>\sum_{k=1}^{i-1}r_{k}\)_,_ * \(f(x,x^{r_{i}})\) _is a polynomial in_ \(x\) _which is either of odd degree, or of even degree with strictly positive leading coefficient._ _We define a sequence of polynomials in the following way:_ * \(F_{1}=1\)_,_ * \(F_{n}=F_{n-1}(y-x^{r_{n-1}})^{2}+1\)_._ _We call \((F_{m})_{m\geq 1}\) a sequence of polynomials associated with \(f(x,y)\)._ The second condition can be fulfilled thanks to the strict admissibility of \(f(x,y)\). Note that the condition \(r_{1}>\deg f\) implies that the polynomial \(f(x,x^{r_{i}})\) cannot be a constant polynomial for any \(i\). By Theorem 1.8, the length of each \(F_{n}\) is equal to \(n\) in the ring \(\mathbb{R}[x,y]\). Let now \(f(x,y)\) be an admissible polynomial and consider the hypersurface \(V\) given by the zero set of \(z^{2}-f(x,y)\). Then the coordinate ring of \(V\), \(\mathbb{R}[V]=\mathbb{R}[x,y,z]/(z^{2}-f(x,y))\), is isomorphic to \(\mathbb{R}[x,y,\sqrt{f(x,y)}]\), provided that \(f(x,y)\) is not a square (if \(f(x,y)\) is a square, then the former ring is isomorphic to \(\mathbb{R}[x,y]\times\mathbb{R}[x,y]\) and the latter to \(\mathbb{R}[x,y]\); both rings have infinite Pythagoras number). We have the following obvious lemma. **Lemma 1.11**.: _Consider the polynomial \(f(x,y)\) and \(g(x,y)=(f\circ M)(x,y)\) for some invertible real matrix \(M\). Then the rings \(\mathbb{R}[x,y,z]/(z^{2}-f(x,y))\) and \(\mathbb{R}[x,y,z]/(z^{2}-g(x,y))\) are isomorphic._ At the end of this section, we provide a sufficient condition for admissibility. **Proposition 1.12**.: _Let \(f(x,y)\) be a non-constant polynomial such that \(f(0,0)=0\) and let \(\deg f(x,y)=d\). Write \(f(x,y)=g(x,y)+h(x,y)\), where \(h(x,y)\) is the homogeneous part of degree \(d\) and \(\deg g(x,y)<d\). If \(h(x,y)\) admits a strictly positive value on \(\mathbb{R}^{2}\), then \(f(x,y)\) is an admissible polynomial._ Proof.: Let \(P\in\mathbb{R}^{2}\) be such that \(h(P)>0\) and let \(M\) be the rotation matrix that maps the positive part of the \(y\)-axis to the half-line \(OP\). As \(h\) is homogeneous, \(h\circ M\) is strictly positive on the positive part of the \(y\)-axis; in other words, it contains a monomial \(\alpha y^{d}\) with positive \(\alpha\). Thus, \(f\circ M\) is strictly admissible. The above condition is clearly not necessary, as \(f(x,y)=-x^{6}+x^{2}y^{2}\) is admissible but does not satisfy the assumptions. **Corollary 1.13**.: _If \(h(x,y)\) is a homogeneous admissible polynomial of degree \(d\), then for any polynomial \(g(x,y)\) with \(\deg g(x,y)<d\), the polynomial \(f(x,y):=g(x,y)-g(0,0)+h(x,y)\) is admissible._

## 2. Main result

We will now focus on computing the Pythagoras number of the coordinate rings of a specific type of hypersurfaces. Our main result is as follows. **Theorem 2.1**.: _Let \(f(x,y)\) be an admissible polynomial which is not a square. Then_ \[p(\mathbb{R}[x,y,\sqrt{f(x,y)}])=\infty.\] Proof.: After a linear change of coordinates, \(f(x,y)\) becomes a strictly admissible polynomial; hence, by Lemma 1.11, we may assume that \(f(x,y)\) is a strictly admissible polynomial. Let \((F_{m})_{m\geq 1}\) be a sequence of polynomials associated with \(f(x,y)\). Assume to the contrary that the Pythagoras number of \(\mathbb{R}[x,y,\sqrt{f(x,y)}]\) is finite and equal to a positive integer \(L\), and consider the polynomial \(F_{L+1}\).
Every element \(h\in\mathbb{R}[x,y,\sqrt{f(x,y)}]\) can be written in the form \(h=h_{1}+\sqrt{f}h_{2}\), for some polynomials \(h_{1},h_{2}\). By hypothesis, we have an equality \[F_{L+1}=\sum_{i=1}^{L}(f_{i,1}+\sqrt{f}g_{i,1})^{2},\] which translates into \[F_{L+1}=\sum_{i=1}^{L}f_{i,1}^{2}+f\sum_{i=1}^{L}g_{i,1}^{2} \tag{1}\] and \[\sum_{i=1}^{L}f_{i,1}g_{i,1}\equiv 0,\] for some polynomials \(f_{i,1},g_{i,1}\in\mathbb{R}[x,y]\) (note that this last equality is not of interest to us). After the substitution \(y=x^{r_{L}}\) we get \[F_{L+1}(x,x^{r_{L}})=\sum_{i=1}^{L}f_{i,1}^{2}(x,x^{r_{L}})+f(x,x^{r_{L}})\sum_{i=1}^{L}g_{i,1}^{2}(x,x^{r_{L}}). \tag{2}\] Recall that \(F_{L+1}=F_{L}(y-x^{r_{L}})^{2}+1\), hence the above equation yields \[1=\sum_{i=1}^{L}f_{i,1}^{2}(x,x^{r_{L}})+f(x,x^{r_{L}})\sum_{i=1}^{L}g_{i,1}^{2}(x,x^{r_{L}}). \tag{3}\] Since \(f(x,y)\) is a strictly admissible polynomial, the constant term of the polynomial \(f(x,x^{r_{L}})\) is zero. By the construction of the associated sequence, \(f(x,x^{r_{L}})\) is not a zero polynomial. Moreover, the polynomial \(f(x,x^{r_{L}})\sum_{i=1}^{L}g_{i,1}^{2}(x,x^{r_{L}})\) is either of odd degree, or of even degree with positive leading coefficient. A straightforward calculation shows that each \(g_{i,1}\) is divisible by \(y-x^{r_{L}}\) and each \(f_{i,1}=a_{i}+(y-x^{r_{L}})f_{i,2}\) for \(i=1,2,\ldots,L\), where \(a_{i}\in\mathbb{R}\) and \(\sum_{i=1}^{L}a_{i}^{2}=1\). We may now apply an orthogonal transformation over \(\mathbb{R}[x,y]\) (see [10] or [20, Theorem 8.1.2]) and assume that \(f_{i,1}=(y-x^{r_{L}})f_{i,2}\) for \(i=1,2,\ldots,L-1\) and \(f_{L,1}=1+(y-x^{r_{L}})f_{L,2}\). We can rewrite equation (1) as \[F_{L}(y-x^{r_{L}})^{2}+1=(y-x^{r_{L}})^{2}\sum_{i=1}^{L-1}f_{i,2}^{2}+1+2(y-x^{r_{L}})f_{L,2}+(y-x^{r_{L}})^{2}f_{L,2}^{2}+(y-x^{r_{L}})^{2}f\sum_{i=1}^{L}g_{i,2}^{2}.\] After cancelling the 1's, we see that \((y-x^{r_{L}})\) divides \(f_{L,2}\). If \(f_{L,2}\) is a nonzero polynomial, then the degree of the right-hand side is at least \(4r_{L}\). On the other hand, by construction, \(2r_{L}\) is strictly larger than \(\deg F_{L}\), a contradiction. As a consequence, \(f_{L,2}\equiv 0\) and we have reduced equation (1) to \[F_{L}=\sum_{i=1}^{L-1}f_{i,2}^{2}+f\sum_{i=1}^{L}g_{i,2}^{2}.\] With the above reasoning, we managed to reduce the number of polynomials \(f_{i,1}\) by one. After repeating this procedure \(L-2\) times, we obtain \[F_{2}=f_{1,L-1}^{2}+f\sum_{i=1}^{L}g_{i,L-1}^{2},\] which is equivalent to \[F_{1}(y-x^{r_{1}})^{2}+1=f_{1,L-1}^{2}+f\sum_{i=1}^{L}g_{i,L-1}^{2}.\] After repeating the above procedure one last time, we get that \(f_{1,L-1}\equiv 1\), hence \[F_{1}=f\sum_{i=1}^{L}g_{i,L}^{2}.\] This, however, is a contradiction, as \(F_{1}=1\) and \(f\) is not a unit. This finishes the proof. **Remark 2.2**.: Denote by \(\mathcal{O}(V)\) the ring of regular functions [6, Definition 3.2.1] of the algebraic surface \(V\subset\mathbb{R}^{n}\). One can ask if the same reasoning would work for \(\mathcal{O}(V)\). However, the polynomials which allowed us to derive a contradiction in the above proof were of the form \(1+G(x,y)\), where \(G(x,y)\) is a sum of squares. Every such element is a unit and a totally positive element of \(\mathcal{O}(V)\); hence, by [15, Theorem 7.3], the length of any such polynomial is at most \(4\). Therefore, in order to compute \(p(\mathcal{O}(V))\), one has to use different methods. To the authors' best knowledge, \(p(\mathcal{O}(V))\) is not known for any surface.
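As a quick machine check of the key substitution step (our addition, using a toy associated sequence rather than one coming from a specific \(f\)): plugging \(y=x^{r_{L}}\) into \(F_{L+1}\) collapses it to \(1\), which is exactly what turns (2) into (3), and the degree bound \(2r_{L}>\deg F_{L}\) holds by construction.

```python
import sympy as sp

x, y = sp.symbols("x y")

# toy choice r_1 = 5, r_2 = 6 (Definition 1.10 only needs r_2 > r_1 > deg f here)
r1, r2 = 5, 6
F1 = sp.Integer(1)
F2 = sp.expand(F1 * (y - x**r1) ** 2 + 1)
F3 = sp.expand(F2 * (y - x**r2) ** 2 + 1)

assert F3.subs(y, x**r2) == 1                      # the collapse used to get (3)
assert 2 * r2 > sp.Poly(F2, x, y).total_degree()   # the bound forcing f_{L,2} = 0
```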
If we assume that \(f(x,y)\) is a sum of squares we get the following: **Theorem 2.3**.: _Assume that \(f(x,y)\) is a polynomial of length \(n>1\). Then_ \[p(\mathbb{R}[x,y,\sqrt{f(x,y)}])=\infty.\] Proof.: Assume to the contrary that \(p(\mathbb{R}[x,y,\sqrt{f(x,y)}])=L\). Take \((G_{m})_{m\in\mathbb{N}}\) to be any sequence of polynomials such that the length of \(G_{m}\) is \(m\) in \(\mathbb{R}[x,y]\). In this case, \(G_{m}\) can be written as \[G_{m}=\sum_{i=1}^{L}(f_{i}+\sqrt{f}g_{i})^{2}\] which translates into \[G_{m}=\sum_{i=1}^{L}f_{i}^{2}+f\sum_{i=1}^{L}g_{i}^{2}\] and \[\sum_{i=1}^{L}f_{i}g_{i}\equiv 0,\] for some polynomials \(f_{i},g_{i}\in\mathbb{R}[x,y]\). We see that the right-hand side of the first equation contains at most \(L+nL\) squares. Hence, by taking \(m>L+nL\), we obtain a contradiction. We stress that we do not require \(f(x,y)\) to be admissible. As a final remark of this section, we would like to note that the above result cannot be generalized to the case where \(f(x,y)\) is a strictly positive polynomial which is not a sum of squares. The existence of a zero of a polynomial is necessary for the proof of Theorem 2.1. On the other hand, we have the following: **Theorem 2.4**.: _If \(f(x,y)=-\sum_{i=1}^{n}f_{i}^{2}\), for some \(f_{i}\in\mathbb{R}[x,y]\), then_ \[p(\mathbb{R}[x,y,z]/(z^{2}-f(x,y)))<\infty\] _provided that \(f(x,y)\) is a nonzero polynomial._ Proof.: Follows readily from [10, Example 3.11]. ## 3. Applications and further problems In this last section we will provide some applications of Theorem 2.1 and propose some further problems. Consider the du Val singularities: * \(A_{n}:z^{2}+x^{2}+y^{n+1}=0\), \(n\geq 1\) * \(D_{n}:z^{2}+x^{2}y+y^{n-1}=0\), \(n\geq 4\) * \(E_{6}:z^{2}+x^{3}+y^{4}=0\) * \(E_{7}:z^{2}+x^{3}+xy^{2}=0\) * \(E_{8}:z^{2}+x^{3}+y^{5}=0.\) Note that the equations \(A_{2k+1}\) describe a single point in \(\mathbb{R}^{3}\), while the others are actual hypersurfaces in \(\mathbb{R}^{3}.\) We may now state the following: **Observation 3.1**.: _Let \(Q\) be one of the equations from the above list. Then the affine \(\mathbb{R}\)-algebra \(\mathbb{R}[x,y,z]/(Q)\) satisfies_ \[p(\mathbb{R}[x,y,z]/(Q))\begin{cases}<+\infty\quad\text{for}\quad Q=A_{2k+1}\\ =+\infty\qquad\text{otherwise}.\end{cases}\] Proof.: Any of the polynomials satisfying the second part can be written in the form \(z^{2}-f(x,y)\), where \(f(x,y)\) is an admissible polynomial; hence the result follows from Theorem 2.1. The first part follows from Theorem 2.4. In order to prove Theorem 2.1 we had to assume that the polynomial \(f(x,y)\) has a zero and is positive on a sufficiently large subset of \(\mathbb{R}^{2}\). We propose the following: **Problem 3.2**.: _Compute_ \[p(\mathbb{R}[x,y,z]/(z^{2}-f(x,y)))\] _in the following three cases:_ * \(f(x,y)\) _is a strictly positive polynomial which is not a sum of squares,_ * \(f(x,y)\) _is a strictly negative polynomial such that_ \(-f(x,y)\) _is not a sum of squares,_ * \(f(x,y)\) _is indefinite, but it is not an admissible polynomial._ Consider the homogenized Motzkin polynomial \(M(x,y,z)=z^{6}+x^{2}y^{4}+x^{4}y^{2}-3x^{2}y^{2}z^{2}\). It is known that the cone of sums of squares of ternary sextics is strictly contained in the cone of positive semidefinite ternary sextics [21]. Since both cones are closed, there exists a real number \(\epsilon>0\) such that the polynomial \(M_{1}(x,y,z)=M(x,y,z)+\epsilon(x^{6}+y^{6}+z^{6})\) is nonnegative, but it is not a sum of squares. 
Hence, after dehomogenizing, we obtain interesting examples for cases \(a)\) and \(b)\) above. For polynomials satisfying \(c)\), see Example 1.6 or any polynomial which is positive only on a bounded subset of \(\mathbb{R}^{2}\). We stress that (up to translations) the above list, together with the polynomials satisfying Theorem 2.1, 2.3 or 2.4, exhausts the ring of all polynomials. We expect that the Pythagoras number in cases \(a)\) and \(c)\) is infinite, and that it is finite for polynomials satisfying \(b)\). Based on the evidence shown in this paper, we formulate the following conjecture. **Conjecture 3.3**.: _Let \(f(x,y,z)\in\mathbb{R}[x,y,z]\) be an irreducible polynomial. Then the following conditions are equivalent:_ * \(p(\mathbb{R}[x,y,z]/(f(x,y,z)))=+\infty\)__ * _the polynomial_ \(f(x,y,z)\) _is indefinite, i.e._ \(f\) _changes sign on_ \(\mathbb{R}^{3}\) _(equivalently, the ideal_ \((f(x,y,z))\) _is real)._ The irreducibility condition is necessary in the above conjecture. For \(f(x,y,z)=z^{2}\) the quotient ring has a surjection onto the ring \(\mathbb{R}[x,y]\), hence by Proposition 1.7 its Pythagoras number is infinite.
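As a closing illustration, here is a small sympy sanity check (ours, not part of the paper) of the quoted facts about the Motzkin polynomial: its dehomogenization \(M(x,y,1)\) is nonnegative (by AM-GM on the monomials \(1\), \(x^{2}y^{4}\), \(x^{4}y^{2}\)) and vanishes at \((\pm 1,\pm 1)\), which is why adding \(\epsilon(x^{6}+y^{6}+1)\) yields the strictly positive, non-SOS examples relevant to cases \(a)\) and \(b)\).

```python
# A small sympy sanity check (ours, not part of the paper).
import itertools
import sympy as sp

x, y = sp.symbols('x y', real=True)
m = 1 + x**2 * y**4 + x**4 * y**2 - 3 * x**2 * y**2   # M(x, y, 1)

# Zeros at the four points (+-1, +-1).
assert all(m.subs({x: sx, y: sy}) == 0
           for sx, sy in itertools.product([1, -1], repeat=2))

# Crude rational grid search: no negative values appear.
pts = [sp.Rational(k, 4) for k in range(-8, 9)]
assert min(m.subs({x: a, y: b}) for a in pts for b in pts) >= 0
```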
2303.17843
Tambara-Yamagami Categories over the Reals: The Non-Split Case
Tambara and Yamagami investigated a simple set of fusion rules with only one non-invertible object, and proved under which circumstances those rules could be given a coherent associator. We consider a generalization of such fusion rules to the setting where simple objects are no longer required to be split simple. Over the real numbers, this means that objects are either real, complex, or quaternionic. In this context, we prove a similar categorification result to the one of Tambara and Yamagami.
Julia Plavnik, Sean Sanford, Dalton Sconce
2023-03-31T07:10:28Z
http://arxiv.org/abs/2303.17843v3
# Tambara-Yamagami categories over the reals: the non-split case ###### Abstract. Tambara and Yamagami investigated a simple set of fusion rules with only one non-invertible object, and proved under which circumstances those rules could be given a coherent associator. We consider a generalization of such fusion rules to the setting where simple objects are no longer required to be split simple. Over the real numbers, this means that objects are either real, complex, or quaternionic. In this context, we prove a similar categorification result to the one of Tambara and Yamagami. ## 1. Introduction and Summary of Results There are many pairs of nonisomorphic groups \(G_{1}\ncong G_{2}\) whose categories of representations \(\operatorname{Rep}_{\mathbb{C}}(G_{i})\) are equivalent as additive categories. If we require the categories to have the same fusion rules, then even with this more restrictive hypothesis, the groups can still fail to be isomorphic. For example, the dihedral group \(D_{8}\) and the quaternion group \(Q_{8}\) of order \(8\) are such a pair. An important implication of this is that groups cannot be recovered from their character tables. It turns out that even when the categories are monoidally equivalent, the groups can still fail to be isomorphic (see [1]). Despite all of these negative results, if the category is equipped with a fiber functor \(F:\operatorname{Rep}_{\mathbb{C}}(G)\to\mathbb{C}\)-Vec, this is sufficient to recover the group \(G\). This is part of a more general theory known as Tannaka-Krein reconstruction. In general, this theory recovers a Hopf algebra from a finite tensor category \(\mathcal{C}\) and a fiber functor \(F:\mathcal{C}\to\mathbb{C}\)-Vec. In the late '90s, Daisuke Tambara and Shigeru Yamagami were studying the Hopf algebras whose categories of representations had the same fusion rules as \(\operatorname{Rep}_{\mathbb{C}}(D_{8})\). This investigation led them to analyze and completely classify the fusion categories that have a general form similar to \(\operatorname{Rep}_{\mathbb{C}}(D_{8})\) [10]. Their classification allowed for arbitrary base fields, but they assumed that all the simple objects were _split_, that is, \(\operatorname{End}(X)\) is isomorphic to the base field. About ten years later, Etingof, Nikshych, and Ostrik developed a homotopy theoretic description of extension theory for fusion categories over algebraically closed fields in [1]. This paper made explicit a deep connection between tensor categories and higher groupoids. As an application of their theory, they give a shorter new proof of Tambara and Yamagami's classification result, but with the assumption that the base field is algebraically closed. The techniques of Etingof, Nikshych, and Ostrik require adjustments in order to be extended to fusion categories over non-algebraically closed fields. Tambara-Yamagami categories, being extensions of pointed categories by \(\mathbb{Z}/2\mathbb{Z}\), are natural first examples to help elucidate the theory in this new setting. 
To understand the case in which \(\operatorname{End}(\mathds{1})\cong\mathbb{C}\) and \(\operatorname{End}(m)\cong\mathbb{C}\), and also understand why it is not necessarily fusion over \(\mathbb{C}\), we introduce the notion of Galois nontrivial objects (Definition 3.9). Moreover, having such objects induces a faithful \(\mathbb{Z}/2\mathbb{Z}\)-grading on the fusion categories. **Theorem 3.12**.: _All fusion categories \(\mathcal{C}\) over \(\mathbb{R}\) that contain Galois nontrivial objects necessarily admit a grading by the group \(\text{Gal}(\mathbb{C}/\mathbb{R})\cong\mathbb{Z}/2\mathbb{Z}\). We will refer to this as the **Galois grading**. Moreover, this grading is faithful in the sense that \(\mathcal{C}_{1}\not\simeq 0\)._ **Theorem 7.1**.: _Let \(A\) be a finite group, let \(\tau=\pm 1/\sqrt{|A|}\), and let \(\chi:A\times A\to\mathbb{C}^{\times}\) be a nondegenerate skew-symmetric bicharacter on \(A\)._ _A triple of such data gives rise to a Galois graded non-split Tambara-Yamagami category \(\mathcal{C}_{\overline{\mathbb{C}}}(A,\tau,\chi)\), with \(\operatorname{End}(X)\cong\mathbb{C}\) for every simple object \(X\). Furthermore, all equivalence classes of such categories arise in this way. Two categories \(\mathcal{C}_{\overline{\mathbb{C}}}(A,\tau,\chi)\) and \(\mathcal{C}_{\overline{\mathbb{C}}}(A^{\prime},\tau^{\prime},\chi^{\prime})\) are equivalent if and only if \(\tau=\tau^{\prime}\), and there exist isomorphisms:_ _( i ) \(f:A\to A^{\prime}\), and_ _( ii ) \((-)^{h}:\mathbb{C}\to\mathbb{C}\), either the identity or complex conjugation, subject to the condition that for all \(a,b\in A\):_ \[\chi^{\prime}\Big{(}f(a),f(b)\Big{)}=\chi(a,b)^{h}\;.\] ### Outline In Section 2 we provide the key definitions and some important examples. In Section 3, we discuss the properties of fusion categories over \(\mathbb{R}\). We also propose a generalization of Tambara-Yamagami categories in the non-split case in Section 4. In Section 5, we analyze the case with real unit and quaternionic non-invertible simple, while in Section 6, we study the case with real unit and complex non-invertible simple. Finally, in Section 7, we discuss the case with all simple objects being complex and having a Galois nontrivial object. ### Acknowledgments This work began with the Research Experiences for Undergraduates (REU) program at Indiana University supported by the NSF grant DMS-1757857. It then evolved into a chapter of Sean Sanford's Ph.D. thesis from 2021. The research of J.P. was partially supported by NSF grants DMS-1917319 and DMS-2146392 and by Simons Foundation Award 889000 as part of the Simons Collaboration on Global Categorical Symmetries. J.P. performed part of this work at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. J.P. would also like to thank the Department of Mathematics at the University of Hamburg, where she carried out part of this research as a Fellow of the Humboldt Foundation, for its hospitality and excellent working conditions. ## 2. Preliminaries We refer the reader to [1] for the basic theory of fusion categories, fusion rings, and for the terminology used throughout this article. We fix a field \(\mathbb{K}\). 
In most of this article we will focus on the case in which \(\mathbb{K}=\mathbb{R}\), but for this section we do not place any restrictions on the field \(\mathbb{K}\). **Definition 2.1**.: A fusion category \(\mathcal{C}\) over \(\mathbb{K}\) is a \(\mathbb{K}\)-linear finite semisimple rigid monoidal category with simple (monoidal) unit. We will denote by \((\mathcal{C},\otimes,\alpha,\mathds{1},\ell,r)\) the monoidal structure of the fusion category \(\mathcal{C}\). Here \(\alpha\) is the associativity constraint and \(\ell\) and \(r\) are the left and right unit constraints for the monoidal unit \(\mathds{1}\). **Remark 2.2**.: The Grothendieck group \(K_{0}(\mathcal{C})\) associated with the underlying finite semisimple abelian category \(\mathcal{C}\) is the free abelian group with a basis given by the isomorphism classes of simple objects, see [1, Definition 1.5.8]. Since \(\mathcal{C}\) is a monoidal category, the Grothendieck group \(K_{0}(\mathcal{C})\) inherits a ring structure. Moreover, since the category is rigid, this ring is a _fusion ring_ (the definition of fusion ring can be found in [1, Definition 3.1.7]). The interested reader can find more details in [1, Section 4.5]. A fusion category \(\mathcal{C}\) can be completely described in terms of its Grothendieck ring and associativity constraints (and unit constraints) satisfying the pentagon axiom (and triangle axiom). A _categorification_ of a fusion ring is a fusion category with such a fusion ring as its Grothendieck ring. One natural question is whether a given fusion ring admits a categorification. One can further inquire about all the possible categorifications up to tensor equivalence. **Example 2.3**.: Given a finite group \(G\), the group ring \(\mathbb{Z}G\) is a fusion ring. This fusion ring is always categorifiable. The fusion categories with \(\mathbb{Z}G\) as their Grothendieck ring are called pointed, see [1, Definition 5.11.1]. When the field \(\mathbb{K}\) is algebraically closed, pointed fusion categories are tensor equivalent to the category \(\mathbb{K}\)-Vec\({}_{G}^{\omega}\) of finite dimensional \(\mathbb{K}\)-vector spaces graded by the group \(G\), with the associativity constraint twisted by a 3-cocycle \(\omega\in Z^{3}(G,\mathbb{K}^{\times})\) (this result is a corollary of a theory long known but unpublished, appearing in the thesis [10]; see [1] for a modern discussion). If \(\mathbb{K}\) is non-algebraically closed, a similar result holds [11]. The fusion categories \(\operatorname{Rep}_{\mathbb{C}}(G)\) of finite dimensional complex representations of a finite group \(G\) are pointed only when \(G\) is abelian. On the other hand, the fusion rings of \(\operatorname{Rep}_{\mathbb{C}}(S_{3})\) and \(\operatorname{Rep}_{\mathbb{C}}(D_{8})\) are _almost_ pointed; they have a unique non-invertible simple object. With the expectation that the fusion rings associated with these categories would be the next simplest after pointed fusion categories, Tambara and Yamagami investigated and fully classified such fusion rules [13]. In the next subsection, we will describe this classification, and the remainder of the article will be devoted to generalizing their result. ### Tambara-Yamagami Fusion Categories: Split Case Let \(A\) be a finite group. The _(split) Tambara-Yamagami fusion ring_ \(\mathsf{TY}(A)\) has a \(\mathbb{Z}\)-basis \(A\sqcup\{m\}\), where \(m\notin A\). The product is defined as follows \[a\cdot b=ab\,,\qquad a\cdot m=m=m\cdot a\,,\qquad m\cdot m=\sum_{c\in A}c,\] for \(a,b\in A\). 
The involution of the fusion ring is given by \(m^{*}=m\) and \(a^{*}=a^{-1}\), for \(a\in A\). A fusion category \(\mathcal{C}\) over \(\mathbb{K}\) is said to be a _split Tambara-Yamagami fusion category_ if \(K_{0}(\mathcal{C})=\mathsf{TY}(A)\), for some finite group \(A\), and \(\operatorname{End}(X)\cong\mathbb{K}\) for every simple object \(X\) of \(\mathcal{C}\). Tambara and Yamagami considered such a fusion ring and determined whether or not \(\mathsf{TY}(A)\cong K_{0}(\mathcal{C})\) for some fusion category \(\mathcal{C}\) by solving the pentagon equations [10]. Given a fusion ring \(\mathsf{TY}(A)\), they constructed Tambara-Yamagami fusion categories \(\mathcal{C}(A,\chi,\tau)\), where \(\chi:A\times A\to\mathbb{K}^{\times}\) is a nondegenerate symmetric bicharacter, and \(\tau\in\mathbb{K}^{\times}\) satisfies certain conditions. The associativity of the category \(\mathcal{C}(A,\chi,\tau)\) is determined by the bicharacter \(\chi:A\times A\to\mathbb{K}^{\times}\) and \(\tau\), see [10, Definition 3.1]. This landmark paper stands alone in the field of fusion categories as one of the few times such a categorification has ever been done explicitly by hand. Tambara and Yamagami worked over arbitrary fields \(\mathbb{K}\) but assumed that all of their simple objects \(X\) are split, i.e., \(\operatorname{End}(X)\cong\mathbb{K}\). Their main result is the following. **Theorem 2.4**.: _[_10_, Theorem 3.2]_ _Given a triple \((A,\chi,\tau)\), where \(A\) is a finite group, \(\chi:A\times A\to\mathbb{K}^{\times}\) is a nondegenerate symmetric bicharacter, and \(\tau\in\mathbb{K}\) satisfies \(\tau^{2}=1/|A|\), there exists a split Tambara-Yamagami category \(\mathcal{C}(A,\chi,\tau)\) with fusion ring \(\mathsf{TY}(A)\) and associators given below. Furthermore, any split Tambara-Yamagami category is monoidally equivalent to \(\mathcal{C}(A,\chi,\tau)\) for such a triple. Two categories \(\mathcal{C}(A,\chi,\tau)\) and \(\mathcal{C}(A^{\prime},\chi^{\prime},\tau^{\prime})\) are monoidally equivalent if and only if \(\tau=\tau^{\prime}\) and there exists an isomorphism \(f:A\to A^{\prime}\) such that for any \(a,b\in A\), \(\chi^{\prime}\big{(}f(a),f(b)\big{)}=\chi(a,b)\)._ \[\alpha_{a,b,c} =\mathsf{id}_{abc},\] \[\alpha_{a,b,m}=\alpha_{m,b,c} =\mathsf{id}_{m},\] \[\alpha_{a,m,c} =\chi(a,c)\cdot\mathsf{id}_{m},\] \[\alpha_{a,m,m}=\alpha_{m,m,c} =\mathsf{id}_{m\otimes m},\] \[\alpha_{m,b,m} =\bigoplus_{a\in A}\chi(a,b)\cdot\mathsf{id}_{a},\] \[\alpha_{m,m,m} =\left(\frac{\tau}{\chi(a,b)}\cdot\mathsf{id}_{m}\right)_{a,b}: \bigoplus_{a\in A}m\longrightarrow\bigoplus_{b\in A}m\,.\] **Remark 2.5**.: Notice that the nondegeneracy of the bicharacter implies that \(A\) must be abelian. So for the fusion ring \(\mathsf{TY}(A)\) to be categorifiable, \(A\) must be an abelian group. The assumption that all simple objects must be split is automatic if \(\mathbb{K}\) is an algebraically closed field. In order to understand the full picture in the non-algebraically closed setting, we generalize Tambara-Yamagami fusion categories to include the possibility of non-split simple objects. A priori it is not obvious that such categories should exist, so let us consider two fusion categories over \(\mathbb{R}\) whose fusion rules are similar to Tambara-Yamagami fusion rules. **Example 2.6**.: Consider the category \(\operatorname{Rep}_{\mathbb{R}}(\mathbb{Z}/4\mathbb{Z})\) of finite dimensional real representations of \(\mathbb{Z}/4\mathbb{Z}\). 
This category has two invertible objects \(\mathds{1}\) and \(X\), and there is an additional irreducible representation \(V\) that satisfies \(X\otimes V\cong V\otimes X\cong V\), and \[V\otimes V\cong 2\cdot\left(\mathds{1}\oplus X\right).\] These fusion rules show that \(\operatorname{Rep}_{\mathbb{R}}(\mathbb{Z}/4\mathbb{Z})\) looks similar to a Tambara-Yamagami category but with some key differences. The first difference is that \(\operatorname{End}(V)\cong\mathbb{C}\), and the second is that \(V\otimes V\) has two copies of each invertible. **Example 2.7**.: Consider the category \(\operatorname{Rep}_{\mathbb{R}}(Q_{8})\) of finite dimensional real representations of \(Q_{8}\), the quaternion group of order \(8\). This category has four invertible objects \(\mathds{1}\), \(I\), \(J\) and \(K\), and there is an additional irreducible representation \(H\) that satisfies \(X\otimes H\cong H\otimes X\cong H\), for any invertible object \(X\), and \[H\otimes H\cong 4\cdot\left(\mathds{1}\oplus I\oplus J\oplus K\right).\] In this example, we encounter two aspects that make it slightly different from a Tambara-Yamagami category. First, \(\operatorname{End}(H)\cong\mathbb{H}\). The second observation is that \(H\otimes H\) has four copies of every invertible object. The above examples have a striking similarity to Tambara-Yamagami categories, and they appear to differ from a Tambara-Yamagami fusion ring \(\mathsf{TY}(A)\) in a predictable way. In the next sections we pursue the study of fusion categories and fusion rings similar to the ones described in the examples above. The results of our article achieve this classification over \(\mathbb{R}\), and demonstrate that these non-split versions of Tambara-Yamagami categories are indeed very common. ## 3. Fusion Categories over the Reals The ideas of this section were originally developed in the second author's thesis [10] in the more general setting of non-algebraically closed fields \(\mathbb{K}\). Here we specialize to the case \(\mathbb{K}=\mathbb{R}\), and present facts such as Theorem 3.12 which are unique to the real numbers. ### Schur's Lemma For \(\mathbb{K}\)-linear abelian categories, we say that a nonzero object is simple if it has no nontrivial subobjects. The following lemma is a modern adaptation of a representation-theoretic result of Schur. **Lemma 3.1** (Schur's Lemma).: _Let \(X\) and \(Y\) be simple objects in a \(\mathbb{K}\)-linear abelian category. If \(X\not\cong Y\), then \(\operatorname{Hom}(X,Y)=0\) and \(\operatorname{End}(X)\) is a division algebra._ Over algebraically closed fields, the only finite dimensional division algebra is the field itself, and fusion categories over algebraically closed fields have been extensively studied, see for example [1]. The possibility of having \(\operatorname{End}(X)\) be a nontrivial division algebra is the primary source of new phenomena for fusion categories over non-algebraically closed fields. **Definition 3.2**.: A simple object \(X\) in a \(\mathbb{K}\)-linear abelian category is said to be _split_, or split-simple, if \(\operatorname{End}(X)\cong\mathbb{K}\). Otherwise, the simple is called _non-split_. A category is said to be _split_ if all of its simple objects are split. Finite groups give rise to families of examples of fusion categories. We will denote by \(\operatorname{Rep}_{\mathbb{K}}(G)\) the category of finite-dimensional representations over \(\mathbb{K}\) of \(G\). The simple objects in this category are the irreducible representations of \(G\). 
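As a quick computational confirmation of the non-split phenomenon in Example 2.6, here is a small numpy sketch (ours, not part of the paper): with the generator of \(\mathbb{Z}/4\mathbb{Z}\) acting on \(V=\mathbb{R}^{2}\) by rotation through \(90^{\circ}\), the commutant \(\operatorname{End}(V)=\{X:Xg=gX\}\) has real dimension \(2\), confirming \(\operatorname{End}(V)\cong\mathbb{C}\).

```python
# A quick numpy computation (ours, not part of the paper) for Example 2.6.
import numpy as np

g = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotation by pi/2; g^4 is the identity

# gX - Xg = 0 is a linear condition on X; encode it via Kronecker products.
L = np.kron(np.eye(2), g) - np.kron(g.T, np.eye(2))
dim_end = 4 - np.linalg.matrix_rank(L)
assert dim_end == 2            # End(V) is spanned by 1 and g, i.e. End(V) ~ C
```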
Example 2.6 describes \(\operatorname{Rep}_{\mathbb{R}}(\mathbb{Z}/4\mathbb{Z})\), which has a non-split irreducible representation \(V\) that is a two-dimensional vector space over \(\mathbb{R}\) and has \(\operatorname{End}(V)\cong\mathbb{C}\). In Example 2.7 the category \(\operatorname{Rep}_{\mathbb{R}}(Q_{8})\) is considered, and it has a non-split simple object \(H\) that is four-dimensional as a vector space over \(\mathbb{R}\) and has \(\operatorname{End}(H)\cong\mathbb{H}\). These are not just isolated cases, as can be seen below. When working over the real numbers, there are only two ways for a simple object \(X\) to be non-split, that is, \(\operatorname{End}(X)\cong\mathbb{C}\) or \(\operatorname{End}(X)\cong\mathbb{H}\). This is a consequence of the following well-known result of Frobenius. **Theorem 3.3** ([10]).: _Any finite dimensional division algebra over the real numbers must be isomorphic to one of the following:_ 1. \(\mathbb{R}\)_, the real numbers themselves;_ 2. \(\mathbb{C}\)_, the complex numbers; or_ 3. \(\mathbb{H}\)_, the algebra of quaternions._ **Definition 3.4**.: A simple object \(X\) is said to be real, complex, or quaternionic if \(\operatorname{End}(X)\cong\mathbb{R}\), \(\mathbb{C}\), or \(\mathbb{H}\) respectively. Despite the fact that all possible division algebras can occur, there are restrictions on which simples can have which endomorphism algebras when the category is monoidal. **Proposition 3.5**.: _All invertible objects have isomorphic endomorphism algebras in a fusion category over an arbitrary field \(\mathbb{K}\). Moreover, this common algebra is a finite-dimensional field extension of \(\mathbb{K}\)._ Proof.: If \(g\) is an invertible object, then \((-)\otimes g\) is an equivalence, and so \[\operatorname{End}(\mathds{1})\cong\operatorname{End}(\mathds{1}\otimes g)\cong\operatorname{End}(g)\,.\] Thus all endomorphism algebras of invertible objects are isomorphic to one another. The Eckmann-Hilton argument forces \(\operatorname{End}(\mathds{1})\) to be commutative. Since \(\operatorname{End}(\mathds{1})\) is a finite-dimensional commutative division algebra that contains \(\mathbb{K}\), the claim follows. Proposition 3.5 has an interesting dichotomy as a corollary. To elaborate, we will need some terminology. For any object \(X\), we can use the isomorphism \(\ell_{X}:\mathds{1}\otimes X\to X\) to turn endomorphisms of \(\mathds{1}\) into endomorphisms of \(X\). When the base field admits nontrivial division algebras this embedding of \(\operatorname{End}(\mathds{1})\) into \(\operatorname{End}(X)\) may not be obvious. **Definition 3.6**.: Let \(e\in\operatorname{End}(\mathds{1})\). The endomorphisms \(\lambda_{X}(e),\rho_{X}(e):X\to X\) are defined as the compositions \[\lambda_{X}(e):=\ell_{X}\circ(e\otimes\mathsf{id}_{X})\circ\ell_{X}^{-1}\,,\qquad\rho_{X}(e):=r_{X}\circ(\mathsf{id}_{X}\otimes e)\circ r_{X}^{-1}\,.\] These define algebra embeddings \[\lambda_{X},\rho_{X}:\operatorname{End}(\mathds{1})\hookrightarrow\operatorname{End}(X),\] that are called the left and right embeddings for \(X\). The naturality of the unitors \(\ell\) and \(r\) implies that the embeddings \(\lambda_{X}\) and \(\rho_{X}\) factor through the inclusion of the center \(Z\big{(}\operatorname{End}(X)\big{)}\subseteq\operatorname{End}(X)\); we write \(\lambda_{X}^{0}:\operatorname{End}(\mathds{1})\to Z\big{(}\operatorname{End}(X)\big{)}\) for the corestriction of \(\lambda_{X}\), and similarly \(\rho_{X}^{0}\) for \(\rho_{X}\). **Corollary 3.7**.: _In a fusion category over \(\mathbb{R}\), either all invertible objects are real, or all invertible objects are complex. Moreover, if the invertible objects are complex then all simple objects are complex._ Proof.: By combining the Frobenius Theorem (Theorem 3.3) with Proposition 3.5, it follows that \(\mathds{1}\) is either real or complex, and all invertibles must be of matching type. 
If a simple object \(X\) is real or quaternionic then \(Z\big{(}\operatorname{End}(X)\big{)}=\mathbb{R}\). Since \(\lambda_{X}^{0}:\operatorname{End}(\mathds{1})\hookrightarrow\mathbb{R}\) is an algebra embedding, we see that real and quaternionic objects can only exist when \(\mathds{1}\) is real. Thus if \(\mathds{1}\) is complex, every simple object must also be complex. Corollary 3.7 raises the following question: if a fusion category over \(\mathbb{R}\) has the property that all of its simple objects are complex, would that mean that the category is also fusion over \(\mathbb{C}\)? The following example shows that the answer is no. **Example 3.8**.: Consider the complex numbers \(\mathbb{C}\) as an algebra over the real numbers. Let \(\mathcal{C}=(\mathbb{C},\mathbb{C})\)-bim be the category of finite dimensional bimodules for this algebra. This category is equivalent to the category of modules for the algebra \(\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}\). This category is monoidal, with the tensor product being the relative tensor product \(\otimes_{\mathbb{C}}\), and monoidal unit \(\mathds{1}_{\mathcal{C}}=\mathbb{C}\). This category has another simple bimodule \(\overline{\mathbb{C}}\), where the left and right actions of \(\mathbb{C}\) differ by complex conjugation. It can easily be shown that \(\operatorname{End}(\overline{\mathbb{C}})\cong\mathbb{C}.\) Thus all simple objects of \(\mathcal{C}\) are complex. However, this category is not fusion over \(\mathbb{C}\) because the tensor product fails to be \(\mathbb{C}\)-bilinear. To see this, consider a complex number \(c:\overline{\mathbb{C}}\to\overline{\mathbb{C}}\) and observe that \(1\otimes c\;=\;\overline{c}\otimes 1\) as morphisms in \(\operatorname{End}(\overline{\mathbb{C}}\otimes\overline{\mathbb{C}}).\) ### Galois Nontrivial Objects The content of this subsection will not be used until Section 7. Any reader primarily interested in the case where \(\operatorname{End}(\mathds{1})\cong\mathbb{R}\) may safely skip ahead to Section 4. The object \(\overline{\mathbb{C}}\) in Example 3.8 is what is known as a Galois nontrivial object. That is, the conjugating complex bimodule \(\overline{\mathbb{C}}\) is an object for which \(\lambda_{\overline{\mathbb{C}}}\neq\rho_{\overline{\mathbb{C}}}\). This is a phenomenon that cannot occur when working over an algebraically closed field, so we give it a name. **Definition 3.9**.: An object \(X\) in a fusion category is called _Galois trivial_ if \(\lambda_{X}=\rho_{X}\). Otherwise, we say that \(X\) is _Galois nontrivial_. In general, it is possible for \(\operatorname{im}(\lambda_{X})\) and \(\operatorname{im}(\rho_{X})\) to be distinct subalgebras of \(\operatorname{End}(X)\), but over \(\mathbb{R}\) such issues do not occur when \(X\) is simple. **Proposition 3.10**.: _Let \(\mathcal{C}\) be a fusion category over \(\mathbb{R}\). If \(\mathcal{C}\) has Galois nontrivial simple objects then all simple objects are necessarily complex and the left and right embeddings of Galois nontrivial simple objects differ by complex conjugation._ Proof.: Observe that the linearity assumptions on fusion categories imply that all objects are automatically Galois trivial whenever the unit \(\mathds{1}\) is split. When working over \(\mathbb{R}\) this means that \(\mathds{1}\) must be complex for Galois nontrivial objects to exist. 
Naturality of the morphisms \(\ell_{X}:\mathds{1}\otimes X\to X\) and \(r_{X}:X\otimes\mathds{1}\to X\) implies that both \(\lambda_{X}\) and \(\rho_{X}\) factor through the center \(Z\big{(}\operatorname{End}(X)\big{)}\). The existence of Galois nontrivial objects implies that we must have \[\mathbb{C}\cong\operatorname{End}(\mathds{1})\hookrightarrow Z\big{(}\operatorname{End}(X)\big{)}\,,\] for every object \(X\). If \(X\) is simple, this condition on the center implies that \(X\) can neither be real nor quaternionic, and so \(X\) must be complex. Thus \(\lambda_{X}\) and \(\rho_{X}\) are both algebra automorphisms of \(\mathbb{C}\) that happen to fix \(\mathbb{R}\). Knowing that the embeddings are isomorphisms makes Galois nontriviality equivalent to the statement that \(\lambda_{X}^{-1}\circ\rho_{X}\neq\mathsf{id}_{\mathbb{C}}\), and therefore this automorphism must be complex conjugation. Given a fusion category \(\mathcal{C}\), we define \(\mathcal{C}_{0}\) as the full subcategory generated under direct sums by the Galois trivial simple objects, and \(\mathcal{C}_{1}\) as the full subcategory generated by the Galois nontrivial objects. In this way, we obtain a \(\mathbb{Z}/2\mathbb{Z}\)-grading \(\mathcal{C}\simeq\mathcal{C}_{0}\oplus\mathcal{C}_{1}\). A priori this is only a grading of \(\mathcal{C}\) as an \(\mathbb{R}\)-linear abelian category, but we will demonstrate that this grading respects the monoidal structure as well. **Lemma 3.11**.: _Let \(\mathcal{C}\) be a monoidal \(\mathbb{K}\)-linear abelian category. Let \(X,Y,\) and \(Z\) be simple objects in \(\mathcal{C}\) and let \(f:Z\to X\otimes Y\) be a morphism. If \(f\neq 0\), then_ \[\lambda_{X}^{-1}\rho_{X}\lambda_{Y}^{-1}\rho_{Y}\;=\;\lambda_{Z}^{-1}\rho_{Z}\,.\] Proof.: This follows from naturality and the triangle axiom. **Theorem 3.12**.: _All fusion categories over \(\mathbb{R}\) that contain Galois nontrivial objects necessarily admit a grading by the group \(\text{Gal}(\mathbb{C}/\mathbb{R})\cong\mathbb{Z}/2\mathbb{Z}\). We will refer to this as the **Galois grading**. Moreover, this grading is faithful in the sense that \(\mathcal{C}_{1}\not\simeq 0\)._ Proof.: Let \(X\) be an object in \(\mathcal{C}_{i}\) and \(Y\) be an object in \(\mathcal{C}_{j}\). For any simple summand \(Z\) of \(X\otimes Y\), we can find some simple summands \(X_{0}\hookrightarrow X\) and \(Y_{0}\hookrightarrow Y\) such that \(Z\) is a simple summand of \(X_{0}\otimes Y_{0}\). Since each of the \(\mathcal{C}_{k}\) are full subcategories, \(X_{0}\) is in \(\mathcal{C}_{i}\) and \(Y_{0}\) is in \(\mathcal{C}_{j}\). Applying Lemma 3.11 to the inclusion morphism \(f=\iota:Z\hookrightarrow X_{0}\otimes Y_{0}\) shows that \(Z\) is in \(\mathcal{C}_{i+j}\). Since \(Z\) was arbitrary, all simple summands of \(X\otimes Y\) are contained in \(\mathcal{C}_{i+j}\), so the entire object \(X\otimes Y\) must be in \(\mathcal{C}_{i+j}\) as well. Finally, the definition of \(\mathcal{C}_{1}\) immediately implies that the existence of Galois nontrivial objects is equivalent to the faithfulness of the grading. To end this section, we record a corollary of Theorem 3.12 that will be helpful in Section 7. **Corollary 3.13**.: _If \(\mathcal{C}\) is a fusion category over \(\mathbb{R}\) that contains Galois nontrivial objects, then for any simple object \(X\) in \(\mathcal{C}\), the object \(X\otimes X\) lies entirely in \(\mathcal{C}_{0}\)._ ## 4. Tambara-Yamagami Fusion Categories: Non-Split Case
We would like to investigate non-split generalizations of the (split) Tambara-Yamagami categories described in Subsection 2.1. In particular, we will focus on the case where \(\mathbb{K}=\mathbb{R}\). In making our generalization, the features that we would like to preserve are the following: 1. The set of (isomorphism classes of) simple objects consists of a group \(A\) of invertible objects, together with a single self-dual simple object \(m\), and 2. the object \(m\otimes m\) is a direct sum of invertible objects, that is, the multiplicity of \(m\) in \(m\otimes m\) is \(0\). A natural starting point would be to allow various simple objects to have nontrivial division algebras, but not all alternatives are possible. We use the results from Section 3 to narrow these options down to only three possibilities. In particular, Theorem 3.3 implies that when working over \(\mathbb{R}\) there are only three ways for a simple object to be non-split. Moreover, Corollary 3.7 shows that either all the _invertible_ objects are real or all the _simple_ objects are complex. If all the invertible objects are real then the only variations would occur when \(m\) is either complex or quaternionic. If all simple objects are complex, either the category is fusion over \(\mathbb{C}\) or not. The original theorem of Tambara and Yamagami (see Theorem 2.4) already covers the case where \(\mathcal{C}\) is fusion over \(\mathbb{C}\), and so we assume that \(\mathcal{C}\) is only fusion over \(\mathbb{R}\). The results of Subsection 3.2 show that \(\mathcal{C}\) must contain Galois nontrivial simple objects and hence, by Theorem 3.12, \(\mathcal{C}\) is faithfully Galois graded. Moreover, Corollary 3.13 implies that the object \(m\otimes m\) is Galois trivial, and by our assumption, this forces all the invertible objects to be Galois trivial. Then, since there must be at least one Galois nontrivial object, it must be \(m\). Summarizing, there are three new possibilities for non-split Tambara-Yamagami categories over \(\mathbb{R}\): * Case 1: all invertible objects are real and \(m\) is quaternionic; * Case 2: all invertible objects are real and \(m\) is complex; * Case 3: all simple objects are complex and \(m\) is the unique Galois nontrivial simple. Before starting with the analysis of each of the different cases, we highlight some important aspects that are common to all three contexts. In each case we begin by fixing an isomorphism between \(\operatorname{End}(m)\) and the relevant division algebra \(\mathbb{D}\in\{\mathbb{C},\mathbb{H}\}\). We use this fixed isomorphism to identify elements of the algebra \(e\in\mathbb{D}\) with endomorphisms \(e:m\to m\) in the category. In the complex Galois case, we further identify \(\operatorname{End}(\mathds{1})\) with \(\operatorname{End}(m)=\mathbb{C}\) using the left embedding \(\lambda_{m}\) (see Definition 3.6), and thus force \(\rho_{m}\) to be complex conjugation. We consider the following \(\mathbb{R}\)-vector spaces \[\operatorname{Hom}(a\otimes m,m),\quad\operatorname{Hom}(m\otimes a,m),\quad\operatorname{Hom}(m\otimes m,a),\] which are all isomorphic by rigidity of the fusion category. 
This allows us to compute the multiplicity of the invertibles in \(m\otimes m\): \[\operatorname{End}(m)\cong\operatorname{Hom}(a,m\otimes m)\cong\operatorname{Hom}\Bigl{(}a\,,\,\bigoplus_{b\in A}b^{\oplus n_{b}}\Bigr{)}\cong\bigoplus_{b\in A}\delta_{a,b}\operatorname{End}(b)^{\oplus n_{b}}\cong\operatorname{End}(a)^{\oplus n_{a}}.\] By Proposition 3.5, \(\operatorname{End}(a)\cong\operatorname{End}(\mathds{1})\) is a field, and so for every \(a\in A\), the multiplicity of \(a\) in \(m\otimes m\) is the number \(n_{a}=\dim_{\operatorname{End}(\mathds{1})}(\operatorname{End}(m))\). This gives the following variation on the split fusion rules considered in [13] for the non-split cases: \[m\otimes m=\dim_{\operatorname{End}(\mathds{1})}\bigl{(}\operatorname{End}(m)\bigr{)}\cdot\bigoplus_{a\in A}a\;.\] In each of the following sections we proceed first by choosing basis vectors for the hom spaces and then by writing down the coordinate matrix of the associators in terms of the chosen basis. Explicitly, precomposition with the associator \(\alpha_{W,X,Y}:(W\otimes X)\otimes Y\to W\otimes(X\otimes Y)\) produces a map on hom spaces \[\operatorname{Hom}(\alpha_{W,X,Y}\,,\,Z):\operatorname{Hom}\bigl{(}W\otimes(X\otimes Y)\,,\,Z\bigr{)}\to\operatorname{Hom}\bigl{(}(W\otimes X)\otimes Y\,,\,Z\bigr{)}\,,\] for each target object \(Z\in\mathcal{C}\). Here we introduce the notation we will use, which follows Tambara and Yamagami's original notation from [13]. There are isomorphisms \[\operatorname{Hom}\bigl{(}W\otimes(X\otimes Y)\,,\,Z\bigr{)}\cong\bigoplus_{U}\operatorname{Hom}(W\otimes U\,,\,Z)\underset{\operatorname{End}(U)}{\otimes}\operatorname{Hom}(X\otimes Y\,,\,U)\,,\] \[\operatorname{Hom}\bigl{(}(W\otimes X)\otimes Y\,,\,Z\bigr{)}\cong\bigoplus_{V}\operatorname{Hom}(V\otimes Y\,,\,Z)\underset{\operatorname{End}(V)}{\otimes}\operatorname{Hom}(W\otimes X\,,\,V)\,.\] By composing these with the map \(\operatorname{Hom}(\alpha_{W,X,Y},Z)\), we arrive at a more concrete description of the associator. **Definition 4.1**.: The tetrahedral transformation \(\big{\{}\,{W,X,Y\,\atop Z}\,\big{\}}\) is determined by the following commutative diagram: \[\begin{CD}\operatorname{Hom}\bigl{(}W\otimes(X\otimes Y)\,,\,Z\bigr{)}@<{\cong}<{}<\bigoplus_{U}\operatorname{Hom}(W\otimes U\,,\,Z)\underset{\operatorname{End}(U)}{\otimes}\operatorname{Hom}(X\otimes Y\,,\,U)\\ @V{\operatorname{Hom}(\alpha_{W,X,Y}\,,\,Z)}VV@VV{\big{\{}\,{W,X,Y\,\atop Z}\,\big{\}}}V\\ \operatorname{Hom}\bigl{(}(W\otimes X)\otimes Y\,,\,Z\bigr{)}@<{\cong}<{}<\bigoplus_{V}\operatorname{Hom}(V\otimes Y\,,\,Z)\underset{\operatorname{End}(V)}{\otimes}\operatorname{Hom}(W\otimes X\,,\,V)\,.\end{CD}\] **Remark 4.2**.: The name tetrahedral transformation comes from [13], and is not standard. In more modern language these are often called \(F\)-symbols or \(F\)-matrices (see e.g. [1]). The notation that is common throughout the next three sections follows the conventions in [13]. In each section, corresponding to each of the three cases above, there will be a preferred way of constructing nonzero (and hence surjective) morphisms \[[a,b] \in\operatorname{Hom}(a\otimes b,ab)\,,\] \[[a,m] \in\operatorname{Hom}(a\otimes m,m)\,,\] \[[m,a] \in\operatorname{Hom}(m\otimes a,m)\,,\text{ and}\] \[[a] \in\operatorname{Hom}(m\otimes m,a)\,.\] By Schur's Lemma 3.1 the first three will be isomorphisms, and the map \([a]\) will only be surjective. Once \([a]\) is chosen, by semisimplicity, there is a splitting \([a]^{\prime}:a\to m\otimes m\). 
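As a toy sanity check of the fusion rules just derived (ours, not part of the paper), the sketch below verifies that \(m\cdot m=d\cdot\sum_{a\in A}a\), with \(d=\dim_{\operatorname{End}(\mathds{1})}\operatorname{End}(m)\), defines an associative ring. We take \(A=\mathbb{Z}/2\mathbb{Z}\) and \(d=4\), matching the real-quaternionic fusion rules of Example 2.7.

```python
# A toy associativity check (ours, not part of the paper) for the non-split
# Tambara-Yamagami fusion ring with m * m = d * sum_{a in A} a.
from collections import Counter

A, d = [0, 1], 4   # Z/2Z written additively; d = dim_R(H) = 4

def fuse(u, v):
    """Product of two basis elements of the non-split TY ring."""
    if u in A and v in A:
        return Counter({(u + v) % 2: 1})
    if u == 'm' and v == 'm':
        return Counter({a: d for a in A})
    return Counter({'m': 1})           # a * m = m * a = m

def fuse_lin(p, q):
    """Bilinear extension to Z-linear combinations of basis elements."""
    out = Counter()
    for u, cu in p.items():
        for v, cv in q.items():
            for w, cw in fuse(u, v).items():
                out[w] += cu * cv * cw
    return out

basis = [Counter({b: 1}) for b in A + ['m']]
assert all(fuse_lin(fuse_lin(p, q), t) == fuse_lin(p, fuse_lin(q, t))
           for p in basis for q in basis for t in basis)
```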
In the first case, when \(\mathds{1}\) is real and \(m\) is quaternionic, we set \(S=\{1,i,j,k\}\). In the second case, when \(\mathds{1}\) is real and \(m\) is complex, we set \(S=\{1,i\}\). In the third case, we can set \(S=\{1\}\). In all cases, we define a useful map \([a]^{\dagger}\). **Definition 4.3**.: Let \(a\in A\). If \(m\) is Galois nontrivial, then set \([a]^{\dagger}=[a]^{\prime}\). In the other cases, proceed with the following construction: The \(\mathbb{R}\)-linear map \(T:\operatorname{End}(m)\to\mathbb{R}\) is given by the formula \[[a](\mathsf{id}_{m}\otimes e)[a]^{\prime}\;=\;T(e)\cdot\mathsf{id}_{a}\,,\] for \(e\in\operatorname{End}(m)\). The map \([a]^{\dagger}:a\to m\otimes m\) is given by the formula \[[a]^{\dagger}:=\frac{\sum_{s\in S}T(s)\cdot(\mathsf{id}_{m}\otimes s)[a]^{\prime}}{\sum_{r\in S}T(r)^{2}}\,.\] Notice that the map \([a]^{\dagger}\) is well-defined because \(T(1)=1\). An easy consequence of the definition of this map is the following property. **Proposition 4.4**.: _Suppose \(\mathds{1}\) is real, and \(m\) is either complex or quaternionic (case 1 or case 2 above). The map \([a]^{\dagger}\) satisfies the formula_ \[[a](\mathsf{id}_{m}\otimes e)[a]^{\dagger}=\Re(e)\cdot\mathsf{id}_{a}\,,\] _where \(\Re(e)\) is the real part of \(e\in\operatorname{End}(m)\)._ Using the maps \([a]^{\dagger}\), we produce orthogonal projections \[(\mathsf{id}_{m}\otimes s)[a]^{\dagger}[a](\mathsf{id}_{m}\otimes\overline{s})\,:\,m\otimes m\to m\otimes m\,,\] and, in this way, identify summands of \(m\otimes m\) with pairs \((a,s)\), where \(a\in A\) and \(s\in S\). Notice that \([a]^{\dagger}\) is uniquely determined by \([a]\), so changes to \([a]\) will alter \([a]^{\dagger}\) accordingly. Using the preferred vectors \([a,m]\), \([m,a]\) and \([a]\), we construct bases for the hom spaces as follows: \[\{[a,m](\mathsf{id}_{a}\otimes s)\}_{s\in S}\text{ for the space }\operatorname{Hom}(a\otimes m,m)\,,\] \[\{[m,a](s\otimes\mathsf{id}_{a})\}_{s\in S}\text{ for the space }\operatorname{Hom}(m\otimes a,m)\,,\text{ and}\] \[\{[a](\mathsf{id}_{m}\otimes s)\}_{s\in S}\text{ for the space }\operatorname{Hom}(m\otimes m,a)\,.\] In these bases, the tetrahedral transformations are determined by what they do on simple tensors of the vectors \([a,m]\), \([m,a]\), and \([a]\). The general naming scheme of the matrix coefficients of the tetrahedral transformations is shown in the table below. 
\begin{tabular}{|c|c|c|c|c|} \hline Tetrahedral & \(\left\{\begin{smallmatrix}a,b,c\\ abc\end{smallmatrix}\right\}\) & \(\left\{\begin{smallmatrix}m,a,b\\ m\end{smallmatrix}\right\}\) & \(\left\{\begin{smallmatrix}a,m,b\\ m\end{smallmatrix}\right\}\) & \(\left\{\begin{smallmatrix}a,b,m\\ m\end{smallmatrix}\right\}\) \\ \hline Coefficient & \(\alpha(a,b,c)\) & \(\alpha_{1}(a,b)\) & \(\alpha_{2}(a,b)\) & \(\alpha_{3}(a,b)\) \\ \hline \hline Tetrahedral & \(\left\{\begin{smallmatrix}a,m,m\\ b\end{smallmatrix}\right\}\) & \(\left\{\begin{smallmatrix}m,a,m\\ b\end{smallmatrix}\right\}\) & \(\left\{\begin{smallmatrix}m,m,a\\ b\end{smallmatrix}\right\}\) & \(\left\{\begin{smallmatrix}m,m,m\\ m\end{smallmatrix}\right\}\) \\ \hline Coefficient & \(\beta_{1}(a,b)\) & \(\beta_{2}(a,b)\) & \(\beta_{3}(a,b)\) & \(\left(\gamma(a,b)\right)_{a,b}\) \\ \hline \end{tabular} The combinatorics imply that there are 16 different types of pentagon equations to be solved in each case. In order to determine when two of our categories are monoidally equivalent, there are 4 matrix coefficients for the tensorators, and they are subject to 8 coherence equations. The tensorator naming conventions are shown in the table below. \begin{tabular}{|l|c|c|c|c|} \hline Tensorator & \(J_{a,b}\) & \(J_{a,m}\) & \(J_{m,b}\) & \(J_{m,m}\) \\ \hline Coefficient & \(\theta(a,b)^{-1}\) & \(\varphi(a)^{-1}\) & \(\psi(b)^{-1}\) & \(\left(\omega(a)^{-1}\,\right)_{a}\) \\ \hline \end{tabular} **Remark 4.5**.: Note the unfortunate presence of inverses. These inverses appear because we aim to align our notation with the change of basis transformations in [13, cf. page 700]. These change of basis transformations are monoidal equivalences, in which the coefficients more naturally appear on the opposite side of the equation. ## 5. Analysis of the Real-Quaternionic case ### Choosing a preferred basis The following observation makes the choice of a basis simpler. **Proposition 5.1**.: _Let \(V\) be an \((\mathbb{H},\mathbb{H})\)-bimodule. If \(V\) is 4-dimensional as an \(\mathbb{R}\)-vector space then there exists a nonzero \(v\in V\) such that \(h.v=v.h\), for all \(h\in\mathbb{H}\)._ Proof.: An \((\mathbb{H},\mathbb{H})\)-bimodule is the same as an \(\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}^{op}\)-module. Since \(\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}^{op}\cong M_{4}(\mathbb{R})\) as algebras, there is a unique simple \((\mathbb{H},\mathbb{H})\)-bimodule up to isomorphism. Any such bimodule is simple if and only if it is 4-dimensional over \(\mathbb{R}\). Let us choose some bimodule isomorphism \(\phi:\mathbb{H}\to V\) from the trivial bimodule to our given bimodule \(V\). Then, the vector \(v:=\phi(1)\) has the desired property. Consider the following 4-dimensional (as \(\mathbb{R}\)-vector spaces) hom spaces \[\operatorname{Hom}(a\otimes m,m),\quad\operatorname{Hom}(m\otimes a,m),\quad\text{and}\ \operatorname{Hom}(m\otimes m,a).\] Proposition 5.1 shows that there is always a choice of non-zero morphisms \([a,m]\), \([m,a]\), and \([a]\) such that the quaternions commute with them. One subtlety here is that the space \(\operatorname{Hom}(m\otimes m,a)\) is most naturally a right \(\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}\)-module. This can be thought of as an \((\mathbb{H}^{op},\mathbb{H})\)-bimodule. 
Quaternionic conjugation \(h\mapsto\overline{h}\) provides an isomorphism \(\mathbb{H}^{op}\to\mathbb{H}\), and this can be used to transform \(\operatorname{Hom}(m\otimes m,a)\) into an \((\mathbb{H},\mathbb{H})\)-bimodule. By Proposition 5.1, there is a preferred vector \([a]\). Since we needed to apply quaternionic conjugation to one of the actions, the resulting 'commutation' property for \([a]\) involves conjugation. Summarizing, bases have been chosen for the hom spaces using the morphisms \([a,m],[m,a]\), and \([a]\) such that \[[a,m](\mathsf{id}_{a}\otimes h) = h[a,m],\] \[[m,a](h\otimes\mathsf{id}_{a}) = h[m,a],\] \[[a](\mathsf{id}_{m}\otimes h) = [a](\bar{h}\otimes\mathsf{id}_{m}). \tag{1}\] ### The associators It is interesting to notice that even though we are in the case in which \(\operatorname{End}(m)\cong\mathbb{H}\), all associators turn out to correspond to real numbers. In fact, these associators are the following: \[\left\{\begin{smallmatrix}a,b,c\\ abc\end{smallmatrix}\right\}: [b,c]\otimes[a,bc]\to\alpha(a,b,c)[a,b]\otimes[ab,c],\] \[\left\{\begin{smallmatrix}a,b,m\\ m\end{smallmatrix}\right\}: [b,m]\otimes[a,b]\to[a,b]\otimes\Big{(}[ab,m]\alpha_{3}(a,b)\Big{)},\] \[\left\{\begin{smallmatrix}a,m,b\\ m\end{smallmatrix}\right\}: [m,b]\otimes[a,m]\to\Big{(}[a,m]\alpha_{2}(a,b)\Big{)}\otimes[m,b],\] \[\left\{\begin{smallmatrix}m,a,b\\ m\end{smallmatrix}\right\}: [a,b]\otimes[m,ab]\to\Big{(}[m,a]\alpha_{1}(a,b)\Big{)}\otimes[m,b],\] \[\left\{\begin{smallmatrix}a,m,m\\ b\end{smallmatrix}\right\}: [a^{-1}b]\otimes[a,a^{-1}b]\to[a,m]\otimes\Big{(}[b]\beta_{1}(a,b)\Big{)},\] \[\left\{\begin{smallmatrix}m,a,m\\ b\end{smallmatrix}\right\}: [a,m]\otimes[b]\to[m,a]\otimes\Big{(}[b]\beta_{2}(a,b)\Big{)},\] \[\left\{\begin{smallmatrix}m,m,a\\ b\end{smallmatrix}\right\}: [m,a]\otimes[b]\to\Big{(}[ba^{-1}]\beta_{3}(a,b)\Big{)}\otimes[ba^{-1},a],\] \[\left\{\begin{smallmatrix}m,m,m\\ m\end{smallmatrix}\right\}: [a]\otimes[m,a]\to\sum_{b\in A,\,s,t\in S}\gamma(a,b,s,t)\Big{(}[b]s\otimes[b,m]t\Big{)}. \tag{2}\] In the above equations, the \(\alpha_{i}\) and \(\beta_{i}\) correspond to endomorphisms of \(m\); hence all of the associator information is carried in the quaternions identified with said endomorphisms. Moreover, we will prove that they are in fact real numbers by showing they are in the center of \(\mathbb{H}\). There are two ways of doing this; one for the \(\alpha_{i}\)'s and another for the \(\beta_{i}\)'s. **Lemma 5.2**.: _The \(\alpha_{i}\)'s are real-valued functions._ Proof.: We demonstrate the proof for \(\alpha_{2}\). The arguments for \(\alpha_{1}\) and \(\alpha_{3}\) are similar. Let \(h\in\mathbb{H}\) and consider the naturality diagram (3) for \(\alpha_{a,m,b}\). By naturality of the associator, the middle square commutes. The top and bottom quadrangles commute by the definition of \(\alpha_{2}\). The quadrangles on the left and right commute by our choice of basis vectors \([a,m]\) and \([m,b]\). It follows that the outer rectangle commutes. Since \(h\in\mathbb{H}\) was arbitrary, \(\alpha_{2}(a,b)\) must lie in the center of \(\mathbb{H}\), which is \(\mathbb{R}\). Since \(a,b\in A\) were arbitrary, all values of \(\alpha_{2}\) must be real numbers. **Lemma 5.3**.: _The \(\beta_{j}\)'s are real-valued functions._ Proof.: We demonstrate the proof for \(\beta_{1}\). The arguments for \(\beta_{2}\) and \(\beta_{3}\) are similar. Let \(a,b\in A\) and \(h\in\mathbb{H}\). 
Naturality of the associator can be used to perform the following computation \[[b]\big{(}[a,m]\otimes\beta_{1}(a,b)h\big{)} =[b]\big{(}[a,m]\otimes\beta_{1}(a,b)\big{)}\big{(}(\mathsf{id}_{a}\otimes\mathsf{id}_{m})\otimes h\big{)}\] \[=[a,a^{-1}b]\big{(}\mathsf{id}_{a}\otimes[a^{-1}b]\big{)}\alpha_{a,m,m}\big{(}(\mathsf{id}_{a}\otimes\mathsf{id}_{m})\otimes h\big{)}\] \[=[a,a^{-1}b]\Big{(}\mathsf{id}_{a}\otimes\big{(}[a^{-1}b](\overline{h}\otimes\mathsf{id}_{m})\big{)}\Big{)}\alpha_{a,m,m}\] \[=[a,a^{-1}b]\big{(}\mathsf{id}_{a}\otimes[a^{-1}b]\big{)}\alpha_{a,m,m}\big{(}(\mathsf{id}_{a}\otimes\overline{h})\otimes\mathsf{id}_{m}\big{)}\] \[=[b]\big{(}[a,m](\mathsf{id}_{a}\otimes\overline{h})\otimes\beta_{1}(a,b)\big{)}\] \[=[b]\big{(}\overline{h}[a,m]\otimes\beta_{1}(a,b)\big{)}\] \[=[b](\overline{h}\otimes\mathsf{id}_{m})\big{(}[a,m]\otimes\beta_{1}(a,b)\big{)}\] \[=[b](\mathsf{id}_{m}\otimes h)\big{(}[a,m]\otimes\beta_{1}(a,b)\big{)}\] \[=[b]\big{(}[a,m]\otimes h\beta_{1}(a,b)\big{)}\,.\] Since \(m\) is self-dual, there is an isomorphism \[\operatorname{Hom}\bigl{(}(a\otimes m)\otimes m\,,\,b\bigr{)}\stackrel{{\phi}}{{\longrightarrow}}\operatorname{Hom}(m,m)=\mathbb{H}\,.\] This map \(\phi\) is an isomorphism of right \(\mathbb{H}\)-modules. Define the quaternion \[q:=\phi\Big{(}[b]\big{(}[a,m]\otimes\mathsf{id}_{m}\big{)}\Big{)}\,.\] Since \(\phi\) is an isomorphism, the morphism \(q\in\mathbb{H}\) is nonzero and hence invertible. We can now compute that \[\beta_{1}(a,b)h =q^{-1}q\beta_{1}(a,b)h\] \[=q^{-1}\phi\Big{(}[b]\big{(}[a,m]\otimes\mathsf{id}_{m}\big{)}\Big{)}\beta_{1}(a,b)h\] \[=q^{-1}\phi\Big{(}[b]\big{(}[a,m]\otimes\beta_{1}(a,b)h\big{)}\Big{)}\] \[=q^{-1}\phi\Big{(}[b]\big{(}[a,m]\otimes h\beta_{1}(a,b)\big{)}\Big{)}\] \[=q^{-1}\phi\Big{(}[b]\big{(}[a,m]\otimes\mathsf{id}_{m}\big{)}\Big{)}h\beta_{1}(a,b)\] \[=q^{-1}qh\beta_{1}(a,b)\] \[=h\beta_{1}(a,b)\,.\] Thus \(\beta_{1}(a,b)\) commutes with \(h\). Since \(h\), \(a\), and \(b\) were arbitrary, the result follows. #### 5.2.1. The nature of \(\alpha_{m,m,m}\) In the split case, Tambara and Yamagami find that the associator on \(3\) copies of \(m\), unlike all the other associators, is not given by a single real number. The reason for this is that \(\operatorname{Hom}\bigl{(}m\otimes(m\otimes m),m\bigr{)}\cong\mathbb{R}^{\oplus|A|}\). Because of this, \(\alpha_{m,m,m}:(m\otimes m)\otimes m\to m\otimes(m\otimes m)\) is in \(GL_{|A|}(\mathbb{R})\). In this non-split case, in which \(\operatorname{End}(m)\cong\mathbb{H}\), a very similar computation can be done because \[\operatorname{Hom}\bigl{(}m\otimes(m\otimes m),m\bigr{)}\cong\mathbb{H}^{\oplus 4|A|}.\] By similar logic, this means that in the quaternionic case \(\alpha_{m,m,m}\) is a matrix in \(GL_{16|A|}(\mathbb{R})\). In all of the sums that follow, \(a,b\in A\), and \(r,s,t,s^{\prime},t^{\prime}\in S\), where \(S=\{1,i,j,k\}\subset\mathbb{H}\). We fix the following notation: \[\zeta(b,r,s,t):=[b,m]\left([b]\otimes\mathsf{id}_{m}\right)\left((r\otimes s)\otimes t\right),\] and \[\mathcal{A}(a):=[m,a](\mathsf{id}_{m}\otimes[a])\alpha_{m,m,m}\,.\] With this notation in hand, we can write the following: \[\mathcal{A}(a)=\sum_{b,s,t}\gamma(a,b,s,t)\zeta(b,1,s,t). \tag{4}\] Here the coefficients \(\gamma(a,b,s,t)\) are real. 
Furthermore, using naturality, we get that \[\sum_{b,s,t}\gamma(a,b,s,t)\zeta(b,1,s,t)\;=\;\mathcal{A}(a)\;=\;r^{-1}r\circ\mathcal{A}(a)\;=\;r^{-1}\circ\mathcal{A}(a)\circ((r\otimes\mathsf{id}_{m})\otimes\mathsf{id}_{m})\] \[\qquad\qquad\qquad=\;\sum_{b,s^{\prime},t^{\prime}}\gamma(a,b,s^{\prime},t^{\prime})\big{(}r^{-1}\circ\zeta(b,r,s^{\prime},t^{\prime})\big{)}\;=\;\sum_{b,s^{\prime},t^{\prime}}\gamma(a,b,s^{\prime},t^{\prime})\zeta\bigl{(}b,r,s^{\prime},r^{-1}t^{\prime}\bigr{)}\] \[\qquad\qquad\qquad\qquad\qquad=\;\sum_{b,s^{\prime},t^{\prime}}\gamma(a,b,s^{\prime},t^{\prime})\zeta\bigl{(}b,1,\overline{r}s^{\prime},r^{-1}t^{\prime}\bigr{)}\] Since these two sums must be equal, by equating the coefficients of the basis vectors we obtain the following relation \[\gamma(a,b,s,t)\;=\;\gamma(a,b,\overline{r}^{-1}s,rt)\,.\] When \(r=\bar{s}\), then \(s^{\prime}=1,t^{\prime}=\bar{s}t\), meaning that \(\gamma(a,b,s,t)=\gamma(a,b,1,\bar{s}t)\), and that \[\gamma(a,b,i,i)=\gamma(a,b,j,j)=\gamma(a,b,k,k)=\gamma(a,b,1,1)\,.\] A similar computation, this time involving \(\mathcal{A}(a)\circ(\mathsf{id}_{m\otimes m}\otimes rr^{-1})\), implies that \(\gamma(a,b,s,t)=\gamma(a,b,1,t\bar{s})\). This means that \(\gamma(a,b,s,t)=\gamma(a,b,1,\bar{s}t)=\gamma(a,b,1,t\bar{s})\); since \(\bar{s}t=-t\bar{s}\) whenever \(s\neq t\) are distinct imaginary units, this forces \(\gamma(a,b,1,t\bar{s})=-\gamma(a,b,1,t\bar{s})\) if \(t\neq s\). This implies that \(\gamma(a,b,t,s)=0\) if \(t\neq s\). This simplification means that (4) can be turned into: \[\mathcal{A}(a)=\sum_{b,s}\gamma(a,b,1,1)\zeta(b,1,s,s),\] which suggests setting \(\gamma(a,b):=\gamma(a,b,1,1)\), to finally arrive at: \[\mathcal{A}(a)=\sum_{b,s}\gamma(a,b)\zeta(b,1,s,s).\] ### The Pentagon Equations With the associators simplified as much as possible, and with all but \(\alpha_{m,m,m}\) shown to be real numbers, the pentagon equations themselves must now be analyzed. Indeed, going through each of the 16 pentagons as [11] did results in the following equations: \[\delta\alpha =1,\] \[\delta\alpha_{3} =\alpha^{-1},\] \[\delta\alpha_{1} =\alpha,\] \[\alpha_{2}(a,bc) =\alpha_{2}(a,c)\alpha_{2}(a,b),\] \[\alpha_{2}(ab,c) =\alpha_{2}(b,c)\alpha_{2}(a,c),\] \[\alpha(a,b,b^{-1}a^{-1}c)\beta_{1}(ab,c) =\beta_{1}(b,a^{-1}c)\beta_{1}(a,c)\alpha_{3}(a,b),\] \[\beta_{3}(ab,c)\alpha(cb^{-1}a^{-1},a,b) =\alpha_{1}(a,b)\beta_{3}(b,c)\beta_{3}(a,cb^{-1}),\] \[\beta_{2}(b,c) =\beta_{2}(b,a^{-1}c)\alpha_{2}(a,b),\] \[\beta_{2}(a,c) =\alpha_{2}(a,b)\beta_{2}(a,cb^{-1}),\] \[\beta_{1}(a,c)\beta_{3}(b,c) =\beta_{3}(b,a^{-1}c)\alpha(a,a^{-1}cb^{-1},b)\beta_{1}(a,cb^{-1}),\] \[\beta_{2}(a,c)\beta_{2}(b,c) =\alpha_{3}(a,b)\beta_{2}(ab,c)\alpha_{1}(a,b),\] \[\alpha_{2}(a,c)\gamma(c,b) =\beta_{1}(a,b)\alpha_{3}(a,a^{-1}b)\gamma(c,a^{-1}b),\] \[\alpha_{2}(b,a)\gamma(c,b) =\beta_{3}(a,c)\alpha_{1}(ca^{-1},a)\gamma(ca^{-1},b),\] \[\alpha_{1}(a,c)\gamma(c,b) =\beta_{2}(a,b)\beta_{1}(a,ac)\gamma(ca,b),\] \[\alpha_{3}(b,a)\gamma(c,b) =\beta_{2}(a,c)\beta_{3}(a,ba)\gamma(c,ba),\] \[\delta_{d,ba^{-1}}\beta_{3}(a,b)\beta_{1}(ba^{-1},b) =4\sum_{c}\beta_{2}(c,b)\gamma(c,d)\gamma(a,c).\] After some close inspection, it is clear that these are the same 16 equations which [11] obtained\({}^{1}\) for their pentagons [11, p. 699], with the exception of the last pentagon having a four on the right hand side. Footnote 1: In a few of the equations, a careful eye may notice that there are some group elements transposed from how they appear in [11]. After recomputing these several times, the authors believe that these are simply typos. 
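Before rescaling, here is a finite spot check (ours, not part of the paper) of the one pentagon above that differs from the split case. Anticipating the normalized solution found in the next subsection (\(\alpha_{2}=\beta_{2}=\chi\), all other \(\alpha_{i},\beta_{i}\equiv 1\), and \(\gamma(a,b)=\tau/\chi(a,b)\) with \(4\tau^{2}|A|=1\)), we verify \(\delta_{d,ba^{-1}}=4\sum_{c}\beta_{2}(c,b)\gamma(c,d)\gamma(a,c)\) for \(A=\mathbb{Z}/2\mathbb{Z}\) with \(\chi(a,b)=(-1)^{ab}\).

```python
# A finite spot check (ours, not part of the paper) of the last pentagon.
from math import isclose, sqrt

A = [0, 1]                          # Z/2Z, written additively
chi = lambda a, b: (-1) ** (a * b)  # nondegenerate symmetric bicharacter
tau = 1 / sqrt(4 * len(A))          # 4 * tau^2 * |A| = 1
gamma = lambda a, b: tau / chi(a, b)

for a in A:
    for b in A:
        for d in A:
            lhs = 1.0 if d == (b - a) % 2 else 0.0
            rhs = 4 * sum(chi(c, b) * gamma(c, d) * gamma(a, c) for c in A)
            assert isclose(lhs, rhs, abs_tol=1e-12)
```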
### Rescaling Because all of the associator coefficients (except \(\alpha_{m,m,m}\)) are real-valued, the algebraic manipulations by which [11] derived all of the associator information also work in this case. There is one small change however: where [11] have the equation \[\gamma(1,1)^{2}\sum_{c\in A}\alpha_{2}(c,bd^{-1}a^{-1})=\delta_{b,ad},\] there is a four on the left hand side in the quaternionic case: \[4\gamma(1,1)^{2}\sum_{c\in A}\alpha_{2}(c,bd^{-1}a^{-1})=\delta_{b,ad}.\] After normalization, the associators become: \[\alpha\equiv 1\,,\qquad\alpha_{1}=\alpha_{3}=\beta_{1}=\beta_{3}\equiv 1\,,\qquad\alpha_{2}=\beta_{2},\] where \(\alpha_{2}\) is a nondegenerate symmetric bicharacter, and \[\gamma(a,b)=\frac{\gamma(1,1)}{\alpha_{2}(a,b)},\] where \(4\gamma(1,1)^{2}|A|=1\). Therefore all of the associators are uniquely determined by \(\alpha_{2}\) and \(\gamma(1,1)\). In conclusion, we have the following theorem: **Theorem 5.4**.: _Let \(A\) be a finite group, let \(\tau=\pm 1/\sqrt{4|A|}\), and let \(\chi:A\times A\to\mathbb{R}^{\times}\) be a nondegenerate symmetric bicharacter on \(A\)._ _A triple of such data gives rise to a non-split Tambara-Yamagami category \(\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\), with \(\operatorname{End}(\mathds{1})\cong\mathbb{R}\) and \(\operatorname{End}(m)\cong\mathbb{H}\). Furthermore, all equivalence classes of such categories arise in this way. Two categories \(\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\) and \(\mathcal{C}_{\mathbb{H}}(A^{\prime},\tau^{\prime},\chi^{\prime})\) are equivalent if and only if \(\tau=\tau^{\prime}\) and there exists an isomorphism \(f:A\to A^{\prime}\) such that for all \(a,b\in A\),_ \[\chi^{\prime}\big{(}f(a),f(b)\big{)}\ =\ \chi(a,b)\,.\] Proof.: We must establish four separate statements: 1. Our conditions on \(\chi\) and \(\tau\) are necessary for the existence of \(\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\). 2. Our conditions on \(\chi\) and \(\tau\) are sufficient for the existence of the category \(\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\). 3. Our conditions relating \(\chi\) and \(\chi^{\prime}\), and \(\tau\) and \(\tau^{\prime}\), are necessary for the existence of a monoidal equivalence \((F,J):\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\to\mathcal{C}_{\mathbb{H}}(A^{\prime},\tau^{\prime},\chi^{\prime})\). 4. Our conditions relating \(\chi\) and \(\chi^{\prime}\), and \(\tau\) and \(\tau^{\prime}\), are sufficient for the existence of a monoidal equivalence. The analysis leading up to the theorem establishes the necessity statement (1), and so we proceed to show that our conditions are sufficient for the existence of \(\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\). Since the simple objects and fusion rules are already prescribed, we only need to write down the associators and prove that they are coherent. 
The associators of \(\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\) are given, for \(a,b,c\in A\), as follows:
\[\alpha_{a,b,c} =\mathsf{id}_{abc},\]
\[\alpha_{a,b,m}=\alpha_{m,b,c} =\mathsf{id}_{m},\]
\[\alpha_{a,m,c} =\chi(a,c)\cdot\mathsf{id}_{m},\]
\[\alpha_{a,m,m}=\alpha_{m,m,c} =\mathsf{id}_{m\otimes m},\]
\[\alpha_{m,b,m} =\bigoplus_{a\in A}\chi(a,b)\cdot\mathsf{id}_{a^{\otimes 4}},\]
\[\alpha_{m,m,m} =\left(\frac{\tau}{\chi(a,b)}\cdot s\cdot t\right)_{(a,s),(b,t)}:\bigoplus_{\begin{subarray}{c}a\in A,\\ s\in\{1,i,j,k\}\end{subarray}}m\longrightarrow\bigoplus_{\begin{subarray}{c}b\in A,\\ t\in\{1,i,j,k\}\end{subarray}}m\,.\]
The unit is \(\mathds{1}=\mathds{1}_{A}\), the identity in \(A\), and the unit constraints are identities. By plugging in \(\alpha_{2}=\chi=\beta_{2}\), \(\gamma(a,b)=\tau\cdot\chi(a,b)^{-1}\) and all others \(=1\), the pentagon equations from Section 3.4 are satisfied. The fact that \(\chi\) is a bicharacter gives the fourth, fifth and tenth equations. The fact that \(\chi\) is a symmetric bicharacter gives the eighth and ninth equations. The facts that \(\chi\) is nondegenerate and \(\tau^{2}\cdot 4|A|=1\) imply the last equation. All the remaining equations follow immediately from the definitions.

Next, suppose that there is a monoidal equivalence
\[(F,J):\mathcal{C}_{\mathbb{H}}(A,\tau,\chi)\to\mathcal{C}_{\mathbb{H}}(A^{\prime},\tau^{\prime},\chi^{\prime})\,.\]
Monoidal equivalences send invertible objects to invertible objects, and so \(F\) must act by some group isomorphism \(f:A\to A^{\prime}\). Since \(m^{\prime}\) is the only quaternionic simple object in \(\mathcal{C}_{\mathbb{H}}(A^{\prime},\tau^{\prime},\chi^{\prime})\), we must have that \(m^{\prime}\cong Fm\), and hence \(\operatorname{Hom}(m^{\prime},Fm)\neq 0\) is a simple \((\mathbb{H},\mathbb{H})\)-bimodule. For any \(v\in\operatorname{Hom}(m^{\prime},Fm)\) and \(h\in\mathbb{H}\), the bimodule structure is given by
\[h.v\;=\;F(h)\circ v\,,\qquad\text{ and }\qquad v.h\;=\;v\circ h\,.\]
Since \(\operatorname{Hom}(m^{\prime},Fm)\) is simple, Proposition 5.1 shows that there is some nonzero \(y\in\operatorname{Hom}(m^{\prime},Fm)\) such that \(h.y=y.h\). This formula is equivalent to \(F(h)=y\circ h\circ y^{-1}\). The components of the tensorator have four different types: \(J_{a,b}\), \(J_{a,m}\), \(J_{m,b}\) and \(J_{m,m}\), and we can use the isomorphism \(y:m^{\prime}\to Fm\) to extract them as follows (composition symbols are omitted for space):
\[F\bigl([a,b]\bigr)J_{a,b} =\theta(a,b)^{-1}\cdot\bigl[f(a),f(b)\bigr] \tag{5}\]
\[F\bigl([a,m]\bigr)J_{a,m}(\operatorname{id}_{f(a)}\otimes y) =y\varphi(a)^{-1}\bigl[f(a),m\bigr] \tag{6}\]
\[F\bigl([m,b]\bigr)J_{m,b}(y\otimes\operatorname{id}_{f(b)}) =y\psi(b)^{-1}\bigl[m,f(b)\bigr] \tag{7}\]
\[F\bigl([a]\bigr)J_{m,m}(y\otimes y) =\bigl[f(a)\bigr]\bigl(\operatorname{id}_{m}\otimes\omega(a)^{-1}\bigr) \tag{8}\]
The inverses here are simply a convention (see Remark 4.5). Just as the naturality of the associator implied that the associator coefficients were real valued, the naturality of \(J\) implies that the tensorator coefficients \(\theta,\varphi,\psi\), and \(\omega\) are also all real valued.
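Before moving on to the hexagon relations, the two nontrivial ingredients above (nondegeneracy of \(\chi\) and the constraint \(\tau^{2}\cdot 4|A|=1\)) can be spot-checked numerically. The following sketch is ours, with the diagonal bicharacter on \(A=(\mathbb{Z}/2\mathbb{Z})^{2}\) as an assumed sample datum, not an example taken from the text; it verifies the last pentagon equation, the one carrying the extra factor of four:

```python
import itertools, math

# Sample datum for Theorem 5.4 (our assumption, not from the text):
# A = (Z/2Z)^2, written additively, with chi(a, b) = (-1)^(a1*b1 + a2*b2).
A = list(itertools.product((0, 1), repeat=2))
add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
chi = lambda a, b: (-1) ** (a[0] * b[0] + a[1] * b[1])

# chi is a bicharacter whose row sums vanish off the identity (nondegeneracy),
# and tau satisfies 4 * tau^2 * |A| = 1.
assert all(chi(a, add(b, c)) == chi(a, b) * chi(a, c) for a in A for b in A for c in A)
assert all(sum(chi(a, b) for b in A) == 0 for a in A if a != (0, 0))
tau = 1 / math.sqrt(4 * len(A))

# Normalized coefficients: beta_2 = chi, beta_1 = beta_3 = 1, gamma(a,b) = tau/chi(a,b).
gamma = lambda a, b: tau / chi(a, b)

# Last pentagon: delta_{d, b a^{-1}} = 4 * sum_c beta_2(c,b) gamma(c,d) gamma(a,c).
for a, b, d in itertools.product(A, repeat=3):
    lhs = 1.0 if d == add(b, a) else 0.0  # every element is its own inverse here
    rhs = 4 * sum(chi(c, b) * gamma(c, d) * gamma(a, c) for c in A)
    assert abs(lhs - rhs) < 1e-12
print("last pentagon equation verified; tau =", tau)
```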
The hexagon relations for the tensorators produce the following equations:
\[1 =\delta\theta \tag{9}\]
\[\theta =\delta(\psi) \tag{10}\]
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr) =\chi(a,b) \tag{11}\]
\[\theta =\delta(\varphi) \tag{12}\]
\[\varphi(a)\omega(b) =\omega(a^{-1}b)\theta(a,a^{-1}b) \tag{13}\]
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr) =\frac{\varphi(a)}{\psi(a)}\cdot\chi(a,b) \tag{14}\]
\[\theta(ba^{-1},a)\omega(ba^{-1}) =\psi(a)\omega(b) \tag{15}\]
\[\frac{\tau}{\chi(a,b)\varphi(b)\omega(b)} =\frac{\tau^{\prime}}{\chi^{\prime}\bigl(f(a),f(b)\bigr)\psi(a)\omega(a)} \tag{16}\]
Equation 9 is implied by Equations 10 and 12. Equations 11 and 14 imply that \(\varphi=\psi\), which makes Equations 10 and 12 equivalent to one another. Equation 11 can be used to reduce Equation 16 to
\[\frac{\tau}{\varphi(b)\omega(b)}=\frac{\tau^{\prime}}{\psi(a)\omega(a)}\,. \tag{17}\]
Since the left-hand side only depends on \(b\), and the right-hand side only depends on \(a\), this quantity must depend on neither \(a\) nor \(b\). Setting \(a=b\) we find that \(\tau=\tau^{\prime}\). Thus the existence of a monoidal equivalence implies the desired relations. By removing redundancies, the equations above reduce to the following list:
\[\theta =\delta(\psi) \tag{18}\]
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr) =\chi(a,b) \tag{19}\]
\[\psi =\varphi \tag{20}\]
\[\tau =\tau^{\prime} \tag{21}\]
\[\varphi(a)\omega(a) =\varphi(1)\omega(1)\,. \tag{22}\]
Finally, suppose that \(\chi^{\prime}\bigl(f(a),f(b)\bigr)=\chi(a,b)\), and \(\tau=\tau^{\prime}\). We can construct a tensorator \(J\) by writing down coefficient functions \(\theta,\varphi,\psi\), and \(\omega\). The coherence of \(J\) is then equivalent to the validity of Equations 18-22. By setting all of these functions to be constant with value \(1\), the coherence of the resulting \(J\) is immediate. Thus, these relations between \(\chi^{\prime}\) and \(\chi\), \(\tau\) and \(\tau^{\prime}\) are enough to prove the existence of a monoidal equivalence between the two categories, and the proof is complete.

**Example 5.5**.: The simplest example of such a category is formed from the trivial group: \(\mathcal{C}_{\mathbb{H}}(\mathbf{1},1,\pm\frac{1}{2})\). The simple objects are \(\mathds{1}\) and \(m\). By construction \(\operatorname{End}(\mathds{1})\cong\mathbb{R}\) and \(\operatorname{End}(m)\cong\mathbb{H}\), with the only non-trivial fusion rule being:
\[m\otimes m=4\cdot\mathds{1}.\]
Since there are no non-trivial group automorphisms and no non-trivial bicharacters for the trivial group, there are only two categories arising from this group over \(\mathbb{R}\), one for \(1/2\) and another for \(-1/2\). In the notation, \(1\) stands for the trivial bicharacter from the trivial group to \(\mathbb{R}^{\times}\). Whenever the identity object of a monoidal category is involved, the associator becomes trivial. Therefore there is only one associator which is non-trivial, \(\alpha_{m,m,m}\). Since \(\tau=\pm\frac{1}{2}\) and \(\chi\) is always trivial, this means that the following equation completely describes the associator:
\[[m,1](\mathsf{id}_{m}\otimes[1])\alpha_{m,m,m}=\sum_{b,s}\gamma(a,b)\zeta(b,1,s,s)=\pm\frac{1}{2}\sum_{s\in\{1,i,j,k\}}\zeta(1,1,s,s)\,.\]
As can be seen above, there are only two possible choices for the only non-trivial associator \(\alpha_{m,m,m}\), and it boils down to a choice of sign.
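For the record, the sign choice above is forced by the rescaling normalization: with \(|A|=1\), the constraint \(4\gamma(1,1)^{2}|A|=1\) gives \(\gamma(1,1)^{2}=\frac{1}{4}\), so \(\gamma(1,1)=\tau=\pm\frac{1}{2}\), exactly the two values exhibited in Example 5.5.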
These categories are not new: \(\mathcal{C}_{\mathbb{H}}(\mathbf{1},1,\frac{1}{2})\) and \(\mathcal{C}_{\mathbb{H}}(\mathbf{1},1,-\frac{1}{2})\) were described in [2] where they arose as examples of real forms of \(\mathrm{Vec}_{\mathbb{R}}^{\omega}(\mathbb{Z}/2)\) for \(\omega=0\) and \(1\), respectively. The category \(\mathcal{C}_{\mathbb{H}}(\mathbf{1},1,\frac{1}{2})\) has also appeared in [13] where it was given a symmetric braiding, dubbed \(\mathsf{SuperVect}_{\mathbb{H}}\) and interpreted as a categorified field extension of \(\mathrm{Vec}_{\mathbb{R}}\).

**Example 5.6**.: Let \(A=\mathbb{Z}/2\mathbb{Z}=\langle w\rangle\), and set \(\chi(w,w)=-1\). The object \(m\) in \(\mathcal{C}_{\mathbb{H}}\left(A,\chi,\pm\frac{1}{2\sqrt{2}}\right)\) satisfies
\[m\otimes m\cong 4\cdot\left(\mathds{1}\oplus w\right).\]
From this it follows that \(\mathrm{FPdim}(m)=2\sqrt{2}\). This implies that \(\mathcal{C}_{\mathbb{H}}\left(A,\chi,\pm\frac{1}{2\sqrt{2}}\right)\) are not of the form \(\mathrm{Rep}_{\mathbb{R}}(G)\) for any group \(G\).

**Remark 5.7**.: Theorem 5.4 requires \(\chi:A\times A\to\mathbb{R}^{\times}\) to be nondegenerate. The only groups for which this is possible are elementary abelian \(2\)-groups, that is, groups of the form \((\mathbb{Z}/2\mathbb{Z})^{n}\). In this sense, Examples 5.5 and 5.6 are generic.

## 6. Analysis of the Real-Complex case

Here we are supposing \(\mathds{1}\) is real, and \(m\) is complex. Each of the spaces
\[\mathrm{Hom}(a\otimes m,m),\qquad\qquad\mathrm{Hom}(m\otimes a,m),\qquad\qquad\mathrm{Hom}(m\otimes m,a)\]
is a \(1\)-dimensional complex bimodule. Every such bimodule is isomorphic to either the trivial bimodule \(\mathbb{C}\) or the conjugating bimodule \(\overline{\mathbb{C}}\), in which the left and right actions differ by conjugation.

**Definition 6.1**.: For an element \(a\in G\) and a scalar \(\lambda\in\mathbb{C}\), define the superscript notation
\[\lambda^{a}:=\begin{cases}\lambda&\text{ if }\mathrm{Hom}(a\otimes m,m)\cong\mathbb{C}\\ \overline{\lambda}&\text{ if }\mathrm{Hom}(a\otimes m,m)\cong\overline{\mathbb{C}}\end{cases}\]
and also the degree
\[|a|:=\begin{cases}0&\text{ if }\mathrm{Hom}(a\otimes m,m)\cong\mathbb{C}\\ 1&\text{ if }\mathrm{Hom}(a\otimes m,m)\cong\overline{\mathbb{C}}\end{cases}\]
We say that \(a\) _conjugates_ when \(|a|=1\).

The appearance of conjugating bimodules should not be surprising. In [1], Etingof, Nikshych, and Ostrik analyze Tambara-Yamagami type fusion categories as \(\mathbb{Z}/2\mathbb{Z}\)-graded extensions of pointed categories in the AC setting. Using their language, \(\mathcal{M}:=\mathbb{C}\text{-}\text{Vec}=\langle m\rangle\) would need to constitute an invertible bimodule category for the pointed category \(\mathcal{C}:=\mathbb{R}\text{-}\text{Vec}^{\omega}_{G}\). If none of the \(a\in G\) acted by the conjugation functor, then all objects in the dual category \(\mathcal{C}^{*}_{\mathcal{M}}\) would be complex, and this would imply that \(\mathcal{M}\) wasn't invertible. Thus we find that there must be at least one \(w\in G\) that conjugates. On the level of groups, the degree map defined above must be a surjective homomorphism onto \(\mathbb{Z}/2\mathbb{Z}\). Let \(A\) be the kernel of the degree map, so that we have a short exact sequence:
\[A\hookrightarrow G\twoheadrightarrow\mathbb{Z}/2\mathbb{Z}\,.\]
We will uncover more about the structure of \(G\) later, but for now the reader should simply note that conjugation appears everywhere in this section, and cannot be avoided.
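As a concrete illustration of the degree map (our example, anticipating Lemma 6.8 and Note 6.9 below): for the dihedral group \(G=D_{2n}=\langle r,w\mid r^{n}=w^{2}=1,\ wrw=r^{-1}\rangle\), the assignment \(|r^{k}|=0\) and \(|r^{k}w|=1\) is a surjective homomorphism onto \(\mathbb{Z}/2\mathbb{Z}\) whose kernel is the rotation subgroup, giving the short exact sequence
\[\mathbb{Z}/n\mathbb{Z}\;\hookrightarrow\;D_{2n}\;\twoheadrightarrow\;\mathbb{Z}/2\mathbb{Z}\,.\]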
This case does not require such careful choices as were necessary in Section 5, and so we will choose arbitrary nonzero morphisms for each \(a,b\in G\):
\[[a,b] \in\text{Hom}(a\otimes b,ab),\]
\[[a,m] \in\text{Hom}(a\otimes m,m),\]
\[[m,a] \in\text{Hom}(m\otimes a,m),\]
\[[a] \in\text{Hom}(m\otimes m,a).\]
A priori, there is nothing to indicate which type of bimodule \(\text{Hom}(m\otimes m,\mathds{1})\) happens to be. For now we will keep track of this possible conjugation by the symbol \(g\). Explicitly, we have the following rule.

**Definition 6.2**.: There is a homomorphism of \(\mathbb{R}\)-algebras from \(\mathbb{C}\) to itself, written \(\lambda\mapsto\lambda^{g}\), that is uniquely determined by the equation
\[[\mathds{1}]\circ(\mathsf{id}_{m}\otimes i)\;=\;[\mathds{1}]\circ(i^{g}\otimes\mathsf{id}_{m})\,.\]

For the sake of compactness, in the notation below, having a complex number to the right of a morphism denotes precomposition with that morphism on the \(m\) strand, tensored with the identity on the non-\(m\) strand, i.e. \([a,m]t=[a,m]\circ(\mathsf{id}_{a}\otimes t)\) and \([m,a]t=[m,a]\circ(t\otimes\mathsf{id}_{a})\). For \([a]\) when there are two \(m\) strands, \([a]t\) will denote \([a]\circ(\mathsf{id}_{m}\otimes t)\), while \([a]\triangleleft t\) will denote \([a]\circ(t\otimes\mathsf{id}_{m})\). The category is assumed to be linear over \(\mathbb{R}\), so we will simply write \(r\cdot-\) to indicate scalar multiplication by a real number \(r\in\mathbb{R}\). Below are the associators, once again following the notation of [17] for \(a,b,c\in G\):
\[\left\{\begin{smallmatrix}a,b,c\\ abc\end{smallmatrix}\right\}: [b,c]\otimes[a,bc]\to\alpha(a,b,c)\cdot[a,b]\otimes[ab,c],\]
\[\left\{\begin{smallmatrix}a,b,m\\ m\end{smallmatrix}\right\}: [b,m]\otimes[a,b]\to[a,b]\otimes\Bigl([ab,m]\alpha_{3}(a,b)\Bigr),\]
\[\left\{\begin{smallmatrix}a,m,b\\ m\end{smallmatrix}\right\}: [m,b]\otimes[a,m]\to\Bigl([a,m]\alpha_{2}(a,b)\Bigr)\otimes[m,b],\]
\[\left\{\begin{smallmatrix}m,a,b\\ m\end{smallmatrix}\right\}: [a,b]\otimes[m,ab]\to\Bigl([m,a]\alpha_{1}(a,b)\Bigr)\otimes[m,b],\]
\[\left\{\begin{smallmatrix}a,m,m\\ b\end{smallmatrix}\right\}: [a^{-1}b]\otimes[a,a^{-1}b]\to[a,m]\otimes\Bigl([b]\beta_{1}(a,b)\Bigr),\]
\[\left\{\begin{smallmatrix}m,a,m\\ b\end{smallmatrix}\right\}: [a,m]\otimes[b]\to[m,a]\otimes\Bigl([b]\beta_{2}(a,b)\Bigr),\]
\[\left\{\begin{smallmatrix}m,m,a\\ b\end{smallmatrix}\right\}: [m,a]\otimes[b]\to\Bigl([ba^{-1}]\beta_{3}(a,b)\Bigr)\otimes[ba^{-1},a],\]
\[\left\{\begin{smallmatrix}m,m,m\end{smallmatrix}\right\}: [a]\otimes[m,a]\to\sum_{\begin{subarray}{c}b\in G\\ s\in\{1,i\}\end{subarray}}[b]s\otimes[b,m]\gamma(a,b)_{s}.\]
In this case, only \(\alpha\) is real valued. The \(\alpha_{j}\)s and \(\beta_{k}\)s are complex valued, and \(\gamma\) takes values in \(\operatorname{End}(m\otimes m)\cong M_{|G|}(\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C})\). The \((a,b)\)-th entry of \(\gamma\) will be denoted \(\gamma(a,b)\in\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}\). We will use a notation that keeps track of the complex factor on the right by using the following decomposition:
\[\gamma(a,b)\ =\ 1\otimes\gamma(a,b)_{1}+i\otimes\gamma(a,b)_{i}\,.\]
Any element of \(\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}\) can be given such a decomposition.

**Note 6.3**.: In order to derive the following pentagon equations, we make the convention that at every stage, complex scalars should be passed to the input of the rightmost tensor factor of \(m\) that appears in the morphism.
\[\delta\alpha =1 \tag{23}\]
\[\delta^{R}\alpha_{3} =\alpha^{-1} \tag{24}\]
\[\delta^{L}\alpha_{1} =\alpha \tag{25}\]
\[\alpha_{2}(a,bc)\alpha_{1}(b,c)^{a} =\alpha_{1}(b,c)\alpha_{2}(a,c)^{b}\alpha_{2}(a,b) \tag{26}\]
\[\alpha_{3}(a,b)^{c}\alpha_{2}(ab,c) =\alpha_{2}(b,c)\alpha_{2}(a,c)^{b}\alpha_{3}(a,b) \tag{27}\]
\[\alpha(a,b,b^{-1}a^{-1}c)\beta_{1}(ab,c) =\beta_{1}(b,a^{-1}c)\beta_{1}(a,c)\alpha_{3}(a,b)^{gabc} \tag{28}\]
\[\beta_{3}(ab,c)\alpha(cb^{-1}a^{-1},a,b) =\alpha_{1}(a,b)\beta_{3}(b,c)^{a}\beta_{3}(a,cb^{-1}) \tag{29}\]
\[\beta_{2}(a,c)^{b}\beta_{3}(b,c) =\alpha_{2}(a,b)\beta_{3}(b,c)^{a}\beta_{2}(a,cb^{-1}) \tag{30}\]
\[\beta_{1}(a,c)^{b}\beta_{2}(b,c) =\beta_{2}(b,a^{-1}c)\beta_{1}(a,c)\alpha_{2}(a,b)^{gabc} \tag{31}\]
\[\beta_{1}(a,c)^{b}\beta_{3}(b,c) =\beta_{3}(b,a^{-1}c)\alpha(a,a^{-1}cb^{-1},b)\beta_{1}(a,cb^{-1}) \tag{32}\]
\[\beta_{2}(a,c)^{b}\beta_{2}(b,c) =\alpha_{3}(a,b)\beta_{2}(ab,c)\alpha_{1}(a,b)^{gabc} \tag{33}\]
\[\sum_{s}s\cdot\alpha_{2}(a,c)^{gab}\otimes\gamma(c,b)_{s} =\sum_{t}t\cdot\beta_{1}(a,b)\otimes\alpha_{3}(a,a^{-1}b)\cdot\gamma(c,a^{-1}b)_{t} \tag{34}\]
\[\sum_{s}s\otimes\alpha_{2}(b,a)\cdot\gamma(c,b)_{s}^{a} =\sum_{t}t\cdot\alpha_{1}(ca^{-1},a)^{gb}\otimes\beta_{3}(a,c)\cdot\gamma(ca^{-1},b)_{t} \tag{35}\]
\[\sum_{s}s\cdot\alpha_{1}(a,c)^{gab}\otimes\gamma(c,b)_{s} =\sum_{t}t^{a}\cdot\beta_{2}(a,b)\otimes\beta_{1}(a,ac)\cdot\gamma(ac,b)_{t} \tag{36}\]
\[\sum_{s}s\otimes\alpha_{3}(b,a)\cdot\gamma(c,b)_{s}^{a} =\sum_{t}t^{a}\cdot\beta_{3}(a,ba)\otimes\gamma(c,ba)_{t}\cdot\beta_{2}(a,c) \tag{37}\]
\[\delta_{d,ba^{-1}}\beta_{3}(a,b)\otimes\beta_{1}(ba^{-1},b) =\sum_{c,s,t}t\otimes s^{gbd}\cdot\beta_{2}(c,b)\cdot\gamma(c,d)_{t}^{gbd}\cdot\gamma(a,c)_{s}. \tag{38}\]
Any change in the basis vectors can be achieved by a transformation of the following form:
\[[a,b]^{\prime} =\theta(a,b)[a,b]\,,\]
\[[a,m]^{\prime} =\varphi(a)[a,m]\,,\]
\[[m,a]^{\prime} =\psi(a)[m,a]\,,\]
\[[a]^{\prime} =\omega(a)[a]\,,\]
where \(\varphi,\psi\) and \(\omega\) are complex valued functions, and \(\theta\) is real. Under such a change of basis, the associator coefficients change in the following way:
\[\alpha^{\prime} =\delta\theta\cdot\alpha \tag{39}\]
\[\alpha_{1}^{\prime}(a,b) =\frac{\psi(ab)\theta(a,b)}{\psi(a)\psi(b)^{a}}\cdot\alpha_{1}(a,b) \tag{40}\]
\[\alpha_{2}^{\prime}(a,b) =\frac{\psi(b)\varphi(a)^{b}}{\psi(b)^{a}\varphi(a)}\cdot\alpha_{2}(a,b) \tag{41}\]
\[\alpha_{3}^{\prime}(a,b) =\frac{\varphi(b)\varphi(a)^{b}}{\varphi(ab)\theta(a,b)}\cdot\alpha_{3}(a,b) \tag{42}\]
\[\beta_{1}^{\prime}(a,b) =\frac{\omega(a^{-1}b)\theta(a,a^{-1}b)}{\varphi(a)^{gab}\omega(b)}\cdot\beta_{1}(a,b) \tag{43}\]
\[\beta_{2}^{\prime}(a,b) =\frac{\omega(b)^{a}\varphi(a)}{\omega(b)\psi(a)^{gab}}\cdot\beta_{2}(a,b) \tag{44}\]
\[\beta_{3}^{\prime}(a,b) =\frac{\psi(a)\omega(b)^{a}}{\theta(ba^{-1},a)\omega(ba^{-1})}\cdot\beta_{3}(a,b) \tag{45}\]
We will save the analysis of how \(\gamma\) changes until later in the paper. Unlike the classical and quaternionic cases, Equation 39 does not allow us to trivialize \(\alpha\). Normally we would set \(\theta=\alpha_{1}^{-1}\), but this doesn't make sense, because in general \(\alpha_{1}\) may take on complex values. However, Equation 25 does show us that \(|\alpha|=\delta|\alpha_{1}|\). Thus by setting
\[\theta(a,b):=\frac{1}{|\alpha_{1}(a,b)|}\,,\]
we may assume that \(|\alpha|\equiv|\alpha_{1}|\equiv 1\). We note at this point that we are still able to use \(\{\pm 1\}\)-valued \(\theta\)s to adjust the sign of \(\alpha\), without altering the norms.
Fortunately this will be all that we need to trivialize \(\alpha_{1}\). Observe that Equation 26 can be rearranged to look like
\[\frac{\alpha_{2}(a,bc)}{\alpha_{2}(a,c)^{b}\alpha_{2}(a,b)}\;=\;\frac{\alpha_{1}(b,c)}{\alpha_{1}(b,c)^{a}}\,.\]
Now let us fix an \(a\) that conjugates, and define \(f(x):=\alpha_{2}(a,x)^{-1}\). Under these conditions, the above equation becomes
\[\delta^{L}f\;=\;\alpha_{1}^{2}\,.\]
Choose a function \(\psi:G\to\mathbb{C}^{\times}\) that satisfies \(\psi^{2}=f\). It follows that
\[\theta(x,y):=\frac{(\delta^{L}\psi)(x,y)}{\alpha_{1}(x,y)}\in\{\pm 1\}\,.\]
By using this \(\psi\) and \(\theta\) in Equation 40, we can impose the condition that \(\alpha_{1}=1\). By way of Equation 25, this forces \(\alpha=1\) also. By the triangle axiom of monoidal categories, we may assume the following normalization conditions: For each of \(\alpha\), \(\alpha_{i}\), \(\beta_{1}\) and \(\beta_{3}\), any input being \(1\) implies the value is \(1\). For \(\beta_{2}\), for all \(b\in G\), \(\beta_{2}(1,b)=1\). We can further normalize by setting
\[\varphi(a):=\frac{\omega(1)\psi(a)^{ga}}{\omega(1)^{a}\beta_{2}(a,1)}\,,\]
which, by Equation 44, allows us to assume that \(\beta_{2}(a,1)\equiv 1\). Thus we have shown that any TY category of this form has a basis for the hom spaces for which the coefficients of the associator are normalized. Without loss of generality we assume all coefficients to be normalized from now on. By normalization, setting \(a=1\) or \(b=1\) in Equations 39-45 shows that only certain basis changes are allowed. The new restrictions are as follows:
\[\begin{split}\psi(1)&=\;\theta(a,1)\;=\;\theta(1,b)\,,\\ \varphi(a)&=\frac{\psi(a)^{ga}\omega(1)}{\omega(1)^{a}}\,,\\ \varphi(a)^{ga}\omega(1)&=\omega(a^{-1})\theta(a,a^{-1})\,,\\ \theta(a^{-1},a)\omega(a^{-1})&=\psi(a)\omega(1)^{a}\,.\end{split} \tag{46}\]
This shows that the quadruple \((\theta,\psi,\varphi,\omega)\) is completely determined by the triple \(\bigl(\theta,\psi,\omega(1)\bigr)\). The above equations imply that for any \(a\in G\),
\[\frac{\omega(1)^{a}}{\omega(1)}\;=\;\left(\frac{\omega(1)^{a}}{\omega(1)}\right)^{g}\,.\]
Notice that when \(g\) conjugates, this forces \(\omega(1)^{4}=1\). When comparing categories with the same fusion rules, if their associator coefficients differ by a change of bases, then they must be equivalent. When two categories are equivalent, they are often equivalent in multiple different ways. This is inconvenient for classification, since it widens the range of data that must be checked. Luckily, there is a way to normalize our equivalences by composing them with autoequivalences. An autoequivalence in this setting means any change of bases that does not alter any of the associator coefficients. Notice that the transformation resulting from using constant scalar factors
\[\bigl(\theta,\psi,\omega(1)\bigr)=(r_{1},r_{1},r_{2})\,,\]
for \(r_{1},r_{2}\in\mathbb{R}^{\times}\) sends all associator coefficients to themselves. In other words, this corresponds to an autoequivalence of the category. By composing an equivalence \(\bigl(\theta,\psi,\omega(1)\bigr)\) with an autoequivalence \(\bigl(\psi(1)^{-1},\psi(1)^{-1},|\omega(1)|^{-1}\bigr)\), we may assume that \(\theta(a,1)=\theta(1,b)=\psi(1)=1\), and that \(|\omega(1)|=1\). In the case when \(g\) conjugates, the transformation
\[\bigl(\theta,\psi,\omega(1)\bigr)=(1,1,\lambda)\]
also sends all the coefficients to themselves, so we can assume \(\omega(1)=1\).
However when \(g\) _doesn't_ conjugate, we may not be able to assume this. With our normalization assumptions in place, our next task is to determine what dependencies there are between the coefficients. To start, we can set \(c=1\) in Equation 33 to find that \(\alpha_{3}=1\). Next, we set \(c=1\) in Equation 28 to find that \(\beta_{1}=1\). By setting \(c=1\) in Equation 29 instead of 28, we discover that \(\beta_{3}=1\). Let us turn our attention now to \(\alpha_{2}\) and \(\beta_{2}\). By setting \(c=1\) in Equation 30, we can deduce that
\[\beta_{2}(a,b)=\alpha_{2}(a,b^{-1})^{-1}\,. \tag{47}\]
By setting \(c=1\) in Equation 31, then applying Equation 47, we obtain the following symmetry condition for \(\alpha_{2}\):
\[\alpha_{2}(b,a)=\alpha_{2}(a,b)^{gab}\,. \tag{48}\]
Equation 26 can be rearranged to express a kind of 'multiplicativity' statement
\[\alpha_{2}(a,bc)=\alpha_{2}(a,b)\alpha_{2}(a,c)^{b}\,. \tag{49}\]

**Note 6.4**.: When restricted to \(A\times A\), Equations 48 and 49 show that \(\alpha_{2}\) is a complex valued bicharacter. If \(g\) conjugates, then \(\alpha_{2}\) is conjugate-symmetric (one might call it hermitian), and if \(g\) doesn't conjugate, then \(\alpha_{2}\) is symmetric.

**Note 6.5**.: Here is an interpretation of this multiplicativity. There is a real algebra \(\mathbb{C}_{\sim}G\) defined similarly to the standard group algebra \(\mathbb{C}G\), except that group elements only commute with scalars up to conjugation:
\[b\cdot\lambda\;=\;\lambda^{b}\cdot b\]
Equation 49 can be understood as saying that for each \(a\in G\), the map
\[\alpha_{2}(a,-):\mathbb{C}_{\sim}G\to\operatorname{End}_{\mathbb{R}}(\mathbb{C})\]
endows \(\mathbb{C}\) with the structure of a representation.

Equation 27 is similar to Equation 26, and the corresponding multiplicativity statement is a necessary consequence of Equations 48 and 49. Let us now turn to the problem of simplifying the function \(\gamma\).

**Lemma 6.6**.:
\[\gamma(1,1)_{i}\;=\;\overline{i}^{g}\cdot\gamma(1,1)_{1}\]

Proof.: There are two summands of \(m\otimes m\) that correspond to \(\mathds{1}\), the component corresponding to \(1\) and the component corresponding to \(i\). By composing with the adjoint \([\mathds{1}]^{\dagger}\), we can isolate the coefficient \(\gamma(1,1)_{i}\) as follows
\[\gamma(1,1)_{i} =(\mathsf{id}_{m}\otimes[\mathds{1}])\circ\alpha_{m,m,m}\circ(\mathsf{id}_{m}\otimes(\overline{i})\otimes\mathsf{id}_{m})\circ([\mathds{1}]^{\dagger}\otimes\mathsf{id}_{m})\]
\[=(\mathsf{id}_{m}\otimes[\mathds{1}])\circ\alpha_{m,m,m}\circ(\overline{i}^{g}\otimes\mathsf{id}_{m}\otimes\mathsf{id}_{m})\circ([\mathds{1}]^{\dagger}\otimes\mathsf{id}_{m})\]
\[=(\overline{i}^{g}\otimes[\mathds{1}])\circ\alpha_{m,m,m}\circ([\mathds{1}]^{\dagger}\otimes\mathsf{id}_{m})\]
\[=(\overline{i}^{g})\circ(\mathsf{id}_{m}\otimes[\mathds{1}])\circ\alpha_{m,m,m}\circ([\mathds{1}]^{\dagger}\otimes\mathsf{id}_{m})\]
\[=\overline{i}^{g}\gamma(1,1)_{1}\,.\]

In Equation 36, by setting \(b=c=1\) we find that
\[\sum_{s}s\otimes\gamma(1,1)_{s}=\sum_{t}t^{a}\otimes\gamma(a,1)_{t}\,, \tag{50}\]
which implies that
\[\gamma(a,1)_{s} =(s^{2})^{|a|}\gamma(1,1)_{s} \tag{51}\]
\[\gamma(a,1)_{s} =\overline{s}^{ga}\gamma(1,1)_{1}\,,\]
where we have used Lemma 6.6 in the last line.
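As an aside, the algebra \(\mathbb{C}_{\sim}G\) of Note 6.5 admits a very small computational model. The following sketch is ours (it assumes \(G=\mathbb{Z}/2\mathbb{Z}\) with the nontrivial element conjugating, the simplest case); it only checks the defining relation \(b\cdot\lambda=\lambda^{b}\cdot b\):

```python
# Minimal model of C_~G from Note 6.5, for G = Z/2Z = {0, 1}, where the
# nontrivial element conjugates scalars as it moves past them.
# Elements are dicts {group element: complex coefficient}; illustrative only.

def mul(x, y):
    out = {0: 0j, 1: 0j}
    for g, lam in x.items():
        for h, mu in y.items():
            # moving mu past g applies the Galois action: g.mu = conj(mu) if g = 1
            mu_g = mu.conjugate() if g == 1 else mu
            out[(g + h) % 2] += lam * mu_g
    return out

w = {0: 0j, 1: 1 + 0j}    # the conjugating group element
lam = {0: 2 + 3j, 1: 0j}  # the scalar 2 + 3i

# w * lam == conj(lam) * w, i.e. the relation b.lambda = lambda^b.b
assert mul(w, lam) == mul({0: (2 + 3j).conjugate(), 1: 0j}, w)
print("relation w*lam = conj(lam)*w holds")
```

The same dictionary-based model extends to any \(G\) equipped with a degree map, by replacing the parity test with \(|g|\).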
Next, we set \(a=b^{-1}\) in Equation 37 to find
\[\sum_{s}s\otimes\gamma(c,b)_{s}^{a}=\sum_{t}t^{a}\otimes\gamma(c,1)_{t}\beta_{2}(b^{-1},c)\]
which implies that
\[\gamma(c,b)_{s}^{b}=(s^{2})^{|b|}\gamma(c,1)_{s}\beta_{2}(b^{-1},c)\,.\]
By relabelling \(c\mapsto a\), the above equation becomes
\[\gamma(a,b)_{s}^{b}=(s^{2})^{|b|}\gamma(a,1)_{s}\beta_{2}(b^{-1},a)\,. \tag{52}\]
Equations 51 and 52 combine to give
\[\gamma(a,b)_{s}^{b}=\overline{s}^{gab}\gamma(1,1)_{1}\beta_{2}(b^{-1},a)\,.\]
We can replace the \(\beta_{2}\) terms by using Equation 47 to get
\[\gamma(a,b)_{s}^{b}=\frac{\overline{s}^{gab}\gamma(1,1)_{1}}{\alpha_{2}(b^{-1},a^{-1})}\,.\]
The multiplicativity, normality and symmetry properties of \(\alpha_{2}\) can be used to show that the above simplifies to the following final form:
\[\gamma(a,b)_{s}^{b}=\frac{\overline{s}^{gab}\gamma(1,1)_{1}}{\alpha_{2}(a,b)^{g}} \tag{53}\]
which implies that
\[\gamma(a,b)_{s}=\frac{\overline{s}^{ga}\gamma(1,1)_{1}^{b}}{\alpha_{2}(a,b)^{gb}}\,. \tag{54}\]
Observe that Equation 54 contains the factor \(\overline{s}^{ga}\). This fact allows us to greatly reduce the complexity of Equations 34-37 by using a little algebra.

**Lemma 6.7**.: _Let \(u\) and \(v\) represent words in \(G\) possibly also with the symbol \(g\). Consider the element_
\[P_{u,v}:=\tfrac{1}{2}\left(1\otimes 1+i^{u}\otimes\overline{i}^{v}\right)\in\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}\,.\]
_This element is an idempotent, and it satisfies the convenient property that_
\[P_{u,v}\cdot(\lambda\otimes 1)=P_{u,v}\cdot(1\otimes\lambda^{uv})\,.\]

Proof.: (routine)

This property allows us to identify which projection \(P_{u,v}\) appears on each side of Equations 34-37, then pass all the complex scalars across the tensor symbol via the conjugation \((-)^{uv}\). Thus Equations 34-37 reduce to the following, much simpler equations
\[\alpha_{2}(a,c)^{abc}\gamma(c,b)_{1} =\gamma(c,a^{-1}b)_{1} \tag{55}\]
\[\alpha_{2}(b,a)\gamma(c,b)_{1}^{a} =\gamma(ca^{-1},b)_{1} \tag{56}\]
\[\gamma(c,b)_{1} =\beta_{2}(a,b)^{gc}\gamma(ac,b)_{1} \tag{57}\]
\[\gamma(c,b)_{1}^{a} =\gamma(c,ba)_{1}\beta_{2}(a,c)\,. \tag{58}\]
The reader should compare these with the original Equations 34-37 from [17]. With this simplification complete, we set \(b=1\) in Equation 55 and continue to reduce.
\[\alpha_{2}(a,c)^{ac}\gamma(c,1)_{1}=\gamma(c,a^{-1})_{1}\,,\]
\[\alpha_{2}(a,c)^{ac}\gamma(1,1)_{1}=\frac{\gamma(1,1)_{1}^{a}}{\alpha_{2}(c,a^{-1})^{ga}}\,,\]
\[\alpha_{2}(a,c)^{ac}\alpha_{2}(c,a^{-1})^{ga}=\frac{\gamma(1,1)_{1}^{a}}{\gamma(1,1)_{1}}\,,\]
\[\alpha_{2}(c,a)^{g}\alpha_{2}(c,a^{-1})^{ga}=\frac{\gamma(1,1)_{1}^{a}}{\gamma(1,1)_{1}}\,,\]
\[\left(\alpha_{2}(c,a^{-1})\alpha_{2}(c,a)^{a}\right)^{ga}=\frac{\gamma(1,1)_{1}^{a}}{\gamma(1,1)_{1}}\,,\]
\[\alpha_{2}(c,1)^{ga}=\frac{\gamma(1,1)_{1}^{a}}{\gamma(1,1)_{1}}\,,\]
\[1=\frac{\gamma(1,1)_{1}^{a}}{\gamma(1,1)_{1}}\,. \tag{59}\]
This new equation is true for all \(a\in G\), and so \(\gamma(1,1)_{1}\) must be a real number. We will come back to this equation later, but for now, let us examine Equation 38.
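Before doing so, the idempotents of Lemma 6.7 can be spot-checked numerically. The sketch below is ours: it models each tensor factor of \(\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}\) by the regular representation of \(\mathbb{C}\) as real \(2\times 2\) matrices, so that matrix equality is equivalent to equality in \(\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}\):

```python
import numpy as np

def M(z):  # real 2x2 model of a complex number (regular representation)
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

I2 = np.eye(2)

def check(u_conj: bool, v_conj: bool):
    iu = M(-1j if u_conj else 1j)    # i^u
    iv = M(1j if v_conj else -1j)    # conj(i)^v
    P = 0.5 * (np.kron(I2, I2) + np.kron(iu, iv))
    assert np.allclose(P @ P, P)     # P_{u,v} is idempotent
    lam = 0.3 + 0.7j                 # arbitrary test scalar
    lam_uv = lam.conjugate() if (u_conj ^ v_conj) else lam
    left = P @ np.kron(M(lam), I2)   # P . (lambda tensor 1)
    right = P @ np.kron(I2, M(lam_uv))  # P . (1 tensor lambda^{uv})
    assert np.allclose(left, right)

for u in (False, True):
    for v in (False, True):
        check(u, v)
print("P_{u,v} is idempotent and absorbs scalars across the tensor sign")
```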
Suppose \(d=b=1\) and \(a\neq 1\), then Equation 38 becomes
\[0 =\sum_{c,s,t}t\otimes s^{g}\cdot\gamma(c,1)_{t}^{g}\cdot\gamma(a,c)_{s}\]
\[0 =\sum_{c,s,t}t\otimes s^{g}\cdot\overline{t}^{c}\gamma(c,1)_{1}^{g}\cdot\overline{s}^{ga}\gamma(a,c)_{1}\]
\[0 =2\sum_{c,s}P_{1,c}\cdot\left(1\otimes\gamma(c,1)_{1}^{g}\cdot(s^{2})^{|a|}\gamma(a,c)_{1}\right) \tag{60}\]
\[0 =\sum_{c,s}P_{1,c}\cdot\left(1\otimes\gamma(c,1)_{1}^{g}\cdot(s^{2})^{|a|}\gamma(a,c)_{1}\right)\,.\]
Equation 60 is true for all \(a\in G\), but it is uninteresting when \(|a|=1\), so suppose \(|a|=0\) (\(a\in A\)). The computation continues
\[0 =2\cdot\sum_{c}P_{1,c}\cdot(1\otimes\gamma(c,1)_{1}^{g}\gamma(a,c)_{1})\]
\[0 =2\cdot\sum_{c}P_{1,c}\cdot\left(1\otimes\gamma(1,1)_{1}^{g}\left(\frac{\gamma(1,1)_{1}^{c}}{\alpha_{2}(a,c)^{gc}}\right)\right)\]
\[0 =2\cdot\sum_{c}P_{1,c}\cdot\left(1\otimes\frac{\gamma(1,1)_{1}^{c}}{\alpha_{2}(a,c)^{gc}}\right)\]
\[0 =\sum_{|c|=0}(1\otimes 1-i\otimes i)\cdot\left(1\otimes\frac{\gamma(1,1)_{1}^{c}}{\alpha_{2}(a,c)^{gc}}\right)\ +\ \sum_{|c|=1}(1\otimes 1+i\otimes i)\cdot\left(1\otimes\frac{\gamma(1,1)_{1}^{c}}{\alpha_{2}(a,c)^{gc}}\right)\,.\]
For ease of notation, let the tensor factor that depends on the index \(c\) be denoted by \(S(c)\), so that the above equation becomes
\[0=\sum_{|c|=0}(1\otimes 1-i\otimes i)\cdot(1\otimes S(c))\ +\ \sum_{|c|=1}(1\otimes 1+i\otimes i)\cdot(1\otimes S(c))\,.\]
By taking the real and imaginary parts of the left tensor factor, we find that
\[0 =\sum_{|c|=0}S(c)\ +\ \sum_{|c|=1}S(c)\;,\ \text{and}\]
\[0 =-\sum_{|c|=0}iS(c)\ +\ \sum_{|c|=1}iS(c)\;.\]
This is only possible if both summations are zero. Focusing on the \(|c|=0\) sum, we continue reducing the equation
\[0 =\sum_{|c|=0}\frac{\gamma(1,1)_{1}^{c}}{\alpha_{2}(a,c)^{gc}}\]
\[0 =\sum_{|c|=0}\frac{\gamma(1,1)_{1}}{\alpha_{2}(a,c)^{g}}\]
\[0 =\sum_{|c|=0}\frac{1}{\alpha_{2}(a,c)} \tag{61}\]
\[0 =\sum_{|c|=0}\alpha_{2}(a,c)\,.\]
Since Equation 61 holds for any \(a\neq 1\) in \(A\), we conclude that \(\alpha_{2}\) is nondegenerate when restricted to \(A\times A\). This implies in particular that \(A\) must be abelian, but it provides even more.

**Lemma 6.8**.: _The exact sequence \(A\hookrightarrow G\twoheadrightarrow\mathbb{Z}/2\mathbb{Z}\) is necessarily split, and \(\mathbb{Z}/2\mathbb{Z}\) acts on \(A\) by inversion._

Proof.: Let \(|w|=1\), and note that \(w^{2}\in A\). For all \(a\in A\), Equation 49 implies
\[\alpha_{2}(a,w^{2})\;=\;\alpha_{2}(a,w)\overline{\alpha_{2}(a,w)}\;=\;|\alpha_{2}(a,w)|^{2}\;=\;1\,.\]
Thus by nondegeneracy of \(\alpha_{2}\), \(w^{2}=1\in G\). We may use \(1\mapsto w\) as the desired splitting. Conjugation by \(w\) produces an automorphism \(a\mapsto waw\) of \(A\). For any \(b\in A\), we can use Equation 49 again (noting that \(wa=(waw)w\) since \(w^{2}=1\)) to find:
\[\alpha_{2}(b,w)\overline{\alpha_{2}(b,a)}\;=\;\alpha_{2}(b,wa)\;=\;\alpha_{2}\bigl(b,(waw)w\bigr)\;=\;\alpha_{2}(b,waw)\,\alpha_{2}(b,w)\]
which implies that
\[\alpha_{2}(b,a^{-1})\;=\;\alpha_{2}(b,a)^{-1}\;=\;\overline{\alpha_{2}(b,a)}\;=\;\alpha_{2}\bigl(b,waw\bigr)\,.\]
Nondegeneracy then implies that \(waw=a^{-1}\).

**Note 6.9**.: The groups \(G\) described in Lemma 6.8 are known as generalized dihedral groups.

Earlier we observed that Equation 59 implies that \(\gamma(1,1)_{1}\in\mathbb{R}\). However not all real numbers will work, because Equation 38 enforces a further restriction.
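Before extracting that restriction, we note for concreteness how the vanishing criterion of Equation 61 plays out in the simplest nondegenerate situation (an illustration of ours, in the spirit of Example 6.13 below): for \(A=\mathbb{Z}/n\mathbb{Z}\) with \(\alpha_{2}(a,c)=e^{2\pi iac/n}\), the geometric series
\[\sum_{c=0}^{n-1}e^{2\pi iac/n}\;=\;\frac{e^{2\pi ia}-1}{e^{2\pi ia/n}-1}\;=\;0\qquad(a\not\equiv 0\ \mathrm{mod}\ n)\]
vanishes precisely off the identity, which is exactly the nondegeneracy that Equation 61 detects.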
By setting \(b=1\) and \(d=a^{-1}\) in Equation 38 we find that
\[1\otimes 1 =\sum_{c,s,t}t\otimes s^{ga}\gamma(c,a^{-1})^{ga}_{t}\gamma(a,c)_{s}\]
\[=\sum_{c,s,t}t\otimes s^{ga}\overline{t}^{ac}\gamma(c,a^{-1})^{ga}_{1}\,\overline{s}^{ga}\gamma(a,c)_{1}\]
\[=\sum_{c,s}2P_{1,ac}\left(1\otimes\gamma(c,a^{-1})^{ga}_{1}\gamma(a,c)_{1}\right)\]
\[=4\sum_{c}P_{1,ac}\left(1\otimes\gamma(c,a^{-1})^{ga}_{1}\gamma(a,c)_{1}\right)\,.\]
By taking the real part of the left tensor factor, this becomes
\[1 =2\sum_{c}\gamma(c,a^{-1})^{ga}_{1}\gamma(a,c)_{1}\]
\[=2\sum_{c}\left(\frac{\gamma(1,1)^{a}_{1}}{\alpha_{2}(c,a^{-1})^{ga}}\right)^{ga}\left(\frac{\gamma(1,1)^{c}_{1}}{\alpha_{2}(a,c)^{gc}}\right)\]
\[=2\gamma(1,1)^{2}_{1}\sum_{c}\frac{1}{\alpha_{2}(c,a^{-1})\alpha_{2}(c,a)^{a}}\]
\[=2\gamma(1,1)^{2}_{1}\sum_{c}\frac{1}{\alpha_{2}(c,1)}\]
\[=2\gamma(1,1)^{2}_{1}|G|\,.\]
Solving for \(\gamma(1,1)_{1}\) we find that
\[\gamma(1,1)_{1}=\pm\frac{1}{\sqrt{2|G|}}\,. \tag{63}\]
This requirement for \(\gamma(1,1)_{1}\) is analogous to the previous requirements that appear in the classification for the split real and quaternionic TY categories. By combining Equations 49 and 48, it follows that
\[\alpha_{2}(a,b^{-1})\ =\ \frac{1}{\alpha_{2}(a,b)^{b}}\ \text{ and }\ \alpha_{2}(a^{-1},b)\ =\ \frac{1}{\alpha_{2}(a,b)^{a}}\,. \tag{64}\]
We can combine the equations above with Equation 47 to find that
\[\beta_{2}(a,b)\;=\;\alpha_{2}(a,b)^{b}\,. \tag{65}\]

### Statement of Theorem

With the above computations complete, we can write down necessary conditions for the pentagon equations to have solutions. Let \(G\cong A\rtimes\mathbb{Z}/2\mathbb{Z}\) be a finite generalized dihedral group, and let \(S^{1}_{\sim}\) denote the complex unit circle as a \(G\)-module with the canonical action of \(G\) that factors through \(\mathbb{Z}/2\mathbb{Z}\), where \(\mathbb{Z}/2\mathbb{Z}\) acts on \(S^{1}\subset\mathbb{C}^{\times}\) by complex conjugation.

**Definition 6.10**.: A bicocycle for \(G\) is a function \(\chi:G\times G\to S^{1}_{\sim}\) that satisfies the following properties
\[\chi(a,bc) =\chi(a,b)\chi(a,c)^{b}\,,\]
\[\chi(ab,c) =\chi(a,c)^{b}\chi(b,c)\,.\]
A bicocycle \(\chi\) is said to be symmetric if it satisfies the additional relation
\[\chi(a,b)=\chi(b,a)^{gab}\,.\]

**Theorem 6.11**.: _Let \(\tau=\nicefrac{\pm 1}{\sqrt{2|G|}}\), let \((-)^{g}\in\text{Gal}(\mathbb{C}/\mathbb{R})\), and let \(\chi:G\times G\to S^{1}_{\sim}\) be a symmetric bicocycle on \(G\) with respect to \((-)^{g}\), whose restriction \(\chi\mid_{A\times A}\) is a nondegenerate bicharacter._

_A quadruple of such data gives rise to a non-split Tambara-Yamagami category \(\mathcal{C}_{\mathbb{C}}(G,g,\tau,\chi)\), with \(\operatorname{End}(\mathds{1})\cong\mathbb{R}\) and \(\operatorname{End}(m)\cong\mathbb{C}\). Furthermore, all equivalence classes of such categories arise in this way. Two categories \(\mathcal{C}_{\mathbb{C}}(G,g,\tau,\chi)\) and \(\mathcal{C}_{\mathbb{C}}(G^{\prime},g^{\prime},\tau^{\prime},\chi^{\prime})\) are equivalent if and only if \(g=g^{\prime}\), and there exists a normalized equivalence between them, which consists of the following data:_

1. _An isomorphism_ \(f:G\to G^{\prime}\)_,_
2. _A map_ \((-)^{h}:\mathbb{C}\to\mathbb{C}\)_, either the identity or complex conjugation,_
3.
_A scalar_ \(\lambda\in S^{1}\subset\mathbb{C}\)_,_

_subject to the following conditions for all \(a,b\in G\):_
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr) =\frac{\lambda\cdot\lambda^{ab}}{\lambda^{a}\cdot\lambda^{b}}\cdot\chi(a,b)^{h}\;, \tag{66}\]
\[\frac{\tau^{\prime}}{\tau} =\frac{\lambda}{\lambda^{g}}\,. \tag{67}\]

Proof.: (of Theorem 6.11) To prove the existence, we will construct the category \(\mathcal{C}=\mathcal{C}_{\mathbb{C}}(g,\chi,\tau)\), and show that its associator satisfies the pentagon Equations 23-38. For a given \(a\in G\), let us denote the two projections \(m\otimes m\to a\) as \(\pi_{a,s}=[a](\mathsf{id}_{m}\otimes\overline{s})\), and the two inclusions as \(\iota_{a,t}=(\mathsf{id}_{m}\otimes t)[a]^{\dagger}\). The associator for \(\mathcal{C}\) is given by the following equations
\[\alpha_{a,b,c} =\mathsf{id}_{abc},\]
\[\alpha_{a,b,m} =\alpha_{m,a,b}\;=\;1,\]
\[\alpha_{a,m,b} =\chi(a,b)^{ab},\]
\[\alpha_{m,m,a} =\alpha_{a,m,m}\;=\;1\otimes 1,\]
\[\alpha_{m,a,m} =\sum_{\begin{subarray}{c}b\in G\\ t\in\{1,i\}\end{subarray}}\left(1\otimes\chi(a,b)^{b}t^{b}\right)\left(\iota_{b,1}\pi_{b,t}\right),\]
\[\alpha_{m,m,m} =\sum_{\begin{subarray}{c}a,b\in G\\ s,t\in\{1,i\}\end{subarray}}\left(1\otimes\iota_{a,t}\right)\circ\left(\frac{\overline{s}^{gab}t^{b}\tau}{\chi(a,b)^{g}}\right)\circ\left(\pi_{b,s}\otimes 1\right).\]
The left and right unitors \(\ell_{X}\) and \(r_{X}\) are identities for all simple objects \(X\). The associators above are chosen so that the corresponding \(\alpha\), \(\alpha_{i}\)s, \(\beta_{j}\)s and \(\gamma\) of the category \(\mathcal{C}\) will satisfy the following identities
\[\alpha\equiv\alpha_{1}\equiv\alpha_{3}\equiv\beta_{1}\equiv\beta_{3}\equiv 1 \tag{68}\]
\[\alpha_{2}(a,b)=\chi(a,b)=\beta_{2}(a,b)^{b},\]
\[\gamma(a,b)_{s}=\frac{\overline{s}^{ga}\tau}{\chi(a,b)^{gb}}\,. \tag{69}\]
The reader will recognize these as the relations that we have already shown to be necessary, assuming we have chosen the correct basis vectors. For brevity, we will refer to the collection \(\{\alpha,\alpha_{1},\alpha_{3},\beta_{1},\beta_{3}\}\) as the easy coefficients.

Now that we have translated the associators into our existing notation, we can begin to analyze the validity of Equations 23-38. Equations 23, 24, 25, 28, and 29 are immediate by the triviality of the easy coefficients. Equations 26, 27, and 30 - 33 follow from the fact that \(\chi\) is a symmetric bicocycle. Equations 34 - 37 are all similar to one another. Since the easy coefficients are trivialized, and since \(\gamma(a,b)_{s}\) has the factor \(\overline{s}^{ga}\) as in Equation 69, we may equivalently verify Equations 55 - 58 instead. These reduced equations are immediate from Equations 68 and 69. Finally it is necessary to prove Equation 38. We begin by reducing the right-hand side.
\[\sum_{c,s,t}t\otimes s^{gbd}\cdot\beta_{2}(c,b)\cdot\gamma(c,d)_{t} ^{gbd}\cdot\gamma(a,c)_{s}\] \[=\sum_{c,s,t}t\otimes s^{gbd}\cdot\chi(c,b)^{b}\cdot\frac{\overline {t}^{cbd}\tau}{\chi(c,d)^{b}}\cdot\frac{\overline{s}^{ga}\tau}{\chi(a,c)^{gc}}\] \[=\sum_{c,s,t}t\otimes s^{gbd}\cdot\chi(c,b)^{b}\cdot\frac{\overline {t}^{cbd}\tau}{\chi(c,d)^{b}}\cdot\frac{\overline{s}^{ga}\tau}{\chi(c,a)^{a}}\] \[=\tau^{2}\sum_{c,s,t}t\otimes s^{gbd}\overline{s}^{ga}\overline{ t}^{cbd}\cdot\frac{1}{\chi(c,b^{-1})\chi(c,d)^{b}\chi(c,a)^{a}}\] \[=\left(\frac{1}{2|G|}\right)\sum_{c,t}t\otimes\sum_{s}\left(s^{g bd}\overline{s}^{ga}\right)\overline{t}^{cbd}\cdot\frac{1}{\chi(c,b^{-1}d) \chi(c,a)^{a}} \tag{70}\] \[=\left(\frac{1}{2|G|}\right)\left(1+(-1)^{|abd|}\right)\sum_{c,t }t\otimes\overline{t}^{cbd}\cdot\frac{1}{\chi(c,ab^{-1}d)^{a}}\,.\] If \(|abd|=1\), then the entire expression is zero, and this matches the left-hand side of Equation 38, so there is nothing to prove. When \(d=ba^{-1}\), Expression 70 becomes \[=\left(\frac{1}{2|G|}\right)(2)\sum_{c,t}t\otimes\overline{t}^{ac} \cdot\frac{1}{\chi(c,1)^{a}}\] \[=\left(\frac{1}{|G|}\right)\sum_{c,t}t\otimes\overline{t}^{ac}\] \[=\left(\frac{1}{|G|}\right)\sum_{c}\left(1\otimes 1\;+\;i\otimes \overline{i}^{ac}\right)\] \[=\left(\frac{1}{|G|}\right)|G|\left(1\otimes 1\right)\] \[=1\otimes 1\,.\] This also agrees with the left-hand side of Equation 38. Thus the only case left to analyze is the case when both \(|abd|=0\) and \(d\neq ba^{-1}\). Let us set \(f=ab^{-1}d\in A\) in Equation 70 to continue the computation in this final case. \[=\left(\frac{1}{2|G|}\right)(2)\sum_{c,t}t\otimes\overline{t}^{ca} \cdot\frac{1}{\chi(c,f)^{a}}\] \[=\left(\frac{1}{|G|}\right)\sum_{c}2P_{1,ca}\left(1\otimes\frac{1} {\chi(c,f)^{a}}\right)\] \[=\left(\frac{1}{|G|}\right)\sum_{c}2P_{1,ca}\left(\frac{1}{\chi(c, f)^{c}}\otimes 1\right)\] \[=\left(\frac{1}{|G|}\right)\sum_{c}2P_{1,ca}\big{(}\chi(c^{-1},f) \otimes 1\big{)} \tag{71}\] \[=\left(\frac{1}{|G|}\right)\sum_{c}2P_{1,ca}\big{(}\chi(c,f) \otimes 1\big{)}\,.\] Since \(G\) is generalized dihedral, we can write every \(c\in G\) as \(c=uw^{\epsilon}\), where \(u\in A\), \(|w|=1\) and \(\epsilon\in\{0,1\}\). Using this description, Expression 71 becomes \[=\left(\frac{1}{|G|}\right)\left(2P_{1,a}\sum_{u\in A}\chi(u,f) \otimes 1\ +\ 2P_{1,wa}\sum_{u\in A}\chi(uw,f)\otimes 1\right)\] \[=\left(\frac{1}{|G|}\right)\left(2P_{1,wa}\sum_{u\in A}\chi(uw,f )\otimes 1\right)\] \[=\left(\frac{1}{|G|}\right)\left(2P_{1,wa}\sum_{u\in A}\overline {\chi(u,f)}\chi(w,f)\otimes 1\right)\ =\ 0.\] Here we have used nondegeneracy of \(\chi\mid_{A\times A}\) to eliminate the two summations. This again matches the left-hand side of Equation 38, so this equation is satisfied in all cases. This completes the proof of the pentagon equations, and thus establishes the existence of the monoidal categories \(\mathcal{C}_{\mathbb{C}}(G,g,\tau,\chi)\). Now suppose there is an \(\mathbb{R}\)-linear monoidal equivalence \[(F,J):\mathcal{C}:=\mathcal{C}_{\mathbb{C}}(g,\tau,\chi)\to\mathcal{C}_{ \mathbb{C}}(g^{\prime},\tau^{\prime},\chi^{\prime})=:\mathcal{C}^{\prime}\,.\] By virtue of being an equivalence, \(F\) must send \(m\) in \(\mathcal{C}\) to \(m^{\prime}\) in \(\mathcal{C}^{\prime}\). Since \((F,J)\) is monoidal, it must restrict to a group isomorphism \(f:G\to G^{\prime}\). Because of this, we may assume that \(G=G^{\prime}\), and that \(f:G\to G\) is an automorphism. 
Since \(F\) is \(\mathbb{R}\)-linear, it must induce an \(\mathbb{R}\)-linear isomorphism \(\mathbb{C}\cong\mathrm{End}(m)\to\mathrm{End}(m^{\prime})\cong\mathbb{C}\). Since this Galois group consists of the identity and complex conjugation, let us denote \(\lambda^{h}:=F(\lambda)\in\mathbb{C}\cong\mathrm{End}(m^{\prime})\). Monoidal functors preserve duals, so no matter how \(h\) acts, it must be the case that \(g=g^{\prime}\). Said another way, the property of \(m\) being directly self-dual (\(g=\mathsf{id}\)), or conjugately self-dual (\(g=\overline{(\,)}\)) is an invariant of the monoidal equivalence class of \(\mathcal{C}_{\mathbb{C}}(g,\tau,\chi)\). The monoidal structure map \(J\) is required to satisfy a hexagon relation which is determined by a sequence of three objects. In this case the combinatorics dictate that there are \(2^{3}\) equations that must be satisfied. These requirements are analogous to Equations 39 - 45 and one additional equation relating to \(\gamma\). Here are these 8 equations as they would appear in our current context.
\[1 =\delta\theta \tag{72}\]
\[\theta =\delta^{L}(\psi) \tag{73}\]
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr) =\frac{\psi(b)\varphi(a)^{b}}{\psi(b)^{a}\varphi(a)}\cdot\chi(a,b)^{h} \tag{74}\]
\[\theta =\delta^{R}(\varphi) \tag{75}\]
\[\varphi(a)^{gab}\omega(b) =\omega(a^{-1}b)\theta(a,a^{-1}b) \tag{76}\]
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr)^{b} =\frac{\omega(b)^{a}\varphi(a)}{\omega(b)\psi(a)^{gab}}\cdot\chi(a,b)^{hb} \tag{77}\]
\[\theta(ba^{-1},a)\omega(ba^{-1}) =\psi(a)\omega(b)^{a} \tag{78}\]
\[\sum_{t}\frac{t}{\psi(a)^{gb}}\otimes\frac{\gamma^{\prime}\Bigl(f(a),f(b)\Bigr)_{t}}{\omega(a)} =\sum_{s}\frac{s^{h}}{\omega(b)}\otimes\frac{\gamma(a,b)^{h}_{s}}{\varphi(b)} \tag{79}\]
By using Equation 69, we can once again apply Lemma 6.7 to reduce Equation 79 to the following
\[\frac{\tau^{\prime}}{\psi(a)^{ab}\omega(a)\chi^{\prime}\bigl(f(a),f(b)\bigr)^{gb}}=\frac{\tau}{\omega(b)^{ga}\varphi(b)\chi(a,b)^{ghb}} \tag{80}\]
Respectively setting \(a=1\) in Equations 76, 78, and 77 shows that
\[\varphi(1) =\theta(1,b), \tag{81}\]
\[\psi(1) =\theta(b,1),\quad\text{and} \tag{82}\]
\[\varphi(1) =\psi(1)\;. \tag{83}\]
We wish to show that there is a convenient normal form for these equations, as this will simplify the process of determining equivalence, but this cannot be done using Equations 72 - 79 alone. In exactly the same way we were able to normalize Equations 23 - 38 using a change of basis, we can simplify Equations 72 - 79 by using a monoidal natural isomorphism. If \(\mu:(F,J)\to(F^{\prime},J^{\prime})\) is a monoidal natural isomorphism from our monoidal functor to an alternative monoidal functor, then it follows that \(F=F^{\prime}\) on the nose. Thus we are simply asking about how to choose an alternate, but equivalent, monoidal structure \(J^{\prime}\). The modifications that we are allowed to make result in the following equations.
\[\theta^{\prime} =\theta\cdot\delta(\mu_{-}) \tag{84}\]
\[\varphi^{\prime}(a) =\frac{\mu_{m}\mu_{a}}{\mu_{m}^{a}}\cdot\varphi(a) \tag{85}\]
\[\psi^{\prime}(a) =\frac{\mu_{m}\mu_{a}}{\mu_{m}^{a}}\cdot\psi(a) \tag{86}\]
\[\omega^{\prime}(a) =\frac{\mu_{m}^{ga}\mu_{m}}{\mu_{a}}\cdot\omega(a) \tag{87}\]
Equations 81 - 83 imply that \(\psi(1)=\varphi(1)\) is real. Thus we can use a \(\mu\) such that \(\mu_{1}=\varphi(1)^{-1}\), and all other \(\mu_{X}\) trivial, in order to assume that \(\psi(1)=\varphi(1)=\theta(b,1)=\theta(1,b)=1\). By taking the norm of both sides of Equation 73, we find that \(|\theta|=\delta|\psi|\).
If we set \(\mu_{a}:=|\psi(a)|^{-1}\), this allows us to assume that \(|\theta|=1\), and hence that \(\theta(a,b)\in\{\pm 1\}\). Now set \(b=a\) in Equations 76 and 78 to find that
\[\varphi(a) =\frac{\omega(1)^{g}}{\omega(a)^{g}} \tag{88}\]
\[\psi(a) =\frac{\omega(1)}{\omega(a)^{a}}\,. \tag{89}\]
Equation 88 allows us to express \(\omega\) in terms of \(\varphi\), and using this we can expand Equation 77 for arbitrary \(a\) and \(b\) to find that
\[\frac{\varphi(a)}{\varphi(a)^{b}}\ =\ \frac{\varphi(b)}{\varphi(b)^{a}}. \tag{90}\]
Case analysis shows that Equation 90 implies that there is some \(\lambda\in S^{1}_{\sim}\) such that \(\varphi^{2}=\delta\lambda^{2}\). By taking square roots, we find that \(\varphi(a)=\pm(\delta\lambda)(a)\) with the sign possibly depending on \(a\). We have used the magnitude of \(\mu_{a}\), but we are still free to use the sign of \(\mu_{a}\) in Equation 85 to ensure that
\[\varphi(a)=\frac{\lambda}{\lambda^{a}}\,, \tag{91}\]
which eliminates the sign ambiguity. Next we can use \(\mu_{m}=\lambda^{-1}\) in Equation 85 to assume that \(\varphi\equiv 1\). In light of \(\varphi\) being trivial, Equation 88 implies that \(\omega(a)=\omega(1)\), so \(\omega\) is constant. This combines with Equation 89 to yield a nice formula for \(\psi\) in terms of the constant \(\omega(1)\). In summary, after normalization we arrive at the following formulas
\[\varphi(a) =1 \tag{92}\]
\[\omega(a) =\omega(1) \tag{93}\]
\[\psi(a) =\frac{\omega(1)}{\omega(1)^{a}}\,. \tag{94}\]
These new formulas can be combined with Equation 74 to produce
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr)=\frac{\omega(1)\cdot\omega(1)^{ab}}{\omega(1)^{a}\cdot\omega(1)^{b}}\cdot\chi(a,b)^{h}\,. \tag{95}\]
They can also be combined with Equations 74 and 77 to imply that
\[\omega(1)^{2}\;=\;\bigl(\omega(1)^{2}\bigr)^{g}\,. \tag{96}\]
Finally Equations 92, 93, 94, and 96 can be used to reduce Equation 80 to derive
\[\frac{\tau^{\prime}}{\tau}\;=\;\frac{\omega(1)}{\omega(1)^{g}}\;. \tag{97}\]
By setting \(\omega(1)=\lambda\), the reader will recognize Equations 95 and 97 as Conditions 66 and 67 respectively from the statement of the theorem. Thus we have shown that a generic equivalence forces \(g=g^{\prime}\), gives rise to the data \((f,h,\lambda)\) stated in the theorem, and makes Conditions 66 and 67 necessary.

We now turn to the question of sufficiency. Suppose that \(g=g^{\prime}\) and that the data \((f,h,\lambda)\) are given. Then the pair \((f,h)\) uniquely determines the underlying functor \(F:\mathcal{C}_{\mathbb{C}}(g,\tau,\chi)\to\mathcal{C}_{\mathbb{C}}(g,\tau^{\prime},\chi^{\prime})\). Define the functions
\[J_{a,b}=\mathsf{id}_{a\otimes b}\;,\quad J_{a,m}=\mathsf{id}_{a\otimes m}\;,\quad J_{m,a}=\left(\frac{\lambda}{\lambda^{a}}\right)\otimes\mathsf{id}_{a}\;,\quad J_{m,m}=\mathsf{id}_{m}\otimes\lambda\;.\]
These are the components of a monoidal structure map, and are clearly isomorphisms. This monoidal structure map \(J\) can be described in terms of complex valued coordinate functions \((\theta,\varphi,\psi,\omega)\) as follows
\[\theta(a,b)=1\;,\quad\;\varphi(a)=1\;,\quad\;\;\psi(a)=\frac{\lambda}{\lambda^{a}}\;,\quad\;\;\omega(a)=\lambda\;.\]
The fact that these coefficient functions satisfy Equations 72 through 79 is easy to check. We comment that Condition 66 is used to prove Equation 74, Condition 67 is used to prove Equations 76 and 78, and both Conditions 66 and 67 are necessary to prove Equations 77 and 79. Thus the pair \((F,J)\) is a monoidal equivalence, and the theorem is proven.
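Before turning to examples, it is worth noting that Condition 67 can produce a genuine sign change (a quick check of ours, used implicitly in Example 6.12 below): when \(g\) acts by conjugation and \(\lambda=i\),
\[\frac{\tau^{\prime}}{\tau}\;=\;\frac{\lambda}{\lambda^{g}}\;=\;\frac{i}{-i}\;=\;-1,\qquad\text{while}\qquad\frac{\lambda\cdot\lambda^{w^{2}}}{\lambda^{w}\cdot\lambda^{w}}\;=\;\frac{i\cdot i}{(-i)(-i)}\;=\;1\quad(|w|=1),\]
so the sign of \(\tau\) can be flipped without disturbing Condition 66.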
**Example 6.12**.: The simplest dihedral group is the group \(G=D_{2\cdot 1}\cong\mathbb{Z}/2\mathbb{Z}\). This corresponds to the case where \(A\) is the trivial group. Let us denote the nontrivial element of \(G\) by \(w\). By using Theorem 6.11, we find that there are exactly four categories with complex \(m\) and group \(G\). Upon base extension to \(\mathbb{C}\), these categories become pointed, with fusion rules corresponding to \(\mathbb{Z}/4\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\).

When \(g\) conjugates, the symmetry of \(\chi\) implies that \(\chi(w,w)=\pm 1\). Condition 67 implies that it is possible for \(\tau^{\prime}=-\tau\) by setting \(\lambda=\pm i\). This shows that there is an equivalence \(\mathcal{C}(g,\chi,\tau)\simeq\mathcal{C}(g,\chi,-\tau)\). Thus the only relevant invariant of these categories is the number \(\chi(w,w)\). The case where \(\chi(w,w)=1\) becomes \(\mathbb{C}\text{-}\mathrm{Vec}_{(\mathbb{Z}/2\mathbb{Z})^{2}}\) upon extension to \(\mathbb{C}\), while the case where \(\chi(w,w)=-1\) becomes \(\mathbb{C}\text{-}\mathrm{Vec}_{(\mathbb{Z}/2\mathbb{Z})^{2}}^{\xi}\) upon extension to \(\mathbb{C}\), where
\[\xi(a^{i}b^{j},a^{k}b^{\ell},a^{m}b^{n})=(-1)^{ikm+j\ell n}\]
is a representative cocycle for the associator. Using the descent theory of [12], it can be verified that these are the only two cohomology classes in \(H^{3}\bigl((\mathbb{Z}/2\mathbb{Z})^{2};\mathbb{C}^{\times}\bigr)\) for which the corresponding pointed category admits a real form.

When \(g\) doesn't conjugate, Condition 67 implies that \(\tau^{\prime}=\tau\). Theorem 6.11 implies that the scalar \(\chi(w,w)\) can always be normalized to be \(1\) by choosing any \(\lambda\) such that \(\lambda^{4}=\chi(w,w)^{-1}\). Thus the only relevant invariant of these categories is \(\tau=\pm\frac{1}{2}\). The case where \(\tau=\frac{1}{2}\) becomes \(\mathbb{C}\)-Vec\({}_{\mathbb{Z}/4\mathbb{Z}}\) upon extension to \(\mathbb{C}\), while the case where \(\tau=-\frac{1}{2}\) becomes \(\mathbb{C}\)-Vec\({}_{\mathbb{Z}/4\mathbb{Z}}^{\upsilon^{2}}\), where
\[\upsilon^{2}(a^{i},a^{j},a^{k})=(-1)^{i\cdot\lfloor\frac{j+k}{4}\rfloor}\]
represents the unique cohomology class of order two in \(H^{3}(\mathbb{Z}/4\mathbb{Z};\mathbb{C}^{\times})\). Descent theory again verifies that these are the only two associators that allow the category to have a real form.

**Example 6.13**.: When \(A=\mathbb{Z}/n\mathbb{Z}\), the group is \(G=D_{2\cdot n}\), the dihedral group of order \(2n\). If \(a\in A\) is a generator, then \(\chi(a,a)\) must be a primitive \(n^{\text{th}}\) root of unity. The symmetry condition then implies that \(g\) is allowed to conjugate only in the case where \(A=\mathbb{Z}/2\mathbb{Z}\), and for all other cases \(g\) must not conjugate. Note that this restriction is only for classical dihedral groups. For generalized dihedral groups where \(A\) is non-cyclic, there are typically more options.

## 7. Analysis of Complex Galois case

The final case under consideration is where all simple objects are complex, and \(m\) is the unique Galois nontrivial simple object. The analysis follows the same pattern as before, with care taken now to keep track of each time the Galois action of \(m\) is applied.
The pentagon equations are as follows
\[\delta\alpha =1 \tag{98}\]
\[\delta\alpha_{3} =\alpha^{-1} \tag{99}\]
\[\delta\alpha_{1} =\overline{\alpha} \tag{100}\]
\[\alpha_{2}(a,bc) =\alpha_{2}(a,c)\alpha_{2}(a,b) \tag{101}\]
\[\alpha_{2}(ab,c) =\alpha_{2}(b,c)\alpha_{2}(a,c) \tag{102}\]
\[\overline{\alpha(a,b,b^{-1}a^{-1}c)}\beta_{1}(ab,c) =\beta_{1}(b,a^{-1}c)\beta_{1}(a,c)\overline{\alpha_{3}(a,b)} \tag{103}\]
\[\beta_{3}(ab,c)\overline{\alpha(cb^{-1}a^{-1},a,b)} =\alpha_{1}(a,b)\beta_{3}(b,c)\beta_{3}(a,cb^{-1}) \tag{104}\]
\[\beta_{2}(b,c) =\beta_{2}(b,a^{-1}c)\overline{\alpha_{2}(a,b)} \tag{105}\]
\[\beta_{2}(a,c) =\alpha_{2}(a,b)\beta_{2}(a,cb^{-1}) \tag{106}\]
\[\beta_{1}(a,c)\beta_{3}(b,c) =\beta_{3}(b,a^{-1}c)\overline{\alpha(a,a^{-1}cb^{-1},b)}\beta_{1}(a,cb^{-1}) \tag{107}\]
\[\beta_{2}(a,c)\beta_{2}(b,c) =\alpha_{3}(a,b)\beta_{2}(ab,c)\overline{\alpha_{1}(a,b)} \tag{108}\]
\[\alpha_{2}(a,c)\gamma(c,b) =\overline{\beta_{1}(a,b)}\alpha_{3}(a,a^{-1}b)\gamma(c,a^{-1}b) \tag{109}\]
\[\alpha_{2}(b,a)\gamma(c,b) =\beta_{3}(a,c)\alpha_{1}(ca^{-1},a)\gamma(ca^{-1},b) \tag{110}\]
\[\alpha_{1}(a,c)\gamma(c,b) =\overline{\beta_{2}(a,b)}\beta_{1}(a,ac)\gamma(ac,b) \tag{111}\]
\[\alpha_{3}(b,a)\gamma(c,b) =\beta_{2}(a,c)\overline{\beta_{3}(a,ba)}\gamma(c,ba) \tag{112}\]
\[\delta_{d,ba^{-1}}\beta_{3}(a,b)\beta_{1}(ba^{-1},b) =\sum_{c}\beta_{2}(c,b)\overline{\gamma(c,d)}\gamma(a,c) \tag{113}\]
The change of basis formulas are
\[\alpha^{\prime} =\delta\theta\cdot\alpha \tag{114}\]
\[\alpha^{\prime}_{1}(a,b) =\frac{\overline{\psi}(ab)\theta(a,b)}{\overline{\psi}(a)\overline{\psi}(b)}\cdot\alpha_{1}(a,b) \tag{115}\]
\[\alpha^{\prime}_{2}(a,b) =\alpha_{2}(a,b) \tag{116}\]
\[\alpha^{\prime}_{3}(a,b) =\frac{\varphi(b)\varphi(a)}{\varphi(ab)\theta(a,b)}\cdot\alpha_{3}(a,b) \tag{117}\]
\[\beta^{\prime}_{1}(a,b) =\frac{\omega(a^{-1}b)\theta(a,a^{-1}b)}{\overline{\varphi}(a)\omega(b)}\cdot\beta_{1}(a,b) \tag{118}\]
\[\beta^{\prime}_{2}(a,b) =\frac{\varphi(a)}{\overline{\psi}(a)}\cdot\beta_{2}(a,b) \tag{119}\]
\[\beta^{\prime}_{3}(a,b) =\frac{\overline{\psi}(a)\overline{\omega}(b)}{\theta(ba^{-1},a)\overline{\omega}(ba^{-1})}\cdot\beta_{3}(a,b) \tag{120}\]
The sequence of deductions that follows is only a superficial modification of the original argument of [17], but we include it here for completeness. By setting \(\theta=\alpha_{3}\), and \(\varphi\equiv 1\) in Equation 117, we may assume that \(\alpha\equiv 1\) and \(\alpha_{3}\equiv 1\). Setting \(\overline{\psi}(a)=\beta_{2}(a,1)\) in Equation 119 allows us to assume that \(\beta_{2}(a,1)\equiv 1\). Next, we can substitute \(b=1\) into Equation 118 to find that
\[\beta_{1}^{\prime}(a,1)=\frac{\omega(a^{-1})\theta(a,a^{-1})}{\omega(1)}\cdot\beta_{1}(a,1)\,.\]
By rearranging this formula, we find that if we set
\[\omega(a^{-1}):=\frac{\omega(1)}{\theta(a,a^{-1})\beta_{1}(a,1)}\,,\]
this will allow us to assume that \(\beta_{1}(a,1)\equiv 1\). Upon substituting \(c=1\) into Equation 103, this new normalization shows that \(\beta_{1}\equiv 1\). Setting \(a=c\) in Equation 105, and \(b=c\) in Equation 106 imply that
\[\alpha_{2}(b,a)\;=\;\beta_{2}(b,a)\;=\;\overline{\alpha_{2}(a,b)}\,.\]
By Equations 101 and 102, \(\beta_{2}\) is a bicharacter, and thus Equation 108 implies that \(\alpha_{1}\equiv 1\).
Equation 107 implies that \(\beta_{3}(a,b)=\beta_{3}(a,1)\), and by setting \(a=b\) in Equation 109, we find that
\[\gamma(a,b)=\alpha_{2}(a,b)\gamma(a,1)\,.\]
We can combine these with Equations 110 and 111 to obtain
\[\gamma(c,1)\;=\;\beta_{3}(a,1)\gamma(ca^{-1},1)\qquad\gamma(c,1)\;=\;\gamma(ac,1)\,,\]
which shows that \(\beta_{3}\equiv 1\) and
\[\gamma(a,b)\;=\;\alpha_{2}(a,b)\gamma(1,1)\,.\]
With all these observations in place, Equation 113 reduces to
\[\delta_{d,ba^{-1}}\;=\;\gamma(1,1)^{2}\sum_{c\in G}\alpha_{2}\bigl(c,ba^{-1}d^{-1}\bigr)\,,\]
which is equivalent to \(\alpha_{2}\) being nondegenerate, and \(\gamma(1,1)^{2}|G|=1\).

**Theorem 7.1**.: _Let \(\tau=\nicefrac{\pm 1}{\sqrt{|G|}}\), and let \(\chi:G\times G\to\mathbb{C}^{\times}\) be a nondegenerate skew-symmetric bicharacter on \(G\)._

_A triple of such data gives rise to a non-split Tambara-Yamagami category \(\mathcal{C}_{\overline{\mathbb{C}}}(G,\tau,\chi)\), with \(\operatorname{End}(X)\cong\mathbb{C}\) for every simple object \(X\). Furthermore, all equivalence classes of such categories arise in this way. Two categories \(\mathcal{C}_{\overline{\mathbb{C}}}(G,\tau,\chi)\) and \(\mathcal{C}_{\overline{\mathbb{C}}}(G^{\prime},\tau^{\prime},\chi^{\prime})\) are equivalent if and only if \(\tau=\tau^{\prime}\), and there exist isomorphisms:_

1. _\(f:G\to G^{\prime}\), and_
2. _\((-)^{h}:\mathbb{C}\to\mathbb{C}\), either the identity or complex conjugation,_

_subject to the condition that for all \(a,b\in G\):_
\[\chi^{\prime}\Bigl(f(a),f(b)\Bigr)=\chi(a,b)^{h}\;. \tag{121}\]

Proof.: The first statement follows from a construction of the category \(\mathcal{C}_{\overline{\mathbb{C}}}(\tau,\chi)\), and this is completely analogous to the construction in [13]. The 'furthermore' statement follows from the reduction process outlined above. The main subtlety in the equivalence classification comes from the fact that these categories are only fusion over \(\mathbb{R}\) and not over \(\mathbb{C}\). This means that functors are allowed to act by nontrivial homomorphisms on the algebras \(\operatorname{End}(X)\). The equivalences are monoidal, and the naturality property of the monoidal structure map dictates that all of the algebra homomorphisms \(F:\operatorname{End}(X)\to\operatorname{End}\bigl(F(X)\bigr)\) are completely determined by what happens on \(\operatorname{End}(\mathds{1})\). Since this map is allowed to be either the identity or complex conjugation, the result follows.

**Remark 7.2**.: The reader may recognize this as a skew-symmetric analogue of the classical Tambara-Yamagami classification. Nondegenerate skew-symmetric bicharacters on finite abelian groups were classified in [11]. The classification involves the familiar 'hyperbolic' bicharacters on the \(p\)-primary summands, with extra possibilities for the case when \(p=2\).

**Example 7.3**.: Let \(G=\mathbb{Z}/2\mathbb{Z}=\langle t\rangle\). The bicharacter \(\chi(t,t)=-1\) is nondegenerate, and simultaneously symmetric and skew-symmetric. Thus in addition to giving rise to classical split Tambara-Yamagami categories, it can also be used to produce \(\mathcal{C}_{\overline{\mathbb{C}}}(G,\tau,\chi)\). The only difference between the two constructions is the Galois nontriviality of \(m\).

**Example 7.4**.: Let \(G=(\mathbb{Z}/4\mathbb{Z})^{2}=\langle x,y\rangle\).
Define a skew-symmetric bicharacter by the formulae \[\chi(x,x)=1\,,\qquad\chi(x,y)=i\,,\qquad\chi(y,y)=-1\,.\] This is nondegenerate, so \(\chi\) can be used to produce \(\mathcal{C}_{\overline{\mathbb{C}}}(G,\tau,\chi)\). This type of bicharacter has no nondegenerate analogues for odd \(p\)-primary groups.
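The nondegeneracy claimed in Example 7.4 amounts to a finite check, and it can be verified mechanically. The following short Python script is an illustration we add here (it is not part of the original argument): it extends \(\chi\) from the stated values on the generators \(x,y\) (together with \(\chi(y,x)=-i\), forced by skew-symmetry) to all of \(G\), and confirms that the result is a bicharacter, is skew-symmetric, and has trivial radical.

```python
from itertools import product

# chi on G = (Z/4Z)^2 with generators x = (1,0), y = (0,1):
# chi(x,x) = 1, chi(x,y) = i, chi(y,x) = -i, chi(y,y) = -1
def chi(g, h):
    a, b = g
    c, d = h
    return (1j) ** (a * d) * (-1j) ** (b * c) * (-1) ** (b * d)

G = list(product(range(4), repeat=2))

# bicharacter in the first argument (the second argument is analogous)
for g, h, k in product(G, repeat=3):
    gh = ((g[0] + h[0]) % 4, (g[1] + h[1]) % 4)
    assert abs(chi(gh, k) - chi(g, k) * chi(h, k)) < 1e-9

# skew-symmetry: chi(g,h) * chi(h,g) = 1
for g, h in product(G, repeat=2):
    assert abs(chi(g, h) * chi(h, g) - 1) < 1e-9

# nondegeneracy: only the identity pairs trivially with every element
radical = [g for g in G if all(abs(chi(g, h) - 1) < 1e-9 for h in G)]
assert radical == [(0, 0)]
print("chi is a nondegenerate skew-symmetric bicharacter on (Z/4Z)^2")
```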
2309.12974
Effect of strain on the electronic and magnetic properties of bilayer T-phase VS2: A first-principles study
Using the Density Functional Theory (DFT) calculations, we determined the electronic and magnetic properties of a T-phase VS$_2$ bilayer as a function of tensile and compressive strain. First, we determine the ground state structural parameters and then the band structure, magnetic anisotropy, exchange parameters, and Curie temperature. Variation of these parameters with the strain is carefully analyzed and described. The easy-plane anisotropy, which is rather small in the absence of strain, becomes remarkably enhanced by tensile strain and reduced almost to zero by compressive strain. We also show that the exchange parameters and the Curie temperature are remarkably reduced for the compressive strains below roughly -4$\%$.
Mirali Jafari, Anna Dyrdał
2023-09-22T16:16:13Z
http://arxiv.org/abs/2309.12974v2
Effect of strain on the electronic and magnetic properties of bilayer T-phase VS\({}_{2}\): A first-principles study

###### Abstract

Using Density Functional Theory (DFT) calculations, we determined the electronic and magnetic properties of a T-phase VS\({}_{2}\) bilayer as a function of tensile and compressive strain. First, we determine the ground state structural parameters and then the band structure, magnetic anisotropy, exchange parameters, and Curie temperature. Variation of these parameters with the strain is carefully analysed and described. We show that the easy-plane anisotropy, which is rather small in the absence of strain, is remarkably enhanced by tensile strain and reduced almost to zero by compressive strain. The magnetic moment of the vanadium atoms is also shown to be reduced by compressive strain. We also show that the exchange parameters and the Curie temperature are remarkably reduced for compressive strains below roughly -4%.

**Keywords:**\(VS_{2}\), Trigonal Phase, DFT, Strain, Electronic and magnetic properties

## I Introduction

Two-dimensional (2D) transition metal dichalcogenides (TMDs) represent a wide class of materials that have been extensively investigated recently, as they show highly promising potential for applications in nanoelectronic and optoelectronic devices [1; 2; 3; 4; 5; 6; 7; 8]. Generally, TMDs follow the chemical formula MX\({}_{2}\), where M denotes a transition metal element, such as Mo, W, Nb, and V, while X stands for a chalcogen element, like S, Se, and Te. Currently, over 40 different TMDs are known, including metallic, semiconducting, and superconducting ones [9; 10; 11; 12; 13]. Notably, the 2D layered TMDs display distinct physical properties when compared with their bulk counterparts, especially in the context of band structures [14]. Several experimental methods and techniques are currently known which allow one to obtain monolayers of various TMDs, including mechanical exfoliation techniques, liquid exfoliation methods, and chemical vapor deposition (CVD). These techniques collectively facilitate the successful production of various TMD monolayers, thus increasing our understanding of these interesting materials [15; 3; 16]. Typically, layers of vanadium (V) based TMDs exist in two structural phases, namely the 2H phase characterized by trigonal prismatic coordination, and the 1T phase with octahedral coordination [2; 17]. Most pristine TMDs inherently lack magnetic properties, which limits their applicability in magnetic-based devices. Nevertheless, inducing magnetic properties in these materials is feasible with various techniques, such as doping with point defects, adsorption of non-metal elements, or exploiting edge effects [18; 19; 20; 21; 22; 23]. For instance, the formation of triple vacancies in single-layer MoS\({}_{2}\) has been proposed as a tool to generate a net magnetic moment, whereas other defects related to Mo and S atoms do not affect the non-magnetic ground state [24]. In MoS\({}_{2}\) nanoribbons, the interplay of defects and adsorption can be used for tuning between non-magnetic and magnetic states, depending on the type of defects introduced and the specific sites where the adatoms are adsorbed. However, extending this ability to other TMD materials has proven to be intricate, as the induced magnetic properties are highly dependent on the nature of defects, edge states, and the positions of dopants, which leads to significant experimental challenges.
On the other hand, computational calculations can be used to elucidate the physical properties of TMDs, down to single monolayers. Indeed, such calculations show that monolayers of VX\({}_{2}\) (where X = S and Se) exhibit intriguing ferromagnetic behavior, thus providing evidence of magnetic properties of pristine 2D monolayers [25]. This insight opens new avenues for the fabrication of ferromagnetic TMDs without resorting to doping with point defects, non-metal element adsorption, or external forces like tensile strain. Encouraged by these theoretical predictions, researchers successfully synthesized ultrathin VS\({}_{2}\) nanosheets with less than five S-V-S atomic layers, using a modified all-in-solution method. The ensuing experimental results confirmed the presence of room temperature ferromagnetism (FM) in these ultrathin VS\({}_{2}\) nanosheets, accompanied by a very weak antiferromagnetism (AFM).

In our study we have chosen the T-phase of VS\({}_{2}\) for detailed investigation, mainly due to its unique and fascinating electronic properties. The T-phase, characterized by octahedral sulfur coordination around vanadium atoms, presents a promising avenue for exploring new phenomena in two-dimensional materials. The T phase is different from its more commonly studied 2H-phase counterpart, and exhibits distinctive intrinsic electronic correlations. It is worth noting that the band structure of the T-phase offers new application possibilities in spintronics and quantum computing [26; 27]. To get an insight into the fundamental properties of VS\({}_{2}\), we have decided to investigate the effects of bi-axial strain [28; 29] on the VS\({}_{2}\) bilayer configuration. This choice is supported by the following arguments. Firstly, bilayer structures reveal interlayer interactions and display electronic phenomena that are absent in single-layer counterparts [30; 31]. Secondly, bilayer structures are experimentally accessible and relevant for device applications. Their suitability for fabrication and potential to tailor electronic properties make them interesting for both fundamental research and technological advancement. The remaining part of this paper is structured as follows: Section II provides an overview of the methodology and computational techniques employed. Section III elaborates on the findings derived from the computational analysis. Finally, Section IV encapsulates the concluding remarks.

## II Computational details

The first-principles calculations were performed using Density Functional Theory (DFT) and the QuantumATK code package (version 2021.06-SP2) [32]. The calculations were based on the Hohenberg-Kohn theorem [33] and Kohn-Sham [34] equations, and utilized the SG15 collection of optimized norm-conserving Vanderbilt (ONCV) pseudopotentials with the Ultra Linear Combination of Atomic Orbitals (LCAO-Ultra) basis set [35]. The exchange-correlation interaction of electrons was described using the Perdew-Burke-Ernzerhof (PBE) generalized-gradient approximation (GGA) [36]. The calculations were performed with a converged energy mesh-cutoff of 500 Ry and a total energy convergence criterion of 10\({}^{-6}\) eV, tightened to 10\({}^{-8}\) eV for the magnetic anisotropy energy calculations. The two-dimensional Brillouin zone was sampled using a \(\Gamma\)-centered Monkhorst-Pack method [37] with a k-point grid of 25 \(\times\) 25 \(\times\) 1. All structures were fully optimized until the force on each atom was less than 0.02 eV/Å.
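As a practical aside on the workflow just described: the bi-axial strain is imposed by uniformly rescaling the two in-plane lattice vectors and letting the atomic positions follow, before re-relaxation in the DFT code. A minimal Python sketch of this step using ASE's `mx2` builder is given below; the lattice constant, layer thickness, and vacuum used here are generic placeholder values rather than the optimized parameters of this work, and the bilayer stacking and the QuantumATK-specific settings (PBE+U, SG15 pseudopotentials) are omitted.

```python
import numpy as np
from ase.build import mx2

def strained_1t_vs2(strain_percent, a0=3.18, thickness=2.95, vacuum=12.5):
    """Build a 1T ("T-phase") VS2 monolayer and apply in-plane biaxial strain.

    a0, thickness, and vacuum are placeholder values; in a production run they
    would come from a full GGA+U relaxation of the unstrained structure.
    """
    atoms = mx2(formula='VS2', kind='1T', a=a0, thickness=thickness, vacuum=vacuum)
    cell = np.array(atoms.get_cell())
    scale = 1.0 + strain_percent / 100.0
    cell[0] *= scale   # rescale only the two in-plane lattice vectors;
    cell[1] *= scale   # the out-of-plane vector keeps the vacuum spacing
    atoms.set_cell(cell, scale_atoms=True)
    return atoms

# generate the -10% ... +10% strain series used in this work
for eps in range(-10, 11, 2):
    a = strained_1t_vs2(eps).cell.lengths()[0]
    print(f"strain {eps:+3d}%  ->  a = b = {a:.3f} Angstrom")
```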
To avoid artificial interaction between periodic image layers, vacuum layers of 25 Å were introduced. Additionally, a weak and non-local van der Waals (vdW) interaction was included in the bilayer structures of VS\({}_{2}\) to optimize lattice parameters and bond lengths. The dispersion interactions were accounted for using Grimme DFT-D2 semi-empirical corrections [38]. Furthermore, all the structures were fully optimized in the presence of the Hubbard U parameter (U = 2 eV), where GGA+U was employed to account for the electron-electron correlation effect of the localized 3\(d\) orbitals of vanadium (V) [39].

## III Results and discussions

### Structural properties

The T-phase vanadium disulfide (VS\({}_{2}\)) is a van der Waals layered material, in which an individual monolayer consists of a single layer of vanadium atoms sandwiched between two layers of sulfur atoms, as shown schematically in Fig. 1.

Figure 1: Schematic structure of the T-VS\({}_{2}\) bilayer from (a) top and (b) side perspectives, in which (\(a=b\)) are the hexagonal lattice constants, d\({}_{V-V}\) is the inter-layer distance between vanadium atoms, d\({}_{V-S}\) is the bond length between a vanadium atom and the surrounding sulfur atoms, and d\({}_{S-S}\) is the distance between sulfur atoms in each layer.

The lattice structure of T-phase VS\({}_{2}\) is hexagonal, with each vanadium atom being surrounded by six nearest-neighbor sulfur atoms. The lattice constants of the unit cell in the T-phase VS\({}_{2}\) are denoted by \(a\) and \(b\), which are equal in honeycomb structures, i.e., \(a=b\). As illustrated in Fig. 2, the distance between two vanadium atoms located in different monolayers of the bilayer, d\({}_{V-V}\), exhibits a nearly monotonic dependence on the biaxial strain, increasing smoothly with increasing compressive strain from 0% to -10% and decreasing with increasing tensile strain from 0% to 10%. A weak but noticeable nonmonotonic behaviour appears within the range of -4% to -8% of compressive strain, where a relatively fast increase appears in the range of -4% to -6%, followed by a decrease in the range of -6% to -8% of compressive strain. This general tendency in the behavior of d\({}_{V-V}\) with strain can be attributed to the competition between the increasing repulsive forces between the vanadium atoms under compression and the decreasing attractive forces under tension. The nonmonotonic behavior in the above-mentioned range of compressive strains may be due to the occurrence of an energy barrier that needs to be overcome for further compression. The bond length between vanadium and sulfur atoms (d\({}_{V-S}\)) is found to be sensitive to the direction and magnitude of the applied strain. The d\({}_{V-S}\) exhibits a smooth decrease with increasing compressive strain from 0% to -10%. Under compressive strain, the compression of the lattice constants enhances the covalent interactions between vanadium (V) and sulfur (S) atoms, resulting in a decrease in the bond length between them. Conversely, under tensile strain, the elongation of the lattice constants weakens the covalent interactions between the V and S atoms, resulting in an increase in the bond length between them. However, under tensile strain, d\({}_{V-S}\) exhibits a non-monotonic variation; it increases for tensile strains up to 6%, and then smoothly decreases at higher values of the tensile strain.
This anomalous behavior can be explained by the evolution of the electronic structure of the \(VS_{2}\) bilayer under strain, which alters the hybridization of the orbitals involved in the V-S bond. Specifically, tensile strain can induce a weakening of the V-S bond due to the destabilization of the 3d orbital of the vanadium atom, leading to an initial elongation of d\({}_{V-S}\). However, at higher tensile strains, the hybridization of the V-S orbitals changes, leading to a stabilization of the 3d orbital of the vanadium atom and a subsequent contraction of d\({}_{V-S}\). The distance between sulfur atoms, d\({}_{S-S}\), is also observed to vary monotonically with increasing compressive and tensile strain, increasing with increasing compressive strain, and decreasing with increasing tensile strain. This trend is due to the changes in the electrostatic interaction between the sulfur atoms and the vanadium atoms in the \(VS_{2}\) bilayer, which are influenced by the changes in the inter-layer distance and the electronic structure of the bilayer.

Figure 2: Geometry changes of the T-VS\({}_{2}\) bilayer under bi-axial strain from -10% to +10%. Here we show the lattice parameter (\(a=b\)), the inter-layer distance between vanadium atoms (d\({}_{V-V}\)), the bond length of vanadium atoms with the surrounding sulfur atoms (d\({}_{V-S}\)), and the distance between sulfur atoms in each layer (d\({}_{S-S}\)).

### Static electronic and magnetic properties

#### ii.2.1 Electronic bandstructure

To obtain the spin-resolved electronic band structure, we first need to determine the ground state. In the case of the T-phase of VS\({}_{2}\), finding the ground state is challenging due to the Coulomb interaction. It has been shown that the ground state of this material can alternate between antiferromagnetic (AFM) and ferromagnetic (FM) when the Coulomb interaction is taken into consideration. To determine the most stable geometry of the T-VS\({}_{2}\) bilayer, we calculated the total energy of FM and AFM configurations for different values of \(U_{\rm eff}\) (ranging from 0 to 3). We fully optimized the structure without the Coulomb interaction (\(U=0\)), and found that it had an AFM ground state. Structures with \(U_{\rm eff}\) greater than or equal to 1 were found to be FM. We then optimized the structure for \(U_{\rm eff}\) ranging from 1 to 3, and selected the optimized structure with \(U_{\rm eff}\) = 2 eV, based on its stability, which was determined by changes in the lattice parameters and bond lengths. Using this value of \(U_{\rm eff}\), we calculated all other relevant properties. We employed DFT calculations to investigate the spin-resolved band structure of the T-VS\({}_{2}\) bilayer under biaxial strain with and without SOC. Our findings, summarized in Table 1, reveal that the unstrained structure exhibits metallic behavior, which is preserved under all compressive strains examined. However, we observe a significant change in the material's behavior at strains of +4% and +8%, which is attributed to the introduction of SOC. Specifically, the opening of a bandgap at these strains leads to the transition of the T-VS\({}_{2}\) bilayer from a metal to a very weak semiconductor. In addition, we computed the spin magnetic moment for each strain using the GGA+U and GGA+SOC methods. Figure 3 shows the variation of the spin magnetic moment under different strains.

Figure 3: Spin magnetic moment (MM) as a function of applied bi-axial strain.
Our results demonstrate that the spin magnetic moment increases under tensile strain, while it decreases under compressive strain. Moreover, at higher compressive strains, the magnetization of the structure nearly disappears. This behavior could be due to the weakening of the interlayer interaction and the distortion of the crystal lattice. As the lattice compression increases, the magnetic moment decreases until reaching a critical point where the magnetic order disappears.

#### iii.2.2 Magnetic anisotropy energy

Magnetic anisotropy energy (MAE) plays a crucial role in determining the ground state magnetization orientation, and can be calculated using the force theorem by evaluating the energy difference between relevant spin orientations. In this paper, we focus on the perpendicular anisotropy, which is defined as the energy difference between magnetizations along two specific crystallographic orientations. More precisely, the perpendicular anisotropy is defined as \(\text{MAE}=E_{[100]}-E_{[001]}\) (or \(\text{MAE}=E_{[010]}-E_{[001]}\), when the system is magnetically isotropic or nearly isotropic in the plane), where a positive value of MAE corresponds to a perpendicular easy axis, while a negative value corresponds to a perpendicular hard axis, i.e., to easy-plane anisotropy. Our results, as illustrated in Figure 4, show that the pure T-VS\({}_{2}\) structure without any strain exhibits a relatively small easy-plane magnetic anisotropy, which increases under tensile strain due to the increase in crystal field splitting. In contrast, under compressive strain, the easy-plane magnetic anisotropy decreases and becomes close to zero, and for strains below -4% it is roughly equal to zero. This result can be attributed to the decrease in crystal field splitting, which makes the system more isotropic.

### Exchange parameters, Curie temperature

We utilized the energy mapping technique developed by Liechtenstein _et al._ (1987) to determine the Heisenberg intra-layer exchange coupling constants between the nearest neighbors (\(J_{1}\)) and next-nearest neighbors (\(J_{2}\)), as well as the inter-layer coupling (\(J_{\text{int}}\)). To calculate the exchange coupling constants, we first determined the total energy of the system as a function of the magnetic configuration of neighboring spins. We then calculated the energy difference between two magnetic configurations in which only two neighboring spins are flipped. The exchange coupling constants were then obtained from the energy difference and the distance between the two spins. Our results for the pure structure show that both \(J_{1}\) and \(J_{2}\) are positive, indicating that the magnetic interactions between the nearest and next-nearest neighbors are ferromagnetic. The parameter \(J_{\text{int}}\) is also positive, indicating that the magnetic interaction between spins in different layers is ferromagnetic, too. Furthermore, we found that \(J_{1}\) is the largest coupling parameter, \(J_{2}\) is smaller than \(J_{1}\), and \(J_{\text{int}}\) is the smallest of the three determined coupling constants. Our calculations of the Heisenberg exchange coupling constants for the bilayer of T-phase VS\({}_{2}\) reveal several interesting behaviors. First, we find that both the GGA+U and GGA+SOC methods yield similar behavior for \(J_{1}\), but with small differences in magnitude.
Specifically, the \(J_{1}\) values obtained from both methods are almost the same at zero strain, with the GGA+U values being slightly larger than the GGA+SOC values. This can be attributed to the fact that the GGA+U method tends to overestimate the strength of the Coulomb interaction, leading to larger values of the exchange coupling constants. However, at higher tensile strains, the \(J_{1}\) values obtained from GGA+U and GGA+SOC calculations converge, and the difference between them becomes smaller. This behavior can be explained by a combination of strain-induced changes in the electronic structure, spin polarization, and crystal structure of the bilayer. As the magnitude of the tensile strain increases, the crystal lattice becomes more distorted, which leads to a modification of the exchange pathways and a consequent change in the magnetic properties of the system. Additionally, the spin-orbit coupling becomes more significant at higher strains, leading to a more pronounced influence on the magnetic properties of the system. These changes in the electronic structure and spin polarization can make the \(J_{1}\) values obtained from GGA+U and GGA+SOC more similar at higher tensile strains. In the case of compressive strains, we observed that both GGA+U and GGA+SOC result in a decrease in \(J_{1}\) due to the crystal lattice distortion caused by the compressive strain, which in turn leads to a change in the magnetic properties of the system. Notably, we found that at the strain of -4%, the sign of \(J_{1}\) obtained from GGA+U calculations changes to antiferromagnetic coupling, whereas the value obtained from GGA+SOC approaches zero. This can be explained by the tendency of the GGA+U method to overestimate the Coulomb interaction strength, leading to larger values of the exchange coupling constants and to its sign change.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline Strain [\%] & -10 & -8 & -6 & -4 & -2 & 0 & 2 & 4 & 6 & 8 & 10 \\
\hline \(J_{1}\) (+U) & -0.00 & -0.04 & -0.20 & -0.50 & -0.74 & -1.76 & 5.92 & 6.65 & 6.34 & 5.91 & 5.38 \\
\hline \(J_{1}\) (+SOC) & 0 & 0 & 0 & 0 & 0.64 & -1.81 & 7.15 & 7.17 & 6.12 & 5.62 & 5.70 \\
\hline \(J_{2}\) (+U) & 0.40 & 1.05 & 1.23 & -0.46 & -0.28 & 1.66 & 0.22 & 2.32 & 2.20 & 2.38 & 2.29 \\
\hline \(J_{2}\) (+SOC) & 0 & 0 & 0 & 0 & 0.92 & 0.93 & 0.16 & 1.61 & 1.94 & 2.04 & 2.11 \\
\hline \(J_{\rm int}\) (+U) & 0.02 & -0.26 & -0.01 & 0.27 & 0.96 & 0.77 & 2.14 & 0.53 & 0.30 & 0.08 & -0.17 \\
\hline \(J_{\rm int}\) (+SOC) & 0 & 0 & 0 & 0 & 0.09 & 0.24 & 0.35 & 0.35 & 0.26 & 0.11 & -0.07 \\
\hline
\end{tabular}
\end{table}
Table 2: Changes in the exchange parameters \(J_{1}\), \(J_{2}\), and \(J_{\rm int}\) under bi-axial strain

Figure 4: Magnetic anisotropy energy changes under the applied bi-axial strain from -10% to +10%

Table 1: Energy band gap of the structure and spin magnetic moment (MM) under bi-axial strain
In contrast, the GGA+SOC method includes the spin-orbit coupling, which can suppress the exchange interaction and can result in smaller values of the exchange parameters. We observed a similar trend for the next-nearest neighbor exchange coupling constant (\(J_{2}\)), with both GGA+U and GGA+SOC showing fluctuations in its magnitude. From Figure 5, it can be clearly seen that the magnitude of \(J_{2}\) fluctuates due to the changes in the crystal structure under strain. Interestingly, in the presence of Hubbard corrections, we observe a sign change of \(J_{2}\) from FM to AFM and then from AFM back to FM under compressive strain. Finally, we also evaluated the inter-layer exchange coupling constant (\(J_{\rm int}\)), which determines the ground state of the bilayer structure of T-VS\({}_{2}\). From Figure 5 it follows that the inter-layer exchange coupling constant sharply increases at strains of \(\pm 2\%\), indicating a preference for ferromagnetic coupling in this range of strains. However, for higher tensile and compressive strains, the inter-layer exchange coupling constant decreases, indicating a tendency towards antiferromagnetic coupling. This behavior can be attributed to the interplay between changes in the crystal structure and the electronic and magnetic properties of the system. Interestingly, strains equal to or larger than \(-6\%\) lead to deformation of the crystal structure and to a change from the ferromagnetic to the antiferromagnetic state.

Figure 5: Exchange coupling constants as a function of applied bi-axial strain from -10% to +10%

Having found the exchange parameters, one can determine the Curie temperature. To do this we use the Mean Field Approximation (MFA) and the Random Phase Approximation (RPA). In the Mean Field Approximation, spins are treated based on an average interaction field, and any correlations beyond the nearest neighbors are disregarded. The MFA allows for a straightforward calculation of the Curie temperature (T\({}_{c}\)) using the following expression:

\[T_{c}=\frac{2}{3k_{B}}\sum_{i,j}J_{ij}, \tag{1}\]

where \(k_{B}\) is the Boltzmann constant. In turn, the Random Phase Approximation (RPA) takes into account fluctuations in the spin orientations and incorporates correlations beyond the mean field. This approximation offers a more precise estimation of the Curie temperature. However, the RPA involves solving a system of coupled equations that take into account the spin correlation functions, resulting in a more intricate expression for T\({}_{c}\). The pure structure of the T-VS\({}_{2}\) bilayer has T\({}_{c}\) close to or beyond room temperature for GGA+U in the MFA and RPA methods, respectively. However, this value is lower for the GGA+SOC calculations. Furthermore, we have calculated the Curie temperature for the different strains, and the results are listed in Table 3. The T\({}_{c}\) plot as a function of bi-axial strain exhibits a similar trend to that of the exchange parameters, as shown in Figure 6. Under tensile strain, the Curie temperature shows an increasing tendency. However, at higher tensile strains, it slightly decreases, which can be attributed to the interplay between the changes in electronic structure, spin interactions, and lattice distortions. In turn, the Curie temperature under compressive strains decreases, as anticipated from the corresponding \(J_{1}\) and \(J_{2}\) interactions. The decrease in Curie temperature continues until certain points where it becomes very low or even vanishes.
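For a single magnetic site, the sum in Eq. (1) reduces to \(z_{1}J_{1}+z_{2}J_{2}+z_{\rm int}J_{\rm int}\), where the \(z\)'s are coordination numbers. The short Python snippet below, added here as an illustration, evaluates this mean-field estimate; the coordination numbers (six in-plane nearest and next-nearest neighbours of the triangular vanadium lattice, one inter-layer neighbour) and the example couplings in meV are assumptions for demonstration, not the converged values behind Table 3.

```python
K_B_MEV_PER_K = 0.0861733  # Boltzmann constant in meV/K

def tc_mfa(j1, j2, jint, z1=6, z2=6, zint=1):
    """Mean-field Curie temperature (Eq. 1) from Heisenberg couplings in meV."""
    return (2.0 / 3.0) * (z1 * j1 + z2 * j2 + zint * jint) / K_B_MEV_PER_K

# illustrative coupling values only (meV)
print(f"Tc(MFA) ~ {tc_mfa(5.9, 2.3, 0.8):.0f} K")
```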
## IV Summary and conclusions

In this paper we have analysed the influence of biaxial compressive and tensile strains on the electronic and magnetic properties of a T-VS\({}_{2}\) bilayer. The bilayer structure seems to be interesting for applications, as it reveals the properties of a natural spin valve. We have determined the strain-induced changes in the structural parameters, electronic band structure, magnetic anisotropy, exchange parameters, and Curie temperature. An interesting effect is that the easy-plane magnetic anisotropy is enhanced by tensile strain and reduced almost to zero by compressive strain. This anisotropy is rather small in the unstrained structure, so tensile strain can be used to enhance the corresponding anisotropy parameter. Similarly, compressive strain also reduces other magnetic properties, like the magnetic moments of vanadium, the exchange parameters, and the Curie temperature, which are strongly suppressed at higher compressive strains.

## V Acknowledgments

This work has been supported by the Norwegian Financial Mechanism 2014-2021 under the Polish-Norwegian Research Project NCN GRIEG "2Dtronics" no. 2019/34/H/ST3/00515.
2309.06901
On the p-rank of singular curves and their smooth models
In this paper, we are concerned with the computation of the $p$-rank and $a$-number of singular curves and their smooth model. We consider a pair $X, X'$ of proper curves over an algebraically closed field $k$ of characteristic $p$, where $X'$ is a singular curve which lies on a smooth projective variety, particularly on smooth projective surfaces $S$ (with $p_g(S) = 0 = q(S)$) and $X$ is the smooth model of $X'$. We determine the $p$-rank of $X$ by using the exact sequence of group schemes relating the Jacobians $J_X$ and $J_{X'}$. As an application, we determine a relation about the fundamental invariants $p$-rank and $a$-number of a family of singular curves and their smooth models. Moreover, we calculate $a$-number and find lower bound for $p$-rank of a family of smooth curves.
Sadık Terzi
2023-09-13T11:59:17Z
http://arxiv.org/abs/2309.06901v2
# On the p-rank of singular curves and their smooth models

###### Abstract

In this paper, we are concerned with the computation of the \(p\)-rank and \(a\)-number of singular curves and their smooth models. We consider a pair \(X,X^{\prime}\) of proper curves over an algebraically closed field \(k\) of characteristic \(p\), where \(X^{\prime}\) is a singular curve which lies on a smooth projective variety, particularly on smooth projective surfaces \(S\) (with \(p_{g}(S)=0=q(S)\)), and \(X\) is the smooth model of \(X^{\prime}\). We determine the \(p\)-rank of \(X\) by using the exact sequence of group schemes relating the Jacobians \(J_{X}\) and \(J_{X^{\prime}}\).

Key words and phrases: \(p\)-rank, Jacobian, singular curves 2010 Mathematics Subject Classification: Primary 14G17, 14H20; Secondary 14H40, 14H50, 14M10

## 1. Introduction

Let \(X\) be a smooth projective curve in characteristic \(p>0\) and \(J_{X}\) be its Jacobian. We are interested in determining the \(p\)-rank \(\sigma(X)\) and the invariant \(a(J_{X})=\dim(\operatorname{Hom}_{k-gr}(\alpha_{p},J_{X}))\). There is a vast literature on computing \(\sigma(X)\) by determining the action of Frobenius on the cohomology group \(H^{1}(X,\mathcal{O}_{X})\) or, equivalently, the action of the Cartier operator on \(H^{0}(X,\Omega_{X})\). In the former case one essentially determines the Hasse-Witt matrix ([9]) and in the latter the Cartier-Manin matrix ([18]) describing the action. For an extensive bibliography on Hasse-Witt and Cartier-Manin matrices we refer to [1]. If \(\pi:X\to X^{\prime}\) is the resolution of singularities of a curve \(X^{\prime}\) lying on a smooth variety, especially on a smooth projective surface \(S\), then in principle one can determine \(\sigma(X)\) by computing the action of Frobenius on \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\) and relating this computation to \(X\) via the cohomology sequence attached to the resolution. In particular, if \(p_{g}(S)=0=q(S)\), where \(p_{g}(S)\) and \(q(S)\) are the geometric genus and the irregularity of \(S\) respectively, the method is quite effective because in this case the action of Frobenius can be easily calculated. We illustrate this property in Example 10 for curves on Hirzebruch surfaces. As there are curves that cannot be realized in the projective plane \(\mathbf{P}_{k}^{2}\) but can be realized on Hirzebruch surfaces, one can expect that placing a curve in a specific ambient space and taking advantage of its geometry enables one to determine the \(p\)-rank and the \(a\)-number of the curve. In fact, the explicit computation of a basis for \(H^{1}(X,\mathcal{O}_{X})\) and of the Frobenius map on \(H^{1}(X,\mathcal{O}_{X})\) will be useful to calculate \(\sigma(X)\) and \(a(X)\) for such curves \(X\), as in Example 10. In the calculations explained in the preceding paragraph we incorporate the exact sequence

\[0\longrightarrow G\longrightarrow J_{X^{\prime}}\longrightarrow J_{X}\longrightarrow 0\]

of group schemes arising from the resolution \(X\to X^{\prime}\). In the second section of the paper, we discuss the effect of the singularities of \(X^{\prime}\) on the structure of \(G[p]\). Then once we determine the structure of the subgroup \(J_{X^{\prime}}[p]\), we obtain the \(p\)-rank of the smooth curve \(X\) and a lower bound for \(a(X)\). More precisely, we have:

**Proposition 1**.: _In the given setup, the following relations hold:_

1. \(\sigma(J_{X})=\sigma(J_{X^{\prime}})-\sigma(G)\)
2. \(a(X)\geq a(J_{X^{\prime}})-a(G)\)_._

In the third section, we provide a family of examples (Ex. 11) of complete intersection curves and find a lower bound on their \(a\)-number by using the explicit basis constructed in [15, Theorem 2] and the explicit action of the Frobenius map computed in [15, Proposition 14]. This family consists of generalized Fermat curves \(F_{m,n}\) of type \((m,n)\) and forms a family of algebraic curves of dimension \(n-2\) in the moduli space of smooth projective curves of genus \(g(F_{m,n})=1+\frac{m^{n-1}}{2}((m-1)(n-1)-2)\) ([6], Section 2). The investigation of algebraic curves over fields of characteristic \(p>0\) is related to several problems for curves over finite fields, such as the cardinality of the set of rational points, the search for maximal curves with respect to the Hasse-Weil bound, properties of zeta functions, and Weierstrass points on curves. Many results have been obtained for classical Fermat curves (i.e., \(n=2\)) [4, 17, 10].

The notation is as follows. Our base field is \(k=\overline{\mathbb{F}}_{p}\) for some prime \(p>0\). \(X\) is a projective smooth curve over \(k\), of genus \(g\geq 2\). \(J_{X}\) is the Jacobian of \(X\). \(\omega_{X}\) is the canonical bundle of \(X\). \(F_{Y}\) is the absolute Frobenius morphism for a variety \(Y\). \(B^{i}_{Y/k}\) is the sheaf of the locally exact \(i\)-th differential forms for a variety \(Y\). \(\mu_{p}\), \(\alpha_{p}\) denote the usual infinitesimal group schemes. If \(H\) is a \(k\)-group scheme, \(H[n]\) is the kernel of multiplication by \(n\) in \(H\). \(a(H)=\dim_{k}(\operatorname{Hom}_{k-gr}(\alpha_{p},H))\). \(\sigma(H)\) is defined by \(p^{\sigma(H)}=|\operatorname{Hom}_{k-gr}(\mu_{p},\ H)|\). \(\times f\) is the multiplication by \(f\) map \((a\mapsto af)\).

## 2. Singular curves

We let \(X^{\prime}\) be a (singular) integral proper curve over \(k\) and \(\pi:X\to X^{\prime}\) be the normalization map. The Jacobian \(J_{X^{\prime}}\) sits in the following exact sequence of \(k\)-group schemes

\[0\longrightarrow G\longrightarrow J_{X^{\prime}}\longrightarrow J_{X}\longrightarrow 0, \tag{2.1}\]

where \(G\) is an affine connected group. We let \(L\) be a line bundle on \(X^{\prime}\) and recall the following basic facts:

* Duality: Cup product composed with the residue map gives a perfect pairing \[H^{1}(X^{\prime},L)\times H^{0}(X^{\prime},\Omega_{X^{\prime}}\otimes L^{-1})\to H^{1}(X^{\prime},\Omega_{X^{\prime}})\cong k\] [12, Chap. IV, no. 10, Last Remark].
* We have the Riemann-Roch Theorem \[\chi(L)=\deg(L)+1-p_{a}(X^{\prime}).\] Here \(p_{a}(X^{\prime})=1-\chi(\mathcal{O}_{X^{\prime}})=\dim(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}}))=\dim(H^{0}(X^{\prime},\Omega_{X^{\prime}}))\) is the arithmetic genus of \(X^{\prime}\) [12, Chap. IV, no. 6, Thm. 1].
* By applying duality in the special case \(L=\mathcal{O}_{X^{\prime}}\) we obtain an isomorphism \[H^{0}(X^{\prime},\Omega_{X^{\prime}})\cong H^{0}(J_{X^{\prime}},\Omega_{J_{X^{\prime}}}).\]

Next, we recall the concept of ordinarity for curves and the definitions of the \(p\)-rank and the \(a\)-number.

**Definition 2**.: [14, Thm. 2.2] Set \(W=H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\). Let \(W^{s}\) be the largest \(F\)-stable subspace of \(W\) on which \(F\) acts bijectively, and let \(W^{*}\) be the subspace of \(W\) on which \(F\) acts as the zero map. The natural numbers \(\sigma(X^{\prime})=\dim_{k}(W^{s})\) and \(a(X^{\prime})=\dim_{k}(W^{*})\) are called the \(p\)-rank and the \(a\)-number of \(X^{\prime}\), respectively.
We say that \(X^{\prime}\) is an ordinary curve if \(W^{s}=W\). The following facts are in [18, Thm. 7.1]:

* The \(p\)-rank of \(X^{\prime}\) coincides with the \(p\)-rank \(\sigma(J_{X^{\prime}})\) of its Jacobian, i.e., \(\sigma(X^{\prime})=\sigma(J_{X^{\prime}})\).
* The \(a\)-number of \(X^{\prime}\) coincides with the \(a\)-number \(a(J_{X^{\prime}})\) of its Jacobian, i.e., \(a(X^{\prime})=a(J_{X^{\prime}})\).

Let \(X\) be a non-singular proper integral curve defined over an algebraically closed field \(k\) of characteristic \(p>0\). The _Cartier operator_ \(\mathscr{C}\) defined in [2, Chapter 2, Section 6] is a \(1/p\)-linear operator acting on the sheaf \(\Omega_{X/k}\) of differential forms for \(X\) which satisfies the following properties:

1) \(\mathscr{C}(\omega_{1}+\omega_{2})=\mathscr{C}(\omega_{1})+\mathscr{C}(\omega_{2})\)
2) \(\mathscr{C}(df)=0\)
3) \(\mathscr{C}(f^{p}\omega)=f\mathscr{C}(\omega)\)
4) \(\mathscr{C}(f^{p-1}df)=df\)
5) \(\mathscr{C}(df/f)=df/f\)

for all local sections \(\omega_{1}\), \(\omega_{2}\) and \(\omega\) (respectively \(f\)) of \(\Omega_{X}\) (respectively of \(\mathcal{O}_{X}\)). In particular, the operator \(\mathscr{C}\) acts on \(H^{0}(X,\Omega_{X})\), the space of regular differential forms on \(X\).

**Remark 3**.: The action of \(\mathscr{C}\) can be extended to rational differential forms on \(X\). The following lemma shows that in a normalization setup \(\pi:X\to X^{\prime}\), restricting to the subspace \(\pi^{*}(H^{0}(X^{\prime},\Omega_{X^{\prime}}))\) of rational forms on \(X\), we obtain a \(1/p\)-linear action on the cohomology group \(H^{0}(X^{\prime},\Omega_{X^{\prime}})\).

**Lemma 4**.: _Let \(\mathscr{C}\) be the Cartier operator acting on the rational forms on \(X\) and \(F:H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\to H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\) be the Frobenius map. We have_

a) \(\mathscr{C}(H^{0}(X^{\prime},\Omega_{X^{\prime}}))\subset H^{0}(X^{\prime},\Omega_{X^{\prime}})\)_._

b) _The duality pairing_ \[\langle\,\ \rangle:H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\times H^{0}(X^{\prime},\Omega_{X^{\prime}})\to H^{1}(X^{\prime},\Omega_{X^{\prime}})\cong k\] _satisfies_ \[\langle Fv,\omega\rangle\ =\ \langle v,\mathscr{C}\omega\rangle^{p}.\]

Proof.: a) Let \(\pi:X\longrightarrow X^{\prime}\) be the normalization morphism and \(t\) be a local parameter at \(x\in X\). \(H^{0}(X^{\prime},\Omega_{X^{\prime}})\) consists precisely of the rational differential forms \(\omega\) on \(X\) which at each point \(x^{\prime}\in X^{\prime}\) satisfy the condition \(\sum_{x_{j}\mapsto x^{\prime}}\mathrm{Res}(f\omega,x_{j})=0\) for all \(f\in\mathcal{O}_{X^{\prime},x^{\prime}}\), where the sum is taken over all the points \(x_{j}\in X\) such that \(\pi(x_{j})=x^{\prime}\). Let \(\omega=\sum_{n=-m}^{\infty}a_{n}t^{n}dt\). Since the Cartier operator \(\mathscr{C}\) satisfies the properties 2) - 4), we get

\[\mathscr{C}(t^{n}dt)=\begin{cases}\mathscr{C}(\frac{1}{n+1}dt^{n+1})=0&\text{ if }p\nmid n+1\\ t^{\frac{n+1}{p}-1}dt&\text{ if }p\mid n+1\end{cases}.\]

Therefore, we conclude, by using the semilinearity of the Cartier operator \(\mathscr{C}\), that

\[\mathscr{C}(\omega)=\sum_{n=-l}^{\infty}a_{pn-1}^{1/p}t^{n-1}dt,\]

where \(m\) is a non-negative integer and \(l\) is the greatest non-negative integer such that \(pl+1\leq m\).
As \(\mathrm{Res}(f\mathscr{C}(\omega),x)=\mathrm{Res}(f^{p}\omega,x)^{1/p}\) for \(x\in X\), we see that for \(\omega\in H^{0}(X^{\prime},\Omega_{X^{\prime}})\)

\[(\sum_{x_{j}\to x^{\prime}}\mathrm{Res}(f\mathscr{C}(\omega),x_{j}))^{p}=\sum_{x_{j}\to x^{\prime}}\mathrm{Res}(f^{p}\omega,x_{j})=0.\]

The last equality is due to the definition of \(\omega\). Hence \(\mathscr{C}(H^{0}(X^{\prime},\Omega_{X^{\prime}}))\subset H^{0}(X^{\prime},\Omega_{X^{\prime}})\) and the result follows.

b) Let \(t\) be a local parameter at \(x^{\prime}\in X^{\prime}\). Let \(f\) and \(\omega\) be any elements of \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\) and \(H^{0}(X^{\prime},\Omega_{X^{\prime}})\), respectively. Then \(f=\sum_{i=-m}^{\infty}a_{i}t^{i}\) and \(\omega=\sum_{j=-n}^{\infty}b_{j}t^{j}dt\) at \(x^{\prime}\) for some non-negative integers \(m\) and \(n\). For the duality between \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\) and \(H^{0}(X^{\prime},\Omega_{X^{\prime}})\), we refer to [12, Chap. IV, no. 9 and 10]. We need only show that

\[\operatorname{Res}(f^{p}\omega,x^{\prime})=(\operatorname{Res}(f\mathscr{C}(\omega),x^{\prime}))^{p},\]

so that

\[\langle F(f),\omega\rangle=\sum_{x^{\prime}\in X^{\prime}}\operatorname{Res}(f^{p}\omega,x^{\prime})=(\sum_{x^{\prime}\in X^{\prime}}\operatorname{Res}(f\mathscr{C}(\omega),x^{\prime}))^{p}=\langle f,\mathscr{C}(\omega)\rangle^{p}.\]

We find that

\[\operatorname{Res}(f^{p}\omega,x^{\prime})=\sum_{pi+j=-1}a_{i}^{p}b_{j}\]

and

\[\operatorname{Res}(f\mathscr{C}(\omega),x^{\prime})=\sum_{i+j=0}a_{i}b_{pj-1}^{1/p}.\]

Note that

\[\sum_{pi+j=-1}a_{i}^{p}b_{j}=\sum_{i}a_{i}^{p}b_{-pi-1}=\sum_{i+j=0}a_{i}^{p}b_{pj-1}=(\sum_{i+j=0}a_{i}b_{pj-1}^{1/p})^{p}.\]

Hence we obtain the desired duality.

In the following Proposition we will use the fact that, since \(k\) is algebraically closed, in the exact sequence (2.1) the group \(G\) decomposes into a product \(G=\mathbb{G}_{m}^{r}\times G_{u}\), where \(r\) is a non-negative integer and \(G_{u}\) is a unipotent group (that is, a successive extension of the additive group \(\mathbb{G}_{a}\)).

**Proposition 1**.: _In the given setup, the following relations hold:_

1. \(\sigma(J_{X})=\sigma(J_{X^{\prime}})-\sigma(G)=\sigma(J_{X^{\prime}})-r\)
2. \(a(J_{X})\geq a(J_{X^{\prime}})-a(G)=a(J_{X^{\prime}})-a(G_{u})\)_._

Proof.: 1) We apply \(\operatorname{Hom}_{k-gr}(\mu_{p},\cdot)\) to sequence (2.1) and we get the exact sequence \(0\to\operatorname{Hom}_{k-gr}(\mu_{p},G)\to\operatorname{Hom}_{k-gr}(\mu_{p},J_{X^{\prime}})\to\operatorname{Hom}_{k-gr}(\mu_{p},J_{X})\to\operatorname{Ext}^{1}(\mu_{p},G)\). We know that the groups \(\operatorname{Ext}^{1}(\mu_{p},\mathbb{G}_{m})\) and \(\operatorname{Ext}^{1}(\mu_{p},\mathbb{G}_{a})\) are trivial ([11], p. 81). Therefore,

\[\operatorname{Ext}^{1}(\mu_{p},G)=\operatorname{Ext}^{1}(\mu_{p},\mathbb{G}_{m})^{r}\times\operatorname{Ext}^{1}(\mu_{p},G_{u})=\operatorname{Ext}^{1}(\mu_{p},G_{u}).\]

By induction on the chain of successive extensions defining \(G_{u}\), we see that \(\operatorname{Ext}^{1}(\mu_{p},G_{u})\) is trivial. We compare the orders of the groups in the resulting exact sequence to get

\[p^{\sigma(J_{X^{\prime}})}=p^{\sigma(J_{X})}p^{\sigma(G)}.\]

Thus we have \(\sigma(J_{X})=\sigma(J_{X^{\prime}})-\sigma(G)\). Since \(\operatorname{Hom}_{k-gr}(\mu_{p},\mathbb{G}_{a})\) is also trivial, again by induction on the chain of successive extensions we find \(\operatorname{Hom}_{k-gr}(\mu_{p},G_{u})=0\).
Thus we get

\[p^{\sigma(G)}=|\operatorname{Hom}_{k-gr}(\mu_{p},G)|=|\operatorname{Hom}_{k-gr}(\mu_{p},\mathbb{G}_{m})^{r}|=p^{r}.\]

Hence \(\sigma(G)=r\) and we obtain the first statement of the Proposition.

2) We apply \(\operatorname{Hom}_{k-gr}(\alpha_{p},\cdot)\) to sequence (2.1) to get the exact sequence \(0\to\operatorname{Hom}_{k-gr}(\alpha_{p},G)\to\operatorname{Hom}_{k-gr}(\alpha_{p},J_{X^{\prime}})\to\operatorname{Hom}_{k-gr}(\alpha_{p},J_{X})\to\operatorname{Ext}^{1}(\alpha_{p},G)\). Since \(\operatorname{Hom}_{k-gr}(\alpha_{p},\mathbb{G}_{m})\) and \(\operatorname{Ext}^{1}(\alpha_{p},\mathbb{G}_{m})\) are trivial (loc. cit.), the exact sequence reduces to

\[0\to\operatorname{Hom}_{k-gr}(\alpha_{p},G_{u})\to\operatorname{Hom}_{k-gr}(\alpha_{p},J_{X^{\prime}})\to\operatorname{Hom}_{k-gr}(\alpha_{p},J_{X})\to\operatorname{Ext}^{1}(\alpha_{p},G_{u})\]

and we obtain the relation \(a(X)\geq a(J_{X^{\prime}})-a(G_{u})\).

**Corollary 5**.: _If all singular points of \(X^{\prime}\) are double points of the form \(z^{2}=x^{r}\), \(r\geq 3\) odd, i.e., its singularities are analytically isomorphic to the singularity at \((0,0)\) of the curve \(z^{2}=x^{r}\), then we have \(\sigma(J_{X})=\sigma(J_{X^{\prime}})\)._

Proof.: Under the given hypothesis and with the notation of [12, Chapter 5, Section 17], \(R_{m}=\mathbb{G}_{m,P}\times G_{u}\). Therefore, \(G=R_{m}/\Delta=G_{u}\). Hence \(\sigma(G)=0\). \(\square\)

**Example 6**.: We will give an example of a singular curve \(X^{\prime}\) such that

1. \(\sigma(J_{X})<\sigma(J_{X^{\prime}})\) and \(a(J_{X^{\prime}})=a(G)\)
2. \(X\) is an ordinary curve of genus \(g(X)=p_{a}(X^{\prime})-\dim(G)\).

Let \(p=2\) and consider the plane curve \(X^{\prime}\) [7, Section 4, Thm. 3] defined by the equation

\[f(x,y,z)=x^{3}y^{3}+x^{3}z^{3}+y^{3}z^{3}+\lambda z^{6}=0,\ \text{where}\ \lambda\neq\lambda^{2}.\]

The singular curve \(X^{\prime}\) is of arithmetic genus \(p_{a}(X^{\prime})=10\) and has precisely two singular points, \([1:0:0]\) and \([0:1:0]\), which are both ordinary triple points (cf. [7], Section 4, and see more generally [5, Chapter I, Section 5, pp. 33-39] for ordinary \(r\)-fold points). Thus, we see that the group \(G\) in the exact sequence (2.1) is of dimension \(6\) with \(\mathbb{G}_{m}^{4}\) ([12, Chapter V, Section 17]) as the multiplicative part. It follows that the genus of the normalization \(X\) of \(X^{\prime}\) is \(g(X)=4\) and \(\sigma(X)=\sigma(X^{\prime})-4\). Now we compute \(\sigma(X^{\prime})\) and \(a(X^{\prime})\) by using the action of the Frobenius map \(F\) on \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\) with respect to the basis [5, Chapter III, Thm. 5.1]

\[\{\beta=\frac{1}{x^{a}y^{b}z^{c}}\mid a+b+c=6,\ a,b,c\geq 1\}\]

for \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\). Note that

\[F(\beta)=f^{p-1}\beta^{p}=f\beta^{2}=\frac{1}{x^{2a-3}y^{2b-3}z^{2c}}+\frac{1}{x^{2a-3}y^{2b}z^{2c-3}}+\frac{1}{x^{2a}y^{2b-3}z^{2c-3}}+\frac{\lambda}{x^{2a}y^{2b}z^{2c-6}}\]

in \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\), as in ([5], Chapter IV, Proposition 4.21); see also [15, Proposition 14] for an explicit formula for the Frobenius map \(F\) on \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\).
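Since \(F(\beta)\) is a purely monomial computation, it can be mechanized: expand \(f\beta^{2}\) and discard every monomial in which some variable occurs with a non-negative exponent, as such classes vanish in the Čech description of \(H^{1}\). The short Python script below is an illustration we add here; it reproduces the action of \(F\) on all ten basis elements listed next (`lam` stands for the symbolic \(\lambda\)).

```python
from itertools import product

p = 2
# f = x^3 y^3 + x^3 z^3 + y^3 z^3 + lam * z^6, stored as {exponents: coefficient}
f = {(3, 3, 0): '1', (3, 0, 3): '1', (0, 3, 3): '1', (0, 0, 6): 'lam'}

# basis of H^1(X', O_{X'}): monomials 1/(x^a y^b z^c) with a + b + c = 6, a, b, c >= 1
basis = [e for e in product(range(1, 5), repeat=3) if sum(e) == 6]

def frobenius(beta):
    """Image of beta = 1/(x^a y^b z^c) under F, i.e. f^(p-1) * beta^p = f * beta^2."""
    a, b, c = beta
    image = {}
    for (i, j, k), coeff in f.items():
        target = (p * a - i, p * b - j, p * c - k)
        if all(e >= 1 for e in target):   # all other monomials vanish in H^1
            image[target] = coeff
    return image

for beta in basis:
    print(beta, '->', frobenius(beta))
```

Running this confirms, for instance, that \(F(\beta_{7})=\beta_{8}+\beta_{9}+\beta_{10}\) and \(F(\beta_{9})=F(\beta_{10})=0\).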
Hence, using the explicit basis elements

\[\beta_{1}=\frac{1}{xy^{2}z^{3}},\ \beta_{2}=\frac{1}{xy^{3}z^{2}},\ \beta_{3}=\frac{1}{x^{2}yz^{3}},\ \beta_{4}=\frac{1}{x^{2}y^{3}z},\ \beta_{5}=\frac{1}{x^{3}yz^{2}},\]
\[\beta_{6}=\frac{1}{x^{3}y^{2}z},\ \beta_{7}=\frac{1}{x^{2}y^{2}z^{2}},\ \beta_{8}=\frac{1}{xyz^{4}},\ \beta_{9}=\frac{1}{xy^{4}z},\ \beta_{10}=\frac{1}{x^{4}yz},\]

we get

\[F(\beta_{1})=\beta_{3},\ F(\beta_{2})=\beta_{4},\ F(\beta_{3})=\beta_{1},\ F(\beta_{4})=\beta_{2},\ F(\beta_{5})=\beta_{6},\ F(\beta_{6})=\beta_{5},\]
\[F(\beta_{7})=\beta_{8}+\beta_{9}+\beta_{10},\ F(\beta_{8})=\lambda\beta_{7},\ F(\beta_{9})=F(\beta_{10})=0.\]

It is clear that \(F\) acts bijectively on the vector space spanned by the basis \(\{\beta_{i}\mid i=1,2,\cdots,8\}\) and \(\operatorname{Rank}([F])=8\). Therefore, \(\sigma(X^{\prime})=8\) and \(a(X^{\prime})=2\) by Definition 2. Thus, we see that \(\sigma(X)=4\) (Proposition 1) and that \(X\) is an ordinary curve, so \(a(X)=0\) by Definition 2. One can also use the formula in Remark 7 for the affine model of \(f\) at \(z\neq 0\) to show that \(X\) is an ordinary curve.

**Remark 7**.: We conclude from Lemma 4 b) that instead of working with Frobenius acting on \(H^{1}(X^{\prime},\mathcal{O}_{X^{\prime}})\), we could have worked with the Cartier operator on \(H^{0}(X^{\prime},\Omega_{X^{\prime}})\). The action of the Cartier operator \(\mathscr{C}\) on \(H^{0}(X^{\prime},\Omega_{X^{\prime}})\) is given by the following formula ([13], Theorem 1.1):

\[\mathscr{C}(h\frac{dx}{f_{y}})=(\frac{\partial^{2p-2}}{\partial x^{p-1}\partial y^{p-1}}(f^{p-1}h))^{1/p}\frac{dx}{f_{y}},\]

where \(f(x,y)=0\) is the dehomogenization of the equation of \(X^{\prime}\) and \(h\in k(X^{\prime})\).

**Example 8**.: This is an example of a pair \(X,X^{\prime}\) such that \(\sigma(X)=\sigma(X^{\prime})=1\). We take \(p=7\) and consider the curve \(X^{\prime}\subset\mathbb{P}^{2}\) [16, Section 1] given by the equation

\[f(x,y,z)=x^{5}+y^{3}z^{2}+Axyz^{3}+Bxz^{4}=0,\]

where \(A,B\) are non-zero and \(A\neq B\). \(X^{\prime}\) is a singular curve of arithmetic genus \(p_{a}(X^{\prime})=6\). \(X^{\prime}\) has only one singular point \(Q=[0:1:0]\), which is analytically isomorphic to the singularity at \((0,0)\) of the plane curve \(z^{2}=x^{5}\), i.e., the completion of the local ring of \(X^{\prime}\) at \(Q\) is isomorphic to \(k[[x,z]]/(z^{2}-x^{5})\) ([16], Section 1). Hence, Corollary 5 applies and we get \(\sigma(X)=\sigma(X^{\prime})\). Calculating as in Example 6 by using the basis [5, Chapter III, Thm. 5.1]

\[\{\beta_{1}=\frac{1}{x^{3}yz},\ \beta_{2}=\frac{1}{xy^{3}z},\ \beta_{3}=\frac{1}{xyz^{3}},\ \beta_{4}=\frac{1}{x^{2}y^{2}z},\ \beta_{5}=\frac{1}{x^{2}yz^{2}},\ \beta_{6}=\frac{1}{xy^{2}z^{2}}\}\]

we find

\[F(\beta_{1})=\beta_{3}+5B\beta_{2},\]
\[F(\beta_{2})=F(\beta_{3})=F(\beta_{4})=0,\]
\[F(\beta_{5})=5AB^{2}\beta_{1}+5A^{2}B\beta_{5},\]
\[F(\beta_{6})=4A^{3}B\beta_{2}.\]

Thus, we have

\[F^{6}(\beta_{1})=F^{6}(\beta_{2})=F^{6}(\beta_{3})=F^{6}(\beta_{4})=F^{6}(\beta_{6})=0\ \text{and}\ F^{6}(\beta_{5})\neq 0.\]

It follows that \(\sigma(X^{\prime})=1\) by Definition 2, and we get \(\sigma(X)=1\).

**Remark 9**.: One can adapt the techniques in Example 8 to find the \(p\)-rank of curves on more general surfaces. Let \(S\) be a smooth projective surface over an algebraically closed field of positive characteristic \(p\) with geometric genus \(p_{g}=0\) and irregularity \(q=0\). Let \(X\) be a projective curve on \(S\) with corresponding divisor \(D\).
We have the following short exact sequence which defines our curve:

\[0\longrightarrow\mathcal{O}_{S}(-D)\longrightarrow\mathcal{O}_{S}\longrightarrow\mathcal{O}_{X}\longrightarrow 0.\]

By using the long exact sequence of cohomology obtained from the above short exact sequence, one sees that

\[H^{1}(X,\mathcal{O}_{X})\cong H^{2}(S,\mathcal{O}_{S}(-D)).\]

As an application of Remark 9, we illustrate how to compute the \(p\)-rank of integral curves on Hirzebruch surfaces \(\mathscr{H}_{r}\).

**Example 10**.: Let \(k\) be an algebraically closed field of positive characteristic \(p\). We will review the construction of the \(r\)-th Hirzebruch surface \(\mathscr{H}_{r}\). Let us consider the fan ([3], Ex. 3.1.16)

\[\Sigma_{r}=\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\rho_{1}=\sigma_{3}\cap\sigma_{4},\rho_{2}=\sigma_{1}\cap\sigma_{4},\rho_{3}=\sigma_{1}\cap\sigma_{2},\rho_{4}=\sigma_{2}\cap\sigma_{3},0\}\]

in \(\mathbb{R}^{2}\), where

\[\sigma_{1}=\text{Cone}(e_{1},e_{2}),\ \sigma_{2}=\text{Cone}(e_{1},-e_{2}),\ \sigma_{3}=\text{Cone}(-e_{1}+re_{2},-e_{2}),\ \sigma_{4}=\text{Cone}(-e_{1}+re_{2},e_{2}).\]

The corresponding toric variety \(X_{\Sigma_{r}}\) is covered by the open affine subsets

\[\left\{\begin{array}{l}U_{\sigma_{1}}=\mbox{Spec}(k[x,y])\cong k^{2}\\ U_{\sigma_{2}}=\mbox{Spec}(k[x,y^{-1}])\cong k^{2}\\ U_{\sigma_{3}}=\mbox{Spec}(k[x^{-1},x^{-r}y^{-1}])\cong k^{2}\\ U_{\sigma_{4}}=\mbox{Spec}(k[x^{-1},x^{r}y])\cong k^{2}\end{array}\right\}\]

glued according to ([3], Prop. 3.1.3). We call \(X_{\Sigma_{r}}\) the Hirzebruch surface \(\mathscr{H}_{r}\). We set

\[u_{\rho_{1}}=-e_{1}+re_{2},\ u_{\rho_{2}}=e_{2},\ u_{\rho_{3}}=e_{1},\ u_{\rho_{4}}=-e_{2}\]

for the ray generators ([3], Lemma 1.2.15) of the one-dimensional cones in \(\Sigma_{r}\). Moreover, the total coordinate ring of \(\mathscr{H}_{r}\) is

\[R=k[x_{\rho}\mid\rho\in\Sigma(1)]\]

where \(\Sigma(1)\) is the set of all 1-dimensional cones in \(\Sigma_{r}\). Here we label \(x_{\rho_{i}}\) as \(x_{i}\), and hence \(R=k[x_{1},x_{2},x_{3},x_{4}]\). Now we will describe how to put a grading on the ring \(R\) ([3], Section 5.2). Let \(M\) be the lattice of characters of the torus \((k^{\times})^{2}\) of the surface \(\mathscr{H}_{r}\). Note that \(M\) is isomorphic to \(\mathbb{Z}^{2}\). For given \(m\in M\), we define the principal divisor \(\mathrm{div}(\chi^{m})\) as

\[\mathrm{div}(\chi^{m})=\sum_{\rho}\langle m,u_{\rho}\rangle D_{\rho}\]

where \(\langle\,,\rangle\) is the usual dot product in \(\mathbb{Z}^{2}\). Now set \(D_{\rho_{i}}=D_{i}\) and \(u_{\rho_{i}}=u_{i}\). One may compute the divisor class group \(\mathrm{Cl}(\mathscr{H}_{r})\) ([3], Ex. 4.1.8) as follows:

\[\left\{\begin{array}{l}0\sim\mathrm{div}(\chi^{e_{1}})=\sum_{i=1}^{4}\langle e_{1},u_{i}\rangle D_{i}=-D_{1}+D_{3},\ \mbox{which implies}\ D_{1}\sim D_{3},\\ \\ 0\sim\mathrm{div}(\chi^{e_{2}})=\sum_{i=1}^{4}\langle e_{2},u_{i}\rangle D_{i}=rD_{1}+D_{2}-D_{4},\ \mbox{which implies}\ D_{2}\sim-rD_{3}+D_{4}.\end{array}\right\}\]

Therefore we see that \(\mathrm{Cl}(\mathscr{H}_{r})\) is a free abelian group of rank two generated by the classes of \(D_{3}\) and \(D_{4}\), and hence \(\mathrm{Cl}(\mathscr{H}_{r})\) is isomorphic to \(\mathbb{Z}^{2}\). One identifies the classes \([D_{3}]\) and \([D_{4}]\) with \((1,0)\) and \((0,1)\), respectively.
We have the short exact sequence

\[0\longrightarrow M\longrightarrow\mathbb{Z}^{\Sigma(1)}\longrightarrow\mathrm{Cl}(\mathscr{H}_{r})\longrightarrow 0\]

where \(m\in M\) is mapped to \(\mathrm{div}(\chi^{m})\) and \((a_{\rho})_{\rho\in\Sigma(1)}\) is mapped to \([\sum_{\rho}a_{\rho}D_{\rho}]\) ([3], Thm. 4.1.3). One defines the degree of a monomial \(x^{a}=\prod_{\rho}x_{\rho}^{a_{\rho}}\) to be

\[\deg(x^{a})=[\sum_{\rho}a_{\rho}D_{\rho}]\in\mathrm{Cl}(\mathscr{H}_{r}).\]

Therefore we obtain the following:

\[\left\{\begin{array}{ll}\deg(x_{1})=[D_{1}]=(1,0),&\deg(x_{2})=[D_{2}]=(-r,1)\\ \deg(x_{3})=[D_{3}]=(1,0),&\deg(x_{4})=[D_{4}]=(0,1)\end{array}\right\}.\]

Our next objective is to homogenize the characters of \(M\) as described in ([3], Section 5.2), in order to determine the cohomology groups on which the Frobenius map acts. In this way we will provide a formula for the Frobenius map on the cohomology groups of curves in the Hirzebruch surface \(\mathscr{H}_{r}\). A Weil divisor \(D=\sum_{i=1}^{4}a_{i}D_{i}\) on \(\mathscr{H}_{r}\) yields the polyhedron

\[P_{D}=\{m\in M_{\mathbb{R}}=\mathbb{R}^{2}\mid\langle m,u_{i}\rangle\geq-a_{i},\text{ for }1\leq i\leq 4\}.\]

The \(D\)-**homogenization** of \(\chi^{m}\) is defined to be the Laurent monomial

\[x^{\langle m,D\rangle}=\prod_{i=1}^{4}x_{i}^{\langle m,u_{i}\rangle+a_{i}}.\]

The cohomology group \(H^{0}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(D))\) is spanned by the characters coming from lattice points of \(P_{D}\), i.e.,

\[\begin{split}H^{0}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(D))&=\bigoplus_{m\in P_{D}\cap M}k\cdot\chi^{m}\\ &=\{\bigoplus k\cdot x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}x_{3}^{\alpha_{3}}x_{4}^{\alpha_{4}}\mid\sum_{i=1}^{4}\alpha_{i}\deg(x_{i})=(a,b),\,\alpha_{i}\geq 0\}\end{split} \tag{2.2}\]

which gives the degree \(\deg(D)=(a,b)\) part of the ring \(R\), denoted by \(R(a,b)\). Let \(X\) be an integral curve corresponding to \(D=a_{1}D_{1}+a_{2}D_{2}+a_{3}D_{3}+a_{4}D_{4}\sim aD_{3}+bD_{4}\), where \(a,b>0\), and let \(\mathcal{O}_{S}(a,b)\) be the invertible sheaf given by \(D\). More details can be found in ([3], Examples 6.1.16 and 6.3.23). Then by Remark 9,

\[H^{1}(X,\mathcal{O}_{X})\cong H^{2}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(-a,-b)).\]

However, by using ([3], Theorem 9.2.7), we get

\[H^{2}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(-a,-b))=\bigoplus_{m\in\operatorname{Relint}(P_{D})}k\cdot\chi^{-m}\]

where \(\operatorname{Relint}(P_{D})\) is the interior of \(P_{D}\) in \(\mathbb{R}^{2}\). Note that \(m\in\operatorname{Relint}(P_{D})\) if and only if \(\langle m,u_{i}\rangle>-a_{i}\), if and only if \(\langle m,u_{i}\rangle+a_{i}=b_{i}\geq 1\) for \(i=1,2,3,4\). Thus,

\[H^{1}(X,\mathcal{O}_{X})=\{\bigoplus k\cdot x_{1}^{-b_{1}}x_{2}^{-b_{2}}x_{3}^{-b_{3}}x_{4}^{-b_{4}}\mid\sum_{i=1}^{4}b_{i}\deg(x_{i})=(a,b),\ b_{i}\geq 1\}.\]

Let \(f\in R(a,b)\) be the polynomial defining the curve \(X\) ([3], Proposition 5.2.4 and [5], Proposition 1.12A).
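This last description is directly computable. Before turning to the Frobenius diagram, here is a small Python sketch, added for illustration, that enumerates the monomial basis of \(H^{1}(X,\mathcal{O}_{X})\) for given \(r\) and \((a,b)\); the exponent bound is an assumption that suffices for small examples. For \(r=0\) and \((a,b)=(3,3)\) it returns exactly the four exponent vectors of the monomials \(\rho_{1},\ldots,\rho_{4}\) appearing in the genus-4 example below.

```python
from itertools import product

def h1_basis(r, a, b, bound=20):
    """Exponent vectors (b1,b2,b3,b4), bi >= 1, with sum(bi * deg(xi)) == (a, b)."""
    deg = [(1, 0), (-r, 1), (1, 0), (0, 1)]   # deg(x1), deg(x2), deg(x3), deg(x4)
    out = []
    for bs in product(range(1, bound), repeat=4):
        d0 = sum(bi * di[0] for bi, di in zip(bs, deg))
        d1 = sum(bi * di[1] for bi, di in zip(bs, deg))
        if (d0, d1) == (a, b):
            out.append(bs)
    return out

print(h1_basis(0, 3, 3))   # -> [(1, 1, 2, 2), (1, 2, 2, 1), (2, 1, 1, 2), (2, 2, 1, 1)]
```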
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathscr{H}_{r}}(-a,-b)@>{\times f}>{}>\mathcal{O}_{\mathscr{H}_{r}}@>{}>{}>\mathcal{O}_{X}@>{}>{}>0\\ 0@>{}>{}>\mathcal{O}_{\mathscr{H}_{r}}(-pa,-pb)@>{\times f^{p}}>{}>\mathcal{O}_{\mathscr{H}_{r}}@>{}>{}>\mathcal{O}_{X^{p}}@>{}>{}>0\\ 0@>{}>{}>\mathcal{O}_{\mathscr{H}_{r}}(-a,-b)@>{\times f}>{}>\mathcal{O}_{\mathscr{H}_{r}}@>{}>{}>\mathcal{O}_{X}@>{}>{}>0\\ \end{CD}\]

Diagram A

Here, \(X^{p}\) is the subscheme of \(\mathscr{H}_{r}\) defined by \(f^{p}=0\), and \(F_{X},\ F_{\mathscr{H}_{r}}\) are the Frobenius morphisms of \(X\) and \(\mathscr{H}_{r}\), respectively. On the other hand, \(X\) is a closed subscheme of \(X^{p}\) and we have the quotient map \(q:\mathcal{O}_{X^{p}}\longrightarrow\mathcal{O}_{X}\). By Diagram A, we compute the Frobenius morphism \(F_{X}\) of \(X\) as

\[F_{X}=qF_{\mathscr{H}_{r}}.\]

Let \(F_{X}^{*}\) and \(F_{\mathscr{H}_{r}}^{*}\) be the Frobenius maps

\[F_{X}^{*}:H^{1}(X,\mathcal{O}_{X})\longrightarrow H^{1}(X,\mathcal{O}_{X})\]

and

\[F_{\mathscr{H}_{r}}^{*}:H^{2}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(-a,-b))\longrightarrow H^{2}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(-pa,-pb))\]

corresponding to the morphisms \(F_{X}\) and \(F_{\mathscr{H}_{r}}\), respectively. By using the long exact sequences corresponding to the horizontal short exact sequences in Diagram A, we compute the Frobenius map \(F_{X}^{*}\) of \(X\) as

\[F_{X}^{*}=f^{p-1}F_{\mathscr{H}_{r}}^{*}\ \text{on}\ H^{2}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(-a,-b))\]

under the identification \(H^{1}(X,\mathcal{O}_{X})=H^{2}(\mathscr{H}_{r},\mathcal{O}_{\mathscr{H}_{r}}(-a,-b))\). After calculating the explicit action of the Frobenius morphism, one can calculate the \(p\)-rank and the \(a\)-number of the curve \(X\) by using Definition 2. To be more concrete, we consider a smooth curve \(X\) over the algebraically closed field \(k\) of characteristic \(p=2\) which is the complete intersection in \(\mathbf{P}_{k}^{3}\) (a more general family can be found in [8, Section 3.1, Prop. 3.1.4]) given by the surfaces

\[S_{1}:xw-yz=0\]
\[S_{2}:y^{3}+z^{3}+w^{3}+\lambda x^{3}=0,\quad\text{where}\ \lambda\neq\lambda^{2}.\]

Note that the curve \(X\) is the smooth model of the curve \(X^{\prime}\) in Example 6. The surface \(S_{1}\) is isomorphic to \(\mathscr{H}_{0}\), and the curve \(X\subseteq S_{1}\) corresponds to the equation \(f=x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})+x_{3}^{3}(x_{4}^{3}+x_{2}^{3})=0\).
As \(D_{1}\sim D_{3}\) and \(D_{2}\sim-rD_{3}+D_{4}=D_{4}\) (\(r=0\)), we compute the degree of \(f\) as \[\deg(f)=3D_{3}+3D_{4}.\] Therefore, we have that \[H^{1}(X,\mathcal{O}_{X})\cong H^{2}(\mathscr{H}_{0},\mathcal{O}_{\mathscr{H}_{0}}(-3,-3))\] and \[\beta=\{\rho_{1}=\frac{1}{x_{1}x_{3}^{2}x_{2}x_{4}^{2}},\ \rho_{2}=\frac{1}{x_{1}x_{3}^{2}x_{2}^{2}x_{4}},\ \rho_{3}=\frac{1}{x_{1}^{2}x_{3}x_{2}x_{4}^{2}},\ \rho_{4}=\frac{1}{x_{1}^{2}x_{3}x_{2}^{2}x_{4}}\}\] is a basis for \(H^{1}(X,\mathcal{O}_{X}).\) The action of \(F_{X}^{*}\) on \(H^{1}(X,\mathcal{O}_{X})\) is as follows (terms containing a variable with non-negative exponent vanish in cohomology): \[F_{X}^{*}(\rho_{1})=(x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})+x_{3}^{3}(x_{4}^{3}+x_{2}^{3}))(\rho_{1})^{2}=\frac{x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})}{x_{1}^{2}x_{3}^{4}x_{2}^{2}x_{4}^{4}}+\frac{x_{3}^{3}(x_{4}^{3}+x_{2}^{3})}{x_{1}^{2}x_{3}^{4}x_{2}^{2}x_{4}^{4}}=\frac{1}{x_{1}^{2}x_{3}x_{2}^{2}x_{4}}=\rho_{4},\] \[F_{X}^{*}(\rho_{2})=(x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})+x_{3}^{3}(x_{4}^{3}+x_{2}^{3}))(\rho_{2})^{2}=\frac{x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})}{x_{1}^{2}x_{3}^{4}x_{2}^{4}x_{4}^{2}}+\frac{x_{3}^{3}(x_{4}^{3}+x_{2}^{3})}{x_{1}^{2}x_{3}^{4}x_{2}^{4}x_{4}^{2}}=\frac{1}{x_{1}^{2}x_{3}x_{2}x_{4}^{2}}=\rho_{3},\] \[F_{X}^{*}(\rho_{3})=(x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})+x_{3}^{3}(x_{4}^{3}+x_{2}^{3}))(\rho_{3})^{2}=\frac{x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})}{x_{1}^{4}x_{3}^{2}x_{2}^{2}x_{4}^{4}}+\frac{x_{3}^{3}(x_{4}^{3}+x_{2}^{3})}{x_{1}^{4}x_{3}^{2}x_{2}^{2}x_{4}^{4}}=\frac{1}{x_{1}x_{3}^{2}x_{2}^{2}x_{4}}=\rho_{2},\] \[F_{X}^{*}(\rho_{4})=(x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})+x_{3}^{3}(x_{4}^{3}+x_{2}^{3}))(\rho_{4})^{2}=\frac{x_{1}^{3}(x_{4}^{3}+\lambda x_{2}^{3})}{x_{1}^{4}x_{3}^{2}x_{2}^{4}x_{4}^{2}}+\frac{x_{3}^{3}(x_{4}^{3}+x_{2}^{3})}{x_{1}^{4}x_{3}^{2}x_{2}^{4}x_{4}^{2}}=\frac{\lambda}{x_{1}x_{3}^{2}x_{2}x_{4}^{2}}=\lambda\rho_{1}.\] Thus, \(F_{X}^{*}\) is a bijection on \(H^{1}(X,\mathcal{O}_{X})\) and we see that \(X\) is an ordinary curve. As a result, \(a(X)=0\) and \(\sigma(X)=4.\) ## 3. A family of curves We will provide two examples in this section: The first one is a family of smooth complete intersection curves in \(\mathbf{P}^{n}\). We will obtain a lower bound on the \(a\)-number of the curves in this family by using the action of Frobenius on cohomology. The second example is a family of singular curves. We determine the \(p\)-rank and the \(a\)-number of the smooth model of any given curve in the family in terms of the corresponding invariants of the given curve. We will use the following example [6, Section 2.2] of smooth integral complete intersection curves: **Example 11** (Generalized Fermat Curve).: Let \(X\) be the curve defined as follows: \[C^{m}(\lambda_{0},\lambda_{1},...,\lambda_{n-2}):=\left\{\begin{array}{c}\lambda_{0}x_{0}^{m}+x_{1}^{m}+x_{2}^{m}=0\\ \lambda_{1}x_{0}^{m}+x_{1}^{m}+x_{3}^{m}=0\\ \vdots\\ \lambda_{n-2}x_{0}^{m}+x_{1}^{m}+x_{n}^{m}=0\end{array}\right\}\subset\mathbf{P}^{n}\] where \(\lambda_{0},\lambda_{1},...,\lambda_{n-2}\) are pairwise different elements of the field \(k\) with \(\lambda_{i}\neq 0\) for \(i=0,1,...,n-2\). We set \(f_{i}=\lambda_{i}x_{0}^{m}+x_{1}^{m}+x_{i+2}^{m}\) for \(i=0,1,...,n-2\). In [15, Theorem 2] we have described the elements of \(H^{1}(X,\mathcal{O}_{X})\) as follows: \[\{\alpha\in H^{n}(\mathbf{P}^{n},\mathcal{O}_{\mathbf{P}^{n}}(-(n-1)m)\mid\alpha f_{i}=0\quad\text{in}\ \ H^{n}(\mathbf{P}^{n},\mathcal{O}_{\mathbf{P}^{n}}(-(n-2)m))\}\] for \(i=0,1,\ldots,n-2\). 
Recall that \[H^{n}(\mathbf{P}^{n},\mathcal{O}_{\mathbf{P}^{n}}(-s))=\{\bigoplus k\cdot\frac{1}{x_{0}^{\alpha_{0}}x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}}}\mid\sum_{i=0}^{n}\alpha_{i}=s,\ \alpha_{i}\geq 1\}\] for \(s\geq n-2\) [5, Chapter III, Thm. 5.1]. We define sets \(S_{i}\) and \(S\) as \[S_{i}=\{(\alpha_{0},\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{n+1}\mid x_{0}^{-\alpha_{0}}x_{1}^{-\alpha_{1}}\cdots x_{n}^{-\alpha_{n}}f_{i}=0,\ \sum_{j=0}^{n}\alpha_{j}=(n-1)m\text{ and }\alpha_{j}\geq 1\}\] for \(i=0,1,\ldots,n-2\) and \(S=\bigcap\limits_{i=0}^{n-2}S_{i}\). We see that \[x_{0}^{-\alpha_{0}}x_{1}^{-\alpha_{1}}\cdots x_{n}^{-\alpha_{n}}f_{i}= \lambda_{i}\,x_{0}^{-\alpha_{0}+m}\,x_{1}^{-\alpha_{1}}\,\cdots\,x_{n}^{-\alpha_{n}}\,+\,x_{0}^{-\alpha_{0}}\,x_{1}^{-\alpha_{1}+m}\,\cdots\,x_{n}^{-\alpha_{n}}\] \[+x_{0}^{-\alpha_{0}}\,\cdots\,x_{i+2}^{-\alpha_{i+2}+m}\,\cdots\,x_{n}^{-\alpha_{n}}\] \[=0\] if and only if \(-\alpha_{i+2}+m\geq 0\), \(-\alpha_{0}+m\geq 0\) and \(-\alpha_{1}+m\geq 0\) for \(i=0,1,\ldots,n-2\), if and only if \(\alpha_{i}\leq m\) for \(i=0,1,\ldots,n\). Therefore, we have \[S=\{(\alpha_{0},\alpha_{1},\cdots,\alpha_{n})\in\mathbb{N}^{n+1}\mid\sum_{j=0}^{n}\alpha_{j}=(n-1)m\text{ and }1\leq\alpha_{i}\leq m\text{ for }i=0,1,\ldots,n\}.\] Note that \(\text{Span}_{k}(\{x_{0}^{-\alpha_{0}}x_{1}^{-\alpha_{1}}\cdots x_{n}^{-\alpha_{n}}|(\alpha_{0},\cdots,\alpha_{n})\in S\})\subsetneq H^{1}(X,\mathcal{O}_{X})\). Let us compute the cardinality \(|S|\) of \(S\). After replacing each \(\alpha_{j}\) by \(\alpha_{j}-1\), we are looking for the non-negative integer solutions of the problem: \[\left\{\begin{array}{l}\sum_{j=0}^{n}\alpha_{j}=(n-1)m-(n+1)\\ \alpha_{j}\leq m-1\text{ for }j=0,\ldots,n\end{array}\right\}\] Let \(N(0)\) be the number of all non-negative integer solutions of \(\sum_{j=0}^{n}\alpha_{j}=(n-1)m-(n+1)\) and \(N(i)\) be the number of non-negative integer solutions of \(\sum_{j=0}^{n}\alpha_{j}=(n-1)m-(n+1)\) such that at least \(i\) of \(\alpha_{0},\ldots,\alpha_{n}\) are greater than or equal to \(m\). Then by the principle of inclusion-exclusion we find \(|S|=\sum_{i=0}^{n}(-1)^{i}N(i)\) where \[N(i)=\binom{n+1}{i}\operatorname{Card}\left\{\ (\alpha_{0},\ldots,\alpha_{n}):\sum_{j=0}^{n}\alpha_{j}=(n-1)m-(n+1)-im\ \right\}\] for \(i=0,\ldots,n\). Hence \(N(i)=\binom{n+1}{i}\binom{(n-i-1)m-1}{n}\) for \(i=0,\ldots,n\). We now assume that \(\operatorname{char}(k)=2\) and that \(m\geq 3\) is an odd integer, and we will compute the Frobenius map \(F\) on the set \(\{x_{0}^{-\alpha_{0}}x_{1}^{-\alpha_{1}}\cdots x_{n}^{-\alpha_{n}}|(\alpha_{0},\ldots,\alpha_{n})\in S\}\). For \((\alpha_{0},\ldots,\alpha_{n})\in S\), \[F(x_{0}^{-\alpha_{0}}\cdots x_{n}^{-\alpha_{n}}) =f_{0}\cdots f_{n-2}x_{0}^{-2\alpha_{0}}\cdots x_{n}^{-2\alpha_{n}}\] \[=\sum_{\begin{subarray}{c}\rho\in\operatorname{Sym}(\{0,\ldots,n\})\\ \rho(2)<\cdots<\rho(n)\\ \rho(0)<\rho(1)\\ \rho=(\rho(0)\rho(1))(\rho(2)\cdots\rho(n))\end{subarray}}h_{\rho}x_{\rho(0)}^{-2\alpha_{\rho(0)}}x_{\rho(1)}^{-2\alpha_{\rho(1)}}x_{\rho(2)}^{-2\alpha_{\rho(2)}+m}\cdots x_{\rho(n)}^{-2\alpha_{\rho(n)}+m} \tag{3.1}\] in \(H^{1}(X,\mathcal{O}_{X})\) by using [15, Proposition 14], where \(h_{\rho}=h_{\rho}(\lambda_{0},\ldots,\lambda_{n-2})\neq 0\) since \[h_{\rho}=\begin{cases}\lambda_{i}&\text{ for some }i\\ \lambda_{i}+\lambda_{j}&\text{ for some }i,j\text{ with }i\neq j\\ 1&\text{ otherwise.}\end{cases}\] 
We observe that \[F(x_{0}^{-\alpha_{0}}\cdots x_{n}^{-\alpha_{n}})=0\] if and only if, in each summand of the sum (3.1), at least one of the exponents satisfies \(-2\alpha_{\rho(i)}+m\geq 0\) for some \(i=2,\ldots,n\), if and only if, in each summand of the sum (3.1), at least one \(\alpha_{\rho(i)}\) satisfies \(\alpha_{\rho(i)}\leq\frac{m-1}{2}\) (since \(m\) is odd) for some \(i=2,\ldots,n\). This is the case when at least three of the \(\alpha_{i}\) are less than or equal to \(\frac{m-1}{2}\). Let \(T\) be the number of positive integer solutions of this problem, i.e., solutions of the following: \[\left\{\begin{array}{l}\sum_{j=0}^{n}\alpha_{j}=(n-1)m\\ 1\leq\alpha_{j}\leq m\text{ for }j=0,\ldots,n\\ 1\leq\alpha_{j_{1}},\ldots,\alpha_{j_{s}}\leq(m-1)/2\text{ for }s\geq 3\end{array}\right\}\] Then \(T=|S|-\binom{n+1}{n-1}T(n-1)+\binom{n+1}{n}T(n)-\binom{n+1}{n+1}T(n+1)\) where \(T(i)\) is the number of non-negative integer solutions of \(\sum_{j=0}^{n}\alpha_{j}=(n-1)m-(n+1)\) such that at least \(i\) of \(\alpha_{0},\ldots,\alpha_{n}\) are greater than or equal to \((m-1)/2\), for \(i\geq n-1\). Therefore \[T(i)=\begin{pmatrix}n+1\\ i\end{pmatrix}\operatorname{Card}\left\{\ (\alpha_{0},\ldots,\alpha_{n}):\sum_{j=0}^{n}\alpha_{j}=(n-1)m-(n+1)-i(m-1)/2\ \right\}.\] Hence \(T(i)=\binom{n+1}{i}\binom{(n-1)m-i(m-1)/2-1}{n}\) for \(i\geq n-1\). As a result we have the following inequality: \[a(X)=\operatorname{Rank}(\operatorname{Ker}(F))\geq T.\] Now we will work on singular generalized Fermat curves. **Example 12** (Singular Generalized Fermat Curve).: Let \(X^{\prime}\) be the singular curve \(C^{m}(1,1,\lambda_{2},...,\lambda_{n-2})\) where \(\lambda_{i}\neq 0,1\) for \(i=2,\ldots,n-2\), and let \(X\) be the smooth model of \(X^{\prime}\). Let \(p\) be the characteristic of the field \(k\) with \((p,m)=1\). In this example we will observe relations between the \(p\)-ranks \(\sigma(X),\sigma(X^{\prime})\) and between the \(a\)-numbers \(a(X),a(X^{\prime})\). Let \(\gamma\) be the principal \(2m\)-th root of unity and \(\zeta\) be the principal \(m\)-th root of unity. Set \(\mu_{j}=\gamma\zeta^{j}\) for \(j=1,\ldots,m\). Recall that \(P\) is a singular point of \(X^{\prime}\) if and only if \(\operatorname{Rank}(\frac{\partial f_{i}}{\partial x_{j}}(P))<n-1\). This is the case when \[P=[1:\mu_{i}:0:0:(\lambda_{2}-1)^{1/m}\mu_{\lambda(2)}:\cdots:(\lambda_{n-2}-1)^{1/m}\mu_{\lambda(n-2)}]\] where \(i=1,\ldots,m\) and \(\lambda(j)=1,\ldots,m\) for \(j=2,\ldots,n-2\). Let \(\widehat{\mathcal{O}}_{P}\) be the completion of the local ring \(\mathcal{O}_{P}\). In the completed ring \(\widehat{\mathcal{O}}_{P}\), one may write the polynomials \[\left\{\begin{array}{c}g_{0}=1+y_{1}^{m}+y_{2}^{m}\\ g_{1}=1+y_{1}^{m}+y_{3}^{m}\\ g_{2}=\lambda_{2}+y_{1}^{m}+y_{4}^{m}\\ \vdots\\ g_{n-2}=\lambda_{n-2}+y_{1}^{m}+y_{n}^{m}\end{array}\right\}\] as \[\left\{\begin{array}{c}g_{0}=(1-\mu_{1}y_{1})h_{1}+y_{2}^{m}\\ g_{1}=(1-\mu_{1}y_{1})h_{1}+y_{3}^{m}\\ g_{2}=\lambda_{2}+(y_{4}-\mu_{1}y_{1})h_{2}\\ \vdots\\ g_{n-2}=\lambda_{n-2}+(y_{n}-\mu_{1}y_{1})h_{n-2}\end{array}\right\}\] where \(g_{i}\) is the dehomogenization of \(f_{i}\) for \(i=0,\ldots,n-2\) and \(h_{j}(P)\neq 0\) for \(j=1,\ldots,n-2\). 
Therefore, we realize the isomorphism \[\widehat{\mathcal{O}}_{P}\cong k[[x,y]]/(x^{m}-y^{m}).\] As the polynomial \(x^{m}-y^{m}\) is a product of \(m\) distinct linear factors (recall \((p,m)=1\)), each singularity at \(P\) contributes exactly \(m-1\) factors of \(\mathbb{G}_{m}\) to the linear algebraic group \(G\) attached to the short exact sequence \[0\longrightarrow G\longrightarrow J_{X^{\prime}}\longrightarrow J_{X}\longrightarrow 0.\] Therefore, one has \(G=\mathbb{G}_{m}^{(n-2)^{m}(m-1)}\) ([12, Chapter V, Section 17]). As a result, we obtain the relations \[\left\{\begin{array}{l}a(X)=a(X^{\prime})\\ \sigma(X)=\sigma(X^{\prime})-(n-2)^{m}(m-1)\end{array}\right\}.\]
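As a concluding numerical remark (our own addition, not part of the argument above), the inclusion-exclusion count \(|S|=\sum_{i=0}^{n}(-1)^{i}N(i)\) with \(N(i)=\binom{n+1}{i}\binom{(n-i-1)m-1}{n}\) from Example 11 can be checked by brute force for small parameters. A minimal sketch in Python; all function names are ours:

```python
from itertools import product
from math import comb

def S_bruteforce(n, m):
    # Enumerate (alpha_0, ..., alpha_n) with sum (n-1)m and 1 <= alpha_i <= m,
    # i.e. the set S described in Example 11.
    return sum(1 for alpha in product(range(1, m + 1), repeat=n + 1)
               if sum(alpha) == (n - 1) * m)

def S_formula(n, m):
    # Inclusion-exclusion: |S| = sum_i (-1)^i C(n+1, i) C((n-i-1)m - 1, n),
    # with the convention that the binomial vanishes for a negative top entry.
    total = 0
    for i in range(n + 1):
        a = (n - i - 1) * m - 1
        total += (-1) ** i * comb(n + 1, i) * (comb(a, n) if a >= 0 else 0)
    return total

for n in range(2, 5):
    for m in (3, 5, 7):          # odd m >= 3, as assumed in the text
        assert S_bruteforce(n, m) == S_formula(n, m), (n, m)
        print(f"n={n}, m={m}: |S| = {S_formula(n, m)}")
```

For instance, for \(n=2\), \(m=3\) both counts give \(|S|=1\), corresponding to the single tuple \((1,1,1)\).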
2306.17737
Proximal Langevin Sampling With Inexact Proximal Mapping
In order to solve tasks like uncertainty quantification or hypothesis tests in Bayesian imaging inverse problems, we often have to draw samples from the arising posterior distribution. For the usually log-concave but high-dimensional posteriors, Markov chain Monte Carlo methods based on time discretizations of Langevin diffusion are a popular tool. If the potential defining the distribution is non-smooth, these discretizations are usually of an implicit form, leading to Langevin sampling algorithms that require the evaluation of proximal operators. For some of the potentials relevant in imaging problems this is only possible approximately using an iterative scheme. We investigate the behaviour of a proximal Langevin algorithm in the presence of errors in the evaluation of proximal mappings. We generalize existing non-asymptotic and asymptotic convergence results of the exact algorithm to our inexact setting and quantify the bias between the target and the algorithm's stationary distribution due to the errors. We show that the additional bias stays bounded for bounded errors and converges to zero for decaying errors in a strongly convex setting. We apply the inexact algorithm to sample numerically from the posterior of typical imaging inverse problems in which we can only approximate the proximal operator by an iterative scheme and validate our theoretical convergence results.
Matthias J. Ehrhardt, Lorenz Kuger, Carola-Bibiane Schönlieb
2023-06-30T15:34:40Z
http://arxiv.org/abs/2306.17737v2
# Proximal Langevin Sampling With Inexact Proximal Mapping ###### Abstract In order to solve tasks like uncertainty quantification or hypothesis tests in Bayesian imaging inverse problems, we often have to draw samples from the arising posterior distribution. For the usually log-concave but high-dimensional posteriors, Markov chain Monte Carlo methods based on time discretizations of Langevin diffusion are a popular tool. If the potential defining the distribution is non-smooth, these discretizations are usually of an implicit form, leading to Langevin sampling algorithms that require the evaluation of proximal operators. For some of the potentials relevant in imaging problems this is only possible approximately using an iterative scheme. We investigate the behaviour of a proximal Langevin algorithm in the presence of errors in the evaluation of proximal mappings. We generalize existing non-asymptotic and asymptotic convergence results of the exact algorithm to our inexact setting and quantify the bias between the target and the algorithm's stationary distribution due to the errors. We show that the additional bias stays bounded for bounded errors and converges to zero for decaying errors in a strongly convex setting. We apply the inexact algorithm to sample numerically from the posterior of typical imaging inverse problems in which we can only approximate the proximal operator by an iterative scheme and validate our theoretical convergence results. ## 1 Introduction In imaging sciences, the problem of estimating an image from acquired data is often ill-posed or ill-conditioned, resulting in uncertainty about the true solution. The formulation of such inverse problems in a Bayesian framework provides a range of tools that allow us to describe possible solutions as a distribution, quantify uncertainty, and perform further tasks like hypothesis tests. We consider problems where the posterior distribution of a Bayesian imaging inverse problem has a log-concave but non-smooth density function. For the computation of the maximum a posteriori (MAP) point estimate, this would usually mean that a proximal optimization algorithm can be employed to efficiently compute a solution. While convex analysis provides solid convergence theory in this case, theory on the problem of sampling from the posterior is not yet as well developed. Markov chain Monte Carlo (MCMC) methods are useful for performing Bayesian inference in this context by allowing sampling from complex distributions. Compared to other classes of MCMC methods, sampling schemes based on Langevin diffusion processes have proved efficient in inverse problems and imaging applications [1, 10, 33]. Langevin diffusion based MCMC sampling algorithms are all discretisations of the same stochastic differential equation (SDE) which drives Langevin diffusion processes and whose invariant distribution is the target distribution from which we want to sample. In the discrete setting, there is usually a bias between the law of the samples and the target measure; hence many works have been concerned with the characterization of the invariant distribution of the Markov chain and the bias [8, 14, 15, 33]. Further questions typically concern convergence speed of the Markov chain and possible correction steps to overcome the bias and draw unbiased samples from the target. We are interested in a Langevin sampling algorithm that is based on a forward-backward discretization of the potential term in the underlying SDE. 
The algorithm has been analyzed under the name proximal stochastic gradient Langevin algorithm (PSGLA) in [6] and as stochastic proximal gradient Langevin dynamics in [9]. It can be viewed as a sampling equivalent of proximal gradient descent algorithms (arising from a forward-backward discretization of gradient flows), with the difference that a stochastic term is added to the argument of the proximal mapping in every iteration step. In many imaging inverse problems, the proximal operator of some functional of interest, e.g. total variation (TV) [21, 34] or total generalized variation (TGV) functionals [19], has no closed form and has to be approximated by some iterative procedure. While in the optimization setting there have been works proving the convergence of algorithms under the assumption that proximal operators are evaluated only inexactly [5, 22, 23, 32], corresponding sampling algorithms lack such theory. We consider a generalization of PSGLA in which proximal points are evaluated only inexactly up to some accuracy level. Our analysis uses techniques from convex optimization by exploiting the close relationship between sampling tasks and optimization in the corresponding space of measures. The rest of the paper is organised as follows. Section 2 first introduces the problem and the relevant existing Langevin algorithms that are based on the evaluation of proximal operators. We then define the considered notion of inexactness in the proximal mappings and give our algorithm. The convergence theory is carried out in Section 3 by proving nonasymptotic and asymptotic convergence results. We provide numerical examples in Section 4. ### Related work The idea to use techniques from convex analysis to study Langevin sampling algorithms goes back to the seminal work [31], by which Langevin dynamics corresponds to the gradient flow of relative entropy with respect to the target in the space of probability measures endowed with the Wasserstein metric. By using a coupling argument, typical bounds on some distance measure of optimization iterates along gradient flows in Euclidean space can be translated to the related sampling algorithm [6, 9, 11]. In [14], the author explored this correspondence further and gave explanations for the unavoidable bias of Langevin sampling algorithms. Several works have proposed Langevin algorithms that involve the evaluation of proximal operators. In [13, 18], the authors approximate non-smooth potential terms by their smooth Moreau-Yosida regularization and apply the standard unadjusted Langevin algorithm (ULA) for smooth potentials to the regularized target. [12] overcomes the problem of non-smooth potentials by discretising the underlying SDE implicitly, leading to a sampling equivalent of proximal gradient descent. For a special case of the Moreau-Yosida smoothing parameter, the method in [18] also falls into this class. Generalizing this idea, the scheme in [6] allows splitting the potential into a smooth and a non-smooth term by constructing a forward-backward type sampling method. To the best of our knowledge, inexact evaluations of proximal operators in Langevin sampling algorithms have not been considered yet in the literature. However, several works have been concerned with cases where the potential is only known inexactly in some other way. [16] analysed the continuous-time dynamics of the underlying SDE and a specific first-order approximation to computationally costly gradients in ULA. 
In [8], the analysis of ULA was extended to versions with inexact potential gradients, where the estimators have bounded bias and variance. Stochastic but unbiased estimators of the gradient with bounded variance were considered in [6, 9] for ULA and a proximal Langevin algorithm, respectively. In a recent work, [2] proved convergence of the inexact continuous-time dynamics and ULA under the assumption of a bounded moment generating function of the gradient estimator for applications in score-based generative models. ### Contributions Using the notion of inexact evaluation of proximal operators that has been considered in [5, 22, 23, 32], we generalize PSGLA to allow inaccurate proximal points. We show that the convergence analysis of PSGLA carried out in [6] can be recovered in the inexact setting. In particular, we recover the same convergence rates in the exact case and quantify the additional bias between the algorithm's stationary distribution and the target due to errors in the inexact case. We show that the additional bias stays bounded for a bounded sequence of errors and, if the errors go to zero in a strongly convex setting, decays to zero during the iteration as fast as the mean of the errors. Numerically, we test the algorithm by sampling from the posterior distribution of typical imaging inverse problems. In cases where the proximal operator has a closed form this allows us to compare the algorithm's exact and inexact version, and in cases where proximal points can only be approximated, it showcases how proximal Langevin sampling can efficiently be applied to such imaging problems. ## 2 A Langevin sampling algorithm with inexact proximal points ### Problem formulation We are interested in ill-posed imaging inverse problems that are modelled in the framework of Bayesian statistics. This typically assumes that the cause \(x\) in state space \(\mathcal{X}=\mathbb{R}^{d}\) and the measured data \(y\in\mathcal{Y}=\mathbb{R}^{m}\) come from a joint distribution with density \(p(x,y)\) with respect to the Lebesgue measure. The acquisition process of \(y\) is described by the likelihood \(p(y|x)\). This is combined with a prior distribution with density \(p(x)\) representing structural knowledge or assumptions about the true solution. The central subject of analysis is then the posterior distribution \(\mu^{*}\). By Bayes' law its density, which we denote in a slight abuse of notation by \(\mu^{*}(x):=p(x|y)\), is given by \[\mu^{*}(x)=\frac{p(y|x)p(x)}{Z}=:\frac{1}{Z}\exp(-F(x)-G(x)). \tag{2.1}\] Here \(Z:=\int p(y|\tilde{x})p(\tilde{x})\,\mathrm{d}\tilde{x}\) is the model evidence, a constant that is typically hard to compute and hence unknown in high dimensions. \(F\) and \(G\) are functions whose sum \(F(x)+G(x)=-\log p(y|x)-\log p(x)\) is often called the potential and which are assumed to satisfy the following criteria. **Assumption 1**.: \(F:\mathcal{X}\to\mathbb{R}\) _is \(\lambda_{F}\)-strongly convex, \(\lambda_{F}\geq 0\) (which in the case \(\lambda_{F}=0\) means only convex) and differentiable, where \(\nabla F\) is \(L\)-Lipschitz continuous. \(G:\mathcal{X}\to\overline{\mathbb{R}}:=\mathbb{R}\cup\{\infty\}\) is proper, convex and lower semicontinuous (lsc). Further, its convex conjugate, denoted by \(G^{*}\), is \(\lambda_{G^{*}}\)-strongly convex, \(\lambda_{G^{*}}\geq 0\). 
We allow \(\lambda_{G^{*}}=0\), in which case this is no further assumption so that \(G\) is not necessarily differentiable and can take value \(\infty\)._ **Assumption 2**.: _The function \(V=F+G\) satisfies \(\int\exp(-V(x))\,\mathrm{d}x<\infty\) and \(\exp(-V)\in S^{1,1}_{loc}(\mathcal{X})\)._ Due to the first part of Assumption 2, \(\mu^{*}\) is a valid probability distribution on \(\mathcal{X}\). Since its density is log-concave by Assumption 1, all its moments are finite (see [20], Lemma 2.2.1); in particular, \(\mu^{*}\) has finite second moment, denoted \(\mu^{*}\in\mathcal{P}_{2}(\mathcal{X})\). In [6], it is shown that \(F\), \(G\) and \(V\) are \(\mu^{*}\)-a.e. differentiable under Assumptions 1 and 2 and that the target density satisfies a first-order optimality condition on which the analysis of their algorithm is based. We further require \(\nabla G\) to be an \(L^{2}\) function w.r.t. \(\mu^{*}\): **Assumption 3**.: _It holds \(\int_{\mathrm{int}(\mathrm{dom}(G))}\lVert\nabla G(x)\rVert^{2}\,\mathrm{d}\mu^{*}(x)<\infty\)._ Due to Assumption 1 we only consider cases where the prior and the likelihood are both log-concave, which covers many relevant noise models and typical imaging priors. In the log-concave case the computation of the maximum a posteriori (MAP) point estimate \[x_{\mathrm{MAP}}:=\operatorname*{arg\,max}_{x}\mu^{*}(x)=\operatorname*{arg\,min}_{x}\left\{F(x)+G(x)\right\}\] can be carried out efficiently and with well-developed theory on convergence behaviour using convex optimization algorithms [17, 25]. However, for more advanced statistical tasks in which the point estimate \(x_{\mathrm{MAP}}\) cannot provide sufficient information about the posterior, it is often necessary to draw representative samples from the distribution \(\mu^{*}\). In imaging inverse problems with regularizers that enforce some kind of sparsity, while the log-likelihood might be smooth, the prior log-density is often not differentiable w.r.t. the Lebesgue measure, so that \(F(x)=-\log p(y|x)\) and \(G(x)=-\log p(x)\) is the logical choice. We mention here though that the terms in the posterior log-density can be split up differently into a smooth term \(F\) and a potentially non-smooth term \(G\) as long as \(F\) and \(G\) satisfy the assumptions above. ### Existing proximal Langevin sampling schemes Suppose we want to sample from a target distribution \(\mu^{*}\) with density proportional to \(\exp(-V)\) for a potential \(V:\mathbb{R}^{d}\to\mathbb{R}\) which for now is assumed to be differentiable. Langevin diffusion processes for this target are solutions of the Itô stochastic differential equation \[\mathrm{d}X_{t}=-\nabla V(X_{t})\,\mathrm{d}t+\sqrt{2}\,\mathrm{d}W_{t}, \tag{2.2}\] where \(W_{t}\) is a Wiener process in \(\mathbb{R}^{d}\). \(\mu^{*}\) is the unique invariant probability measure of the Markov semigroup associated with this SDE. Furthermore, if \(\mu^{*}\) satisfies some regularity assumption like the logarithmic Sobolev inequality, every solution of the SDE converges exponentially fast in time to the stationary target \(\mu^{*}\) [11, 33]. Since the SDE only has an explicit solution for specific cases of the function \(V\), sampling algorithms based on Langevin diffusion are usually time-discretized Markov chains that approximate continuous processes which solve the SDE. 
A straightforward Euler-Maruyama discretization of (2.2) leads to the following sampling scheme called unadjusted Langevin algorithm (ULA) given by \[X^{k+1}=X^{k}-\gamma\nabla V(X^{k})+\sqrt{2\gamma}\,\xi^{k},\quad\xi^{k}\sim\,\mathrm{N}(0,I_{d}), \tag{2.3}\] with step size \(\gamma>0\). In the case of strongly convex \(V\), the algorithm is well understood in terms of the non-asymptotic and asymptotic behaviour of the law \(\mu^{k}\) of the iterate \(X^{k}\) [8, 15, 33]. If the step size \(\gamma\) is small enough, the Wasserstein distance between \(\mu^{k}\) and \(\mu^{*}\) decreases exponentially with \(k\) up to a constant bias scaling like the square root of the step size \(\sqrt{\gamma}\) [8], and similar results hold for the measures' total variation distance. The bias is usually unavoidable in Langevin diffusion based sampling algorithms if there is no additional step correcting for it. It is attributed to the fact that the iteration is performing an unbalanced discretization in time of the gradient flow of relative entropy. The gradient step in \(V\) is a time-discrete step in the expected potential value while the addition of a normal random variable solves the gradient flow of negative entropy in continuous time, see [14]. For variable step sizes decaying to zero at the right rate, convergence in different metrics (Wasserstein distance, total variation distance or KL-divergence) can be ensured, with accuracy \(\varepsilon\) in Wasserstein distance reached after at most \(\mathcal{O}(d\varepsilon^{-2})\) iterations in the case of strongly convex \(V\). If \(V\) is only convex, weaker results hold with the number of iterations still depending at most polynomially on \(d\) [1, 8]. Particularly due to this moderate dependence on the dimension \(d\) in comparison to other MCMC algorithms, sampling schemes based on Langevin diffusion are very efficient in high-dimensional applications and have gained popularity in recent years. In this work, we are interested in the case when \(V=F+G\) contains a non-smooth term \(G\) which makes ULA not well-defined. Technically, a variant of ULA could still be used if \(G\) is supported everywhere and subdifferentiable by replacing \(\nabla V\) with an element of the subdifferential \(\partial V\). However, the theoretical convergence guarantees then do not hold anymore unless other restrictive conditions are satisfied, e.g. Lipschitz-continuity of the non-smooth potential [9]. In the literature, several other strategies to circumvent the problem of nonsmoothness have been proposed. One popular technique relies on slightly changing the target density by replacing the non-smooth part \(G\) of the potential with its regularized Moreau-Yosida envelope \(G^{\lambda}\) defined by \(G^{\lambda}(x):=\min_{y}\left\{G(y)+\frac{1}{2\lambda}\|x-y\|_{2}^{2}\right\}\). ULA can then be applied to the altered target measure \(\mu^{\lambda}\) with density \(\exp(-F(x)-G^{\lambda}(x))\). The resulting algorithm MYULA (Moreau-Yosida Unadjusted Langevin Algorithm) is given by \[X^{k+1}=\left(1-\frac{\gamma}{\lambda}\right)X^{k}-\gamma\nabla F(X^{k})+\frac{\gamma}{\lambda}\operatorname{prox}_{\lambda G}(X^{k})+\sqrt{2\gamma}\,\xi^{k},\quad\xi^{k}\sim\,\operatorname{N}(0,I_{d}).\] Since \(\nabla G^{\lambda}\) is Lipschitz continuous, the convergence theory of ULA can be applied to show that the distribution of generated samples is close to \(\mu^{\lambda}\). 
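To fix ideas, here is a minimal numerical sketch of the two schemes just stated for a one-dimensional toy model; this is our own illustration with arbitrary parameter values, not code from the paper. ULA is run for the smooth potential \(F(x)=x^{2}/2\) alone (it is not well-defined for non-smooth \(V\)), while MYULA handles \(V=F+G\) with \(G(x)=\theta|x|\), whose proximal operator is the closed-form soft-thresholding map.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, lam, theta, K = 0.01, 0.05, 1.0, 50_000

grad_F = lambda x: x  # F(x) = x^2 / 2
prox_G = lambda x, t: np.sign(x) * max(abs(x) - t * theta, 0.0)  # prox of t*theta*|.|

# ULA (2.3) for the smooth potential V = F, targeting N(0, 1).
x, ula = 0.0, np.empty(K)
for k in range(K):
    x += -gamma * grad_F(x) + np.sqrt(2 * gamma) * rng.standard_normal()
    ula[k] = x

# MYULA for V = F + G, i.e. ULA applied to F + G^lam (Moreau-Yosida envelope).
x, myula = 0.0, np.empty(K)
for k in range(K):
    x = ((1 - gamma / lam) * x - gamma * grad_F(x)
         + (gamma / lam) * prox_G(x, lam)
         + np.sqrt(2 * gamma) * rng.standard_normal())
    myula[k] = x

# Up to the step-size-dependent bias, the ULA variance should be close to 1.
print("ULA sample variance:", ula[K // 2:].var())
```

The printed ULA variance illustrates the step-size-dependent bias discussed above: it approaches the target variance \(1\) only as \(\gamma\to 0\).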
If \(\lambda\) is small enough, the total variation distance between \(\mu^{\lambda}\) and \(\mu^{*}\) is also small, allowing a convergence theory for MYULA [1, 13]. In this work, we consider a different approach to the problem of non-smooth potentials. Instead of applying an entirely explicit discretization to (2.2), the smooth part \(F\) of the potential is discretized explicitly and the non-smooth part \(G\) implicitly. Analogous to forward-backward splitting algorithms in optimization, we can design a sampling algorithm including a forward step, the addition of a stochastic term and a backward step: \[X^{k+1}=\operatorname{prox}_{\gamma G}(X^{k}-\gamma\nabla F(X^{k})+\sqrt{2\gamma}\,\xi^{k}),\quad\xi^{k}\sim\,\operatorname{N}(0,I_{d}).\] This algorithm is analyzed in [6] as proximal stochastic gradient Langevin algorithm (PSGLA) with the addition that the gradients \(\nabla F\) are replaced by unbiased estimators with bounded variance. In the present paper, we consider this algorithm in the setting where the evaluation of the proximal mapping is carried out inexactly. The gradients \(\nabla F\) are evaluated exactly, so that we call this scheme proximal gradient Langevin algorithm (PGLA) here. Note, however, that the whole analysis can be extended without major modification to the stochastic gradient case of [6]. The two schemes MYULA and PGLA are comparable in the sense that both algorithms evaluate the gradient of \(F\) and a proximal mapping of \(G\) in each iteration to overcome the problem of non-smooth \(G\). To draw the link to optimization algorithms, PGLA can be viewed as a sampling equivalent of forward-backward splitting where the diffusion step is carried out between the two discrete time steps along the potential fields of \(F\) and \(G\). MYULA, on the other hand, acts like the sampling counterpart of gradient descent on the partially Moreau-Yosida regularized objective \(F+G^{\lambda}\). The flexible smoothing parameter \(\lambda\) in MYULA allows one to trade off the algorithm's speed against the approximation of the true target. PGLA, however, has the advantage that constraints encoded in the potential \(G\) are not relaxed and the samples will always remain in the domain of \(G\). We mention here briefly that further discretizations are possible in the non-smooth case, e.g. by smoothing the whole potential term, or by discretizing with only a single backward step in the potential followed by the step along the gradient flow of negative entropy. The resulting methods have also been called proximal Langevin algorithms [12, 18] and are special instances of MYULA, PGLA or both of the latter. ### Inexact evaluation of the proximal mapping We want to generalize PGLA by allowing an inexact evaluation of the proximal mapping in every iteration. The distribution of the resulting Markov chain is, as typical for Langevin sampling algorithms, a biased approximation to the target density. We begin by defining what we mean by an inexact evaluation of the proximal mapping. This first requires us to introduce the \(\varepsilon\)-subdifferential, see [30], which is a generalization of the subdifferential of a function. Given \(G:\mathcal{X}\to\overline{\mathbb{R}}\) for some Hilbert space \(\mathcal{X}\) and an \(\varepsilon\geq 0\), it is defined by \[\partial_{\varepsilon}G(x)=\{p\in\mathcal{X}\,:\,G(y)\geq G(x)+\langle p,y-x\rangle-\varepsilon\ \ \forall y\in\mathcal{X}\}\,,\] where \(\langle\cdot,\cdot\rangle\) is the inner product in \(\mathcal{X}\). 
Note that when \(\varepsilon=0\), this reduces to the definition of the subdifferential \(\partial G(x)\). Further, it holds \(\partial G(x)\subseteq\partial_{\varepsilon_{1}}G(x)\subseteq\partial_{\varepsilon_{2}}G(x)\) for all \(0<\varepsilon_{1}<\varepsilon_{2}\). Using the \(\varepsilon\)-subdifferential, we now define inexact proximal points. **Definition 2.1** ([5]).: _For \(G:\mathcal{X}\to\mathbb{R}\), \(\gamma>0\) and \(\varepsilon\geq 0\), we say that \(x\in\mathcal{X}\) is an \(\varepsilon\)-approximation of the proximal point \(\operatorname{prox}_{\gamma G}(y)\) if \(x\) satisfies_ \[x\approx^{\varepsilon}\operatorname{prox}_{\gamma G}(y)\ \Leftrightarrow\ \frac{y-x}{\gamma}\in\partial_{\varepsilon}G(x).\] _Note that \(x\) is generally not unique for \(\varepsilon>0\)._ This definition has been used in [32] to analyze inexact proximal gradient descent and also as "type-2 approximation" in [5, 22, 23], where several types of inexact proximal points were used to analyze inexactness in accelerated forward-backward splitting methods and the primal dual hybrid gradient algorithm. In Section 3, we comment more on the interpretation of the definition used here, how it can be verified efficiently in numerical practice and how it relates to others that have been considered in the literature. ### The proposed inexact sampling scheme With the definition of inexact evaluation of proximal mappings, we are able to generalize the proximal Langevin algorithm considered in [6]. ``` Input:\(X^{0},K,\{\gamma_{k}\}_{k=0}^{K-1},\{\varepsilon_{k}\}_{k=0}^{K-1}\) for\(k=0,\ldots,K-1\)do Draw \(\xi^{k}\sim\operatorname{N}(0,I_{d})\); Compute \(X^{k+1}\approx^{\varepsilon_{k}}\operatorname{prox}_{\gamma_{k}G}\big{(}X^{k}-\gamma_{k}\nabla F(X^{k})+\sqrt{2\gamma_{k}}\,\xi^{k}\big{)}\); end for ``` **Algorithm 1**Inexact PGLA We allow both the error level \(\varepsilon_{k}\) and the step size \(\gamma_{k}\) to change over time. The convergence of the algorithm is analyzed in Section 3 for both fixed and flexible step size choices and errors. For ease of notation, we split the iteration into three intermediate steps of the form \[X^{k+1/3} =X^{k}-\gamma_{k}\nabla F(X^{k}),\] \[X^{k+2/3} =X^{k+1/3}+\sqrt{2\gamma_{k}}\,\xi^{k},\] \[X^{k+1} =S_{k}(X^{k+2/3})\approx^{\varepsilon_{k}}\operatorname{prox}_{\gamma_{k}G}\left(X^{k+2/3}\right),\] where \(S_{k}:\mathcal{X}\to\mathcal{X}\) is the operator that maps to the \(\varepsilon_{k}\)-inexact proximal point that our algorithm chooses, since inexact proximal points are in general non-unique. Denote \(\mu^{j}=\operatorname{Law}(X^{j}),j\in\mathbb{N}/3\) for the distributions of the samples at each step. Note that for \(\varepsilon_{k}=0\), we of course recover the iteration formula of the original PGLA algorithm. ## 3 Convergence Theory In this section, we give a short overview of the relation between the analysis of optimization and sampling algorithms. We then give auxiliary results on the type of inexact proximal points which we use in Algorithm 1 and comment on how to numerically ensure the inexact computation for a given accuracy level. This allows us to prove the main nonasymptotic convergence result Theorem 3.1 and two unrolled versions that lead to asymptotic rates in Theorems 3.2 and 3.3 for the cases of fixed or decaying step sizes and proximal mapping accuracy levels. 
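To make Algorithm 1 concrete, the following self-contained sketch (our own illustration, not code from the paper) runs inexact PGLA on a toy one-dimensional total-variation denoising posterior with \(F(x)=\|x-y\|^{2}/(2\sigma^{2})\) and \(G(x)=\theta\|Dx\|_{1}\), where \(D\) is the finite-difference operator. Since \(\operatorname{prox}_{\gamma G}\) has no closed form here, it is approximated by a fixed number of projected gradient steps on the dual problem (a Chambolle-type scheme), so the inner iteration count plays the role of the accuracy level \(\varepsilon_{k}\); Section 3 explains how the accuracy can instead be certified via the duality gap.

```python
import numpy as np

rng = np.random.default_rng(2)
d, theta, sigma = 100, 2.0, 0.1
x_true = np.repeat([0.0, 1.0, -0.5, 0.5], d // 4)   # piecewise constant signal
y_obs = x_true + sigma * rng.standard_normal(d)
D = np.diff(np.eye(d), axis=0)                      # finite differences, (d-1) x d

def grad_F(x):
    # F(x) = ||x - y_obs||^2 / (2 sigma^2); grad F is L-Lipschitz, L = 1/sigma^2.
    return (x - y_obs) / sigma**2

def inexact_prox_tv(v, gamma, n_inner):
    # Approximate prox_{gamma G}(v) for G = theta * ||D . ||_1 by projected
    # gradient on the dual problem; more inner steps means a smaller error.
    z = np.zeros(d - 1)
    for _ in range(n_inner):                        # step 1/4 since ||D D^T|| <= 4
        z = np.clip(z + 0.25 * (D @ (v - D.T @ z)), -gamma * theta, gamma * theta)
    return v - D.T @ z

gamma, K, n_inner = sigma**2, 2_000, 20             # gamma <= 1/L ensures (3.5)
x, samples = y_obs.copy(), np.empty((K, d))
for k in range(K):
    x_a = x - gamma * grad_F(x)                             # forward step on F
    x_b = x_a + np.sqrt(2 * gamma) * rng.standard_normal(d) # diffusion step
    x = inexact_prox_tv(x_b, gamma, n_inner)                # inexact backward step
    samples[k] = x

print("posterior mean (first 5 coords):", samples[K // 2:].mean(axis=0)[:5].round(3))
```

Increasing `n_inner` tightens the proximal accuracy, and by the theory of Section 3 the additional bias in the chain's stationary distribution shrinks accordingly.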
### Sampling as optimization in the Wasserstein space In several recent works, Langevin Monte Carlo algorithms were successfully analyzed using techniques from optimization theory [6, 9, 11, 14]. We take the same perspective here and motivate the formulation of the sampling task as an optimization problem. This then leads to a reformulation as a saddle point problem following the arguments in [6], which is the basis of our convergence analysis. In the following, let \(\mathcal{P}_{2}(\mathcal{X})\) be the set of probability measures on \(\mathcal{X}=\mathbb{R}^{d}\) with finite second moment. Let the Wasserstein \(p\)-metric be defined as usual for \(0<p<\infty\) by \[\mathcal{W}_{p}(\mu,\nu):=\left(\inf_{\pi}\mathbb{E}_{(X,Y)\sim\pi}\left[\|X-Y\|^{p}\right]\right)^{1/p},\] where the infimum is taken over all possible couplings \(\pi\) of the measures \(\mu\) and \(\nu\), i.e., all probability measures \(\pi\in\mathcal{P}_{2}(\mathcal{X}^{2})\) with marginal distributions \(\mu\) and \(\nu\). The sampling task then corresponds to the problem of approximating \(\mu^{*}\) in the metric space \((\mathcal{P}_{2}(\mathcal{X}),\mathcal{W}_{2})\). For any \(\mu\in\mathcal{P}_{2}(\mathcal{X})\), define the negative entropy \[\mathcal{H}(\mu)=\int\log(\mu(x))\,\mathrm{d}\mu(x)\] with \(\mathcal{H}(\mu):=\infty\) if \(\mu\) has no density w.r.t. the Lebesgue measure. Define the potential energy functionals for the functions \(F\) and \(G\) by \[\mathcal{E}_{F}(\mu)=\int F(x)\,\mathrm{d}\mu(x),\quad\mathcal{E}_{G}(\mu)=\int G(x)\,\mathrm{d}\mu(x).\] For \(\mu,\nu\in\mathcal{P}_{2}(\mathcal{X})\) with \(\mu\ll\nu\) (\(\mu\) absolutely continuous with respect to \(\nu\)) we further define the Kullback-Leibler divergence of \(\mu\) from \(\nu\) \[\mathrm{KL}(\mu,\nu)=\int\frac{\mathrm{d}\mu}{\mathrm{d}\nu}(x)\log\left(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}(x)\right)\,\mathrm{d}\nu(x),\] which is often also called relative entropy of \(\mu\) w.r.t. \(\nu\). It can easily be seen that with \(\mu^{*}\) defined as in (2.1) it holds \(\mathrm{KL}(\mu,\mu^{*})=\mathcal{H}(\mu)+\mathcal{E}_{F}(\mu)+\mathcal{E}_{G}(\mu)+\log Z\) for all \(\mu\in\mathcal{P}_{2}(\mathcal{X})\), the constant \(\log Z\) being independent of \(\mu\) [27]. Since the Kullback-Leibler divergence obeys \(\mathrm{KL}(\mu,\mu^{*})\geq 0\) and further \(\mathrm{KL}(\mu,\mu^{*})=0\) if and only if \(\mu=\mu^{*}\), the task of approximating \(\mu^{*}\) corresponds to minimizing the free energy functional \(\mathcal{H}+\mathcal{E}_{F}+\mathcal{E}_{G}\). Exploring this correspondence further provides intuition for why Langevin dynamics in the form (2.2) is a good choice when we want to sample from \(\mu^{*}\). In [31], it was shown that the gradient flow of \(\mathrm{KL}(\mu,\mu^{*})\) in \(\mathcal{P}_{2}(\mathcal{X})\) equipped with the Wasserstein 2-distance actually corresponds to the Fokker-Planck equation of the Langevin SDE (2.2). Hence by approximating Langevin diffusion processes, we sample approximately from solutions of a Fokker-Planck equation which we can hope to converge to \(\mu^{*}\) quickly. By exploiting this relationship theoretically, it is possible to derive convergence results in Wasserstein distance to \(\mu^{*}\) of diffusion processes driven by Langevin dynamics. 
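Numerically, the Wasserstein distance between two one-dimensional empirical measures with the same number of atoms is attained by the monotone (sorted-sample) coupling, which gives a cheap way of monitoring how close the law of a chain is to the target. A minimal sketch (our own illustration; the Gaussian target and all parameters are arbitrary):

```python
import numpy as np

def empirical_w2(xs, ys):
    # In 1-D, the optimal coupling of two empirical measures with equally many
    # atoms pairs sorted samples, so W_2^2 is the mean squared sorted gap.
    return np.sqrt(np.mean((np.sort(xs) - np.sort(ys)) ** 2))

rng = np.random.default_rng(3)
target = rng.standard_normal(10_000)       # exact samples from mu* = N(0, 1)

gamma, x = 0.05, 5.0                       # deliberately bad initialization
chain = np.empty(10_000)
for k in range(chain.size):                # ULA for V(x) = x^2 / 2
    x += -gamma * x + np.sqrt(2 * gamma) * rng.standard_normal()
    chain[k] = x

print("empirical W2 to target:", empirical_w2(chain[2_000:], target[2_000:]))
```

After discarding a burn-in, the reported distance stabilizes at a small positive value reflecting both Monte Carlo error and the step-size bias of the chain.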
The authors in [6] analyzed Algorithm 1 by deriving a first order optimality condition of the minimization problem \[\min_{\mu\in\mathcal{P}_{2}(\mathcal{X})}\mathrm{KL}(\mu,\mu^{*})=\min_{\mu\in\mathcal{P}_{2}(\mathcal{X})}\mathcal{H}(\mu)+\mathcal{E}_{F}(\mu)+\mathcal{E}_{G}(\mu).\] As in optimization problems in Euclidean space, this allows one to transform the optimization problem into a saddle point problem by regarding \(Y=\nabla G(X)\in L^{2}(\mu^{*};\mathcal{X})\) as having the optimal dual distribution \(\nu^{*}\). Here we denote \(L^{2}(\mu^{*};\mathcal{X})\) for the space of \(\mathcal{X}\)-valued measurable functions with finite second moment over the probability space \((\mathcal{X},\mathcal{B}(\mathcal{X}),\mu^{*})\), consistent with the notation of [27]. One can then view \(\nu(\psi):=\psi\#\mu^{*}\) for any \(\psi\in L^{2}(\mu^{*},\mathcal{X})\) as a dual variable of the problem with the optimal dual measure \(\nu^{*}=\nu(\psi^{*})=\nabla G\#\mu^{*}\), where \(\psi^{*}=\nabla G\). The joint target distribution \(\pi^{*}=(I,\nabla G)\#\mu^{*}\) is partially singular with its mass concentrated on the optimal dual value \(Y=\nabla G(X)\) for a given \(X\). In the following, let \(T_{\mu\to\nu}:\mathcal{X}\to\mathcal{X}\) be the optimal transport map from \(\mu\) to \(\nu\), i.e. the map for which the transport plan \((I,T_{\mu\to\nu})\#\mu\) realizes the infimum in the definition of \(\mathcal{W}_{2}(\mu,\nu)\), which is also called the Brenier map and exists uniquely by Brenier's theorem [29] when \(\mu\) is absolutely continuous with respect to the Lebesgue measure. Denoting \(\pi(\mu,\psi)=(T_{\mu^{*}\to\mu},\psi)\#\mu^{*}\), we can define a generalized Lagrangian function by \[\mathcal{L}(\mu,\psi)=\mathcal{H}(\mu)+\mathcal{E}_{F}(\mu)-\mathcal{E}_{G^{*}}(\nu(\psi))+\mathbb{E}_{(X,Y)\sim\pi(\mu,\psi)}[\langle X,Y\rangle],\] where \(\mathcal{E}_{G^{*}}(\nu)=\mathbb{E}_{Y\sim\nu}[G^{*}(Y)]\). As was shown in [6], \(\mathcal{L}\) inherits some of the typical properties of the Lagrangian function. It is possible to show the following strong duality result which justifies the interpretation as a saddle point problem over \(\mathcal{P}_{2}(\mathcal{X})\times L^{2}(\mu^{*},\mathcal{X})\). **Lemma 3.1** ([6]).: _Let Assumptions 1 to 3 hold. For any \(\mu\in\mathcal{P}_{2}(\mathcal{X})\) and any \(\psi\in L^{2}(\mu^{*},\mathcal{X})\), define the duality gap by_ \[\mathcal{D}(\mu,\psi):=\mathcal{L}(\mu,\psi^{*})-\mathcal{L}(\mu^{*},\psi).\] _Then for every \(\mu\in\mathcal{P}_{2}(\mathcal{X})\), \(\psi\in L^{2}(\mu^{*},\mathcal{X})\) it holds \(\mathcal{D}(\mu,\psi)\geq 0\) and \(\mathcal{L}(\mu,\psi)\leq\mathrm{KL}(\mu,\mu^{*})\). \((\mu^{*},\psi^{*})\) is a saddle point of \(\mathcal{L}\) with value 0 in the sense that for all \(\mu\in\mathcal{P}_{2}(\mathcal{X})\), \(\psi\in L^{2}(\mu^{*},\mathcal{X})\) it holds_ \[\mathcal{L}(\mu^{*},\psi)\leq 0=\mathcal{L}(\mu^{*},\psi^{*})\leq\mathcal{L}(\mu,\psi^{*}),\] _and further \(\mathcal{L}(\mu^{*},\psi)=0\) if and only if \(\psi=\psi^{*}\) holds \(\mu^{*}\)-a.e._ The introduction of the dual variable allows the analysis of \(\mathcal{L}\) along the iterates in Algorithm 1. 
In the discrete setting of Algorithm 1 we can analogously define dual samples \[Y^{k+1}:=\frac{X^{k+2/3}-X^{k+1}}{\gamma_{k}}=\frac{1}{\gamma_{k}}(I-S_{k})(X^{k+2/3}).\] Denoting \(\psi^{k+1}:=(I-S_{k})/\gamma_{k}\circ T_{\mu^{*}\to\mu^{k+2/3}}\), where we assume \(S_{k}\in L^{2}(\mu^{*},\mathcal{X})\), we are interested in the distribution of the dual variable \(\nu^{k+1}:=\mathrm{Law}(Y^{k+1})=\psi^{k+1}\#\mu^{*}\). ### Inexact evaluation of proximal mappings In the following, let \(G:\mathcal{X}\to\overline{\mathbb{R}}\). We start by stating some technical results about \(\varepsilon\)-subdifferentials and our notion of inexact proximal mappings from Definition 2.1. **Lemma 3.2** ([30]).: _Let \(\varepsilon\geq 0\). If \(G\) is convex, proper and lsc, then_ \[p\in\partial_{\varepsilon}G(u)\,\Leftrightarrow\,u\in\partial_{\varepsilon}G^{*}(p).\] The next lemma allows us to characterize the \(\varepsilon\)-subdifferentials of a sum of a convex and a quadratic function. **Lemma 3.3** ([22]).: _Let \(\phi_{\gamma}(z):=\|y-z\|^{2}/(2\gamma)\) and \(G_{\gamma}(z)=G(z)+\phi_{\gamma}(z)\). Then for any \(\varepsilon\geq 0\) it holds_ \[\partial_{\varepsilon}G_{\gamma}(z) =\bigcup_{\varepsilon_{1},\varepsilon_{2}\geq 0\,:\,\varepsilon_{1}+\varepsilon_{2}=\varepsilon}\partial_{\varepsilon_{1}}G(z)+\partial_{\varepsilon_{2}}\phi_{\gamma}(z)\] \[=\bigcup_{\varepsilon_{1},\varepsilon_{2}\geq 0\,:\,\varepsilon_{1}+\varepsilon_{2}=\varepsilon}\partial_{\varepsilon_{1}}G(z)+\left\{\frac{z-y-r}{\gamma}\,:\,\|r\|\leq\sqrt{2\gamma\varepsilon_{2}}\right\}.\] Next, we prove an inequality resembling a typical property of subgradients of strongly convex functions. Note, however, that the additional parameter \(\theta\in[0,1)\) gives a slightly worse quadratic term due to the errors. **Lemma 3.4**.: _Let \(\varepsilon\geq 0\), \(G\) proper, lsc and \(\lambda\)-strongly convex, \(\lambda\geq 0\), \(u\in\mathcal{X}\), \(p\in\partial_{\varepsilon}G(u)\). Then for any \(\theta\in[0,1)\) and \(v\in\mathcal{X}\) it holds_ \[G(v)\geq G(u)+\langle p,v-u\rangle+\frac{\theta\lambda}{2}\|u-v\|^{2}-\frac{\varepsilon}{1-\theta}.\] Before proving Lemma 3.4, we remark the following. **Remark 3.1**.: _Note that the inequality in Lemma 3.4 is in general false when the respective factors \(\theta\) and \((1-\theta)^{-1}\) are omitted and replaced by one. As a counterexample consider the \(\lambda\)-strongly convex function \(G_{\lambda}(v)=\frac{\lambda}{2}\|v\|^{2}\) with \(\partial_{\varepsilon}G_{\lambda}(0)=\{p\,:\,\|p\|^{2}\leq 2\lambda\varepsilon\}\). 
Then it holds \(G_{\lambda}(0)-G_{\lambda}(v)+\langle p,v-0\rangle+\frac{\lambda}{2}\|0-v\|^{2}=\langle p,v\rangle\) which cannot be bounded for all \(v\in\mathcal{X}\) if \(p\neq 0\)._ _From the same example, it can also be seen that the inequality in Lemma 3.4 is indeed sharp in the following sense: When the inexact subgradient obeys \(\|p\|^{2}=2\lambda\varepsilon\) (so that \(p\in\partial_{\varepsilon}G_{\lambda}(0)\), but \(p\notin\partial_{\delta}G_{\lambda}(0)\) for any \(\delta<\varepsilon\)), then the inequality becomes an equality for any pair of values \(\theta\in[0,1)\) and \(v=\frac{1}{(1-\theta)\lambda}p\)._ _Note further that when \(\varepsilon=0\), the last term in the inequality of the lemma vanishes and by continuity the inequality without the last term then also holds for \(\theta=1\), which is a result frequently used when analysing subdifferentials of strongly convex functions [24]._ Proof.: Since \(G\) is \(\lambda\)-strongly convex, there exists a convex function \(g\) with \(G(v)=g(v)+\frac{\lambda}{2}\|v-u\|^{2}\). By the characterisation of Lemma 3.3, we have \[\partial_{\varepsilon}G(u)=\bigcup_{\varepsilon_{1},\varepsilon_{2}\geq 0\,:\,\varepsilon_{1}+\varepsilon_{2}=\varepsilon}\partial_{\varepsilon_{1}}g(u)+\left\{q\,:\,\|q\|^{2}\leq 2\lambda\varepsilon_{2}\right\}.\] Hence there exist \(\varepsilon_{1},\varepsilon_{2}\geq 0\), \(\varepsilon_{1}+\varepsilon_{2}=\varepsilon\) such that \(p=p_{1}+p_{2}\) for some \(p_{1}\in\partial_{\varepsilon_{1}}g(u)\) and \(p_{2}\) with \(\|p_{2}\|^{2}\leq 2\lambda\varepsilon_{2}\). We obtain that \[G(u)-G(v)+\langle p,v-u\rangle+\frac{\theta\lambda}{2}\|u-v\|^{2} =g(u)-g(v)+\langle p_{1}+p_{2},v-u\rangle-\frac{(1-\theta)\lambda}{2}\|u-v\|^{2}\] \[\leq\varepsilon_{1}+\langle p_{2},v-u\rangle-\frac{(1-\theta)\lambda}{2}\|u-v\|^{2}\] \[\leq\varepsilon_{1}+\frac{1}{2(1-\theta)\lambda}\|p_{2}\|^{2}\leq\varepsilon_{1}+\frac{\varepsilon_{2}}{1-\theta}\leq\frac{\varepsilon}{1-\theta}.\] Here we used the definition of the \(\varepsilon_{1}\)-subdifferential and the Cauchy-Schwarz inequality followed by the rescaled Young's inequality \(ab\leq\frac{a^{2}}{2(1-\theta)\lambda}+\frac{(1-\theta)\lambda b^{2}}{2}\). In the computation of the proximal point \(\text{prox}_{\gamma G}(y)\) we solve problems of the form \[\operatorname*{arg\,min}_{x}\left\{G(x)+\frac{1}{2\gamma}\|x-y\|^{2}\right\}=:\operatorname*{arg\,min}_{x}G_{\gamma}(x).\] The corresponding optimality condition is \[0\in\partial G_{\gamma}(x)\Leftrightarrow\frac{y-x}{\gamma}\in\partial G(x). \tag{3.1}\] Notions of inexact proximal points can be defined by relaxing these optimality conditions when the subdifferentials are replaced by \(\varepsilon\)-subdifferentials. Doing this on the right-hand side of (3.1) leads to Definition 2.1. When the subdifferential on the left-hand side in (3.1) is replaced, i.e. if the criterion reads \(0\in\partial_{\varepsilon}G_{\gamma}(x)\), then the condition is less strict. In [5], points \(x\) satisfying the latter condition are called "type-1 approximations". 
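The type-2 condition of Definition 2.1 can be probed directly from the defining inequality of the \(\varepsilon\)-subdifferential. The following sketch (our own illustration) does so on a grid for the scalar case \(G=|\cdot|\), whose exact proximal map is soft-thresholding: it reports the smallest \(\varepsilon\) (estimated over the grid, hence only a necessary-condition check) for which a perturbed proximal point still qualifies as an \(\varepsilon\)-approximation. The rigorous certificate via the duality gap is described below.

```python
import numpy as np

def eps_needed(x, y, gamma, G, grid):
    # Smallest eps (over the grid) with (y - x)/gamma in d_eps G(x), i.e. with
    # G(z) >= G(x) + (y - x)/gamma * (z - x) - eps for all test points z.
    p = (y - x) / gamma
    return max(0.0, float(np.max(G(x) + p * (grid - x) - G(grid))))

G = np.abs                                        # G(x) = |x|
gamma, y = 0.5, 2.0
x_exact = np.sign(y) * max(abs(y) - gamma, 0.0)   # soft-thresholding, here 1.5
grid = np.linspace(-10.0, 10.0, 10_001)

for delta in (0.0, 0.05, 0.2):
    x = x_exact + delta                           # perturbed proximal point
    print(f"delta={delta}: type-2 condition holds for eps >= "
          f"{eps_needed(x, y, gamma, G, grid):.4f}")
```

For `delta=0.0` the required \(\varepsilon\) is zero, recovering the exact optimality condition; it grows with the size of the perturbation.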
By Lemma 3.3, if \(0\in\partial_{\varepsilon}G_{\gamma}(x)\), there exist \(\varepsilon_{1},\varepsilon_{2}\geq 0\), \(\varepsilon_{1}+\varepsilon_{2}=\varepsilon\) and \(r\) with \(\|r\|\leq\sqrt{2\gamma\varepsilon_{2}}\) such that \[\frac{y-x-r}{\gamma}\in\partial_{\varepsilon_{1}}G(x).\] Hence, as is pointed out also in [5], a type-1 approximation can be seen as a type-2 approximation of accuracy \(\varepsilon_{1}\) to the proximal mapping evaluated at the erroneous input \(y-r\). In particular, a type-2 approximation corresponds to \(r=0\) and is therefore always a type-1 approximation. We only consider type-2 approximations here. This is more restrictive, but the type-2 condition can be ensured numerically for many common problems by keeping track of the duality gap during the computation of the proximal mapping. Assume without loss of generality \(G(x)=W(Bx)\) for some proper, convex and \(\mathrm{lsc}\ W:\mathcal{Y}\to\overline{\mathbb{R}}\) and some bounded linear operator \(B:\mathcal{X}\to\mathcal{Y}\). Then the computation of \(\mathrm{prox}_{\gamma G}(y)\) requires solving \[\min_{x}G_{\gamma}(x)=\min_{x}\left\{W(Bx)+\frac{1}{2\gamma}\|x-y\|_{2}^{2}\right\}. \tag{3.2}\] If there exists \(x_{0}\) such that \(W\) is continuous at \(Bx_{0}\), then by strong duality it holds [24] \[\min_{x}G_{\gamma}(x)=-\min_{z}\left\{\frac{\gamma}{2}\|B^{*}z\|_{2}^{2}-\langle B^{*}z,y\rangle+W^{*}(z)\right\}=:-\min_{z}W_{\gamma}(z). \tag{3.3}\] Under strong duality the proximal point \(\hat{x}\) is further given by the dual solution \(\hat{z}\) through the optimality condition \(\hat{x}=y-B^{*}\hat{z}\) and the duality gap \[\mathcal{G}(x,z)=G_{\gamma}(x)+W_{\gamma}(z) \tag{3.4}\] vanishes only at the optimum \((\hat{x},\hat{z})\). The following lemma generalizes this to \(\varepsilon\)-approximate solutions. **Lemma 3.5** ([22]).: _If strong duality holds, then we have \(\mathcal{G}(x,z)=G_{\gamma}(x)+W_{\gamma}(z)=0\) if and only if \(x,z\) are the optimal values \(\hat{x}=y-B^{*}\hat{z}\). More generally, it holds_ \[\mathcal{G}(y-B^{*}z,z)\leq\varepsilon\ \Rightarrow\ y-B^{*}z\approx_{2}^{\varepsilon}\hat{x}=\mathrm{prox}_{\gamma G}(y).\] Since the duality gap can usually be computed with little additional computational overhead, the type-2 approximation of proximal points can in practice be ensured efficiently. ### Nonasymptotic and asymptotic convergence We are ready to state our convergence results for inexact PGLA. Note that the bounds generalize the ones of [6] for flexible step sizes and inexactness levels \(\varepsilon_{k}\) of the approximate evaluation of the proximal mappings. For iterates \(X^{k}\) with law \(\mu^{k}\) and step sizes \(\gamma_{k}\) as in Algorithm 1, we will refer to the following descent condition \[\mathbb{E}_{X\sim\mu^{k}}\left[F(X-\gamma_{k}\nabla F(X))-F(X)+\frac{\gamma_{k}}{2}\|\nabla F(X)\|^{2}\right]\leq 0, \tag{3.5}\] which is ensured, e.g., by choosing \(\gamma_{k}\leq L^{-1}\). **Theorem 3.1**.: _Let Assumptions 1 to 3 be satisfied and let \((X^{k})_{k}\) be generated by Algorithm 1. 
If the step size \(\gamma_{k}\) satisfies (3.5), then for any \(\theta\in[0,1)\) it holds_ \[\mathcal{W}_{2}^{2}(\mu^{k+1},\mu^{*})\leq(1-\lambda_{F}\gamma_{k}) \mathcal{W}_{2}^{2}(\mu^{k},\mu^{*})-\gamma_{k}(\theta\lambda_{G^{*}}+ \gamma_{k})\mathcal{W}_{2}^{2}(\nu^{k+1},\nu^{*})\] \[-2\gamma_{k}\left(\mathcal{L}(\mu^{k+2/3},\psi^{*})-\mathcal{L}( \mu^{*},\psi^{k+1})\right)+\gamma_{k}^{2}\tilde{C}+\frac{2\gamma_{k}\varepsilon _{k}}{1-\theta},\] _where \(\tilde{C}=2Ld+\int\|\nabla G\|^{2}\,\mathrm{d}\mu^{*}(x)<\infty\)._ In order to prove the theorem, we need some auxiliary results. **Lemma 3.6**.: _Let Assumptions 1 to 3 be satisfied. For any \(\gamma_{k}>0\), \(\theta\in[0,1)\) and any \(\mu\in\mathcal{P}_{2}(\mathcal{X})\) and \(\mu\)-measurable function \(\psi:\mathcal{X}\to\mathcal{X}\) it holds_ \[\mathcal{W}_{2}^{2}(\mu^{k+1},\mu)\leq\mathcal{W}_{2}^{2}(\mu^{k +2/3},\mu)-\gamma_{k}(\theta\lambda_{G^{*}}+\gamma_{k})\mathcal{W}_{2}^{2}(\nu ^{k+1},\psi\#\mu)+\gamma_{k}^{2}\mathbb{E}_{Y\sim\psi\#\mu}\left[\|Y\|^{2}\right]\] \[\qquad-2\gamma_{k}\left(\mathbb{E}\left[\langle X^{k+2/3},Y \rangle\right]-\mathbb{E}\left[\langle X,Y^{k+1}\rangle\right]+\mathcal{E}_{ G^{*}}(\nu^{k+1})-\mathcal{E}_{G^{*}}(\psi\#\mu)\right)+2\gamma_{k}\frac{ \varepsilon_{k}}{1-\theta},\] _where the expectations in the second line are over variables with joint distributions \((X^{k+2/3},Y)\sim(T_{\mu\to\mu^{k+2/3}},\psi)\#\mu\) and \((X,Y^{k+1})\sim(I,(I-S_{k})/\gamma_{k}\circ T_{\mu\to\mu^{k+2/3}})\#\mu\), respectively._ Proof.: Let \(\bar{x}\in\mathcal{X}\) be fixed, \(\hat{x}\) the \(\varepsilon_{k}\)-inexact proximal point \(\hat{x}=S_{k}(\bar{x})\approx^{\varepsilon_{k}}\text{prox}_{\gamma_{k}G}(\bar {x})\). Using Lemma 3.2 to rewrite the condition for the inexact proximal mapping gives \[\hat{y}:=\frac{\bar{x}-\hat{x}}{\gamma_{k}}\in\partial_{\varepsilon_{k}}G(\hat {x})\ \Leftrightarrow\ \hat{x}\in\partial_{\varepsilon_{k}}G^{*}(\hat{y}).\] Since \(\mathcal{X}\) is a Hilbert space, for any \(x\in\mathcal{X}\) it holds \[\|\hat{x}-x\|^{2}=\|\bar{x}-x\|^{2}-\|\hat{x}-\bar{x}\|^{2}+2\langle\hat{x}- \bar{x},\hat{x}-x\rangle. 
\tag{3.6}\] In order to reformulate the last term, we can apply Lemma 3.4 using the (strong) convexity of \(G^{*}\) so that for any \(x,y\in\mathcal{X}\) and any \(\theta\in[0,1)\) we obtain \[\langle\hat{x}-\bar{x},\hat{x}-x\rangle =\gamma_{k}\langle\hat{y},x\rangle-\gamma_{k}\langle\hat{y},\hat{x}\rangle\] \[=\gamma_{k}\langle\hat{y},x\rangle-\gamma_{k}\langle\hat{x},y\rangle+\gamma_{k}\langle\hat{x},y-\hat{y}\rangle\] \[\leq\gamma_{k}\langle\hat{y},x\rangle-\gamma_{k}\langle\hat{x},y\rangle+\gamma_{k}\left(G^{*}(y)-G^{*}(\hat{y})-\frac{\theta\lambda_{G^{*}}}{2}\|\hat{y}-y\|^{2}+\frac{\varepsilon_{k}}{1-\theta}\right).\] Plugging this into (3.6) gives \[\|\hat{x}-x\|^{2} \leq\|\bar{x}-x\|^{2}-\|\hat{x}-\bar{x}\|^{2}+2\gamma_{k}\left(\langle\hat{y},x\rangle-\langle\hat{x},y\rangle+G^{*}(y)-G^{*}(\hat{y})-\frac{\theta\lambda_{G^{*}}}{2}\|\hat{y}-y\|^{2}+\frac{\varepsilon_{k}}{1-\theta}\right)\] \[=\|\bar{x}-x\|^{2}-\gamma_{k}^{2}\|\hat{y}\|^{2}+2\gamma_{k}\left(\langle\hat{y},x\rangle-\langle\bar{x},y\rangle+\gamma_{k}\langle\hat{y},y\rangle+G^{*}(y)-G^{*}(\hat{y})-\frac{\theta\lambda_{G^{*}}}{2}\|\hat{y}-y\|^{2}+\frac{\varepsilon_{k}}{1-\theta}\right)\] \[=\|\bar{x}-x\|^{2}-\gamma_{k}(\theta\lambda_{G^{*}}+\gamma_{k})\|\hat{y}-y\|^{2}+\gamma_{k}^{2}\|y\|^{2}+2\gamma_{k}\left(\langle\hat{y},x\rangle-\langle\bar{x},y\rangle+G^{*}(y)-G^{*}(\hat{y})+\frac{\varepsilon_{k}}{1-\theta}\right),\] where we used the definition of \(\hat{y}\) in the first equality. Assume now that the variables \(x,\bar{x},y\) are marginally distributed with laws \(x\sim\mu\), \(\bar{x}\sim\mu^{k+2/3}\) and \(y\sim\nu=\psi\#\mu\) respectively. This implies that \(\hat{x}\sim S_{k}\#\mu^{k+2/3}=\mu^{k+1}\) and \(\hat{y}\sim\frac{I-S_{k}}{\gamma_{k}}\#\mu^{k+2/3}=\nu^{k+1}\). From the last inequality it follows that \[\mathcal{W}_{2}^{2}(\mu^{k+1},\mu)+\gamma_{k}(\theta\lambda_{G^{*}}+\gamma_{k})\mathcal{W}_{2}^{2}(\nu^{k+1},\nu)\] \[\qquad\leq\mathbb{E}\left[\|\hat{x}-x\|^{2}\right]+\gamma_{k}(\theta\lambda_{G^{*}}+\gamma_{k})\mathbb{E}\left[\|\hat{y}-y\|^{2}\right]\] \[\qquad\leq\mathbb{E}\left[\|\bar{x}-x\|^{2}\right]+\gamma_{k}^{2}\mathbb{E}\left[\|y\|^{2}\right]+2\gamma_{k}\left(\mathbb{E}\left[\langle\hat{y},x\rangle\right]-\mathbb{E}\left[\langle\bar{x},y\rangle\right]+\mathcal{E}_{G^{*}}(\nu)-\mathcal{E}_{G^{*}}(\nu^{k+1})+\frac{\varepsilon_{k}}{1-\theta}\right).\] Since we only assumed the marginal distributions of \(x,y,\bar{x}\), this holds for any coupling of these variables. In particular we can consider \(x,\bar{x}\) to be optimally coupled in the sense that their joint distribution realizes the Wasserstein distance, i.e. \(\mathcal{W}_{2}^{2}(\mu^{k+2/3},\mu)=\mathbb{E}[\|\bar{x}-x\|^{2}]\). The coupling can be written in terms of a transport map [29] which we denote by \(T_{\mu\to\mu^{k+2/3}}\) so that \((x,\bar{x})\sim(I,T_{\mu\to\mu^{k+2/3}})\#\mu\). The remaining variables are then coupled to \(x\) by \(\hat{x}=(S_{k}\circ T_{\mu\to\mu^{k+2/3}})(x)\) and \(\hat{y}=(\frac{I-S_{k}}{\gamma_{k}}\circ T_{\mu\to\mu^{k+2/3}})(x)=\psi^{k+1}(x)\). Letting further \(y=\psi(x)\), the last inequality for this particular coupling of \(x,\bar{x},y\) is just the desired result. We recall three lemmata from the analysis in [9], namely their Lemmata 3, 5 and 28. The first bounds the evolution of the entropy along the solutions. 
**Lemma 3.7**.: _For any \(\gamma_{k}>0\), the entropy values of the iterates satisfy_ \[\mathcal{H}(\mu^{k+2/3})-\mathcal{H}(\mu^{*})\leq\frac{1}{2\gamma_{k}}\left(\mathcal{W}_{2}^{2}(\mu^{k+1/3},\mu^{*})-\mathcal{W}_{2}^{2}(\mu^{k+2/3},\mu^{*})\right).\] The next Lemma bounds the increase of the smooth potential part \(F\) along the entropy gradient flow step.

**Lemma 3.8**.: _For any \(\gamma_{k}>0\), we have_ \[\mathcal{E}_{F}(\mu^{k+2/3})-\mathcal{E}_{F}(\mu^{k+1/3})\leq Ld\gamma_{k}.\] The last auxiliary result for Theorem 3.1 quantifies the decrease in potential energy along the gradient step under a descent condition. Note that this formulation is slightly more general than the typical assumption \(\gamma\leq L^{-1}\) in [9].

**Lemma 3.9**.: _If \(\gamma_{k}>0\) satisfies the descent condition (3.5), then it holds_ \[2\gamma_{k}(\mathcal{E}_{F}(\mu^{k+1/3})-\mathcal{E}_{F}(\mu^{*}))\leq(1-\lambda_{F}\gamma_{k})\mathcal{W}_{2}^{2}(\mu^{k},\mu^{*})-\mathcal{W}_{2}^{2}(\mu^{k+1/3},\mu^{*}).\] Proof.: Let \(x,y\in\mathcal{X}\) be arbitrary. Using strong convexity, one obtains \[2\gamma_{k} \left(F(x-\gamma_{k}\nabla F(x))-F(y)\right)\] \[=2\gamma_{k}\left(F(x-\gamma_{k}\nabla F(x))-F(x)\right)+2\gamma_{k}(F(x)-F(y))\] \[\leq 2\gamma_{k}\left(F(x-\gamma_{k}\nabla F(x))-F(x)\right)+2\gamma_{k}\left(\langle\nabla F(x),x-y\rangle-\frac{\lambda_{F}}{2}\|x-y\|^{2}\right)\] \[=(1-\lambda_{F}\gamma_{k})\|x-y\|^{2}-\|x-\gamma_{k}\nabla F(x)-y\|^{2}+\gamma_{k}^{2}\|\nabla F(x)\|^{2}+2\gamma_{k}\left(F(x-\gamma_{k}\nabla F(x))-F(x)\right).\] We now let \(y\sim\mu^{*}\) and \(x\sim\mu^{k}\), which also implies \(x-\gamma_{k}\nabla F(x)\sim\mu^{k+1/3}\). Adding \(\|x-\gamma_{k}\nabla F(x)-y\|^{2}\) on both sides and taking expectations, we obtain \[2\gamma_{k} \left(\mathcal{E}_{F}(\mu^{k+1/3})-\mathcal{E}_{F}(\mu^{*})\right)+\mathcal{W}_{2}^{2}(\mu^{k+1/3},\mu^{*})\] \[\leq\mathbb{E}\left[2\gamma_{k}\left(F(x-\gamma_{k}\nabla F(x))-F(y)\right)\right]+\mathbb{E}\left[\|x-\gamma_{k}\nabla F(x)-y\|^{2}\right]\] \[\leq(1-\lambda_{F}\gamma_{k})\mathbb{E}\left[\|x-y\|^{2}\right],\] where the second inequality uses the descent condition (3.5) to drop the last two terms of the pointwise inequality. Since the last inequality holds for any coupling of \(x\sim\mu^{k}\) and \(y\sim\mu^{*}\), we can take the infimum over all couplings and obtain the desired result.

We are now able to prove the theorem.

Proof of Theorem 3.1.: Firstly, Lemma 3.6 with the choice \(\mu=\mu^{*}\), \(\psi=\psi^{*}=\nabla G\) gives \[\mathcal{W}_{2}^{2}(\mu^{k+1},\mu^{*})\leq\mathcal{W}_{2}^{2}(\mu^{k+2/3},\mu^{*})-\gamma_{k}(\theta\lambda_{G^{*}}+\gamma_{k})\mathcal{W}_{2}^{2}(\nu^{k+1},\nu^{*})+\gamma_{k}^{2}\mathbb{E}_{Y\sim\nu^{*}}\left[\|Y\|^{2}\right]\] \[-2\gamma_{k}\left(\mathbb{E}\left[\langle X^{k+2/3},Y^{*}\rangle\right]-\mathbb{E}\left[\langle X^{*},Y^{k+1}\rangle\right]+\mathcal{E}_{G^{*}}(\nu^{k+1})-\mathcal{E}_{G^{*}}(\nu^{*})\right)+2\gamma_{k}\frac{\varepsilon_{k}}{1-\theta}\] with expectations over \((X^{k+2/3},Y^{*})\sim(T_{\mu^{*}\to\mu^{k+2/3}},\nabla G)\#\mu^{*}\) and \((X^{*},Y^{k+1})\sim(I,\psi^{k+1})\#\mu^{*}\) in the second line.
Next, Lemmata 3.7 and 3.8 give \[2\gamma_{k}\left(\mathcal{H}(\mu^{k+2/3})-\mathcal{H}(\mu^{*})\right)\leq \mathcal{W}_{2}^{2}(\mu^{k+1/3},\mu^{*})-\mathcal{W}_{2}^{2}(\mu^{k+2/3},\mu^{*})\] \[2\gamma_{k}\left(\mathcal{E}_{F}(\mu^{k+2/3})-\mathcal{E}_{F}(\mu^{k+1/3})\right) \leq 2Ld\gamma_{k}^{2}.\] Under the assumption that the descent condition is satisfied we can use Lemma 3.9 to get \[2\gamma_{k}(\mathcal{E}_{F}(\mu^{k+1/3})-\mathcal{E}_{F}(\mu^{*}))\leq(1- \lambda_{F}\gamma_{k})\mathcal{W}_{2}^{2}(\mu^{k},\mu^{*})-\mathcal{W}_{2}^{2} (\mu^{k+1/3},\mu^{*}).\] The statement of the theorem is the sum of these four inequalities. We now want to iterate the nonasymptotic result in Theorem 3.1 to get a decay rate in Wasserstein distance with a closed form of the bias. **Theorem 3.2** (Fixed step size).: _Let Assumptions 1 to 3 be satisfied and \(X^{k}\) be generated by Algorithm 1 with \(\gamma_{k}=\gamma\leq L^{-1}\) for all \(k\). Then for any \(\theta\in[0,1)\), the following statements hold:_ 1. _If_ \(\varepsilon_{k}\leq\varepsilon\) _for all_ \(k\)_, then_ \[\min_{1\leq k\leq K}\mathcal{W}_{2}^{2}(\nu^{k},\nu^{*})\leq\frac{1}{\gamma( \theta\lambda_{G^{*}}+\gamma)K}\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+C_{\theta} (\gamma,\varepsilon)\] (3.7) _where_ \(C_{\theta}(\gamma,\varepsilon)=\frac{\tilde{C}\gamma(1-\theta)+2\varepsilon} {(1-\theta)(\theta\lambda_{G^{*}}+\gamma)}\) _and_ \[\min_{1\leq k\leq K}\mathcal{D}(\mu^{k-1/3},\nu^{k})\leq\frac{1}{2\gamma K} \mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\frac{\tilde{C}}{2}\gamma+\varepsilon.\] (3.8) _with_ \(\tilde{C}\) _as in Theorem_ 3.1_. If further_ \(\lambda_{F}>0\)_, then_ \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq(1-\lambda_{F}\gamma)^{K}\mathcal{W}_ {2}^{2}(\mu^{0},\mu^{*})+\frac{\gamma}{\lambda_{F}}\tilde{C}+\frac{2 \varepsilon}{\lambda_{F}}.\] (3.9) 2. _If_ \(F\) _is_ \(\lambda_{F}\)_-strongly convex and_ \((\varepsilon_{k})_{k}\) _is a monotonically decreasing sequence, then_ \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq(1-\lambda_{F}\gamma)^{K}\mathcal{W}_ {2}^{2}(\mu^{0},\mu^{*})+\frac{\gamma}{\lambda_{F}}\tilde{C}+\frac{2}{K \lambda_{F}}\sum_{k=0}^{K-1}\varepsilon_{k}.\] (3.10) We give some remarks on the interpretation of these results. **Remark 3.2**.: _If all parameters on the right hand side of (3.7) are known (or estimated), one can optimize the parameter \(\theta\) to arrive at an optimal bound and the number of iterations necessary to achieve a certain level of accuracy in the dual variable. If \(\lambda_{G^{*}}=0\), then the optimal \(\theta\) is zero and \(\inf_{\theta}C_{\theta}(\gamma,\varepsilon)=\tilde{C}+2\varepsilon/\gamma\). If \(\lambda_{G^{*}}>0\) then for any \(\theta>0\) we have \(C_{\theta}(\gamma,\varepsilon)\to 0\) as \(\gamma,\varepsilon\to 0\)._ _Note further that in part (ii), the bias induced by the errors \(\varepsilon_{k}\) vanishes as \(K\to\infty\) if and only if \((\varepsilon_{k})_{k}\) converges to 0. In this case, independently of the decay rate of \(\varepsilon_{k}\), the bias scales linearly with \(\gamma\) in the limit as is the case in the error-free version of the algorithm [6]._ **Remark 3.3**.: _The results show that with decreasing error \(\varepsilon\) in the proximal map, the approximation to the target \(\mu^{*}\) gets better. 
If in practice the hyperparameters \(\gamma\), \(\varepsilon\) and \(K\) can be set in advance, this leads to an implicit optimization problem of finding the right parameters, since higher accuracy in the proximal points usually comes with increasing computational cost at each iteration. Consider for example that we are in the setting of (3.9) and we want to set optimal parameters such that \(\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq\delta\) for some bound \(\delta>0\). Assuming that the cost of each iteration is dominated by the work needed to reach the accuracy \(\varepsilon\) indicates that there is an optimal \(\varepsilon\): Making large steps and picking \(\varepsilon\) as small as possible is suboptimal since then the cost per iteration becomes prohibitively large. On the other hand, choosing \(\varepsilon\) as large as possible close to \(\frac{\lambda_{F}\delta}{2}\) in order to save on cost per iteration means \(\gamma\) has to be small, implying \(1-\lambda_{F}\gamma\approx 1\) so that the necessary number of iterations \(K\) becomes large._

Proof of Theorem 3.2.: For part (i), we assume only that the errors are bounded by some upper value \(\varepsilon\). Theorem 3.1 gives \[\gamma(\theta\lambda_{G^{*}}+\gamma)\mathcal{W}_{2}^{2}(\nu^{k+1},\nu^{*})+2\gamma\mathcal{D}(\mu^{k+2/3},\nu^{k+1})\leq\mathcal{W}_{2}^{2}(\mu^{k},\mu^{*})-\mathcal{W}_{2}^{2}(\mu^{k+1},\mu^{*})+\gamma^{2}\tilde{C}+\frac{2\gamma\varepsilon}{1-\theta}.\] Summing over \(k=0,\ldots,K-1\) gives \[\sum_{k=1}^{K}\gamma(\theta\lambda_{G^{*}}+\gamma)\mathcal{W}_{2}^{2}(\nu^{k},\nu^{*})+2\gamma\sum_{k=1}^{K}\mathcal{D}(\mu^{k-1/3},\nu^{k})\leq\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})-\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})+K\gamma^{2}\tilde{C}+\frac{2K\gamma\varepsilon}{1-\theta}.\] This implies \[\gamma(\theta\lambda_{G^{*}}+\gamma)\min_{k=1,\ldots,K}\mathcal{W}_{2}^{2}(\nu^{k},\nu^{*})+2\gamma\min_{k=1,\ldots,K}\mathcal{D}(\mu^{k-1/3},\nu^{k})\leq\frac{1}{K}\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\gamma^{2}\tilde{C}+\frac{2\gamma\varepsilon}{1-\theta},\] which in turn directly implies both (3.7) and, for \(\theta=0\), (3.8). If \(F\) is \(\lambda_{F}\)-strongly convex and \(\gamma_{k}=\gamma>0\) fixed then Theorem 3.1 with \(\theta=0\) implies \[\mathcal{W}_{2}^{2}(\mu^{k+1},\mu^{*})\leq(1-\lambda_{F}\gamma)\mathcal{W}_{2}^{2}(\mu^{k},\mu^{*})+\gamma^{2}\tilde{C}+2\gamma\varepsilon_{k}.\] By iterating, we get \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq(1-\lambda_{F}\gamma)^{K}\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\sum_{k=0}^{K-1}(1-\lambda_{F}\gamma)^{k}\left(\gamma^{2}\tilde{C}+2\gamma\varepsilon_{K-1-k}\right). \tag{3.11}\] The last part of (i) follows from (3.11) by using \(\varepsilon_{k}\leq\varepsilon\) and \[\sum_{k=0}^{K-1}(1-\lambda_{F}\gamma)^{k}=\frac{1-(1-\lambda_{F}\gamma)^{K}}{\lambda_{F}\gamma}\leq\frac{1}{\lambda_{F}\gamma}.\] For (ii), if \(\varepsilon_{k}\) is assumed to be monotonically decreasing we can use Chebyshev's sum inequality to bound the right side of (3.11) and obtain \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*}) \leq(1-\lambda_{F}\gamma)^{K}\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\frac{\gamma}{\lambda_{F}}\tilde{C}+2\gamma\sum_{k=0}^{K-1}(1-\lambda_{F}\gamma)^{k}\varepsilon_{K-1-k}\] \[\leq(1-\lambda_{F}\gamma)^{K}\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\frac{\gamma}{\lambda_{F}}\tilde{C}+\frac{2}{K\lambda_{F}}\sum_{k=0}^{K-1}\varepsilon_{k}.\]
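To make the trade-off described in Remark 3.3 concrete, the following small Python sketch evaluates the fixed-step bound (3.9) and the smallest \(K\) at which it drops below a target \(\delta\). All numerical values (\(\lambda_{F}\), \(L\), \(\tilde{C}\), \(\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})\), \(\delta\)) are illustrative assumptions of ours, not quantities from the analysis above.

```python
import math

# Right-hand side of (3.9):
#   (1 - lam_F*gamma)**K * W0  +  gamma*C_tilde/lam_F  +  2*eps/lam_F
# All default parameter values are illustrative assumptions.

def bound_39(K, gamma, eps, lam_F=0.5, L=1.0, C_tilde=3.0, W0=10.0):
    assert 0 < gamma <= 1.0 / L, "descent condition gamma <= 1/L"
    return (1 - lam_F * gamma) ** K * W0 + gamma * C_tilde / lam_F + 2 * eps / lam_F

def iterations_needed(gamma, eps, delta, lam_F=0.5, L=1.0, C_tilde=3.0, W0=10.0):
    """Smallest K with bound (3.9) <= delta, or None if the bias alone exceeds delta."""
    bias = gamma * C_tilde / lam_F + 2 * eps / lam_F
    if bias >= delta:
        return None  # even K -> infinity cannot reach delta
    # Solve (1 - lam_F*gamma)**K * W0 <= delta - bias for K.
    return max(0, math.ceil(math.log((delta - bias) / W0) / math.log(1 - lam_F * gamma)))

if __name__ == "__main__":
    delta = 1.0
    for gamma in (1.0, 0.5, 0.1):
        for eps in (0.2, 0.05, 0.01):
            K = iterations_needed(gamma, eps, delta)
            print(f"gamma={gamma:4.2f} eps={eps:5.2f} -> K={K}")
```

Running the sketch shows the behavior discussed in the remark: for large \(\gamma\) the bias term alone can exceed \(\delta\) (no \(K\) suffices), while for very small \(\gamma\) the required \(K\) grows quickly.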
In the same style as for other stochastic optimization and sampling algorithms, we can decrease the step sizes during the iteration at a certain rate to successively reduce the bias. The following theorem shows that in the case of strongly convex \(F\) and with the right choice of step sizes, the only condition for convergence to the target distribution is that the errors \(\varepsilon_{k}\) decay monotonically to zero.

**Theorem 3.3** (Decaying step sizes).: _Let Assumptions 1 to 3 be satisfied with \(\lambda_{F}>0\) and assume the step sizes \(\gamma_{k}\) satisfy all of the following:_

1. \(\gamma_{0}=L^{-1}\)_,_
2. \(\gamma_{k-1}\geq\gamma_{k}\geq\frac{\gamma_{k-1}}{1+\lambda_{F}}\) _for all_ \(k\geq 1\)_, and_
3. \(\lim_{k\to\infty}\gamma_{k}=0\)_, the sum of all step sizes_ \(\sum_{k=0}^{K-1}\gamma_{k}\) _diverges to_ \(\infty\) _while the sums_ \[A_{K}:=\sum_{k=0}^{K-1}\gamma_{k}\prod_{l=k+1}^{K-1}(1-\lambda_{F}\gamma_{l})\] _remain bounded, i.e._ \(A_{K}\leq M<\infty\) _as_ \(K\to\infty\)_._

_Then, if the errors \(\varepsilon_{k}\) are monotonically decreasing with \(\varepsilon_{k}\to 0\), we obtain \(\mathcal{W}_{2}(\mu^{K},\mu^{*})\to 0\) as \(K\to\infty\)._

Proof of Theorem 3.3.: Iterating Theorem 3.1, we obtain \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq\prod_{k=0}^{K-1}(1-\lambda_{F}\gamma_{k})\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\sum_{k=0}^{K-1}(\gamma_{k}^{2}\tilde{C}+2\gamma_{k}\varepsilon_{k})\prod_{j=k+1}^{K-1}(1-\lambda_{F}\gamma_{j}),\] with \(\tilde{C}=2Ld+\int\lVert\nabla G\rVert^{2}\,\mathrm{d}\mu^{*}(x)<\infty\). We rewrite this denoting \(a_{k}:=\gamma_{k}\prod_{j=k+1}^{K-1}(1-\lambda_{F}\gamma_{j})\) and using the basic inequality \(\log(1-x)\leq-x\) to obtain \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq\exp\left(\sum_{k=0}^{K-1}\log(1-\lambda_{F}\gamma_{k})\right)\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\sum_{k=0}^{K-1}(\gamma_{k}\tilde{C}+2\varepsilon_{k})a_{k}.\] The sequence \((\tilde{C}\gamma_{k}+2\varepsilon_{k})\) is monotonically decreasing by assumption. The sequence \(a_{k}\) is monotonically increasing due to the requirement \(\gamma_{k+1}\geq\frac{1}{1+\lambda_{F}}\gamma_{k}\), as then \(a_{k}/a_{k+1}=\frac{\gamma_{k}(1-\lambda_{F}\gamma_{k+1})}{\gamma_{k+1}}\leq 1\). Hence we can use Chebyshev's sum inequality and obtain \[\mathcal{W}_{2}^{2}(\mu^{K},\mu^{*})\leq\exp\left(-\lambda_{F}\sum_{k=0}^{K-1}\gamma_{k}\right)\mathcal{W}_{2}^{2}(\mu^{0},\mu^{*})+\frac{1}{K}\left(\sum_{k=0}^{K-1}(\gamma_{k}\tilde{C}+2\varepsilon_{k})\right)\left(\sum_{k=0}^{K-1}a_{k}\right).\] The first term on the right hand side vanishes as \(K\to\infty\) since we assumed that \(\sum_{k=0}^{\infty}\gamma_{k}=\infty\). The second term converges to zero because by assumption \(A_{K}=\sum_{k=0}^{K-1}a_{k}\) remains bounded as \(K\to\infty\) and \(\frac{1}{K}\sum_{k=0}^{K-1}(\gamma_{k}\tilde{C}+2\varepsilon_{k})\) converges to \(0\).

Since Theorem 3.3 poses several constraints on the step size, we give a simple choice of \(\gamma_{k}\) satisfying all conditions in the following remark.

**Remark 3.4**.: _A choice of step sizes meeting the requirements of Theorem 3.3 is \(\gamma_{0}=L^{-1}\) and \(\gamma_{k}=\min\{\gamma_{k-1},\max\{\frac{C^{\prime}}{k},\frac{\gamma_{k-1}}{1+\lambda_{F}}\}\}\), for any constant \(C^{\prime}\geq\lambda_{F}^{-1}\). See Figure 1 for a visualization of this choice. To see that this is valid, note that \(\gamma_{k-1}\geq\gamma_{k}\geq\frac{\gamma_{k-1}}{1+\lambda_{F}}\) is always satisfied, as well as the descent condition since \(\gamma_{k}\leq\gamma_{0}=L^{-1}\).
The term \(\frac{C^{\prime}}{k}\) ensures that the step sizes decay to 0 slowly enough so that \(\sum_{k=0}^{K-1}\gamma_{k}\) diverges, since by construction there exists an \(N\) such that \(\gamma_{k}=\frac{C^{\prime}}{k}\) for all \(k>N\). The final condition is also satisfied since for \(K>N\) it holds_ \[A_{K}=\sum_{k=0}^{K-1}\gamma_{k}\prod_{l=k+1}^{K-1}(1-\lambda_{F}\gamma_{l}) =\sum_{k=0}^{N}\gamma_{k}\prod_{l=k+1}^{K-1}\underbrace{(1-\lambda_{F}\gamma_{l})}_{\leq 1}+\sum_{k=N+1}^{K-1}\gamma_{k}\prod_{l=k+1}^{K-1}(1-\lambda_{F}\gamma_{l})\] \[\leq M^{\prime}+\sum_{k=N+1}^{K-1}\frac{C^{\prime}}{k}\prod_{l=k+1}^{K-1}\frac{l-\lambda_{F}C^{\prime}}{l}\] \[=M^{\prime}+\frac{C^{\prime}}{K-1}\sum_{k=N+1}^{K-1}\prod_{l=k+1}^{K-1}\underbrace{\frac{l-\lambda_{F}C^{\prime}}{l-1}}_{\leq 1}\leq M^{\prime}+C^{\prime}=:M<\infty,\] _where the constant \(M^{\prime}=\sum_{k=0}^{N}\gamma_{k}\) depends only on \(N\), not on \(K\). The last equality follows from shifting the denominators inside the product by one index._

## 4 Numerical results

In this section, we validate the theoretical results and apply the sampling algorithm to a range of imaging inverse problems with posterior distributions of the form (2.1). From the samples, we compute approximations of point estimates like the minimum mean square error (MMSE) and pixelwise standard deviations. In a first experiment, we validate Theorem 3.2 by running the algorithm on a toy example in one dimension in Section 4.1. For actual images, in Section 4.2 we choose a setup in which we know the exact proximal mapping of \(G\). This allows us to compare inexact PGLA with its exact special case corresponding to errors \(\varepsilon=0\) and visualize the additional bias in the stationary distribution due to the errors. In the experiments in Section 4.3 we sample from the posterior of typical imaging problems for which the proximal operator of the non-smooth potential \(G\) does not have a closed form. These tests demonstrate the ability of the algorithm to converge to a stationary distribution close to the target, despite the lack of an exactly available proximal mapping. The code for all the experiments is available at [https://github.com/lokuger/inexact-proximal-langevin-sampling](https://github.com/lokuger/inexact-proximal-langevin-sampling).

### Validation on a toy example

Consider a one dimensional toy example with \(F(x)=\frac{1}{2\sigma^{2}}(x-y)^{2}\) for all \(x\in\mathbb{R}\) as the negative log-likelihood of a Gaussian noise distribution for a measurement \(y\in\mathbb{R}\) and a Laplace distribution as prior with negative log-density \(G(x)=\lambda|x|\), \(x\in\mathbb{R}\). Then the posterior is of the form (2.1) and \(F\), \(G\) satisfy Assumptions 1 to 3 with \(\lambda_{F}=L=\sigma^{-2}\) and \(\lambda_{G^{*}}=0\). For this example we can compute that \(\tilde{C}=1+2\sigma^{-2}\), which allows us to validate the upper bounds from Theorem 3.2. Checking the theoretical bounds like (3.9) and (3.10) requires computing Wasserstein distances between the inaccessible distributions \(\mu^{k}\) and the posterior. We replace the distributions by empirical approximations by generating samples. The distribution \(\mu^{k}\) can be approximated by running several parallel instances of Algorithm 1 with independent realizations of the stochastic term and initial values drawn from \(\mu^{0}\).
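The following self-contained Python sketch mimics this experiment; it is our own minimal reimplementation, not the repository code. The measurement value and initial law are illustrative assumptions, the inexact proximal point is imitated by perturbing the exact soft-thresholding solution (a crude surrogate for a type-2 approximation), the reference samples come from a long exact-prox chain rather than the Metropolis-adjusted chain used in the paper, and in one dimension \(\mathcal{W}_{2}^{2}\) between equal-size empirical measures reduces to comparing sorted samples.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma, lam = 1.0, 1.0          # model parameters, as in the toy example
y = 0.5                        # scalar measurement (illustrative value)
L = 1.0 / sigma**2
gamma = 1.0 / L                # fixed step size
eps = 0.01                     # inexactness level (illustrative)

def grad_F(x):                 # F(x) = (x - y)^2 / (2 sigma^2)
    return (x - y) / sigma**2

def prox_G(v, g):              # exact prox of g*lam*|.|: soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - g * lam, 0.0)

def inexact_prox_G(v, g, eps):
    # Crude surrogate for an eps-inexact proximal point: perturb the exact
    # prox by a small random offset scaled with sqrt(2*g*eps). This mimics,
    # but does not exactly enforce, eps-accuracy in the prox objective.
    r = np.sqrt(2 * g * eps)
    return prox_G(v, g) + rng.uniform(-r, r, size=v.shape)

n_chains, n_steps = 10_000, 1_000
X = rng.normal(size=n_chains)  # initial law mu^0 (illustrative choice)
for _ in range(n_steps):
    X13 = X - gamma * grad_F(X)                                # gradient step
    X23 = X13 + np.sqrt(2 * gamma) * rng.normal(size=X.shape)  # Langevin noise
    X = inexact_prox_G(X23, gamma, eps)                        # inexact prox step

# Reference samples: a long exact-prox chain (eps = 0) stands in for the
# Metropolis-adjusted reference used in the paper.
Z = rng.normal(size=n_chains)
for _ in range(10 * n_steps):
    Z = prox_G(Z - gamma * grad_F(Z) + np.sqrt(2 * gamma) * rng.normal(size=Z.shape), gamma)

# In 1D, the squared W2 distance between two equal-size empirical measures
# is the mean squared difference of the sorted samples.
w2_sq = np.mean((np.sort(X) - np.sort(Z)) ** 2)
print(f"estimated W2^2 = {w2_sq:.4f}")
```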
Samples from the posterior \(\mu^{*}\) are generated by computing a Markov chain with a time discretized Langevin diffusion step as proposal and an additional Metropolis-Hastings correction step as proposed in [18]. These methods, sometimes also called Metropolis-adjusted Langevin algorithms, are typically less efficient due to the additional correction step, but guarantee convergence to the target measure [33].

Figure 1: Visualization of the choice of step sizes in Remark 3.4 when \(C^{\prime}=1\), \(\lambda_{F}=0.5\) and \(L=1\) (left) and the corresponding values \(A_{K}\) and the upper bound \(M=M^{\prime}+C^{\prime}\) (right). For \(k\geq 5\), it holds \(\gamma_{k}=C^{\prime}/k\).

With the choice \(\lambda=\sigma=1\), we compute \(10^{4}\) parallel Markov chains using Algorithm 1 up to \(k=10^{3}\) each and from one single chain another \(10^{6}\) samples with a Metropolis-adjusted algorithm. We then compare the empirical approximations \(\tilde{\mu}^{k}\approx\mu^{k}\) with the empirical approximation \(\tilde{\mu}^{*}\approx\mu^{*}\) of the target. In one dimension, the squared Wasserstein distances \(\mathcal{W}_{2}^{2}(\tilde{\mu}^{k},\tilde{\mu}^{*})\) can be computed efficiently; the computations are carried out using the 'Python optimal transport' toolbox [3]. The resulting distances and the upper bounds predicted in (3.9) and (3.10) are shown in Figure 2.

### Wavelet-based deblurring - comparison of inexact and exact PGLA

We consider image deblurring with an \(l_{1}\) regularization term on the image's coefficients in a wavelet basis. The image \(x\in\mathbb{R}^{d}\), \(d=n_{1}n_{2}\) is blurred by applying a blurring operator \(A\in\mathbb{R}^{d\times d}\) and then further corrupted by adding normally distributed independent zero-mean noise with variance \(\sigma^{2}\) to each pixel. Implicitly, we represent the image in a wavelet basis as \(x=W^{T}z\), \(z\in\mathbb{R}^{d}\) where \(W\) is an orthogonal discrete wavelet transform. The negative log-likelihood is then \(F(z)=\|AW^{T}z-y\|_{2}^{2}/2\sigma^{2}\), a smooth, strongly convex function with strong convexity constant \(\lambda_{F}=\lambda_{\min}(A^{*}A)/\sigma^{2}\) where \(\lambda_{\min}(A^{*}A)\) denotes the smallest eigenvalue of \(A^{*}A\). Note that depending on the blur kernel, the strong convexity constant can get very small in the deblurring case when the smallest singular value of \(A\) is close to zero. The \(l_{1}\) regularization on the wavelet coefficients corresponds to assuming a prior of identical centered Laplace distributions for each wavelet coefficient, giving a prior density proportional to \(\exp(-\mu\|z\|_{1})\). The scale parameter \(\mu\) takes the role of a regularization parameter on the wavelet coefficients in the denoising experiment. The corresponding energy potential is \(G(z)=\mu\|z\|_{1}\). The proximal mapping of \(G\) is given in closed form by the soft thresholding operator which for each component \(i\in\{1,\ldots,d\}\) is given by \[(\operatorname{prox}_{\lambda\|\cdot\|_{1}}(x))_{i}=\max(|x_{i}|-\lambda,0)\operatorname{sign}(x_{i}).\] We compare Algorithm 1 for inexact proximal points (\(\varepsilon>0\)) with its special case of exact proximal points (\(\varepsilon=0\)) on the blurred and noisy image shown in the middle of the first row of Figure 3. We draw \(10^{5}\) samples from the posterior using Algorithm 1 with fixed step size \(\gamma=L^{-1}\) and the exact proximal points (corresponding to \(\varepsilon_{k}=0\) for all \(k\)). We then report the image reconstructed from the MMSE estimate of the wavelet coefficients, given by the sample mean.
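To make the setup concrete, here is a minimal sketch of one PGLA step for this model. The blur (a Gaussian kernel applied by FFT), the noise level, the regularization parameter, and the identity stand-in for the orthogonal wavelet transform \(W\) are all illustrative assumptions of ours; with a real wavelet transform (e.g., from PyWavelets) one would substitute \(W\) and \(W^{T}\) accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # image side length (illustrative)
sigma, mu = 0.05, 0.1                    # noise level / reg. parameter (assumed)

# Placeholder blur: circular convolution with a small Gaussian kernel via FFT.
k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 1.5**2)
kernel = np.outer(k, k); kernel /= kernel.sum()
K_hat = np.fft.fft2(np.fft.ifftshift(kernel))
A  = lambda x: np.real(np.fft.ifft2(K_hat * np.fft.fft2(x)))
At = lambda x: np.real(np.fft.ifft2(np.conj(K_hat) * np.fft.fft2(x)))

# Stand-in for the orthogonal wavelet transform W (identity here, to keep the
# sketch dependency-free).
W  = lambda x: x
Wt = lambda z: z

x_true = rng.random((n, n))
y = A(x_true) + sigma * rng.normal(size=(n, n))

def grad_F(z):                           # F(z) = ||A W^T z - y||^2 / (2 sigma^2)
    return W(At(A(Wt(z)) - y)) / sigma**2

def soft_threshold(v, t):                # exact prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L_F = 1.0 / sigma**2                     # crude bound, since ||A|| <= 1 here
gamma = 1.0 / L_F

def pgla_step(z):                        # one exact-prox PGLA step
    z13 = z - gamma * grad_F(z)
    z23 = z13 + np.sqrt(2 * gamma) * rng.normal(size=z.shape)
    return soft_threshold(z23, gamma * mu)

z = W(y)                                 # initialize at the data
for _ in range(100):
    z = pgla_step(z)
print("sample stats:", float(z.min()), float(z.max()))
```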
Figure 2: Squared Wasserstein distances in a 1D toy example. Left: Samples are drawn using Algorithm 1 with fixed step size \(\gamma\) and fixed inexactness level \(\varepsilon\). The computed squared distances \(\mathcal{W}_{2}^{2}(\tilde{\mu}^{k},\tilde{\mu}^{*})\) are shown as solid lines, the corresponding upper bound predicted by (3.9) as dashed lines. Right: Samples are drawn using Algorithm 1 with fixed \(\gamma\) and \(\varepsilon_{k}=k^{\alpha}\) for different rates \(\alpha<0\). Solid lines are computed distances, dashed lines the upper bounds predicted by (3.10). All chains were started with the same random seed.

Afterwards, we run the inexact algorithm with the same step size and same number of samples but different error levels \(\varepsilon>0\). This is implemented by deliberately computing the proximal points \(\text{prox}_{\gamma G}(X^{k+2/3})\) only approximately. We ensure the type-2 approximation as suggested in Section 3.2 by constructing a sequence \(z^{n}\) which converges to the true solution of the dual problem (3.3) and stopping the iteration once the duality gap (3.4) is less than or equal to \(\varepsilon\). Since we do not know all the constants in the convergence results, the accuracy \(\varepsilon\) is in practice chosen depending on the size of the duality gap. Similarly to [5], we scale \(\varepsilon\) w.r.t. a reference constant of the duality gap evaluated for the first sample with \(z=0\) and the corresponding primal iterate \[\varepsilon=\delta\eta,\quad\delta:=\mathcal{G}(X^{0+2/3},0),\] and test different values of \(\eta\leq 1\). Note that the extreme inexact case of taking \(\eta=1\) and accepting as inexact dual solution \(z=0\) coincides with primal inexact proximal points \(X^{k+1}=X^{k+2/3}\) and hence with applying ULA (2.3) only to the likelihood term and ignoring the \(l_{1}\) regularizing prior. The smaller \(\eta\) is chosen, the more exact the approximation to the proximal point. We run the algorithm for \(\eta=10^{-0.1},10^{-0.5},10^{-2}\) and report the resulting images reconstructed from the MMSE estimates in the second row of Figure 3. Both visually and in terms of the PSNR between the reconstruction and the ground truth, we can see the effect of the inexactness in the proximal operator and the resulting regularizing effect of more accurate proximal points.

Figure 3: Comparison of the exact and inexact versions of PGLA for wavelet-based deblurring. For the inexact version we use errors \(\varepsilon=\delta\eta\).

### TV-regularized denoising and deblurring

Next, we consider image denoising and deblurring with a total variation (TV) prior. As before, the negative log-likelihood is given by \(F(x)=\|Ax-y\|_{2}^{2}/2\sigma^{2}\). For denoising, \(A\) is the identity. For deblurring, we now consider a more challenging problem with a Gaussian blur kernel. As regularization, we choose a Gibbs prior with density proportional to \(\exp(-G(x))=\exp(-\mu\operatorname{TV}(x))\). The TV functional is defined by \(\operatorname{TV}(x):=\sum_{i,j}\sqrt{(\Delta_{h}x)_{i,j}^{2}+(\Delta_{v}x)_{i,j}^{2}}\) where \(\Delta_{h},\Delta_{v}\) denote horizontal and vertical finite differences in the pixelated image. We can also write this as \(\operatorname{TV}(x)=\|Dx\|_{2,1}\) with \(D=(\Delta_{h},\Delta_{v})\), where the inner \(l_{2}\)-norm is taken over the vertical and horizontal finite differences and the \(l_{1}\)-norm over all \(n_{1}n_{2}\) pixels.
The TV functional is convex, continuous and non-smooth. Due to the non-smoothness, gradient-based methods can be applied neither in optimization nor in sampling tasks; instead, one often uses proximal methods in problems involving TV terms. The proximal mapping of TV is \[\operatorname{prox}_{\gamma\operatorname{TV}}(y)=\operatorname{arg\,min}_{x}\left\{\|Dx\|_{2,1}+\frac{1}{2\gamma}\|x-y\|_{2}^{2}\right\},\] which is an instance of the problem (3.2) with \(B=D\) and \(w=\|\cdot\|_{2,1}\). Since the solution has no closed form, its computation (which amounts to solving the ROF problem [34]) is usually carried out by an iterative procedure, which only gives an approximation instead of the exact point. In the numerical tests carried out here, we solve the dual problem (3.3) as suggested in Section 3 and proposed first in [28], which reads \[\hat{z}=\operatorname*{arg\,min}_{z\,:\,\|z\|_{2,\infty}\leq\gamma}\left\{\frac{\gamma}{2}\|D^{*}z\|_{2}^{2}-\langle D^{*}z,y\rangle\right\}. \tag{4.1}\] The dual solution \(\hat{z}\) is approximated by applying accelerated proximal gradient descent (AGD), see [17, 26] for details. We track the duality gap \(\mathcal{G}(x^{n},z^{n})\) along the dual iterates \(z^{n}\in\mathbb{R}^{n_{1}\times n_{2}\times 2}\) and the corresponding primal iterates \(x^{n}=y-D^{*}z^{n}\) and stop AGD when \(\mathcal{G}(x^{n},z^{n})\leq\varepsilon\). Since strong duality holds, this implies by Lemma 3.5 that \(x^{n}\) is an \(\varepsilon\)-approximation to the exact proximal point.

For denoising, we scale an image of size \(512\times 512\) to gray scale values in \([0,1]\) and set \(\sigma=0.2\). The regularization parameter \(\mu\) in the prior is set using an empirical Bayesian strategy which approximates the marginal MLE for \(\mu\) given the data \(y\), as proposed in [4, 7]. In Figure 4 we show the ground truth image, corrupted version and MAP estimate.

Figure 4: Test image for the TV-denoising experiment. We always show a close-up of an area on the right to visualize details.

We compute \(10^{5}\) samples from the posterior using Algorithm 1 and show MMSE and pixelwise standard deviation images in Figure 5. As before, we set \(\varepsilon=\delta\eta\) with \(\delta=\mathcal{G}(X^{0+2/3},0)\) and run each model four times with values \(\eta=10^{1},10^{0},10^{-1},10^{-2}\). The results show the trade-off role of the errors \(\varepsilon_{k}\) and their contribution to the sampling bias. With \(\eta=10\), the likelihood gradient step pushes the iterate towards the noisy image in every iteration, and one inner iteration towards the proximal point w.r.t. the TV prior is not sufficient to have any proper regularizing effect. The bias introduced by the errors is too large: the samples stay in a region around the noisy image and, although we see some contours and edges in the posterior standard deviation, the posterior mean is close to the noisy image. With smaller errors \(\eta=10^{0},10^{-1},10^{-2}\), the TV regularization effect becomes increasingly visible. However, the computational effort increases significantly, see Table 1, where we report the total and average number of iterations that it took to achieve the required accuracy in the proximal point in each sampling iteration.

As a final, more ill-posed problem we consider image deblurring from a Gaussian blur with a TV prior. The blur kernel has a standard deviation of 1.5 pixels and we add normally distributed noise (\(\sigma=0.1\)) to the blurred image.
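A minimal sketch of this inner solver is given below. It is our own variant, not the repository code: plain projected gradient on the dual instead of the accelerated method, written in the scaling \(x=v-\gamma D^{*}z\) with constraint \(\|z\|_{2,\infty}\leq\mu\) (one consistent way to pose the dual of \(\operatorname{prox}_{\gamma\mu\operatorname{TV}}\)), and stopped once the duality gap \(\mu\|Dx\|_{2,1}-\langle Dx,z\rangle\) falls below \(\varepsilon\). All numeric parameters in the demo call are illustrative.

```python
import numpy as np

def grad_img(x):                       # forward differences D x, Neumann boundary
    gx = np.zeros(x.shape + (2,))
    gx[:-1, :, 0] = x[1:, :] - x[:-1, :]
    gx[:, :-1, 1] = x[:, 1:] - x[:, :-1]
    return gx

def div_img(z):                        # negative adjoint of grad_img, i.e. -D^T z
    d = np.zeros(z.shape[:2])
    d[:-1, :] += z[:-1, :, 0];  d[1:, :] -= z[:-1, :, 0]
    d[:, :-1] += z[:, :-1, 1];  d[:, 1:] -= z[:, :-1, 1]
    return d

def prox_tv(v, gamma, mu, eps, max_iter=500):
    """eps-inexact prox of gamma*mu*TV at v: projected gradient on the dual
    min_{||z||_{2,inf} <= mu} (gamma/2)||D^T z||^2 - <D^T z, v>,
    stopped once the duality gap mu*||Dx||_{2,1} - <Dx, z> is <= eps."""
    z = np.zeros(v.shape + (2,))
    tau = 1.0 / (8.0 * gamma)          # ||D||^2 <= 8 for forward differences
    for _ in range(max_iter):
        x = v + gamma * div_img(z)     # primal iterate x = v - gamma D^T z
        dx = grad_img(x)
        norms = np.sqrt((dx ** 2).sum(axis=-1))
        gap = mu * norms.sum() - (dx * z).sum()
        if gap <= eps:
            break
        z = z + tau * dx               # gradient ascent step on the dual
        # project onto the pixelwise l2 balls {||z_ij||_2 <= mu}
        nz = np.maximum(np.sqrt((z ** 2).sum(axis=-1, keepdims=True)) / mu, 1.0)
        z = z / nz
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v = rng.random((64, 64))
    x = prox_tv(v, gamma=0.04, mu=1.0, eps=1e-3)
    print("done:", x.shape)
```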
Since the blur operator is very ill-conditioned, the posterior distribution has a fairly large region of high probability compared to the largest possible step size \(L^{-1}=\sigma^{2}/\lambda_{\max}(A^{*}A)\). The Markov chain therefore has a longer burn-in time and we have to run the sampling algorithm longer (compared to the denoising example) in order to get representative samples from the whole posterior distribution. We compute \(10^{6}\) samples and reduce the computational complexity by considering a \(256\times 256\) image. The results, including MMSE estimates and pixelwise estimated standard deviation, are shown in Figure 6. In the deblurring examples the number of steps necessary to achieve a sufficient approximation of the proximal points turned out to be very small, presumably because the proximal points are very close to the initial guess \(X^{k+2/3}\) in each step due to the smaller step size and the smaller regularization parameter. In particular, the first column of results was computed with a Markov chain using on average only a single inner step of gradient descent on (4.1) in every sampling step (\(\varepsilon=\delta\eta\) with \(\delta\) as before and \(\log_{10}\eta=-1\)). With smaller errors \(\log_{10}\eta=-2\) and \(\log_{10}\eta=-3\) (corresponding to on average 4 and 13 iterations for every sample), the samples' distribution and the considered point estimates barely changed; e.g., the PSNR of the posterior mean changed by 0.01% from the experiment with 1 iteration to the one with 13 iterations in each step.

\begin{table} \begin{tabular}{c|c|c|c|c} \(\log_{10}\eta\) & 1 & 0 & \(-1\) & \(-2\) \\ \hline total AGD steps & \(1.0\cdot 10^{5}\) & \(3.0\cdot 10^{5}\) & \(1.2\cdot 10^{6}\) & \(3.1\cdot 10^{6}\) \\ \hline avg. AGD steps per sample & 1.0 & 3.0 & 12 & 31 \\ \hline PSNR of MMSE estimate & 18.26 & 19.98 & 20.54 & 20.61 \\ \end{tabular} \end{table} Table 1: In every iteration of the inexact PGLA algorithm applied to TV denoising, we approximate the proximal point by applying accelerated gradient descent (AGD) to the dualized ROF model (4.1), see [17]. We report the total and average number of steps in AGD to reach the accuracies \(\varepsilon=\delta\eta\) for different \(\eta\).

Figure 5: TV-denoising: MMSE estimates (top) and logarithm of pixelwise standard deviation (bottom) of \(10^{5}\) samples for fixed proximal accuracy levels \(\varepsilon=\delta\eta\). See also the corresponding results in Table 1.

Figure 6: TV-deblurring experiment. Top row: reference images. Middle row: MMSE estimates given by the mean of samples generated by Algorithm 1, all with \(\text{PSNR}=23.75\). Bottom row: corresponding log-10 of pixelwise standard deviation.

## 5 Conclusion

In this work we proposed a new framework of proximal Langevin sampling for inexact proximal operators. We generalized existing non-asymptotic and asymptotic convergence results on the exact proximal gradient Langevin algorithm to the inexact case. The additional bias between the stationary distribution and the target due to errors is quantified for types of errors that can be ensured efficiently in numerical settings. Our numerical experiments reflect the theoretical results and show how the error in the proximal points can be traded for speed of the algorithm. In particular, in regimes where the step size dominates the algorithm's bias, it can be useful to stop the inner iteration approximating the proximal map after very few iterations, giving efficient sampling schemes in high-dimensional problems.

## Acknowledgements

MJE acknowledges support from EPSRC (EP/S026045/1, EP/T026693/1, EP/V026259/1) and the Leverhulme Trust (ECF-2019-478). LK acknowledges support from the German Federal Ministry of Education and Research BMBF (15S59431 B). LK and CBS acknowledge support from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777826 NoMADS. CBS further acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC advanced career fellowship EP/V029428/1, EPSRC grants EP/S026045/1 and EP/T003553/1, EP/N014588/1, EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute.
2307.16700
Forgetting 1-Limited Automata
We introduce and investigate forgetting 1-limited automata, which are single-tape Turing machines that, when visiting a cell for the first time, replace the input symbol in it by a fixed symbol, so forgetting the original contents. These devices have the same computational power as finite automata, namely they characterize the class of regular languages. We study the cost in size of the conversions of forgetting 1-limited automata, in both nondeterministic and deterministic cases, into equivalent one-way nondeterministic and deterministic automata, providing optimal bounds in terms of exponential or superpolynomial functions. We also discuss the size relationships with two-way finite automata. In this respect, we prove the existence of a language for which forgetting 1-limited automata are exponentially larger than equivalent minimal deterministic two-way automata.
Giovanni Pighizzini, Luca Prigioniero
2023-07-31T14:18:42Z
http://arxiv.org/abs/2307.16700v2
# Forgetting \(1\)-Limited Automata

###### Abstract

We introduce and investigate _forgetting \(1\)-limited automata_, which are single-tape Turing machines that, when visiting a cell for the first time, replace the input symbol in it by a fixed symbol, so forgetting the original contents. These devices have the same computational power as finite automata, namely they characterize the class of regular languages. We study the cost in size of the conversions of forgetting \(1\)-limited automata, in both nondeterministic and deterministic cases, into equivalent one-way nondeterministic and deterministic automata, providing optimal bounds in terms of exponential or superpolynomial functions. We also discuss the size relationships with two-way finite automata. In this respect, we prove the existence of a language for which forgetting \(1\)-limited automata are exponentially larger than equivalent minimal deterministic two-way automata.

## 1 Introduction

Limited automata have been introduced in 1967 by Hibbard, with the aim of generalizing the notion of determinism for context-free languages [7]. These devices regained attention in the last decade, mainly from a descriptional complexity point of view, and they have been considered in several papers, starting with [15, 16]. (For a recent survey see [14].) In particular, \(1\)_-limited automata_ are single-tape nondeterministic Turing machines that are allowed to rewrite the content of each tape cell only in the first visit. They have the same computational power as finite automata [25, Thm. 12.1], but they can be extremely more succinct. Indeed, in the worst case the size gap from the descriptions of \(1\)-limited automata to those of equivalent one-way deterministic finite automata is double exponential [15]. In order to understand this phenomenon better, we recently studied two restrictions of \(1\)-limited automata [18]. In the first restriction, called _once-marking \(1\)-limited automata_, during each computation the machine can make only one change to the tape, just marking exactly one cell during the first visit to it. We proved that, under this restriction, a double exponential size gap to one-way deterministic finite automata remains possible. In the second restriction, called _always-marking \(1\)-limited automata_, each tape cell is marked during the first visit. In this way, at each step of the computation, the original content of the cell remains available, together with the information of whether it has already been visited at least once. In this case, the size gap to one-way deterministic finite automata reduces to a single exponential. However, the information about which cells have been already visited still gives extra descriptional power. In fact, the conversion into equivalent two-way finite automata in the worst case is exponential in size, even if the original machine is deterministic and the target machine is allowed to make nondeterministic choices. A natural way to continue these investigations is to ask what happens if in each cell the information about the original input symbol is lost after the first visit. This leads us to introduce and study the subject of this paper, namely _forgetting \(1\)-limited automata_. These devices are \(1\)-limited automata in which, during the first visit to a cell, the input symbol in it is replaced with a unique fixed symbol. Forgetting automata have been introduced in the literature a long time ago [9].
Similarly to the devices we consider here, they can use only one fixed symbol to replace symbols on the tape. However, the replacement is not required to happen in the first visit, thus making it possible to recognize more than regular languages. In contrast, being a restriction of 1-limited automata, forgetting 1-limited automata recognize only regular languages. In this paper, first we study the size costs of the simulations of forgetting 1-limited automata, in both nondeterministic and deterministic versions, by one-way finite automata. The upper bounds we prove are exponential when the simulated and the target machines are nondeterministic and deterministic, respectively; in the other cases they are superpolynomial. These bounds are obtained starting from the conversions of always-marking 1-limited automata into one-way finite automata presented in [18], whose costs, in the case we are considering, can be reduced using techniques and results derived in the context of automata over a one-letter alphabet [3, 12]. We also provide witness languages showing that these upper bounds cannot be improved asymptotically. In the last part of the paper we discuss the relationships with the size of two-way finite automata, which are not completely clear. We show that losing the information on the input content can reduce the descriptional power. In fact, we show languages for which forgetting 1-limited automata, even if nondeterministic, are exponentially larger than minimal two-way deterministic finite automata. We conjecture that also the converse can happen. In particular we show a family of languages for which we conjecture that two-way finite automata, even if nondeterministic, must be significantly larger than minimal deterministic forgetting 1-limited automata.

## 2 Preliminaries

In this section we recall some basic definitions useful in the paper. Given a set \(S\), #\(S\) denotes its cardinality and \(2^{S}\) the family of all its subsets. Given an alphabet \(\Sigma\) and a string \(w\in\Sigma^{*}\), \(|w|\) denotes the length of \(w\), \(|w|_{a}\) the number of occurrences of \(a\) in \(w\), and \(\Sigma^{k}\) the set of all strings on \(\Sigma\) of length \(k\). We assume the reader to be familiar with notions from formal languages and automata theory, in particular with the fundamental variants of finite automata (1dfas, 1nfas, 2dfas, 2nfas, for short, where 1/2 mean _one-way/two-way_ and d/n mean _deterministic/nondeterministic_, respectively). For any unfamiliar terminology see, e.g., [8]. A _\(1\)-limited automaton_ (1-la, for short) is a tuple \(A=(Q,\Sigma,\Gamma,\delta,q_{I},F)\), where \(Q\) is a finite _set of states_, \(\Sigma\) is a finite _input alphabet_, \(\Gamma\) is a finite _work alphabet_ such that \(\Sigma\cup\{\rhd,\lhd\}\subseteq\Gamma\), \(\rhd,\lhd\notin\Sigma\) are two special symbols, called the _left_ and the _right end-markers_, \(\delta:Q\times\Gamma\to 2^{Q\times(\Gamma\setminus\{\rhd,\lhd\})\times\{-1,+1\}}\) is the _transition function_, and \(F\subseteq Q\) is a set of final states. At the beginning of the computation, the input word \(w\in\Sigma^{*}\) is stored onto the tape surrounded by the two end-markers, the left end-marker being in position zero and the right end-marker being in position \(|w|+1\). The head of the automaton is on cell 1 and the state of the finite control is the _initial state_ \(q_{I}\).
In one move, according to \(\delta\) and the current state, \(A\) reads a symbol from the tape, changes its state, replaces the symbol just read from the tape with a new symbol, and moves its head one position forward or backward. Furthermore, the head cannot pass the end-markers, except at the end of computation, to accept the input, as explained below. Rewriting may modify the content of each cell only during the first visit, with the exception of the cells containing the end-markers, which are never modified. Hence, after the first visit, a tape cell is "frozen". More technical details can be found in [15]. The automaton \(A\) accepts an input \(w\) if and only if there is a computation path that starts from the initial state \(q_{I}\) with the input tape containing \(w\) surrounded by the two end-markers and the head on the first input cell, and which ends in a _final state_ \(q\in F\) after passing the right end-marker. The device \(A\) is said to be _deterministic_ (d-1-la, for short) whenever \(\#\delta(q,\sigma)\leq 1\), for every \(q\in Q\) and \(\sigma\in\Gamma\). We say that the 1-la \(A\) is a _forgetting_ 1-la (for short f-1-la, or d-f-1-la in the deterministic case), when there is only one symbol \(Z\) that is used to replace symbols in the first visit, i.e., the work alphabet is \(\Gamma=\Sigma\cup\{Z\}\cup\{\rhd,\lhd\}\), with \(Z\notin\Sigma\), and if \((q,A,d)\in\delta(p,a)\) and \(a\in\Sigma\) then \(A=Z\). Two-way finite automata are limited automata in which no rewritings are possible; one-way finite automata can scan the input in a one-way fashion only. A finite automaton is, as usual, a tuple \((Q,\Sigma,\delta,q_{I},F)\), where, analogously to 1-las, \(Q\) is the finite set of states, \(\Sigma\) is the finite input alphabet, \(\delta\) is the transition function, \(q_{I}\) is the initial state, and \(F\) is the set of final states. We point out that for two-way finite automata we assume the same accepting conditions as for 1-las. Two-way machines in which the direction of the head can change only at the end-markers are said to be _sweeping_ [23]. In this paper we are interested in comparing the size of machines. The _size_ of a model is given by the total number of symbols used to write down its description. Therefore, the size of 1-las is bounded by a polynomial in the number of states and of work symbols, while, in the case of finite automata, since no writings are allowed, the size is linear in the number of instructions and states, which is bounded by a polynomial in the number of states and in the number of input symbols. We point out that, since f-1-las use work alphabet \(\Gamma=\Sigma\cup\{Z\}\cup\{\rhd,\lhd\}\), \(Z\notin\Sigma\), the relevant parameter for evaluating the size of these devices is their number of states, differently from 1-las, in which the size of the work alphabet is not fixed, i.e., depends on the machine. We now briefly recall some notions and results related to number theory that will be useful to obtain our cost estimations. First, given two integers \(m\) and \(n\), let us denote by \(\gcd(m,n)\) and by \(\operatorname{lcm}(m,n)\) their _greatest common divisor_ and _least common multiple_, respectively. We remind the reader that each integer \(\ell>1\) can be factorized in a unique way as a product of prime powers, i.e., as \(\ell=p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\), where \(p_{1}<\cdots<p_{r}\) are primes, and \(k_{1},\ldots,k_{r}>0\).
In our estimations, we shall make use of _Landau's function_ \(F(n)\) [11, 10], which plays an important role in the analysis of simulations among different types of unary automata (e.g. [3, 5, 12]). Given a positive integer \(n\), let \[F(n)=\max\{\operatorname{lcm}(\lambda_{1},\ldots,\lambda_{r})\ \mid\ \lambda_{1}+\cdots+\lambda_{r}=n\}\,,\] where \(\lambda_{1},\ldots,\lambda_{r}\) denote, for the time being, arbitrary positive integers. Szalay [24] gave a sharp estimation of \(F(n)\) that, after some simplifications, can be formulated as follows: \[F(n)=e^{(1+o(1))\cdot\sqrt{n\cdot\ln n}}.\] Note that the function \(F(n)\) grows less than \(e^{n}\), but more than each polynomial in \(n\). In this sense we say that \(F(n)\) is a _superpolynomial function_. As observed in [6], for each integer \(n>1\) the value of \(F(n)\) can also be expressed as the maximum product of powers of primes, whose sum is bounded by \(n\), i.e., \[F(n)=\max\{p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\ \mid\ p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}\leq n,\,p_{1},\ldots,p_{r}\text{ are primes, and }k_{1},\ldots,k_{r}>0\}.\]

## 3 Forgetting 1-Limited Automata vs. One-Way Automata

When forgetting 1-limited automata visit a cell for the first time, they replace the symbol in it with a fixed symbol \(Z\), namely they forget the original content. In this way, each input prefix can be rewritten in a unique way. As already proved for _always-marking 1-las_, this prevents a double exponential size gap in the conversion to 1dfas [18]. However, in this case the upper bounds obtained for always-marking 1-las can be further reduced, using the fact that only one symbol is used to replace input symbols:

**Theorem 1**: _Let \(M\) be an \(n\)-state f-1-la. Then \(M\) can be simulated by a 1nfa with at most \(n\cdot(5n^{2}+F(n))+1\) states and by a complete 1dfa with at most \((2^{n}-1)\cdot(5n^{2}+F(n))+2\) states._

_Proof._ First of all, we recall the argument for the conversion of 1-las into 1nfas and 1dfas presented in [15, Thm. 2] that, in turn, is derived from the technique to convert 2dfas into equivalent 1dfas, presented in [22], and based on _transition tables_. Let us start by supposing that \(M=(Q,\Sigma,\Gamma,\delta,q_{0},F)\) is an \(n\)-state 1-la. Roughly, transition tables represent the possible behaviors of \(M\) on "frozen" tape segments. More precisely, given \(z\in\Gamma^{*}\), the _transition table_ associated with \(z\) is the binary relation \(\tau_{z}\subseteq Q\times Q\), consisting of all pairs \((p,q)\) such that \(M\) has a computation path that starts in the state \(p\) on the rightmost symbol of a tape segment containing \(\rhd z\), ends reaching the state \(q\) by leaving the same tape segment to the right side, i.e., by moving from the rightmost cell of the segment to the right, and does not visit any cell outside the segment. A 1nfa \(A\) can simulate \(M\) by keeping in the finite control two components:

* The transition table corresponding to the part of the tape at the left of the head. This part has been already visited and, hence, it is frozen.
* The state in which the simulated computation of \(M\) reaches the current tape position.

Since the number of transition tables is at most \(2^{n^{2}}\), the number of states in the resulting 1nfa \(A\) is bounded by \(n\cdot 2^{n^{2}}\). Applying the subset construction, this automaton can be converted into an equivalent deterministic one, with an exponential increase in the number of states, so obtaining a double exponential number of states in \(n\).
In the general case, this number cannot be reduced, due to the fact that different computations of \(A\), after reading the same input, could keep in the control different transition tables, since \(M\) could replace the same input by different strings. We now suppose that \(M\) is an f-1-la. In this case each input string can be replaced by a unique string. This reduces the cost of the conversion to 1dfas to a single exponential. Indeed, it is possible to convert the 1nfa \(A\) obtained from \(M\) into an equivalent 1dfa that keeps in its finite control the _unique_ transition table for the part of the tape scanned so far (namely, the same first component as in the state of \(A\)), and the set of states that are reachable by \(M\) when entering the current tape cell (namely, a set of states that can appear in the second component of \(A\), while entering the current tape cell). This leads to an upper bound of \(2^{n}\cdot 2^{n^{2}}\) states for the resulting 1dfa. We can make a further improvement, reducing the number of transition tables used during the simulation. Indeed, we are going to prove that only a subset of all the possible \(2^{n^{2}}\) transition tables can appear during the simulation. Since only a fixed symbol \(Z\) is used to replace input symbols on the tape, the transition table when the head is in a cell depends only on the position of the cell and not on the initial tape content. For each integer \(m\geq 0\), let us call \(\tau_{m}\) the transition table corresponding to a frozen tape segment of length \(m\), namely the transition table when the head of the simulating one-way automaton is on the tape cell \(m+1\). We are going to prove that the sequence \(\tau_{0},\tau_{1},\ldots,\tau_{m},\ldots\) is ultimately periodic, with period length bounded by \(F(n)\) and, more precisely, \(\tau_{m}=\tau_{m+F(n)}\) for each \(m>5n^{2}\). The proof is based on the analysis of computation paths in unary 2nfas carried out in [12, Section 3]. Indeed, we can see the parts of the computation on a frozen tape segment as computation paths of a unary 2nfa. More precisely, by definition, for \(p,q\in Q\), \(\tau_{m}(p,q)=1\) if and only if there is a computation path \(C\) that enters the frozen tape segment of length \(m\) from the right in the state \(p\) and, after some steps, exits the segment to the right in the state \(q\). Hence, during the path \(C\) the head can visit only frozen cells (i.e., the cells in positions \(1,\ldots,m\)) of the tape, and the left end-marker. There are two possible cases:

* _In the computation path \(C\) the head never visits the left end-marker._ A path of this kind is also called _left U-turn_. Since it does not depend on the position of the left end-marker, this path will also be possible, suitably shifted to the right, on each frozen segment of length \(m^{\prime}>m\). Hence \(\tau_{m^{\prime}}(p,q)=1\) for each \(m^{\prime}\geq m\). Furthermore, it has been proven that if there is a left U-turn which starts in the state \(p\) on cell \(m\), and ends in state \(q\), then there exists another left U-turn satisfying the same constraints, in which the head never moves farther than \(n^{2}\) positions to the left of the position \(m\) [12, Lemma 3.1]. So, such a "short" U-turn can be shifted to the left, provided that the tape segment is longer than \(n^{2}\). Hence, in this case \(\tau_{m}(p,q)=1\) implies \(\tau_{m^{\prime}}(p,q)=1\) for each \(m^{\prime}>n^{2}\).
* _In the computation path \(C\) the head visits the left end-marker at least once._ Let \(s_{0},s_{1},\ldots,s_{k}\), \(k\geq 0\), be the sequence of the states in which \(C\) visits the left end-marker. We can decompose \(C\) into a sequence of computation paths \(C_{0},C_{1},\ldots,C_{k},C_{k+1}\), where:
  * \(C_{0}\) starts from the state \(p\) with the head on the cell \(m\) and ends in \(s_{0}\) when the head reaches the left end-marker. \(C_{0}\) is called _right-to-left traversal_ of the frozen segment.
  * For \(i=1,\ldots,k\), \(C_{i}\) starts in state \(s_{i-1}\) with the head on the left end-marker and ends in \(s_{i}\), when the head is back to the left end-marker. \(C_{i}\) is called _right U-turn_. Since, as seen before for left U-turns, each right U-turn can always be replaced by a "short" right U-turn, without loss of generality we suppose that \(C_{i}\) does not visit more than \(n^{2}\) cells to the right of the left end-marker.
  * \(C_{k+1}\) starts from the state \(s_{k}\) with the head on the left end-marker and ends in \(q\), when the head leaves the segment, moving to the right of the cell \(m\). \(C_{k+1}\) is called _left-to-right traversal_ of the frozen segment.

From [12, Theorem 3.5], there exists a set of positive integers \(\{\ell_{1},\ldots,\ell_{r}\}\subseteq\{1,\ldots,n\}\) satisfying \(\ell_{1}+\cdots+\ell_{r}\leq n\) such that for \(m\geq n\), if a frozen tape segment of length \(m\) can be (left-to-right or right-to-left) traversed from a state \(s\) to a state \(s^{\prime}\) then there is an index \(i\in\{1,\ldots,r\}\) such that, for each \(\mu>\frac{5n^{2}-m}{\ell_{i}}\), a frozen tape segment of length \(m+\mu\ell_{i}\) can be traversed (in the same direction) from state \(s\) to state \(s^{\prime}\). This was proved by showing that for \(m>5n^{2}\) a traversal from \(s\) to \(s^{\prime}\) of a segment of length \(m\) can always be "pumped" to obtain a traversal of a segment of length \(m^{\prime}=m+\mu\ell_{i}\), for \(\mu>0\), and, furthermore, the segment can be "unpumped" by taking \(\mu<0\), provided that the resulting length \(m^{\prime}\) is greater than \(5n^{2}\). Let \(\ell\) be the least common multiple of \(\ell_{1},\ldots,\ell_{r}\). If \(m>5n^{2}\), from the original computation path \(C\), by suitably pumping or unpumping the parts \(C_{0}\) and \(C_{k+1}\), and without changing \(C_{i}\), for \(i=1,\ldots,k\), for each \(m^{\prime}=m+\mu\ell>5n^{2}\), with \(\mu\in\mathbb{Z}\), we can obtain a computation path that enters a frozen segment of length \(m^{\prime}\) from the right in the state \(p\) and exits the segment to the right in the state \(q\). Summarizing, from the previous analysis we conclude that for all \(m,m^{\prime}>5n^{2}\), if \(m\equiv m^{\prime}\pmod{\ell}\) then \(\tau_{m}=\tau_{m^{\prime}}\). Hence, the transition tables used in the simulation are at most \(5n^{2}+\ell\). Since, by definition, \(\ell\) cannot exceed \(F(n)\), we obtain that the number of different transition tables used in the simulation is bounded by \(5n^{2}+1+F(n)\). According to the construction outlined at the beginning of the proof, from the f-1-la \(M\) we can obtain a 1nfa \(A\) that, when the head reaches the tape cell \(m+1\), has in the first component of its finite control the transition table \(\tau_{m}\), and in the second component the state in which the cell \(m+1\) is entered for the first time during the simulated computation. Hence the total number of states of \(A\) is bounded by \(n\cdot(5n^{2}+1+F(n))\).
We observe that, at the beginning of the computation, the initial state is the pair containing the transition table \(\tau_{0}\) and the initial state of \(M\). Hence, we do not need to consider other states with \(\tau_{0}\) as first component, unless \(\tau_{0}\) occurs in the sequence \(\tau_{1},\ldots,\tau_{5n^{2}+F(n)}\). This allows us to reduce the upper bound to \(n\cdot(5n^{2}+F(n))+1\). If the simulating automaton \(A\) is a 1dfa, then the first component does not change, while the second component contains the set of states in which the cell \(m+1\) is entered for the first time during all possible computations of \(M\). This would give a \(2^{n}\cdot(5n^{2}+F(n))+1\) state upper bound. However, if the set in the second component is empty then the computation of \(M\) is rejecting, regardless of the remaining part of the input and of what has been written on the tape. Hence, in this case, the simulating 1dfa can enter a sink state. This allows us to reduce the upper bound to \((2^{n}-1)\cdot(5n^{2}+F(n))+2\).

### Optimality: The Language \(\mathcal{L}_{n,\ell}\)

We now study the optimality of the state upper bounds presented in Theorem 1. To this aim, we introduce a family of languages \(\mathcal{L}_{n,\ell}\), which are defined with respect to integer parameters \(n,\ell>0\). Each language in this family consists of all strings whose length is a multiple of \(\ell\) belonging to the language \(L_{MF_{n}}\), which is accepted by the \(n\)-state 1nfa \(A_{MF_{n}}=(Q_{n},\{a,b\},\delta_{n},q_{0},\{q_{0}\})\) depicted in Figure 1, i.e., \(\mathcal{L}_{n,\ell}=L_{MF_{n}}\cap(\{a,b\}^{\ell})^{*}\). The automaton \(A_{MF_{n}}\) was proposed long ago by Meyer and Fischer as a witness of the exponential state gap from 1nfas to 1dfas [13]. Indeed, it can be proved that the smallest 1dfa accepting it has exactly \(2^{n}\) states. In the following we shall refer to some arguments given in the proof of this result presented in [21, Thm. 3.9.6]. Let us start by presenting some simple state upper bounds for the recognition of \(\mathcal{L}_{n,\ell}\) by one-way finite automata.

**Theorem 2**: _For every two integers \(n,\ell>0\), there exists a complete 1dfa accepting \(\mathcal{L}_{n,\ell}\) with \((2^{n}-1)\cdot\ell+1\) states and a 1nfa with \(n\cdot\ell\) states._

Figure 1: The 1nfa \(A_{MF_{n}}\) accepting the language of Meyer and Fischer.

Proof.: We apply the subset construction to convert the 1nfa \(A_{MF_{n}}\) into a 1dfa with \(2^{n}\) states and then, with the standard product construction, we intersect the resulting automaton with the trivial \(\ell\)-state automaton accepting \((\{a,b\}^{\ell})^{*}\). In this way we obtain a 1dfa with \(2^{n}\cdot\ell\) states for \(\mathcal{L}_{n,\ell}\). However, all the states obtained from the sink state, corresponding to the empty set, are equivalent, so they can be replaced by a unique sink state. This allows us to reduce the number of states to \((2^{n}-1)\cdot\ell+1\). In the case of 1nfas we apply the product construction to \(A_{MF_{n}}\) and the \(\ell\)-state automaton accepting \((\{a,b\}^{\ell})^{*}\), so obtaining a 1nfa with \(n\cdot\ell\) states.

We now study how to recognize \(\mathcal{L}_{n,\ell}\) using two-way automata and f-1-las. In both cases we obtain sweeping machines.

**Theorem 3**: _Let \(\ell>0\) be an integer that factorizes \(\ell=p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\) as a product of prime powers and \(o=r\bmod 2\).
Then:_ * \(\mathcal{L}_{n,\ell}\) _is accepted by a sweeping_ 2nfa _with_ \(n+p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o\) _states, that uses nondeterministic transitions only in the first sweep._ * \(\mathcal{L}_{n,\ell}\) _is accepted by a sweeping_ f-1-la _with_ \(\max(n,p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o)\) _states that uses nondeterministic transitions only in the first sweep._ * \(\mathcal{L}_{n,\ell}\) _is accepted by a sweeping_ 2dfa _with_ \(2n+p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o\) _states._ Proof.: In the first sweep, the 2nfa for \(\mathcal{L}_{n,\ell}\), using \(n\) states, simulates the 1nfa \(A_{MF_{n}}\) to check if the input belongs to \(L_{MF_{n}}\). Then, it makes one sweep for each \(i=1,\ldots,r\) (alternating a right-to-left sweep with a left-to-right sweep), using \(p_{i}^{k_{i}}\) states in order to check whether \(p_{i}^{k_{i}}\) divides the input length. If the outcomes of all these tests are positive, then the automaton accepts. When \(r\) is even, the last sweep ends with the head on the right end-marker. Then, moving the head one position to the right, the automaton can reach the accepting configuration. However, when \(r\) is odd, the last sweep ends on the left end-marker. Hence, using an extra state, the head can traverse the entire tape to finally reach the accepting configuration. An f-1-la can implement the same strategy. However, to check if the tape length is a multiple of \(\ell\), it can reuse the \(n\) states used in the first sweep, plus \(p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o-n\) extra states when \(n<p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o\). This is due to the fact that the value of the transition function depends on the state and on the symbol in the tape cell and that, in the first sweep, all the input symbols have been replaced by \(Z\). Finally, we can implement a 2dfa that recognizes \(\mathcal{L}_{n,\ell}\) by first making \(r\) sweeps to check whether \(p_{i}^{k_{i}}\) divides the input length, \(i=1,\ldots,r\). If so, then the automaton, after moving the head from the left to the right end-marker in case \(r\) is even, makes a further sweep from right to left, to simulate a 1dfa accepting the reversal of \(L_{MF_{n}}\), which can be accepted using \(2n\) states [20]. If the simulated automaton accepts, then the machine can make a further sweep, using a single state to move the head from the left end-marker to the right one, and then accept. The total number of states is \(2n+p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+2-o\). This number can be slightly reduced as follows: in the first sweep (which is from left to right) the automaton checks the divisibility of the input length by \(p_{1}^{k_{1}}\); in the second sweep (from right to left) the automaton checks membership in \(L_{MF_{n}}\); in the remaining \(r-1\) sweeps (alternating left-to-right with right-to-left sweeps), it checks the divisibility by \(p_{i}^{k_{i}}\), \(i=2,\ldots,r\). So, the total number of sweeps for these checks is \(r+1\). This means that, when \(r\) is even, the last sweep ends on the right end-marker and the machine can immediately move to the accepting configuration. Otherwise the head needs to cross the input from left to right, using an extra state. As a consequence of Theorem 3, in the case of f-1-las we immediately obtain: **Corollary 1**: _For each \(n>0\) the language \(\mathcal{L}_{n,F(n)}\) is accepted by an f-1-la with at most \(n+1\) states._ _Proof._ If \(F(n)=p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\) then \(p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}\leq n\leq F(n)\).
Hence, the statement follows from Theorem 3. We are now going to prove lower bounds for the recognition of \(\mathcal{L}_{n,\ell}\), in the case where \(n\) and \(\ell\) are relatively prime. Let us start by considering the recognition by 1dfas. **Theorem 4**: _Given two integers \(n,\ell>0\) with \(\gcd(n,\ell)=1\), each 1dfa accepting \(\mathcal{L}_{n,\ell}\) must have at least \((2^{n}-1)\cdot\ell+1\) states._ _Proof._ Let \(Q_{n}=\{q_{0},q_{1},\ldots,q_{n-1}\}\) be the set of states of \(A_{MF_{n}}\) (see Figure 1). First, we briefly recall some arguments from the proof presented in [21, Thm. 3.9.6]. For each subset \(S\) of \(Q_{n}\), we define a string \(w_{S}\) having the property that \(\delta_{n}(q_{0},w_{S})=S\). Furthermore, it is proved that all the strings so defined are pairwise distinguishable, so obtaining the state lower bound \(2^{n}\) for each 1dfa equivalent to \(A_{MF_{n}}\). In particular, the string \(w_{S}\) is defined as follows: \[w_{S}=\left\{\begin{array}{ll}b&\mbox{if $S=\emptyset$;}\\ a^{i}&\mbox{if $S=\{q_{i}\}$;}\\ a^{e_{k}-e_{k-1}}ba^{e_{k-1}-e_{k-2}}b\cdots a^{e_{2}-e_{1}}ba^{e_{1}},&\mbox{ otherwise;}\end{array}\right. \tag{1}\] where in the second case \(S=\{q_{i}\}\), \(0\leq i<n\), while in the third case \(S=\{q_{e_{1}},q_{e_{2}},\ldots,q_{e_{k}}\}\), \(1<k\leq n\), and \(0\leq e_{1}<e_{2}<\cdots<e_{k}<n\). To obtain the claimed state lower bound in the case of the language \(\mathcal{L}_{n,\ell}\), for each nonempty subset \(S\) of \(Q_{n}\) and each integer \(j\), with \(0\leq j<\ell\), we define a string \(w_{S,j}\) which is obtained by suitably padding the string \(w_{S}\) in such a way that the set of states reachable from the initial state by reading \(w_{S,j}\) remains \(S\) and the length of \(w_{S,j}\), divided by \(\ell\), gives \(j\) as remainder. Then we shall prove that all the strings so obtained are pairwise distinguishable. Unlike (1), when defining \(w_{S,j}\) we do not consider the case \(S=\emptyset\). In the following, let us denote by \(f:\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N}\) a function satisfying \(f(i,j)\bmod n=i\) and \(f(i,j)\bmod\ell=j\), for \(i,j\in\mathbb{N}\). Since \(\gcd(n,\ell)=1\), by the Chinese Remainder Theorem, such a function always exists. For each non-empty subset \(S\) of \(Q_{n}\) and each integer \(j\), with \(0\leq j<\ell\), the string \(w_{S,j}\) is defined as: \[w_{S,j}=\left\{\begin{array}{ll}a^{f(i,j)}&\mbox{if $S=\{q_{i}\}$;}\\ a^{e_{k}-e_{k-1}}ba^{e_{k-1}-e_{k-2}}b\cdots a^{e_{2}-e_{1}}b^{H\ell-k-e_{k}+2 +j}a^{e_{1}},&\mbox{otherwise;}\end{array}\right. \tag{2}\] where in the first case \(S=\{q_{i}\}\), \(0\leq i<n\), while in the second case \(S=\{q_{e_{1}},q_{e_{2}},\ldots,q_{e_{k}}\}\), \(1<k\leq n\), \(0\leq e_{1}<e_{2}<\cdots<e_{k}<n\), and \(H\geq 1\) is a fixed integer such that \(H\ell>2n\) (this last condition ensures \(H\ell-k-e_{k}+2+j>0\), so that the last block of \(b\)'s is always well defined and nonempty). We claim and prove the following facts: 1. \(|w_{S,j}|\bmod\ell=j\). If \(S=\{q_{i}\}\), then by definition \(|w_{S,j}|\bmod\ell=f(i,j)\bmod\ell=j\). Otherwise, according to the second case in (2), \(S=\{q_{e_{1}},q_{e_{2}},\ldots,q_{e_{k}}\}\) and \(|w_{S,j}|=e_{k}-e_{k-1}+1+e_{k-1}-e_{k-2}+1+\cdots+e_{2}-e_{1}+H\ell-k-e_{k}+2+ j+e_{1}\), which is equal to \(H\ell+j\). 2. \(\delta_{n}(q_{0},w_{S,j})=S\). In the automaton \(A_{MF_{n}}\), all the transitions on the letter \(a\) are deterministic.
Furthermore, by reading the string \(a^{x}\), \(x>0\), from the state \(q_{0}\), the only reachable state is \(q_{x\bmod n}\). Hence, for the first case \(S=\{q_{i}\}\) in (2) we have \(\delta_{n}(q_{0},w_{S,j})=\{q_{f(i,j)\bmod n}\}=\{q_{i}\}\). For the second case, we already mentioned that \(\delta_{n}(q_{0},w_{S})=S\). Furthermore, \(w_{S,j}\) is obtained from \(w_{S}\) by replacing the rightmost \(b\) by a block of more than one \(b\). From the transition diagram of \(A_{MF_{n}}\) we observe that from each state \(q_{i}\), with \(i>0\), reading a \(b\) the automaton can either remain in \(q_{i}\) or move to \(q_{0}\). Furthermore, from \(q_{0}\) there are no transitions on the letter \(b\). This allows us to conclude that the behavior does not change when one replaces an occurrence of \(b\) in a string with a sequence of more than one \(b\). Hence, \(\delta_{n}(q_{0},w_{S,j})=\delta_{n}(q_{0},w_{S})=S\). 3. _For \(i=0,\ldots,n-1\) and \(x\geq 0\), \(\delta_{n}(q_{i},a^{x})=\{q_{i^{\prime}}\}\) where \(i^{\prime}=0\) if and only if \(x\bmod n=n-i\). Hence \(a^{x}\) is accepted by some computation path starting from \(q_{i}\) if and only if \(x\bmod n=n-i\)._ It is enough to observe that all the transitions on the letter \(a\) are deterministic and form a loop visiting all the states. More precisely, \(i^{\prime}=(i+x)\bmod n\). Hence, \(i^{\prime}=0\) if and only if \(x\bmod n=n-i\). We now prove that all the strings \(w_{S,j}\) are pairwise distinguishable. To this aim, let us consider two such strings \(w_{S,j}\) and \(w_{T,h}\), with \((S,j)\neq(T,h)\). We inspect the following two cases: * \(S\neq T\). Without loss of generality, let us consider a state \(q_{s}\in S\setminus T\). We take \(z=a^{f(n-s,\ell-j)}\). By the previous claims, we obtain that \(w_{S,j}\cdot z\in L_{MF_{n}}\), while \(w_{T,h}\cdot z\notin L_{MF_{n}}\). Furthermore, \(|w_{S,j}\cdot z|\bmod\ell=(j+\ell-j)\bmod\ell=0\). Hence \(w_{S,j}\cdot z\in(\{a,b\}^{\ell})^{*}\). This allows us to conclude that \(w_{S,j}\cdot z\in\mathcal{L}_{n,\ell}\), while \(w_{T,h}\cdot z\notin\mathcal{L}_{n,\ell}\). * \(j\neq h\). We choose a state \(q_{s}\in S\) and, again, the string \(z=a^{f(n-s,\ell-j)}\). Exactly as in the previous case we obtain \(w_{S,j}\cdot z\in\mathcal{L}_{n,\ell}\). Furthermore, since \(j\neq h\) and \(0\leq j,h<\ell\), we get that \(|w_{T,h}\cdot z|\bmod\ell=(h+\ell-j)\bmod\ell\neq 0\). Hence \(w_{T,h}\cdot z\notin(\{a,b\}^{\ell})^{*}\), thus implying \(w_{T,h}\cdot z\notin\mathcal{L}_{n,\ell}\). Summarizing, we have proved that all the above defined \((2^{n}-1)\cdot\ell\) strings \(w_{S,j}\) are pairwise distinguishable. We also observe that no string starting with the letter \(b\) is accepted by the automaton \(A_{MF_{n}}\).1 This implies that the string \(b\) and each string \(w_{S,j}\) are distinguishable. Hence, we are able to conclude that each 1dfa accepting \(\mathcal{L}_{n,\ell}\) has at least \((2^{n}-1)\cdot\ell+1\) states. Footnote 1: We point out that two strings that in \(A_{MF_{n}}\) lead to the empty set are not distinguishable. This is the reason why we did not consider strings of the form \(w_{\emptyset,j}\) in (2). Concerning 1nfas, we prove the following: **Theorem 5**: _Given two integers \(n,\ell>0\) with \(\gcd(n,\ell)=1\), each 1nfa accepting \(\mathcal{L}_{n,\ell}\) must have at least \(n\cdot\ell\) states._ _Proof._ The proof can be easily given by observing that \(X=\{(a^{i},a^{n\cdot\ell-i})\;\mid\;i=0,\ldots,n\cdot\ell-1\}\) is a _fooling set_ for \(\mathcal{L}_{n,\ell}\) [2].
Hence, the number of states of each 1nfa for \(\mathcal{L}_{n,\ell}\) cannot be lower than the cardinality of \(X\). As a consequence of Theorems 4 and 5 we obtain: **Theorem 6**: _For each prime \(n>4\), every 1dfa and every 1nfa accepting \(\mathcal{L}_{n,F(n)}\) needs \((2^{n}-1)\cdot F(n)+1\) and \(n\cdot F(n)\) states, respectively._ _Proof._ First, we prove that \(\gcd(n,F(n))=1\) for each prime \(n>4\). To this aim, we observe that by definition \(F(n)\geq 2\cdot(n-2)\) for each prime \(n\). Furthermore, if \(n>4\) then \(2\cdot(n-2)>n\). Hence \(F(n)>n\) for each prime \(n>4\). Suppose that \(\gcd(n,F(n))\neq 1\). Then \(n\), being prime and less than \(F(n)\), should divide \(F(n)\). By definition of \(F(n)\), this would imply \(F(n)=n\); a contradiction. This allows us to conclude that \(\gcd(n,F(n))=1\), for each prime \(n>4\). Using Theorems 4 and 5, we get that, for all such \(n\)'s, a 1dfa needs at least \((2^{n}-1)\cdot F(n)+1\) states to accept \(\mathcal{L}_{n,F(n)}\), while an equivalent 1nfa needs at least \(n\cdot F(n)\) states. As a consequence of Theorem 6, for infinitely many \(n\), the 1dfa and 1nfa for the language \(\mathcal{L}_{n,F(n)}\) described in Theorem 2 are minimal. By combining the results in Corollary 1 and Theorem 6, we obtain that the costs of the simulations of f-1-las by 1nfas and 1dfas presented in Theorem 1 are asymptotically optimal: **Corollary 2**: _For infinitely many integers \(n\) there exists a language which is accepted by an f-1-la with at most \(n+1\) states and such that all equivalent 1dfas and 1nfas require at least \((2^{n}-1)\cdot F(n)+1\) and \(n\cdot F(n)\) states, respectively._ ## 4 Deterministic Forgetting \(1\)-Limited Automata vs. One-Way Automata In Section 3 we studied the size costs of the conversions from f-1-las to one-way finite automata. We now restrict our attention to the simulation of deterministic machines. By adapting to this case the arguments used to prove Theorem 1, we obtain a superpolynomial state bound for the conversion into 1dfas, which is not so far from the bound obtained starting from nondeterministic machines: **Theorem 7**: _Let M be an n-state d-f-1-la. Then M can be simulated by a 1dfa with at most \(n\cdot(n+F(n))+2\) states._ _Proof._ We can apply the construction given in the proof of Theorem 1 to build, from the given d-f-1-la \(M\), a one-way finite automaton that, when the head reaches the tape cell \(m+1\), has in its finite control the transition table \(\tau_{m}\) associated with the tape segment of length \(m\) and the state in which the cell is reached for the first time. Since the transitions of \(M\) are deterministic, each tape cell is reached for the first time by at most one computation and the resulting automaton is a (possibly partial) 1dfa, with no more than \(n\cdot(5n^{2}+F(n))+1\) states. However, in this case the number of transition tables can be reduced, so decreasing the upper bound. In particular, due to determinism and the unary content in the frozen part, we can observe that left and right U-turns cannot visit more than \(n\) tape cells. Furthermore, after visiting more than \(n\) tape cells, a traversal is repeating a loop. This allows us to show that the sequence of transition tables becomes periodic after the table \(\tau_{n}\), i.e., for \(m,m^{\prime}>n\), if \(m\equiv m^{\prime}\pmod{F(n)}\) then \(\tau_{m}=\tau_{m^{\prime}}\).
Hence, the number of different transition tables used during the simulation is at most \(n+1+F(n)\), and the number of states of the simulating (possibly partial) 1dfa is bounded by \(n\cdot(n+F(n))+1\). By adding one more state we finally obtain a complete 1dfa. ### Optimality: The Language \(\mathcal{J}_{n,\ell}\) We now present a family of languages for which we prove a size gap very close to the upper bound in Theorem 7. Given two integers \(n,\ell>0\), let us consider: \[\mathcal{J}_{n,\ell}=\left\{w\in\left\{a,b\right\}^{*}\ \mid\ \left|w\right|_{a}\bmod n=0 \text{ and }\left|w\right|\bmod\ell=0\right\}.\] First of all, we observe that it is not difficult to recognize \(\mathcal{J}_{n,\ell}\) using a 1dfa with \(n\cdot\ell\) states that counts the number of \(a\)'s with a counter modulo \(n\) and the input length with a counter modulo \(\ell\). This number of states cannot be reduced, even allowing nondeterministic transitions: **Theorem 8**: _Each 1nfa accepting \(\mathcal{J}_{n,\ell}\) has at least \(n\cdot\ell\) states._ _Proof._ Let \(H>\ell+n\) be a multiple of \(\ell\). For \(i=1,\ldots,\ell\), \(j=0,\ldots,n-1\), consider \(x_{ij}=a^{j}b^{H+i-j}\) and \(y_{ij}=b^{H-i-n+j}a^{n-j}\). We are going to prove that the set \[X=\{(x_{ij},y_{ij})\;\mid\;1\leq i\leq\ell,0\leq j<n\}\] is an _extended fooling set_ for \(\mathcal{J}_{n,\ell}\). To this aim, let us consider \(i,i^{\prime}=1,\ldots,\ell\), \(j,j^{\prime}=0,\ldots,n-1\). We observe that the string \(x_{ij}y_{ij}\) contains \(n\) \(a\)'s and has length \(j+H+i-j+H-i-n+j+n-j=2H\) and hence it belongs to \(\mathcal{J}_{n,\ell}\). For \(i,i^{\prime}=1,\ldots,\ell\), if \(i\neq i^{\prime}\) then the string \(x_{ij}y_{i^{\prime}j}\notin\mathcal{J}_{n,\ell}\) because it has length \(2H+i-i^{\prime}\), which cannot be a multiple of \(\ell\). On the other hand, if \(j<j^{\prime}\), the string \(x_{ij}y_{i^{\prime}j^{\prime}}\) contains \(j+n-j^{\prime}<n\) many \(a\)'s, so it cannot belong to \(\mathcal{J}_{n,\ell}\). Hence \(X\) is an extended fooling set for \(\mathcal{J}_{n,\ell}\), and the claimed lower bound follows. Concerning the recognition of \(\mathcal{J}_{n,\ell}\) by f-1-las we prove the following: **Theorem 9**: _Let \(\ell>0\) be an integer that factorizes as \(\ell=p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\), a product of prime powers, let \(o=r\bmod 2\), and let \(n>0\). Then \(\mathcal{J}_{n,\ell}\) is accepted by a sweeping 2dfa with \(n+p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o\) states and by a sweeping d-f-1-la with \(\max(n,p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o)\) states._ _Proof._ A 2dfa can make a first sweep of the input, using \(n\) states, to check if the number of \(a\)'s in the input is a multiple of \(n\). Then, in further \(r\) sweeps, alternating right-to-left with left-to-right sweeps, it can check the divisibility of the input length by \(p_{i}^{k_{i}}\), \(i=1,\ldots,r\). If \(r\) is odd this process ends with the head on the left end-marker. Hence, in this case, when all tests are positive, a further sweep (made using a single state) is used to move the head from the left to the right end-marker and then reach the accepting configuration. We can implement a d-f-1-la that uses the same strategy. However, after the first sweep, all input symbols are replaced by \(Z\). Hence, as in the proof of Theorem 3, the machine can reuse the \(n\) states of the first sweep. So, the total number of states reduces to \(\max(n,p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}+o)\).
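To make the sweep structure concrete, here is a small membership check for \(\mathcal{J}_{n,\ell}\) written in plain Python (our own toy sketch: it mirrors the sweeps above, one counter per sweep, but not the state-level behavior of the machines; the prime-power factorization of \(\ell\) is assumed to be given):

```python
# Toy check for J_{n,l}: one pass counts a's mod n, then one pass per prime
# power p_i^{k_i} checks divisibility of the input length, as in the sweeps
# of the 2dfa / d-f-1-la constructed in the proof above.
def accepts_J(w: str, n: int, prime_powers: list[int]) -> bool:
    count_a = 0
    for c in w:                      # first sweep: |w|_a mod n
        count_a = (count_a + (c == 'a')) % n
    if count_a != 0:
        return False
    for q in prime_powers:           # one sweep per p_i^{k_i}
        length = 0
        for _ in w:                  # re-scan, counting modulo q only
            length = (length + 1) % q
        if length != 0:
            return False
    return True

# e.g. with l = 12 = 4 * 3: accepts_J('ab' * 6, 2, [4, 3]) is True
```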
As a consequence of Theorem 9, we obtain: **Corollary 3**: _For each integer \(n>0\) the language \(\mathcal{J}_{n,F(n)}\) is accepted by a d-f-1-la with at most \(n+1\) states._ By combining the upper bound in Corollary 3 with the lower bound in Theorem 8, we obtain that the superpolynomial cost of the simulation of d-f-1-las by 1dfas given in Theorem 7 is asymptotically optimal and cannot be reduced even if the resulting automaton is nondeterministic: **Corollary 4**: _For each integer \(n>0\) there exists a language accepted by a d-f-1-la with at most \(n+1\) states and such that all equivalent 1dfas and 1nfas require at least \(n\cdot F(n)\) states._ ## 5 Forgetting 1-Limited vs. Two-Way Automata Up to now, we have studied the size costs of the transformations of f-1-las and d-f-1-las into one-way automata. We proved that they cannot be significantly reduced, by providing suitable witness languages. However, we can notice that such languages are accepted by two-way automata whose sizes are not far from those of the f-1-las and d-f-1-las we gave. So we now analyze the size relationships between forgetting and two-way automata. On the one hand, we show that forgetting input symbols can dramatically reduce the descriptional power. Indeed, we provide a family of languages for which f-1-las are exponentially larger than 2dfas. On the other hand, we conjecture that at least a superpolynomial gap is also possible in the opposite direction. To this aim we present a language accepted by a d-f-1-la of size \(O(n)\) and we conjecture that each 2nfa accepting it requires more than \(F(n)\) states. ### From Two-way to Forgetting \(1\)-Limited Automata For each integer \(n>0\), let us consider the following language \[\mathcal{E}_{n}=\{w\in\{a,b\}^{*}\ \mid\ \exists x\in\{a,b\}^{n},\exists y,z\in\{a,b \}^{*}:w=x\cdot y=z\cdot x^{R}\},\] i.e., the set of strings in which the prefix of length \(n\) is equal to the reversal of the suffix of length \(n\). As we shall see, it is possible to obtain a 2dfa with \(O\left(n\right)\) states accepting it. Furthermore, each equivalent f-1-la requires \(2^{n}\) states. To achieve this result, we first give a lower bound technique for the number of states of f-1-las, which is inspired by the _fooling set technique_ for 1nfas [2]. **Lemma 1**: _Let \(L\subseteq\Sigma^{*}\) be a language and \(X=\{(x_{i},y_{i})\ \mid\ i=1,\ldots,n\}\) be a set of words such that the following hold:_ * \(|x_{1}|=|x_{2}|=\cdots=|x_{n}|\)_,_ * \(x_{i}y_{i}\in L\)_, for_ \(i=1,\ldots,n\)_,_ * \(x_{i}y_{j}\notin L\) _or_ \(x_{j}y_{i}\notin L\)_, for_ \(i,j=1,\ldots,n\) _with_ \(i\neq j\)_._ _Then each f-1-la accepting \(L\) has at least \(n\) states._ _Proof._ Let \(M\) be an f-1-la accepting \(L\). Let \(C_{i}\) be an accepting computation of \(M\) on input \(x_{i}y_{i}\), \(i=1,\ldots,n\). We divide \(C_{i}\) into two parts \(C_{i}^{\prime}\) and \(C_{i}^{\prime\prime}\), where \(C_{i}^{\prime}\) is the part of \(C_{i}\) that starts from the initial configuration and ends when the head reaches for the first time the first cell to the right of \(x_{i}\), namely the cell containing the first symbol of \(y_{i}\), while \(C_{i}^{\prime\prime}\) is the remaining part of \(C_{i}\). Let \(q_{i}\) be the state reached at the end of \(C_{i}^{\prime}\), namely the state from which \(C_{i}^{\prime\prime}\) starts. If \(q_{i}=q_{j}\), for some \(1\leq i,j\leq n\), then the computation obtained concatenating \(C_{i}^{\prime}\) and \(C_{j}^{\prime\prime}\) accepts the input \(x_{i}y_{j}\).
Indeed, at the end of \(C_{i}^{\prime}\) and of \(C_{j}^{\prime}\), the content of the tape to the left of the head is replaced by the same string \(Z^{|x_{i}|}=Z^{|x_{j}|}\). So \(M\), after inspecting \(x_{i}\), can perform exactly the same moves as on input \(x_{j}y_{j}\) after inspecting \(x_{j}\) and hence it can accept \(x_{i}y_{j}\). In a similar way, concatenating \(C_{j}^{\prime}\) and \(C_{i}^{\prime\prime}\) we obtain an accepting computation on \(x_{j}y_{i}\). If \(i\neq j\), then this is a contradiction. This allows us to conclude that \(n\) different states are necessary for \(M\). We are now able to prove the claimed separation. **Theorem 10**: _The language \(\mathcal{E}_{n}\) is accepted by a 2dfa with \(O\left(n\right)\) states, while each f-1-la accepting it has at least \(2^{n}\) states._ _Proof._ We can build a 2dfa that on input \(w\in\Sigma^{*}\) tests the equality between the symbols in positions \(i\) and \(|w|-i+1\) of \(w\), for \(i=1,\ldots,n\). If one of the tests fails, then the automaton stops and rejects, otherwise it finally accepts. For each \(i\), the test starts with the head on the left end-marker and the value of \(i\) in the finite control. Then, the head is moved to the right, while decrementing \(i\), to locate the \(i\)th input cell and remember its content in the finite control. At this point, the head is moved back to the left end-marker, while counting input cells to restore the value of \(i\). The input is completely crossed from left to right, by keeping this value in the control. When the right end-marker is reached, a similar procedure is applied to locate the symbol in position \(|w|-i+1\), which is then compared with that in position \(i\), previously stored in the control. If the two symbols are equal, then the head is moved again to the right end-marker, while restoring \(i\). If \(i=n\), then the machine moves to the accepting configuration, otherwise the value of \(i\) is incremented and the head is moved to the left end-marker to prepare the next test. From the above description we can conclude that \(O\left(n\right)\) states are enough for a 2dfa to accept \(\mathcal{E}_{n}\). For the lower bound, we observe that the set \(X=\{(x,x^{R})\ \mid\ x\in\{a,b\}^{n}\}\), whose cardinality is \(2^{n}\), satisfies the requirements of Lemma 1. ### From Forgetting \(1\)-Limited to Two-way Automata We wonder if there is some language showing an exponential, or at least superpolynomial, size gap from f-1-las to two-way automata. Here we propose, as a possible candidate, the following language, where \(n,\ell>0\) are integers: \[\mathcal{H}_{n,\ell}=\{\,ub^{n}v\;\mid\;u\in(a+b)^{*}a,\,v\in(a+b)^{*},\,|u|_{a }\bmod n=0,\text{ and }|u|\bmod\ell=0\}.\] We prove that \(\mathcal{H}_{n,F(n)}\) can be recognized by a d-f-1-la with a number of states linear in \(n\). **Theorem 11**: _For each integer \(n>1\) the language \(\mathcal{H}_{n,F(n)}\) is accepted by a d-f-1-la with \(O(n)\) states._ _Proof._: A d-f-1-la \(M\) can start to inspect the input from left to right, while counting the \(a\)'s modulo \(n\). In this way it can discover each prefix \(u\) that ends with an \(a\) and such that \(|u|_{a}\bmod n=0\). When such a prefix is located, \(M\) verifies whether \(|u|\) is a multiple of \(F(n)\) and whether it is followed by \(b^{n}\). We will discuss how to do that below. If the result of the verification is positive, then \(M\) moves to the accepting configuration, otherwise it continues the same process.
Now we explain how the verification can be performed. Suppose \(F(n)=p_{1}^{k_{1}}\cdots p_{r}^{k_{r}}\), where \(p_{1}^{k_{1}},\ldots,p_{r}^{k_{r}}\) are prime powers. First, we point out that when the verification starts, exactly the first \(|u|\) tape cells have been rewritten. Hence, the rough idea is to alternate right-to-left and left-to-right sweeps on such a portion of the tape, to check the divisibility of \(|u|\) by each \(p_{i}^{k_{i}}\), \(i=1,\ldots,r\). A right-to-left sweep stops when the head reaches the left end-marker. On the other hand, a left-to-right sweep can end only when the head reaches the first cell to the right of the frozen segment. This forces the replacement of the symbol in it with the symbol \(Z\), so increasing the length of the frozen segment by \(1\). In the next sweeps, the machine has to take into account how much the frozen segment increased. For instance, after checking divisibility by \(p_{1}^{k_{1}}\) and by \(p_{2}^{k_{2}}\), in the next sweep the machine should verify that the length of the frozen segment, modulo \(p_{3}^{k_{3}}\), is \(1\). Because the machine has to check \(r\) divisors and right-to-left sweeps alternate with left-to-right sweeps, when all \(r\) sweeps are done, exactly \(\lfloor r/2\rfloor\) extra cells to the right of the original input prefix \(u\) are frozen. Since \(n>r/2\), if the original symbol in all those cells was \(b\), to complete the verification phase the machine has to check whether the next \(n-\lfloor r/2\rfloor\) not yet visited cells contain \(b\). However, the verification fails if a cell containing an \(a\) or the right end-marker is reached at some point of the verification phase. This can happen either while checking the length of the frozen segment or while checking the last \(n-\lfloor r/2\rfloor\) cells. If the right end-marker is reached, then the machine rejects. Otherwise it returns to the main procedure, i.e., resumes the counting of the \(a\)'s. The machine uses a counter modulo \(n\) for the \(a\)'s. In the verification phase this counter keeps the value \(0\). The device first has to count the length of the frozen part modulo \(p_{i}^{k_{i}}\), iteratively for \(i=1,\ldots,r\), and then to verify that the inspected prefix is followed by \(b^{n}\), again using a counter. Since \(p_{1}^{k_{1}}+\cdots+p_{r}^{k_{r}}\leq n\), by summing up we conclude that the total number of states is \(O(n)\). By using a modification of the argument in the proof of Theorem 8, we can show that each 1nfa accepting \(\mathcal{H}_{n,F(n)}\) cannot have fewer than \(n\cdot F(n)\) states.2 We conjecture that such a number cannot be substantially reduced even when the head can move in both directions. In fact, a two-way automaton using \(O(n)\) states can easily locate on the input tape a "candidate" prefix \(u\). However, it cannot remember in which position of the tape \(u\) ends, in order to check \(|u|\) in several sweeps of \(u\). So we do not see how the machine could verify whether \(|u|\) is a multiple of \(F(n)\) using fewer than \(F(n)\) states. ## 6 Conclusion We compared the size of forgetting 1-limited automata with that of finite automata, proving exponential and superpolynomial gaps. We did not discuss the size relationships with 1-las. However, since 2dfas are d-1-las that never write, as a corollary of Theorem 10 we get an exponential size gap from d-1-las to f-1-las. Indeed, having a single symbol available to rewrite the tape content dramatically reduces the descriptional power.
We point out that this reduction happens also in the case of f-1-las accepting languages defined over a one-letter alphabet, namely unary languages. To this aim, for each integer \(n>0\), let us consider the language \((a^{2^{n}})^{*}\). This language can be accepted with a d-1-la having \(O\left(n\right)\) states and a work alphabet of cardinality \(O\left(n\right)\), and with a d-1-la having \(O\left(n^{3}\right)\) states and a work alphabet of size not dependent on \(n\) [17, 19]. However, each 2nfa accepting it requires at least \(2^{n}\) states [17]. Considering the cost of the conversion of f-1-las into 1nfas (Theorem 1), we can conclude that such a language cannot be accepted by any f-1-la having a number of states polynomial in \(n\).
2309.14450
Learning dislocation dynamics mobility laws from large-scale MD simulations
The computational method of discrete dislocation dynamics (DDD), used as a coarse-grained model of true atomistic dynamics of lattice dislocations, has become a powerful tool to study metal plasticity arising from the collective behavior of dislocations. As a mesoscale approach, motion of dislocations in the DDD model is prescribed via the mobility law, a function which specifies how dislocation lines should respond to the driving force. However, the development of traditional hand-crafted mobility laws can be a cumbersome task and may involve detrimental simplifications. Here we introduce a machine-learning (ML) framework to streamline the development of data-driven mobility laws which are modeled as graph neural networks (GNN) trained on large-scale Molecular Dynamics (MD) simulations of crystal plasticity. We illustrate our approach on BCC tungsten and demonstrate that our GNN mobility implemented in large-scale DDD simulations accurately reproduces the challenging tension/compression asymmetry observed in ground-truth MD simulations while correctly predicting the flow stress at lower straining rate conditions unseen during training, thereby demonstrating the ability of our method to learn relevant dislocation physics. Our DDD+ML approach opens new promising avenues to improve the fidelity of the DDD model and to incorporate more complex dislocation motion behaviors in an automated way, providing a faithful proxy for dislocation dynamics several orders of magnitude faster than ground-truth MD simulations.
Nicolas Bertin, Vasily V. Bulatov, Fei Zhou
2023-09-25T18:16:45Z
http://arxiv.org/abs/2309.14450v1
# Learning dislocation dynamics mobility laws from large-scale MD simulations ###### Abstract The computational method of discrete dislocation dynamics (DDD), used as a coarse-grained model of true atomistic dynamics of lattice dislocations, has become a powerful tool to study metal plasticity arising from the collective behavior of dislocations. As a mesoscale approach, motion of dislocations in the DDD model is prescribed via the mobility law, a function which specifies how dislocation lines should respond to the driving force. However, the development of traditional "hand-crafted" mobility laws can be a cumbersome task and may involve detrimental simplifications. Here we introduce a machine-learning (ML) framework to streamline the development of data-driven mobility laws which are modeled as graph neural networks (GNN) trained on large-scale Molecular Dynamics (MD) simulations of crystal plasticity. We illustrate our approach on BCC tungsten and demonstrate that our GNN mobility implemented in large-scale DDD simulations accurately reproduces the challenging tension/compression asymmetry observed in ground-truth MD simulations while correctly predicting the flow stress at lower straining rate conditions unseen during training, thereby demonstrating the ability of our method to learn relevant dislocation physics. Our DDD+ML approach opens new promising avenues to improve the fidelity of the DDD model and to incorporate more complex dislocation motion behaviors in an automated way, providing a faithful proxy for dislocation dynamics several orders of magnitude faster than ground-truth MD simulations. keywords: Dislocation mobility, Dislocation dynamics, Graph neural networks, Machine learning ## 1 Introduction In metals, plastic deformation is ordinarily governed by the motion and interaction of dislocation lines through the lattice. While direct Molecular Dynamics (MD) simulations of crystal plasticity are now within reach to provide insights into the collective behavior of dislocations and metal strength [1; 2; 3; 4], their scale still commands considerable computing resources. Alternatively, the method of Discrete Dislocation Dynamics (DDD), used as an expedient mesoscale proxy, has been widely regarded as a powerful tool to connect the evolution of the dislocation microstructure to the macroscopic response of crystals, thereby bridging the gap between mesoscale and continuum approaches in metal plasticity [5; 6; 7; 8; 9; 10; 11]. In contrast to MD, which simulates "all the atoms", only the dislocation microstructure is considered in DDD, which is represented as a set of dislocation segments inter-connected through nodes and generally evolved with an over-damped equation of motion [12] \[\mathbf{V}_{i}=\frac{d\mathbf{r}_{i}}{dt}=\mathcal{M}\left[\mathbf{F}_{i}\left(\{\mathbf{r}_{j}\},\{\mathbf{b}_{jk}\}\right)\right] \tag{1}\] where \(\mathbf{r}_{j}\in\mathcal{V}\) and \(\mathbf{b}_{jk}\in\mathcal{E}\) are the dislocation node positions and Burgers vector connectivity, respectively defining the set of vertices and edges of the dislocation graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). In Eq. (1), \(\mathcal{M}\) is the mobility function that relates the nodal force \(\mathbf{F}_{i}\) exerted on the dislocation line at node \(i\) to its resulting nodal velocity \(\mathbf{V}_{i}\). The mobility law is the core ingredient of the DDD model which encapsulates the physics of the simulation (analogous to the interatomic potential in MD), i.e.
by which DDD practitioners specify how dislocations should move in response to the driving force. In principle, the DDD mobility needs to be calibrated to a reference behavior in order for simulations to produce meaningful predictions. Given the relative difficulty of extracting force-velocity relations from experiments, a widely used strategy has been to calibrate mobility laws using MD simulations of individual dislocations. Within this approach, a mobility law is typically constructed by postulating a "hand-crafted" functional form based on physical knowledge and intuition, and quantifying its parameters by running MD calculations of isolated dislocations [13; 14; 15], e.g. by computing the velocity of dislocations of distinct types (for instance, edge and screw) under varying levels of stresses and temperatures [16; 17; 18; 19; 20], extracting nucleation energy barriers and critical stresses for motion controlled by the kink-pair mechanism [15; 21; 22], or extracting dislocation drag by analysis of the thermal fluctuations spectrum [23]. However, while this conventional workflow has been widely adopted, it presents several important drawbacks. First, constructing mobility laws can be a time-consuming task, and differences in metal crystallography and dislocation behaviors make it difficult to fully automate the process. Second, this approach necessarily involves simplifications. In particular, it remains unclear if and how behaviors extracted from straight, isolated dislocations in MD can be transferred [24] to large-scale DDD simulations in which dislocations form dense and intricate networks of curved lines [25]. Third, to the authors' knowledge, no work has yet verified how DDD predictions using mobility laws constructed in the way described above would compare to direct, large-scale MD simulations performed using the same interatomic potential used in the calibration. In this work, we introduce the DDD+ML framework to streamline the construction of DDD mobility laws using a machine learning (ML) approach akin to the task of developing interatomic potentials for MD simulations. For this, we propose to define the mobility law in Eq. (1) as a graph neural network (GNN) operator trained to learn the evolution of dislocation network trajectories produced by large-scale MD simulations. Here, we therefore view DDD as a coarse-grained model of direct MD simulations of crystal plasticity, which we regard as the ground-truth that we wish to emulate. Thus, a faithful mobility law should result in DDD+ML predictions that closely match those of direct MD simulations performed under identical conditions. Using body-centered cubic (BCC) W as a testbed material, we show that our trained GNN-based mobility implemented in our DDD model can learn relevant dislocation physics and is able to reproduce well the strength behavior and tension/compression asymmetry predicted in large-scale MD simulations. We believe that our new GNN-based mobility framework provides a novel, unbiased approach to accurately incorporate dislocation physics in the DDD model while paving the way to automated workflows for constructing mobility laws of arbitrary complexity. ## 2 Results ### DDD+ML model Taking advantage of the graph nature of dislocation networks, we view the construction of the DDD mobility law in Eq. (1) as a ML task defined on a graph network, Fig. 1.

Figure 1: DDD+ML framework.
(a) Schematic of two consecutive dislocation networks \(\mathcal{G}^{t}\) and \(\mathcal{G}^{t+\Delta t}\) extracted from a MD trajectory using DXA at times \(t\) (solid lines) and \(t+\Delta t\) (dashed lines). As extracted, the configurations contain nodal positions \(\mathbf{r}=\{\mathbf{r}_{i}\}\) and Burgers vector connectivity \(\mathbf{b}=\{\mathbf{b}_{ij}\}\) but (i) contain no information about the line forces driving their evolution, and (ii) nodal velocity vectors generally cannot be unambiguously defined due to the non-isomorphism between both networks. For instance, colored arrows show three possible ways to define the ground-truth velocity \(\mathbf{V}_{2}\) at node 2. (b) Schematic of the inference and training loop for the GNN-based mobility law. (c) Illustration of the Nye’s tensor field-matching approach to circumvent the ill-defined matching of velocities. As exemplified here, the procedure is agnostic to line discretization and network topology.

Very recently, we successfully developed a machine-learned mobility from a simplified dislocation-obstacle DDD system [26]. Here, we propose to model the mobility function with a GNN trained to predict the evolution of subsequent dislocation network configurations, e.g. as extracted from large-scale (ground-truth) MD trajectories using the dislocation extraction algorithm (DXA) [27; 28]. In practice, however, the task is complex and presents several challenges that need to be addressed. To illustrate the matter, consider two consecutive dislocation graphs \(\mathcal{G}^{t}\) and \(\mathcal{G}^{t+\Delta t}\) at time \(t\) and \(t+\Delta t\), respectively, as depicted in Fig. 1a. As implied by Eq. (1), training a ML mobility law entails learning the relation between the (input) nodal force and (output) nodal velocity vectors given the geometry and topology of the dislocation network. Yet, by themselves dislocation networks extracted with DXA (i) contain no information about the driving force on the dislocation lines, and (ii) the dislocation velocities are only implicitly defined by comparing dislocation line configurations attained at consecutive times, and thus not explicitly contained in the data. This is because the topology of dislocation networks constantly evolves (e.g. as a result of dislocation intersections and core reactions) while the discretization of dislocation links is essentially arbitrary, i.e. subsequent dislocation graphs are generally non-isomorphic. In other words, there does not exist a one-to-one correspondence between dislocation nodes in consecutive network configurations and velocity vectors cannot be unambiguously defined. As such, learning on nodal velocity vectors is not a well-defined task for ML. We propose to address both these challenges of unknown ground-truth force and velocity vectors in the following way.
First, we estimate the nodal forces \(\mathbf{F}_{i}\) on nodes \(i\) of the MD dislocation networks using the force calculation procedure of the DDD model \[\mathbf{F}_{i}\sim\mathbf{F}_{i}^{\text{DDD}}=\mathbf{F}_{i}^{\text{app}}+\mathbf{F}_{i}^{\text{lr}}+\mathbf{F}_{i}^{\text{sr}}+\mathbf{F}_{i}^{\text{core}} \tag{2}\] where the force \(\mathbf{F}_{i}^{\text{DDD}}\) calculated with DDD is the sum of several contributions, namely the force due to the applied loading and boundary conditions \(\mathbf{F}_{i}^{\text{app}}\), the long-range \(\mathbf{F}_{i}^{\text{lr}}\) and short-range \(\mathbf{F}_{i}^{\text{sr}}\) elastic interactions, and the force associated with the dislocation core \(\mathbf{F}_{i}^{\text{core}}\). Given the various approximations involved in force calculations in the DDD model, this force is only an _estimate_ of the true (ground-truth) but unknown force \(\mathbf{F}_{i}^{\text{MD}}\) driving the dislocations in the MD simulations. Mathematically, we can write this approximation exactly as \[\mathbf{F}_{i}^{\text{MD}}=\mathbf{F}_{i}^{\text{DDD}}+\mathbf{F}_{i}^{\text{corr}} \tag{3}\] where \(\mathbf{F}_{i}^{\text{corr}}\) is a correction term accounting for the error made in approximating the true force with the DDD model. Assuming that the applied and long-range components of the force are smoothly varying functions that can be accurately computed with DDD, it follows that the correction term in Eq. (3) is a rather local function of the dislocation network, which can itself be easily learned by a GNN model. Thus, we propose to define a GNN-based mobility law \(\mathcal{M}_{\mathbf{\theta}}^{\text{GNN}}\) with learnable parameters \(\mathbf{\theta}\), \[\mathbf{V}_{i}=\mathcal{M}_{\mathbf{\theta}}^{\text{GNN}}\left(\mathbf{F}_{i}^{\text{DDD}},\{N_{j}\},\{E_{jk}\}\right), \tag{4}\] and let the model learn the resulting nodal velocity vector \(\mathbf{V}_{i}\) as a function of the estimated force \(\mathbf{F}_{i}^{\text{DDD}}\) and local graph neighborhood of nodes \(j\) and segments \(jk\) with attributes \(N_{j}\) and \(E_{jk}\), respectively (see Methods). As defined, the mobility function in Eq. (4) thus fully subsumes the calculation of the as-yet unspecified local force correction. Since in DDD simulations forces are always computed using Eq. (2), this formulation further ensures that the approach is self-consistent when employing the trained mobility in rollout simulations. We now turn our attention to the second issue, namely the absence of well-defined nodal velocity vectors in the ground-truth MD data. Here we note that this issue is essentially identical to the task of tracing dislocation motion between consecutive MD dislocation snapshots, for which we have recently introduced the sweep-tracing algorithm (STA) [3]. By using a dual line/field representation of the dislocation networks, STA can "reconnect" successive networks in a way that is independent of line discretization and agnostic to network topology. Conceptually, the task is defined as an optimization problem that seeks to minimize the distance between Nye's tensor [29] field representations of consecutive graphs [3]. Following the STA approach, the training of the GNN-based mobility operator in Eq.
(4) is achieved by minimizing the following loss function \[\mathcal{L}(\mathbf{\theta})=\sum_{s}\sum_{\mathbf{g}}\sum_{kl}\Bigl[\alpha_{kl}^{\mathbf{g}}\left(\mathbf{r}^{t_{s}}+\mathbf{\mathcal{M}}_{\mathbf{\theta}}^{\text{GNN}}\Delta t_{s},\mathbf{b}^{t_{s}}\right)-\alpha_{kl}^{\mathbf{g}}\left(\mathbf{r}^{t_{s}+\Delta t_{s}},\mathbf{b}^{t_{s}+\Delta t_{s}}\right)\Bigr]^{2} \tag{5}\] over the set of training examples \(s\), where \(\alpha_{kl}^{\mathbf{g}}\) are the components of the Nye's tensor field numerically computed at grid point \(\mathbf{g}\) using the method introduced in [30]. During training, the mobility law thus learns to predict nodal velocity vectors \(\mathbf{V}_{i}\) which, when applied to configurations at time \(t_{s}\), best match the tensor field representation of the ground-truth evolution of the networks at time \(t_{s}+\Delta t_{s}\), Fig. 1c. In other words, the field-matching loss of Eq. (5) circumvents the seemingly straightforward but ill-defined matching of velocities. We point out that the calculation of the Nye's tensor field over networks is only required to compute the loss and back-propagate its gradients to the mobility parameters \(\mathbf{\theta}\) during training. Once trained, only the GNN-based mobility function in Eq. (4) needs to be evaluated at inference time, Fig. 1b. Our GNN model and training protocol are implemented using PyTorch [31]. ### Validation on DDD trajectories To validate our approach, we first apply our method to DDD trajectories to evaluate the ability of our framework to learn a simple "hand-crafted" DDD mobility law. Although the approach was described for DXA configurations extracted from MD in §2.1, it can be applied equally well to DDD configurations. Doing so can be insightful since, in contrast to MD simulations, we have complete control over traditional mobility laws in DDD and can prescribe any desired ground-truth force-velocity relation, and then examine how well our approach can recover it. Here we choose a generic linear mobility function developed for BCC metals [32] and implemented in our DDD code ParaDiS [11], which can be expressed as \[\mathbf{V}_{i}=\mathbf{\mathcal{M}}^{\text{DDD}}(M_{s},M_{e})\cdot\mathbf{F}_{i}^{\text{DDD}} \tag{6}\] where \(\mathbf{\mathcal{M}}^{\text{DDD}}\) is a matrix parameterized by scalar mobility coefficients \(M_{s}=20\,(\text{Pa}\cdot\text{s})^{-1}\) and \(M_{e}=2600\,(\text{Pa}\cdot\text{s})^{-1}\) associated with the velocity of pure screw and pure edge dislocations, respectively. To generate the training data, we run 5 large-scale DDD simulations, each starting with 12 prismatic loops seeded at different random positions. Crystals are compressed at a strain rate of \(2\times 10^{8}\)/s along the [001] direction. Simulations are run to 0.1 strain during which the dislocation networks are regularly saved at intervals of 0.25 ps, generating a total of 10,000 configurations for training. These configurations then follow the same procedure as the one described for DXA snapshots in §2.1: first nodal forces are computed using DDD, and then training is performed by minimizing the STA-based loss function in Eq. (5). Once trained, we inspect the best resulting GNN-based mobility law by inferring the nodal velocities of pure edge and screw dislocations as a function of shear stress.
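A minimal sketch of one way to implement such a probe is shown below (our own illustration, not the ParaDiS implementation: `model` stands for a hypothetical trained mobility callable, the applied nodal force is built from the Peach-Koehler formula, and the \(1/2\langle 111\rangle\) Burgers vector on a \(\{110\}\) plane is an assumed BCC geometry). The actual procedure is described next.

```python
# Illustrative probing sketch (ours). 'model' is a hypothetical trained
# mobility callable mapping a nodal force and local geometry to a nodal
# velocity. Units are schematic.
import numpy as np

def pk_force_per_length(sigma, b, xi):
    """Peach-Koehler force per unit line length: f = (sigma . b) x xi."""
    return np.cross(sigma @ b, xi)

def probe(model, taus, b, n_glide, xi, seg_len=10.0):
    """Sweep the resolved shear stress tau and record the model's |V|."""
    m = b / np.linalg.norm(b)                        # slip direction
    out = []
    for tau in taus:
        # pure shear stress resolving tau on plane n_glide along m
        sigma = tau * (np.outer(m, n_glide) + np.outer(n_glide, m))
        f = pk_force_per_length(sigma, b, xi)        # force per unit length
        F_node = f * seg_len                         # lumped nodal force F_i^app
        out.append(np.linalg.norm(model(F_node, xi, b)))
    return np.array(out)

b = 0.2743 * np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # 1/2<111> (nm), assumed
n_glide = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)    # {110} normal, assumed
xi_screw = b / np.linalg.norm(b)                     # screw: line along b
xi_edge = np.cross(n_glide, xi_screw)                # edge: in-plane, normal to b
# e.g. v_edge = probe(model, np.linspace(0.0, 2e9, 41), b, n_glide, xi_edge)
```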
For this, we create small infinite, straight dislocation configurations of edge and screw characters and compute the nodal forces \(\mathbf{F}_{i}^{\text{DDD}}=\mathbf{F}_{i}^{\text{app}}(\tau)\), where \(\tau\) is the probing shear stress value. This simple technique allows us to slice through the otherwise "black-box" nature of the model. Results in Fig. 2 show the velocity magnitudes as a function of shear stress predicted by the trained GNN mobility law. The linear nature of the ground-truth DDD mobility law is well captured by the GNN model. In addition, fitting the stress-velocity curves with a linear regression yields mobility coefficients of \(\hat{M}_{e}\approx 2564\,(\text{Pa}\cdot\text{s})^{-1}\) and \(\hat{M}_{s}\approx 6\,(\text{Pa}\cdot\text{s})^{-1}\) for the edge and screw dislocations, respectively, which are in good agreement with the ground-truth values. While further training could lead to yet better accuracy, this simple example demonstrates the general feasibility of our proposed framework.

Figure 2: Edge and screw dislocation velocities predicted by the GNN-based mobility law trained on DDD trajectories. The GNN model does capture the linear nature of the ground-truth DDD mobility law, with linear fits of the stress-velocity curves that give mobility coefficients in good agreement with ground-truth values.

### Application to MD trajectories We now apply our method to learn a mobility law from large-scale MD simulations. To illustrate the approach, we focus on the case of BCC plasticity which is notoriously complex due to the strong plastic anisotropy [33; 34; 35], as manifested in the tension/compression asymmetry [36; 37; 3], and the still debated issue of slip crystallography [38]. We choose BCC W as a testbed material because it is nearly isotropic, which makes it convenient for the use of isotropic elasticity formulations in the DDD calculations. MD simulations are run using the EAM-style interatomic potential developed in [39], which yields an anisotropic ratio \(A=2C_{44}/(C_{11}-C_{12})\sim 1.15\) at 300 K. We perform 2 large-scale simulations of \(\sim 35\) million atoms in which BCC crystals initially seeded with prismatic dislocation loops are deformed at 300 K under uniaxial tension and compression along the [001] direction. Crystals are deformed at a true strain rate of \(2\times 10^{8}\)/s until reaching a strain of 1.0. Consistent with our previous results in BCC Ta [3; 40], the simulations predict a strong tension/compression asymmetry, with flow stress of \(\sim 4.2\) GPa in compression and \(\sim 2.8\) GPa in tension. During both runs, DXA is used to extract the dislocation configurations at 1 ps time intervals. This results in a total of 10,000 DXA snapshots which are then used to train a GNN-based mobility. Results for edge and screw dislocation velocities as a function of shear stress predicted by the best trained GNN-based mobility law are shown in Fig. 3. The model exhibits smooth and well-behaved functions. The predicted edge dislocation velocity shows a linear stress-velocity relation up to \(\sim 300\) MPa, followed by a non-linear regime reaching an asymptotic velocity of \(\sim 2000\) m/s at a stress of 2 GPa. Qualitatively, this result is fully consistent with the known behavior of the edge dislocation [15] and previous MD calculations in BCC metals [41; 16]. Of particular interest are the predicted velocities for the screw dislocation.
When sheared along the twinning (T) direction (corresponding to an MRSSP angle \(\chi=-30^{\circ}\)), the screw velocity response resembles that of thermally-activated motion, with very slow velocity at stresses below an activation threshold (finite temperature Peierls stress), and a transition to a drag-controlled regime at higher stresses. In contrast, when sheared in the anti-twinning (AT) direction (corresponding to an MRSSP angle \(\chi=+30^{\circ}\)), the screw dislocation is predicted to be much slower on average. Here again, these results are fully consistent with theoretical models [42] and earlier observations [3]. ### Large-scale DDD+ML simulations with trained mobility To assess the validity of our approach and the quality of the learned model in realistic conditions, we now examine predictions of large-scale DDD+ML simulations using the GNN-based mobility law trained in §2.3, which is implemented within the ParaDiS DDD code [11] via the C++ PyTorch interface. To establish a one-to-one comparison, DDD simulations are performed under identical conditions to those used in the ground-truth MD simulations. Cubic DDD simulation boxes of side length \(296b\), where \(b=0.2743\) nm is the magnitude of the Burgers vector, are initially seeded with randomly positioned prismatic loops (using different seeds than in the ground-truth MD runs) and then deformed under [001] tension and compression at a strain rate of \(2\times 10^{8}\)/s. We use values of the shear modulus \(\mu=149.78\) GPa and Poisson's ratio \(\nu=0.289\), corresponding to values of the elastic constants of the ground-truth interatomic potential [39] at 300 K. Dislocation core forces are computed from core energies extracted from the same interatomic potential using the framework detailed in [43].

Figure 3: Edge and screw dislocation velocities predicted by the GNN-based mobility law trained on large-scale MD trajectories of BCC W deformed under [001] tension and compression. The GNN mobility law predicts a non-linear force-velocity relation for the edge dislocation and a strong asymmetry in the velocity of the screw dislocation sheared in the twinning (T) and anti-twinning (AT) direction.

Results of the DDD predictions compared to MD simulations are shown in Fig. 4. Flow stress predictions are in remarkable agreement with the MD simulations, Fig. 4a. Specifically, the asymmetry in the tension/compression response is naturally captured by the GNN mobility law, demonstrating that the model has correctly learned the distinct behavior of the dislocations depending on shearing direction. In Fig. 4b, however, we note a quantitative difference in the predicted evolution of the dislocation densities. While DDD indeed predicts a higher dislocation density in compression compared to tension, in agreement with the MD data, DDD also systematically underpredicts the value of the dislocation density by a factor of approximately two. Potential causes of these discrepancies will be discussed in the next section. ## 3 Discussion Overall, results presented in §2.4 clearly demonstrate the feasibility and potential of the approach. These results are all the more significant given that our previous attempts at developing traditional "hand-crafted" mobility laws to satisfactorily capture the tension/compression asymmetry in BCC metals have proved surprisingly challenging, with mixed results when comparing large-scale DDD predictions to reference MD simulations.
Indeed, the strong anisotropic plasticity of BCC metals, rooted in the T/AT asymmetry [34; 35; 44; 45], translates into a complex motion of the dislocations as a function of the local configuration and stress state. Here, we show that such a behavior can be naturally captured by our ML approach. Through seeing a large number of examples during training, the GNN-based model learns to predict accurate, effective velocities, which in our example generally result in smooth stress-velocity relations, e.g. see Fig. 3. Of particular interest, the GNN mobility law also encodes information about slip crystallography (i.e. direction of the velocity vector), without pre-existing assumptions about slip planes, which is an unsettled and not fully understood issue in BCC metals [38]. This is in contrast with earlier DDD mobility law frameworks that assume a fixed set of glide planes and only focus on modeling the magnitude of the dislocation velocity vector [13; 15]. This example illustrates the novelty and power of such a tool to encode complex dislocation physics into the DDD model, without imposing constraints (e.g. functional forms, slip planes, etc.) that may prove detrimental. In our first application, we nevertheless observe discrepancies compared to the ground-truth MD simulations. While the flow stress in compression is remarkably captured within statistical fluctuations, the flow stress under tension is slightly underestimated. Yet, as noted previously, the largest discrepancy is found in the evolution of the dislocation densities, which are roughly underestimated by a factor of two in both DDD simulations. Several reasons could be at the origin of these discrepancies. The first set of reasons could be related to the DDD model itself. In DDD, dislocation mobility is not the only ingredient by which the dislocation network is evolved. Indeed, topological operations performed to handle core reactions (e.g. junction formation and splitting) constitute another critical step of the DDD method [11]. This separate step is performed in alternation with the integration of the equation of motion, and core reactions are thus absent from the learning procedure. It is thus possible that the lack of multiplication observed in Fig. 4b is a consequence of the details of the treatment of topological operations rather than of inaccuracies of the mobility law itself. An example of this issue was recently uncovered in [46].

Figure 4: Comparison of ground-truth MD results with DDD predictions of [001] tension and compression deformation of BCC W using the GNN-based mobility trained from large-scale MD data. MD and DDD simulation volumes of identical size are deformed under identical uniaxial straining rates of \(2\times 10^{8}\)/s. (a) Axial stress and (b) dislocation density evolution as a function of strain.

The second reason could be a more general one, analogous to the issue of uncertainty quantification and transferability of interatomic potentials [47]. For instance, it is possible that - if it were feasible - large-scale _ab-initio_ simulations would produce ground-truth outcomes that differ significantly from MD predictions using even the most accurate interatomic potentials fitted to it. In other words, if the emerging behavior (e.g. crystal plasticity) is sensitive to the smallest details (e.g. details of the electronic structure), then the transferability of physics across scales may be a daunting task. By analogy, a similar difficulty may exist in coarse-graining lattice motion to a dislocation mobility law.
As a way to address this issue, recent work has focused on the selection of the training data [48; 49; 50; 51; 52], which has been an everlasting challenge in the realm of interatomic potentials. As a first application of our DDD+ML framework, here we have used training data produced on a very narrow set of loading conditions (a unique, high straining rate in tension and compression). It is thus possible that a broader training dataset could help reduce the discrepancies. Simultaneously, it is also possible that the applicability of our trained mobility may be limited to a narrow range of conditions as well.

To first examine the latter issue, we have run and compared results of simulations performed at a lower straining rate of \(2\times 10^{7}\)/s under compression. To ensure that volumes remain statistically representative of bulk plasticity, the MD crystal size was increased by a factor of 4 to \(\sim 140\) million atoms, i.e. corresponding to a DDD box of side length \(470b\). Results for the stress/strain predictions are shown in Fig. 5. Although the mobility law was only trained on \(2\times 10^{8}\)/s trajectories, the DDD+ML model is seen to remarkably capture the flow stress at this lower strain rate. This result is consistent with our previous attempt at employing GNN to evolve simplified DDD systems [26], in which we similarly observed that the GNN model trained on high flow stress (i.e. high strain rate) simulations was able to correctly predict the evolution of simulations at lower strain rates not seen during training. We believe one reason for this success could be that, despite the system being driven at a high external stress in the training data, local dislocation lines sample a much richer set of stress conditions. Indeed, the stress acting along dislocation lines is not just the applied stress, but the sum of the applied stress and the internal stress arising from interactions with all other dislocations (e.g. short-range interactions). While we leave further aspects of this issue outside the scope of this paper, future work will focus on the accuracy and generalizability of the approach to different conditions.

Finally, we point out that, being a coarse-grained model, DDD is typically several orders of magnitude faster than MD simulations under identical conditions. In DDD+ML, the GNN-based mobility only comes with a marginal cost overhead compared to traditional mobility laws, and the simulation cost is still fully dominated by force calculations. For the large-scale examples presented in this work, MD simulations (at \(2\times 10^{8}\)/s) typically require \(\sim 100-1000\) GPU-hours (e.g. depending on the interatomic potential), compared to \(\sim 10-20\) CPU-hours needed for the same DDD simulations. At lower straining rates the gains are even larger, and DDD can access volume sizes (e.g. several \(\mu\)m) that are out of reach of direct MD.

To summarize, we have introduced an ML framework to construct DDD mobility laws trained on large-scale MD data using a GNN model. Training is achieved by matching the evolution of the continuum dislocation density tensor between subsequent DXA configurations for which an estimate of the dislocation driving forces computed with DDD is provided as input. This scheme allows us to circumvent both the issues of unknown nodal force and velocity vectors in the ground-truth MD data.
By applying our approach to the complex case of BCC W, we showed that DDD+ML simulations using a trained GNN mobility are able to reproduce well the behavior of the ground-truth MD simulations it was trained on, and that of unseen loading conditions. We believe that our proposed approach to construct data-driven mobility laws is a promising avenue that has the potential to significantly improve the fidelity of the DDD model while allowing to incorporate more complex dislocation motion behaviors which, if desired, can be made free of biases introduced by human intervention.

Figure 5: Comparison of ground-truth MD results with DDD predictions of [001] compression for straining rates of \(2\times 10^{8}\)/s and \(2\times 10^{7}\)/s. Although training was only performed on \(2\times 10^{8}\)/s trajectories (red), our DDD+ML method also correctly predicts the flow stress at the lower straining rate of \(2\times 10^{7}\)/s (green). Comparison of the dislocation density is shown in the inset for the straining rate of \(2\times 10^{7}\)/s.

## 4 Methods

### GNN mobility law

We model the mobility law in Eq. (4) with a message-passing GNN [53], which has proved very powerful for predicting force fields and other materials properties [54; 55; 56; 57; 58] and for simulating complex physics [59]. Following our recent work on applying GNN to DDD simulations [26], a dislocation configuration is represented by a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{N_{i}\}\) is a collection of dislocation node/vertex features (attributes), and \(\mathcal{E}=\{E_{ij}\}\) is a collection of dislocation segment/edge features. We define the input features for each node \(i\) as

\[N_{i}=(n_{i},\mathbf{F}^{\mathrm{DDD}},\|\mathbf{F}^{\mathrm{DDD}}\|) \tag{7}\]

where \(n_{i}\) is a flag used to specify whether node \(i\) is a discretization or physical (junction) node. The input edge features on edge \(ij\) are

\[E_{ij}=(e_{ij},\mathbf{b}_{ij},\|\mathbf{b}_{ij}\|,\mathbf{r}_{j}-\mathbf{r}_{i},\|\mathbf{r}_{j}-\mathbf{r}_{i}\|) \tag{8}\]

where \(e_{ij}\) is a flag used to specify whether segment \(ij\) is a glissile or junction segment, and \(\mathbf{r}_{j}-\mathbf{r}_{i}\) is the local segment line vector, naturally compatible with the use of periodic boundary conditions. To satisfy Burgers vector conservation, we use a directed graph, i.e. if \(ij\) is an edge with Burgers vector \(\mathbf{b}_{ij}\), then \(ji\) is also an edge but with opposite Burgers vector \(-\mathbf{b}_{ij}\). Our GNN architecture follows [60] and is first composed of vertex and edge encoders \(\mathrm{ENC}^{V}\), \(\mathrm{ENC}^{E}\) transforming concatenated input features into a latent space

\[v_{i}^{(0)}=\mathrm{ENC}^{V}(N_{i}),\;e_{ij}^{(0)}=\mathrm{ENC}^{E}(E_{ij}), \tag{9}\]

followed by \(K\) stacked message passing layers \(f^{E(k)}\), \(f^{V(k)}\) (\(1\leq k\leq K\)) sequentially updating the latent vertex and edge variables

\[e_{ij}^{(k)} =f^{E(k)}(e_{ij}^{(k-1)},v_{i}^{(k-1)},v_{j}^{(k-1)}), \tag{10}\]
\[v_{i}^{(k)} =f^{V(k)}(v_{i}^{(k-1)},\sum_{j}e_{ij}^{(k)}), \tag{11}\]

and finally a node decoder DEC that translates the latent node variables \(v^{(K)}\) into the desired output properties, i.e. nodal velocity vectors:

\[\mathbf{V}_{i}=\mathrm{DEC}(v_{i}^{(K)}). \tag{12}\]

Functions \(\mathrm{ENC}^{V}\), \(\mathrm{ENC}^{E}\), \(f^{V}\), \(f^{E}\), and DEC are neural network operators built from multi-layer perceptrons with two hidden layers, layer normalization [61], skip connections [62], and GELU activation functions [63].
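For concreteness, the following is a minimal PyTorch sketch of this encode-process-decode architecture. Input dimensions follow Eqs. (7)-(8) (5 node features, 9 edge features), but all names, the exact MLP layout, the placement of skip connections, and the linear decoder are illustrative assumptions rather than the authors' implementation.

```
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    # MLP with two hidden layers, GELU activations and a final LayerNorm,
    # as described in the text; the exact layout here is an assumption.
    return nn.Sequential(nn.Linear(d_in, d_out), nn.GELU(),
                         nn.Linear(d_out, d_out), nn.GELU(),
                         nn.Linear(d_out, d_out), nn.LayerNorm(d_out))

class GNNMobility(nn.Module):
    def __init__(self, K=3, L=48, node_dim=5, edge_dim=9):
        super().__init__()
        self.enc_v, self.enc_e = mlp(node_dim, L), mlp(edge_dim, L)     # Eq. (9)
        self.f_e = nn.ModuleList([mlp(3 * L, L) for _ in range(K)])    # Eq. (10)
        self.f_v = nn.ModuleList([mlp(2 * L, L) for _ in range(K)])    # Eq. (11)
        self.dec = nn.Linear(L, 3)        # Eq. (12), simplified to a linear map

    def forward(self, N, E, src, dst):
        # N: [n_nodes, node_dim] node features; E: [n_edges, edge_dim] edge
        # features; src/dst: [n_edges] endpoint indices of each directed edge ij.
        v, e = self.enc_v(N), self.enc_e(E)
        for f_e, f_v in zip(self.f_e, self.f_v):
            e = e + f_e(torch.cat([e, v[src], v[dst]], dim=-1))   # update e_ij
            agg = torch.zeros_like(v).index_add_(0, src, e)       # sum_j e_ij
            v = v + f_v(torch.cat([v, agg], dim=-1))              # update v_i
        return self.dec(v)  # nodal velocity vectors V_i
```

Because the graph is directed with both \(ij\) and \(ji\) edges, summing edge latents into their source node covers all segments incident to each node.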
### MD simulations

Large-scale MD simulations of BCC tungsten are performed with LAMMPS [64] under the Kokkos GPU implementation [65], using the EAM-style interatomic potential developed in [39]. Simulations are performed following the protocol introduced in [1]. Periodic, orthorhombic BCC perfect crystals are initially seeded with twelve \(1/2\langle 111\rangle\{110\}\) prismatic loops of the vacancy type. The crystals are first equilibrated at a temperature of 300 K, after which they are deformed at a constant true strain rate. Temperature and uniaxial loading conditions are maintained during deformation using the _langevin_ thermostat and the _nph_ barostat. For the \(\sim 35\) million atom simulations deformed at a rate of \(2\times 10^{8}\)/s, DXA [28] is executed every 1 ps to save the detailed evolution of the dislocation networks.

### Mobility workflow and training

DXA configurations produced in the MD simulations are first converted to the ParaDiS format [11]. For consistency, during this operation the line networks are also remeshed with a discretization size of \(\sim 10b\), corresponding to the average segment size used in our DDD simulations. The so-converted dislocation configurations \(\{\mathcal{G}^{t_{s}}\}\) are then fed to the ParaDiS code to compute nodal forces, Eq. (2). Applied forces \(\mathbf{F}_{i}^{app}\) are computed by integrating the Peach-Koehler force \((\mathbf{\sigma}^{t_{s}}\cdot\mathbf{b}_{ij})\times\mathbf{t}_{ij}\) along the dislocation segments \(ij\) with unit tangent \(\mathbf{t}_{ij}\), where \(\mathbf{\sigma}^{t_{s}}\) is the instantaneous stress applied to network \(\mathcal{G}^{t_{s}}\) at time \(t_{s}\) as recorded during the MD runs. Long-range interaction forces \(\mathbf{F}_{i}^{lr}\) are computed using the DDD-FFT approach introduced in [30], which can easily handle non-cubic, deforming simulation boxes as produced by MD simulations. Short-range interaction forces \(\mathbf{F}_{i}^{sr}\) are computed using the non-singular isotropic analytical formulation [66]. Core forces \(\mathbf{F}_{i}^{core}\) are computed from core energies extracted from the ground-truth interatomic potential [39] using the framework developed in [43].

The networks containing nodal forces are then used as inputs to our training procedure implemented within PyTorch [31]. To facilitate training, input forces are rescaled by a factor \(10^{9}\) Pa \(\cdot b^{2}\) so that their average magnitude is on the order of unity. The loss function, Eq. (5), is computed by evaluating the Nye's tensor on a grid of \(32^{3}\) voxels using a fully vectorized implementation of the discrete-to-continuous method introduced in [30]. We trained different GNN models to explore different sets of hyper-parameters. We tested a combination of models with \(K=\{2,3\}\) message-passing layers, each with a latent space of size \(L=\{48,96\}\), leading to 4 different trained models. The models were trained using the AdamW optimizer with a weight decay of \(1\times 10^{-5}\) [67]. Training was performed for 12 hours with a batch size of 4 on a single NVIDIA V100 GPU. We find that the GNN model with \(K=3\) and \(L=48\) offers the best trade-off between accuracy and complexity (78,768 total parameters) while showing no sign of overfitting. We thus selected it as the best model for results presented in this work.
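Schematically, the training procedure can be summarized as below, reusing the `GNNMobility` sketch above and assuming hypothetical helpers `advance` (displacing nodes by the predicted velocities over the snapshot interval) and `nye_tensor` (the discrete-to-continuous Nye field of [30] on a \(32^{3}\) grid); the loss is written as a simple L2 mismatch standing in for Eq. (5), which is not reproduced here.

```
import torch

model = GNNMobility(K=3, L=48)
opt = torch.optim.AdamW(model.parameters(), weight_decay=1e-5)

for graph_t, graph_next, dt in loader:          # subsequent DXA snapshots
    # graph_t carries node/edge features, including the rescaled DDD forces
    V = model(graph_t.N, graph_t.E, graph_t.src, graph_t.dst)  # nodal velocities
    alpha_pred = nye_tensor(advance(graph_t, V, dt), grid=32)  # predicted Nye field
    alpha_true = nye_tensor(graph_next, grid=32)               # ground-truth Nye field
    loss = ((alpha_pred - alpha_true) ** 2).mean()             # stand-in for Eq. (5)
    opt.zero_grad(); loss.backward(); opt.step()
```

Matching density fields rather than nodal positions is what allows training despite the lack of a one-to-one node correspondence between successive DXA snapshots.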
## Acknowledgement NB and VB acknowledge support by the Laboratory Directed Research and Development (LDRD) program (22-ERD-016) and by the ASC PEM program at Lawrence Livermore National Laboratory (LLNL). FZ was supported by the Critical Materials Institute, an Energy Innovation Hub funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, and Advanced Materials and Manufacturing Technologies Office. Computing support for this work came from LLNL Institutional Computing Grand Challenge program. This work was performed under the auspices of the U.S. Department of Energy by LLNL under contract DE-AC52-07NA27344.
2309.15996
Loupe: Driving the Development of OS Compatibility Layers
Supporting mainstream applications is fundamental for a new OS to have impact. It is generally achieved by developing a layer of compatibility allowing applications developed for a mainstream OS like Linux to run unmodified on the new OS. Building such a layer, as we show, results in large engineering inefficiencies due to the lack of efficient methods to precisely measure the OS features required by a set of applications. We propose Loupe, a novel method based on dynamic analysis that determines the OS features that need to be implemented in a prototype OS to bring support for a target set of applications and workloads. Loupe guides and boosts OS developers as they build compatibility layers, prioritizing which features to implement in order to quickly support many applications as early as possible. We apply our methodology to 100+ applications and several OSes currently under development, demonstrating high engineering effort savings vs. existing approaches: for example, for the 62 applications supported by the OSv kernel, we show that using Loupe would have required implementing only 37 system calls vs. 92 for the non-systematic process followed by OSv developers. We study our measurements and extract novel key insights. Overall, we show that the burden of building compatibility layers is significantly less than what previous works suggest: in some cases, only as few as 20% of system calls reported by static analysis, and 50% of those reported by naive dynamic analysis, need an implementation for an application to successfully run standard benchmarks.
Hugo Lefeuvre, Gaulthier Gain, Vlad-Andrei Bădoiu, Daniel Dinca, Vlad-Radu Schiller, Costin Raiciu, Felipe Huici, Pierre Olivier
2023-09-27T20:21:37Z
http://arxiv.org/abs/2309.15996v1
# Loupe: Driving the Development of OS Compatibility Layers

###### Abstract.

Supporting mainstream applications is fundamental for a new OS to have impact. It is generally achieved by developing a layer of compatibility allowing applications developed for a mainstream OS like Linux to run unmodified on the new OS. Building such a layer, as we show, results in large engineering inefficiencies due to the lack of efficient methods to precisely measure the OS features required by a set of applications. We propose Loupe, a novel method based on dynamic analysis that determines the OS features that need to be implemented in a prototype OS to bring support for a target set of applications and workloads. Loupe guides and boosts OS developers as they build compatibility layers, prioritizing which features to implement in order to quickly support many applications as early as possible. We apply our methodology to 100+ applications and several OSes currently under development, demonstrating high engineering effort savings vs. existing approaches: for example, for the 62 applications supported by the OSv kernel, we show that using Loupe would have required implementing only 37 system calls vs. 92 for the non-systematic process followed by OSv developers. We study our measurements and extract novel key insights. Overall, we show that the burden of building compatibility layers is significantly less than what previous works suggest: in some cases, only as few as 20% of system calls reported by static analysis, and 50% of those reported by naive dynamic analysis, need an implementation for an application to successfully run standard benchmarks.

_ASPLOS '24, April 27-May 1, 2024, La Jolla, CA, USA_

[...] to achieve a desired degree of support. This results in a significant amount of unnecessary engineering. Past attempts at streamlining that process leverage static analysis [63] and suffer from its drawbacks, heavily overestimating the set of OS features required to support an application. For instance, while binary-level static analysis identifies that >100 system calls are required to conservatively support the entire superset of operations, configurations, and error handling code in Redis (much of which can be quite rarely used in practice, or simply irrelevant for an early prototype), we find that only 42 are actually needed to reliably pass its entire test suite, and just 20 to run redis-benchmark. Hence, OS designers often fall back to naive dynamic analysis, e.g., using strace. These tools fail to take into account common practices used in early OS development to save engineering effort: feature stubbing (returning -ENOSYS [3] upon invocation, without implementing the feature), faking feature success (returning a success code without implementing the feature), and partial implementation of complex features [40, 51]. Indeed, in early development, the goal is not to support every feature but rather core functionalities of target applications [31]. For example, we find that more than half of the system calls invoked by Redis running the redis-benchmark can be stubbed or faked, and do not need to be implemented to support that application and workload. We propose a systematic methodology based on dynamic analysis, centered around a novel tool called _Loupe_.
Loupe measures, for an application and a given input workload (e.g., a benchmark, test suite), which OS features really need to be implemented and which ones can be faked, stubbed, or partially implemented. Loupe also computes, given an OS under construction and a set of applications and workloads, an optimized development plan to support as many applications as possible with as little engineering effort as possible. Dynamic analysis comes with its own challenges, in particular the difficulty to scale to numerous applications. This is tackled by designing Loupe to require as little effort as possible to integrate a new application, letting us present results for more than 100 applications in our evaluation. Another challenge is how to detect OS features that can be stubbed, faked, or partially implemented. This is addressed by leveraging Linux's seccomp [23] and ptrace [19] tracing and interposition facilities to measure which OS features' implementation can be avoided with these techniques.

We run Loupe on 100+ popular applications, and present examples of optimized Linux compatibility layer development plans obtained with Loupe for three OSes under construction [5, 10, 45] with various levels of existing support for the Linux system call ABI. We further measure the engineering effort savings obtained by using Loupe to drive the development of compatibility layers. Taking half the applications supported by OSv [43], Loupe reports that only 37 system calls are required to run them, vs. 92 for our estimation of the non-systematic process followed by OSv developers, and 142 for a process driven by strace-based dynamic analysis.

We study Loupe's Linux API usage measurements for our set of applications. This analysis brings many new insights. We demonstrate that the minimal effort needed to provide compatibility is significantly lower than that determined by previous works using static analysis [63]. Our study shows that as much as 40-60% of system calls found in application code do not need implementation to successfully run meaningful workloads, including full test suites. We also find that many applications are resilient to stubbing, faking, and partial implementation of OS features. We investigate the reasons behind it, and the impact of such practices on application performance and resource usage. Finally, we study how the C library influences OS feature requirements.

In all, this paper makes the following contributions:

* A novel methodology to measure the minimum set of OS features that need implementation for a compatibility layer to support a set of applications and workloads, with the aim of minimizing development effort.
* Loupe, a tool able to derive, for a given OS and target applications, an optimized OS feature support plan to run as many apps as possible, as early as possible.
* A demonstration of the engineering effort savings obtained with Loupe, with examples of optimized feature implementation plans for 11 OSes under development.
* An analysis, using Loupe, of the OS features required by a set of applications showing the lack of precision of past approaches and investigating common development practices in compatibility layer development.

Loupe is actively used in Unikraft [45], an open-source commercial OS, and has attracted the attention of several others.
Overall, this study brings a message of hope: contrary to what past work seems to suggest, a good degree of compatibility with existing applications can be achieved without immense engineering, provided we follow a focused and methodical approach. Loupe and our results are available online1 under an open-source license. Footnote 1: [https://github.com/unikraft/loupe](https://github.com/unikraft/loupe) / [https://github.com/unikraft/loupedb](https://github.com/unikraft/loupedb) ## 2. Motivation and Approach _Building Compatibility Layers for New OSes_. Compatibility layers can be found in mature OSes for interoperability reasons [11, 18, 28, 50, 61], but also in a plethora of new/prototype/research OSes [5, 6, 10, 20, 30, 31, 43, 45, 51, 52, 57, 62, 64]. Providing support for existing applications in these OSes is generally crucial [45, 47, 51, 52] to gather early performance numbers, to attract open source contributors, early investors, or transition to real-world use. Manually porting software [4, 17] is not sustainable in the long run, nor does it scale to a large amount of applications [51, 52]. Hence, the developers of many new OSes resort to implementing compatibility layers. Even considering OS models that choose to drop application compatibility for other gains (e.g., performance), it is not uncommon to see Linux versions of these models appear a few years after the seminal paper, with claims of stronger compatibility, e.g., Popcorn Linux [33] for the multikernel [34] or Graphene, Lupine and UKL [46, 58, 62] for the unikernel [48, 49]. Building a compatibility layer is seen as a non-negligible engineering effort [31, 40, 41, 45, 46, 51, 52, 55, 63]. We investigated the compatibility layers present in several open-source OS projects [5, 6, 10, 20, 30, 31, 43, 45, 51, 52, 62, 64]. Based on this study, and on our multiple years of experience providing Linux/POSIX compatibility in research OSes, we observe that compatibility layers are built in an ad-hoc, non-systematic ("organic") way: developers select an application to support, determine the OS features it requires, and implement them [31]. That process is repeated for each target application. Because so many projects undergo the task of building compatibility layers [5, 6, 10, 11, 18, 20, 28, 30, 31, 43, 45, 51, 52, 61, 62, 64], there is a need for tools to streamline that process. The corresponding effort consists in 1) identifying OS features required by target applications and 2) implementing these features. The latter task is known to be very specific to the new OS considered [31], and can hardly be streamlined. We show in this paper that the former task, identifying and prioritizing what OS features to implement, can be systematized and optimized. Next, we motivate our method by explaining how past and current approaches are suboptimal. _Limitations of Static Analysis._ Existing approaches measuring the usage of OS features by applications often rely on static analysis [32, 35, 36, 37, 38, 51, 52, 54, 63, 65]. Static analysis is comprehensive: the set of features identified for an application includes all the ones that _may_ be invoked at runtime, under any possible workload, operation, or configuration, and traversing any possible error path. Alas, static analysis is also conservative and yields many false positives: it overestimates OS features that will actually be invoked at runtime. Static analysis can be performed on application sources or binaries. 
Binary analysis [32, 36, 37, 51, 52, 63] scales well to a large number of applications because it targets a common format (e.g., ELF binaries). However, it suffers from a lack of precision due to the difficulty of extracting information from a binary [37]. Such issues may be alleviated with source-level analysis [38, 65], which is however not a panacea: it is language-specific, making it difficult to scale to many applications written in different languages.

Tsai et al. [63] measure, using static binary analysis, the system call usage of the entire set of applications from an Ubuntu distribution. The study concludes that to support 100% of the distribution's packages, 272 system calls need to be implemented. That number goes down to 81 system calls for the 10% most popular applications. These results suggest that a large implementation effort would be required for an OS aiming at supporting even a few applications. As we demonstrate in the evaluation, both source- and binary-level approaches significantly overestimate the OS features required by an application to run popular workloads. This is due to dead or unexecuted code, and the difficulty or impossibility to statically determine runtime-level information (e.g., memory content such as function pointers). Although all of these system calls would likely need to be implemented in a production-grade general-purpose OS, these numbers remain an upper bound of limited usefulness for OS designers in earlier development stages.

_Limitations of Naive Dynamic Analysis._ Dynamic analysis too has well-known drawbacks. Its precision depends on the coverage of the input workload run during the analysis: if it is too low, some required OS features may not be identified. It is also harder to fully automate, as there is a variable amount of manual effort required for each application to analyze (e.g., selecting an input workload). In this paper we refer to using a tool such as strace [24] to trace OS features invoked by an application as _naive_ dynamic analysis. The main drawback of naive dynamic analysis is its failure to consider two techniques commonly used in early OS development [40]:

* Feature _stubbing_: not implementing the feature and returning an error code (-ENOSYS: "Not Implemented" [3]) to the application when it invokes the feature.
* _Faking_ feature success: not implementing the feature and returning a success code (typically system-call specific) to the application upon invocation.

The two examples below are extracted from the source code of the HermiTux unikernel [51], where the sigaltstack system call is stubbed, and the mprotect system call is faked:

```
long sys_sigaltstack(const stack_t *ss, stack_t *oss) {
    return -ENOSYS; // stubbed: not supported
}

long sys_mprotect(size_t addr, size_t len, uint64_t prot) {
    return 0; // faked: pretending success
}
```

Many applications are resilient to the failure of OS features [40, 59] and will run correctly when stubbing and faking. In this study, we show that many invoked OS features can avoid being implemented through these practices in the development stages of an OS. This highlights the importance of faking and stubbing as an engineering practice: without it, showcasing a particular application use-case for a new OS concept would take significantly longer, or even be unattainable for a small-scale research project. Despite this, naive dynamic analysis does not typically consider stubbing and faking.
Naive dynamic analysis traces all features and sub-features invoked by an application, independently of the fact that they can be stubbed/faked or not for a given workload. Thus, OS designers typically rely on trial and error to determine which features they need to implement first, and which ones they can fake or stub. _When to Stub or Fake and When not To?_ The reliance on stubbing and faking as a development practice in transitional OS development stages introduces a pivotal question: _when to stub and fake, and when not to?_ This question is driven by two sources of concern: * Impacting stability. Although guaranteed stability of entire applications is not a primary goal in the early development stages of an OS, faking and stubbing must not impact the stability of relevant application features. Failing to do so would negate the benefits of faking and stubbing by creating an additional debugging cost. * Impacting performance metrics. Early OS prototypes must be comparable to full-fledged mainstream OSes; this is especially true for research OSes. Impacting performance metrics by faking or stubbing would defeat the purpose of the OS prototype by making it impossible to fairly evaluate its performance advantage or cost. For instance, stubbing or faking an expensive and relevant security feature may provide an unfair advantage to an early OS prototype vs. a full-fledged OS that implements it. Non-systematic, trial-and-error-based approaches are especially prone to fall into stability and performance pitfalls. Although important, these concerns have been little discussed by works which rely or relied on faking and stubbing. _Breaking the Status Quo with Loupe_. We aim to propose a systematic and adaptive method to determine which OS features to implement first. Our goal is to help OS designers transition from _no support_ towards _full support_ to run as many applications as early as possible. Overall, dynamic analysis is better suited to the problem we aim to solve, being able to evaluate the concrete impact of both stubbing and faking, and providing fine-grain, per-workload results. The coverage issue of dynamic analysis is a nonproblem in our context: in early development phases of an OS, the goal is not to support every feature but rather core functionalities of target applications (R Once all OS features have been tested, a final run confirms that the analysis performed on a per-feature basis holds when all features are considered. In the event of a failure, users can use Loupe to alter subsets of system calls to find the culprits, a process which could be automated in future works. We now detail the behavior of Loupe _for each run_, as visualized in Figure 1. Loupe first simultaneously sets up tracing and sandboxing (AURUR(\) on Figure 1) and starts the application (BURUR(\)) using the seccomp(Leskov et al., 2017) and ptrace(Leskov et al., 2018) Linux tracing and interposition facilities. Once the application has been started, Loupe uses the test script to feed the application with inputs (e.g., generating client requests for a server application) and gather performance numbers (AUR(\)), all the while recording data on resource usage via /proc(\(\)DUR(\)). Using the hooks set up in BUR(\), Loupe intercepts each system call invoked by the application, and tests it for one of the two previously described modes. At the end of the run, Loupe determines the success of the application using the return code of the test script (more in SS3.2). 
Accesses to pseudo-files are hooked and disabled, stubbed, or faked similarly by catching system calls from the open family (see §3.3). In order to maximize the reliability and reproducibility of the results, each analysis is performed multiple times in containerized replicas, and the result of the analysis is conservatively updated to take all results into account. The number of replicas (3 by default) and whether they run in parallel (_no_ by default) can be configured to suit different applications, accuracy needs, and available hardware. Finally, OS developers can specify the system calls supported by their OS in CSV form, and Loupe will recommend which OS features to implement, stub, or fake, to support a set of applications selected among those measured by the tool. Loupe will prioritize the list of features to indicate which should be implemented first in order to support as many applications, as early as possible. Loupe's measurements can optionally be shared in an online database (❺).

### Evaluating Success and Performance

Loupe builds on the premise that users are able to describe a workload that they want to support for a given application. Loupe then tells the user which precise set of system calls they have to support (and how) to be able to run that workload reliably, i.e., over multiple runs without observable functional and non-functional issues.

_Describing Workloads._ Workloads describe the feature set that must be supported in a given application. Loupe users express workloads in _test scripts_, responsible for supplying external input, if required by the application, and detecting the success of a run.2 Test scripts may materialize any type of workload: simple health checks (e.g., for a web server: can the application process a simple HTTP query?), benchmarks, test suites, or even fuzzing. If specific error cases or application features must be supported, then the test script must also exercise them as part of the run. In this paper we explore health checks, benchmarks, and test suites. Each workload may be relevant at different stages in the development life cycle of a new OS. Workloads correspond to different levels of guarantee of application stability; they can be evolved as support progresses, until complete compatibility can be provided to ensure stable application behavior in all circumstances.

Footnote 2: Some programs do not require input and determine success by themselves or via a wrapper script (e.g., test suites). If so, the test script is _practically included_ in the application and need not be passed separately. Loupe supports this. Since this is similar to the general case, we do not further discuss it here.

_Defining "Success"._ A run is considered successful when the application terminates and the test script exit code indicates success. Crashes or unresponsiveness are considered generic failure signs. The notion of generic failure can be extended to unusual resource usage, or even unusual filesystem or network usage, which Loupe can observe without understanding application semantics. Generally however, the notion of success or failure is application-specific and inseparable from the workload itself: e.g., outputs on the standard output/error channels or logs that do not correspond to normal application behavior, or altered performance (e.g., throughput, latency, packet loss rate). Application-specific success criteria must be evaluated by the test script.
An example of a test script for Nginx benchmarked with wrk is shown below:

```
#!/bin/bash
# [...] omitted helpers (including is_failed and get_req_per_sec)
b=$(wrk http://localhost:8888 -d10s | get_req_per_sec)
if [[ $(is_failed $b) ]]; then exit 1;
else echo $b;
fi
```

is_failed() is responsible for detecting failures, left out above for space reasons. When performing a simple health check, the function verifies that the throughput is non-zero. We implemented detection of unusual resource usage and performance in our prototype. Loupe records application resource usage (maximum resident size and open file descriptors) via /proc and compares results over multiple runs when stubbing or faking. Similarly, when performing a performance benchmark, test scripts return the relevant performance number (which can be any application-specific performance metric), and Loupe ensures that the performance does not incur a statistically significant variation from the full-fledged baseline. Together, resource usage and performance checks can provide insights into the impact of stubbing or faking features, and particularly increase the confidence in the correctness (or incorrectness) of faking and stubbing. We further discuss performance and resource usage in §5.3.

### Loupe in Detail

We now discuss various aspects of Loupe that are relevant in this paper: supporting vectored system calls and pseudo-files, making Loupe easy to use in many applications, how long Loupe analyses take, and sharing analysis results.

_Vectored System Calls._ Identifying OS features at the granularity of an entire system call is sometimes too coarse, considering vectored system calls (e.g., ioctl, fcntl) and system calls with several functionalities that may be partially implemented in a compatibility layer (e.g., mmap, or madvise). In such cases, Loupe can also disable, stub, and fake system calls based on _individual system call parameters_, allowing users to easily explore partial implementations at a fine granularity. The output is a list of system calls along with their used sub-features, and whether they can be faked or stubbed.

_Pseudo Files._ Part of the Linux API is offered through pseudo-files such as /dev/random. Loupe is able to detect usage of such special files by pattern matching the arguments of certain system calls (e.g., open, openat) against paths (e.g., /dev, /proc). Loupe can also fake or stub system calls accessing these files, enabling users to track which special files require an implementation for applications to run.

_Testing Framework Integration._ Dynamic analysis tools can be difficult to integrate in application testing frameworks. Test suites, for instance, may start the application multiple times, from complex scripts, from different call points (Blei et al., 2017; Blei et al., 2017). Calling a naive analysis tool like strace requires manual changes, along with additional logic to gather and merge results obtained from the multiple runs triggered by the test suite. Calling the tool on the test suite itself (e.g., strace make test) is not effective, as the test suite may call external tools whose OS feature usage is not part of the application's. For instance, the Ruby test suite makes extensive calls to git to set up test environments; the OS feature requirements of git should not be included into the application's. We tackle this problem with a whitelist system: when run on a wrapper (e.g., a test suite), users can specify which binaries belong to the application and should be considered in the analysis.
Loupe then tracks all child processes, checking the binary path upon exec, to ignore any system call originating from a binary that does not correspond to the specified one(s). This allows, for instance, unmodified analysis of test suites run via make test; Loupe simply executes the Makefile and only considers system calls executed by the appropriate binary.

_Debhelper Integration._ To further simplify running Loupe on many applications, we integrated Loupe into the Debhelper (Blei et al., 2017) Debian package build system. Loupe can build Debian packages and run on the package's dh_auto_test (Blei et al., 2017) rule which, if provided by the package, executes the target application's test suite. Combined with the previous technique, which Loupe can leverage by listing the package's binaries, we can significantly reduce the cost of testing applications. Running Loupe on the Lighttpd, Memcached, and webfsd test suites, for instance, is fully automated this way.

_Loupe Run Time._ The runtime of a full Loupe analysis is \((2+2s)\cdot t\cdot\lceil\frac{r}{p}\rceil\) with: \(t\) the application workload runtime, \(s\) the number of distinct system calls (and pseudo-files, if enabled in the analysis) executed by the application under the specific workload, \(r\) the number of replicas, and \(p\) the number of replicas executed in parallel. The additive 2 corresponds to the initial run to discover executed system calls, and to the final run to confirm the analysis; the \(2s\) term corresponds to the "stubbing" and "faking" runs for each system call. The overall runtime is therefore dominated by the length and complexity of the application workload; it varies from about 4 minutes for a fast Nginx health check, to 50 minutes for the Lighttpd test suite, and 1-1.5 days for the SQLite test suite (by far the largest we encountered, running _millions_ of tests (Blei et al., 2017)). These run times are reasonable: porting costs for a single application often reach multiple weeks or months in early OS development stages (Krause et al., 2019) and, as we expand on next, this is a one-time cost.

_Sharing Loupe Results._ Thanks to the techniques described previously, Loupe test scripts are easy to write; 2-30 minutes on average according to the expertise of the user, most of it spent on understanding how to run and test the application. The main barrier to running Loupe on a large number of applications is runtime. Nevertheless, as we described previously, the results are final for a fixed build of the software, its workload, dependencies, kernel, and test script. To leverage this, we have set up a shared online database that can be populated and looked up by any individual running Loupe or interested in its results. Loupe can automatically submit results to the database along with metadata (❺ in Figure 1). We envision that in the long run, this database will contain results for a wide range of applications, helping OS and application developers to study OS feature usage patterns, build compatibility layers, and more, without even the runtime cost mentioned previously.

## 4. Loupe: OS Feature Support Guide

For space reasons, we set aside pseudo files and focus on system call support, as it represents the majority of the engineering effort in building compatibility layers (Krause et al., 2019; Krause et al., 2019; Krause et al., 2019).

### Examples of Support Plans

We ran Loupe on a total of 116 applications with various workloads including standard benchmarks (e.g., wrk for web applications, redis-benchmark).
We choose a selection of representative applications from OpenBenchmarking.org (Krause et al., 2019), as well as various other sources (Blei et al., 2017; Krause et al., 2019; Krause et al., 2019). Leveraging these measurements, Loupe guides the process of developing a compatibility layer by giving a prioritized list of system calls to implement/stub/fake. Specifically, given (1) the state of a partially Linux-compatible OS in terms of system calls supported (a simple text file with one line per supported system call) and (2) a set of target applications to support, Loupe can output an incremental support plan listing the order in which missing system calls should be implemented/faked/stubbed in order to enable compatibility with a maximum of applications as early as possible. We used Loupe to generate support plans for all 116 applications we measured, for 11 OSes under development: Unikraft (Luo et al., 2017), Google Fuchsia (2018) and Zephyr (Zehyr, 2018), Kerla (2019), HermiTux (Zehyr, 2018), Google gVisor (2018), Graphene/Gramine (2018, 2019), FreeBSD Linuxulator (Han et al., 2019), Browsix (2019), OSv (Zehyr, 2018), and Linux nolibc (2019). To illustrate this functionality, we present here a subset of these results (for space reasons): we consider recent versions of 3 OSes: Unikraft, Fuchsia and Kerla, and a target set of 15 popular cloud applications. The support plans are presented in Table 1. The number of steps to reach support for all 15 apps is directly linked to the maturity of the OS: Unikraft for example has initial support for 12 applications and requires only 3 steps to reach full support, while Kerla, with initial support for only 4 applications, requires 11 steps. Loupe's incremental support plans optimize the development of compatibility layers by breaking down the effort into small steps (>80% of which require implementing 1-3 system calls), unlocking support for an application after each step. The support plans in Table 1 target a small set of applications for space reasons. Full support plans for each of the 11 OSes we target, for all 116 applications in our database, are larger: 35 steps for Fuchsia, 32 for Unikraft, and 79 for Kerla.

### Engineering Effort Savings

To estimate the engineering effort savings that an OS project would enjoy while building a compatibility layer with Loupe rather than in an ad-hoc, organic fashion, we designed the following experiment: we select a large set of 62 applications supported by a popular experimental OS, OSv (Zehyr, 2018), from the OSv-Apps repository (Han et al., 2019). We then estimate the order in which these applications were organically supported by the OS. For that we use git metadata to track the creation date of the folder corresponding to each app in the repository (a possible way to do this is sketched below). We then derive, from the order in which applications were supported, the organic order in which system calls had to be implemented by OSv developers. Because stubbing/faking OS features are well-known practices (Zehyr, 2018), and because there are traces of their usage in OSv's codebase (Zehyr, 2018), we assume that OSv developers used stubbing and faking as much as possible. We can then derive, in chronological order, the number of system calls that were implemented by OSv developers, and the evolution of the number of supported applications.
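The folder-creation-date estimate mentioned above can be approximated with a short script; the repository path and helper name below are illustrative, not part of the paper's artifact.

```
import subprocess

def folder_creation_date(repo, folder):
    # Date of the first commit touching `folder`, used as a proxy for when
    # the corresponding application was first supported.
    log = subprocess.run(
        ["git", "-C", repo, "log", "--reverse", "--format=%ad",
         "--date=short", "--", folder],
        capture_output=True, text=True, check=True).stdout
    return log.splitlines()[0]

# e.g., apps ordered by estimated support date:
# order = sorted(apps, key=lambda a: folder_creation_date("osv-apps", a))
```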
We also compute these numbers for a hypothetical optimized compatibility layer development process that would be guided by Loupe's support plan, which would also take stubbing/faking into account, as well as for a naive approach that would implement every system call traced by dynamic analysis, without stubbing/faking. These results are presented on Figure 2. As one can observe, Loupe would have heavily optimized the process of implementing OSv's support for the target application set, leading to more applications supported earlier and with less engineering effort vs. our estimation of the organic process undertaken by OSv's developers. For example, to support half (31) of the applications, with Loupe only 37 system calls need to be implemented, vs. 92 for the organic process. The naive method relying on dynamic analysis without stubbing/faking requires even more engineering effort: to reach 31 applications, 142 system calls would need to be implemented.

\begin{table}
\begin{tabular}{|c|l|l|l|l|}
\hline Step & Implement & Stub & Fake & Support for... \\
\hline \hline \multicolumn{5}{|c|}{**Unikraft** (commit 7d6707f, supports 174 syscalls)} \\
\hline 0 & - & - & - & (12 apps) \\
\hline 1 & 290 & 273, 218, 230 & - & + Memcached \\
\hline 2 & 218 & - & - & + H2O \\
\hline 3 & 283, 27 & 186 & - & + MongoDB \\
\hline \hline \multicolumn{5}{|c|}{**Fuchsia** (commit 5d20758, supports 152 syscalls)} \\
\hline 0 & - & - & - & (10 apps) \\
\hline 1 & 33 & 273, 302, 105 & - & + Lighttpd \\
\hline 2 & 302 & 230 & - & + Memcached \\
\hline 3 & - & 99, 222, 223 & - & + HAProxy \\
\hline 4 & 105 & 40 & 128, 99, 27 & + Nginx \\
\hline 5 & 128, 99, 27 & - & - & + MongoDB \\
\hline \hline \multicolumn{5}{|c|}{**Kerla** (commit 73a1873, supports 58 syscalls)} \\
\hline 0 & - & - & - & (4 apps) \\
\hline 1 & 56, 257, 54 & (17 syscalls) & 47 & + thttpd \\
\hline 2 & 10 & - & 302 & + Weborf \\
\hline 3 & 8, 21, 87 & - & 25 & + SQLite \\
\hline 4 & 232, 233, 302 & (9 syscalls) & 288, 213 & + HAProxy \\
\hline 5 & 17, 213, 262 & 95 & - & + Redis \\
\hline 6 & 291 & 105, 106, 116, 293 & - & + Lighttpd \\
\hline 7 & 288, 290 & 32 & 102 & + H2O \\
\hline 8 & 46 & 230 & - & + Memcached \\
\hline 9 & 105, 18, 53, 106 & 40 & 92, 130, 107 & + Nginx \\
\hline 10 & 104, 107, 108, 102 & - & - & + Webfsd \\
\hline 11 & 128, 99, 229, 27, 273, 202, 283 & 131 & 137 & + MongoDB \\
\hline
\end{tabular}
\end{table}
Table 1. Step-by-step support plans for 3 OSes.

Figure 2. Evolution of the number of applications and system calls supported by OSv assuming 1) a support plan generated with Loupe, 2) organic development based on git history, 3) measurement with naive dynamic analysis without stubbing/faking. Higher values indicate more applications supported for the same effort.

Our method to estimate engineering efforts makes a few simplifications. The real order in which applications were supported by OSv is likely not exactly that of folder creation in the OSv-Apps repository. We repeated the study using the date of the _last commit_ in each application's folder to determine the order; results were similar. The effort to implement system calls is also variable according to which system call is targeted: the x-axis in Figure 2 is non-uniform since not all system calls have the same implementation cost. However, we believe these results provide a sufficiently solid estimation of the engineering effort reduction that Loupe can bring to demonstrate its usefulness.
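The paper does not spell out the algorithm behind the support plans of §4.1, but a plausible greedy heuristic in their spirit - repeatedly supporting the application that needs the fewest additional system calls - can be sketched as follows. The data structures are assumptions; Loupe additionally distinguishes calls that can be stubbed or faked.

```
def support_plan(supported, required):
    # supported: system calls the OS already implements (iterable)
    # required: dict mapping app -> set of syscalls reported by Loupe as
    #           needing a real implementation for that app's workload
    supported, plan = set(supported), []
    todo = dict(required)
    while todo:
        # pick the app closest to being supported
        app = min(todo, key=lambda a: len(todo[a] - supported))
        missing = sorted(todo[app] - supported)
        plan.append((app, missing))   # one step: implement `missing`, unlock `app`
        supported.update(missing)
        del todo[app]
    return plan
```

Each step of such a plan unlocks one application, matching the step-per-application structure visible in Table 1.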
## 5. Analyzing the Linux API with Loupe

Here we study the Linux API usage results obtained using Loupe for the 116 applications considered in our study. We aim to answer the following research questions:

* How important is the accuracy gap between Loupe's method vs. naive dynamic analysis (strace) and static analysis?
* When building a Linux compatibility layer, which system calls must be implemented, and which ones can be commonly faked or stubbed? What is the absolute minimum set of system calls that must be implemented for a test suite to correctly run?
* What are the most important system calls, i.e., the ones whose implementation is required by most applications?
* Why can some system calls be faked or stubbed? Does it impact performance or resource usage metrics?
* How much do the system call requirements of applications and standard libraries evolve over time?

For space reasons, we concentrate on system calls and set aside results regarding special files and vectored system calls.

### Analysis Method: Static vs. Dynamic

_Loupe vs. Naive Dynamic Analysis._ We computed the _API importance_ of each system call as reported by Loupe and by naive dynamic analysis. API importance [63] represents the probability that, in our 116-application data set, a system call is required by at least one application in that set. A system call is defined as required for an application if it is traced with dynamic analysis, and if it is traced and can neither be stubbed nor faked with Loupe. Figure 3 visualizes our results. They show that naive dynamic analysis severely overestimates the number of system calls required to support applications. Loupe reports a total of 148 system calls requiring implementation to support 100% of our 116 applications, vs. 180 system calls for a naive analysis. The 25 most commonly required system calls are present in more than 80% of the applications with Loupe, and in less than 50% with naive dynamic analysis.

_Loupe vs. Static Analysis._ We faced scalability issues when trying to apply binary- and source-level static analysis tools to our large data set of 116 applications. There exists no source-level tool able to identify system calls for all the relevant programming languages. We also attempted to run several binary-level tools and experienced a high level of failures (close to 50%) skewing the results. Hence, we fall back on selecting a subset of applications from our data set for comparison between static analysis and Loupe. We select 7 popular cloud applications that support standard benchmarks and ship with comprehensive test suites: Redis, Nginx, Memcached, SQLite, HAProxy, Lighttpd, and Weborf. To gather results for static analysis we use the source- and binary-level tools made available by Unikraft [26; 27]. Figure 4 details the amount of system calls identified in each application by each method. Both static analysis techniques severely overestimate the number of system calls actually needed to run the benchmarks and test suites. The minimum number of system calls identified by Loupe as required for these applications varies around 20 for benchmarks, and 20-40 for test suites. Both static binary and source analysis methods report numbers that are generally between 2x and 5x higher. For example, on Redis, binary-level static analysis identifies 103 system calls vs. 68 dynamically traced ones from the test suite, and Loupe further indicates that more than a third of these can be stubbed/faked. This observation can be generalized to all other applications.
Overall these results show that the effort to provide comprehensive support of core features and even full test suites is much lower than suggested by previous work based on static analysis [63]. Figure 5 details which system calls are detected by the various analysis techniques when applied to the 7 applications running benchmarks. Once again the overestimation of static and naive dynamic analysis is clear, compared to the results obtained with Loupe. Regarding static analysis, operating on the binary only yields more system calls compared to targeting the sources. Concerning dynamic analysis, a non-negligible amount of system calls can be stubbed/faked, confirming the benefits of Loupe vs. naive dynamic analysis. We investigate faking/stubbing in more detail next.

**Insight:** Static and naive dynamic analysis both highly overestimate the engineering effort needed to build a compatibility layer for a target set of applications.

Figure 3. API importance for dynamic analysis with Loupe and a naive approach (= no stubbing/faking).

### Resilience to Stubbing and Faking

As visualized in Figure 4, we find that, on average, the proportion of invoked system calls that can be stubbed or faked is 46% for test suites (ranging from 31% for Nginx to 58% for Lighttpd), and 60% for benchmarks (from 51% for Lighttpd to 65% for HAProxy). This shows that the effort required to provide strong support of core features (i.e., those covered by test suites) for these popular applications is certainly lower than suggested by previous work, and is even lower when considering support for benchmarks only (needed for evaluation in research papers). The difference between Figure 5(c) and 5(d) clarifies this, highlighting which system calls can commonly be stubbed and faked. We observe broadly two categories:

* **Low range system calls (system call ID \(\lesssim\) 150)**, representing the majority of system calls detected by all analysis methods. This is unsurprising as these system calls correspond to core services that have been present in the Linux feature set for a long time, such as basic network system calls (bind, accept, etc.).
* **High range system calls (system call ID \(\gtrsim\) 150)**, corresponding to more recent functionalities such as the epoll family (epoll_create - 213, epoll_wait - 232, epoll_ctl - 233), as well as new variants of core system calls (openat - 257, prlimit64 - 302, etc.).

Though system calls from both categories can be stubbed or faked, system calls with higher numbers are better candidates: out of the lower half of used system calls (46 system calls with number \(<\) 63), 13 system calls can always be stubbed vs. 30 for the upper half (46 system calls with number \(>\) 63). This is because these map to more recent, generally less critical functionalities; we expand on this next.

**Insight:** Though applications may invoke many system calls, many of them can be stubbed or faked to run popular workloads.

Figure 4. Number of system calls statically identified and dynamically traced by Loupe for applications running standard benchmarks (_bench_) and test-suites (_suite_). Traced system calls are broken down into those that can be stubbed, faked, either faked or stubbed (_any_), and those that can neither be faked nor stubbed (_required_).

Figure 5. System calls identified by static binary, static source, naive dynamic _traced_ (all system calls detected), and Loupe's dynamic _required_ (those that cannot be stubbed/faked). Each box represents a Linux system call and its number.

_Why are Programs Resilient to Stubbing and Faking?_ Applications are able to detect and react to the failure of a system call.
Often, system call failures are non-critical and programs can take action to circumvent them. These actions are the enabling factor of system call stubbing. They include, among others (cf. Figure 5(d)):

* **Ignoring the issue.** Not all failures are consequential, and programs can simply decide to not take further action. For instance, Redis ignores when sysinfo (99) fails to return the maximum memory size and when ioctl (16) fails to return the resident size, as this information is only used for output to the debugging logs.
* **Falling back to alternatives.** For example, allocating memory with mmap (9) when brk (12) fails - a pattern from the glibc early allocator - or reallocating mappings with mmap (9) when mremap (25) fails, as we observe in SQLite.
* **Falling back to safe default values.** Applications query the OS for various values to tune their behavior (max stack size and file descriptor count, processor affinity and scheduling importance, etc.). When this fails, a safe default can often be adopted. Figure 6(a) shows an example with getrlimit (97) and prlimit64 (302) in Redis. Another example is using ioctl (16) to query the terminal width: when this fails, Redis assumes a safe value of 80 characters.
* **Disabling program functionalities.** Programs may also decide to simply disable the functionality that makes use of the system call; in certain cases, this may not even have observable consequences. For example, many applications only make use of connect (42) through the glibc for the NSCD cache socket [15]. When connect fails, name caching is simply disabled.

In other cases, programs may interpret the failure conservatively and decide to abort, making stubbing impossible. Still, in a subset of these cases, programs are overly conservative and the failure of the system call is in reality non-critical: if so, faking a successful return value for the system call, without _actually_ doing the work of the system call in question, will work. Figure 6(b) presents a concrete example in Nginx, where prctl (157) fails to force the retaining of capabilities upon UID transition; in the context of an OS that does not have a user/kernel separation, like a unikernel, capabilities make little sense and so it is fine to fake success: faking the system call here will have strictly no impact on the correct execution of the software. Similar examples are get/setgroups (115-116), or setsid (112), which have, once again, no meaning in the context of a unikernel. Still, faking OS features may also result in breaking program functionalities, e.g., pipe2 (293) in Redis (see §5.3). If the functionality is not part of the target set of application features, faking may remain a reasonable approach to achieve a first level of compatibility.

Inversely, certain system calls can (almost) never be stubbed nor faked without breaking core program functionalities. Though generalization is difficult, these system calls typically represent fundamental OS features: executing programs with execve (59), opening and writing to connections with bind (49), listen (50), socket (41), and writev (20), allocating memory with mmap (9). We also find vectored system calls like fcntl (72), motivating our discussion in §5.4.

_System Call Return Value Checks._ In addition to identifying system calls issued by applications, we performed a manual inspection of these applications' source code in order to gather ground truth about which system calls had their return values checked. Is there a link between the presence or absence of checks, and the ability to stub or fake?
Note that we are interested here in user-written code, so we look at whether C standard library system call wrappers - not system calls themselves - have their return value checked. We chose manual inspection; building an automated static analysis method for this task is non-trivial and rather out of the scope of this paper: some programs directly check the return value, others store it in a variable which is later checked, directly or through auxiliary functions, while yet others rely on macros to do the checking. We semi-automated the process by building scripts scanning sources for system call wrapper invocations and displaying their corresponding location in source files; we then manually checked this output to determine if the return code was checked or not.

Figure 6. Real world code snippets where it is effective to stub (left) and fake (right) system call implementations.

Figure 7. Apps checking system call return values.

Figure 7 shows, for each system call wrapper, the number of programs that check its return value. The majority have their return value checked. Studying the small set of system calls for which no application has checks, we identify system calls that always succeed, e.g., alarm (37), getppid (110), but also several that can actually fail: getrusage (98), utime (132), inotify_rm_watch (255) and futimesat (261). For those invoked and traced by Loupe, we observe that all can be stubbed/faked for this set of applications. Nevertheless, it would be incorrect to conclude that the ability to stub and fake is induced by the absence of checks: inversely, numerous system calls that are always checked can themselves often be stubbed/faked, such as ioctl (16), uname (63), or geteuid (107). There is also a set of system calls for which only some applications feature checks. These include system calls that are generally assumed to always succeed (even if they can fail) such as clock_gettime (228), or those freeing resources: e.g., close (3), or unlink (87). Generally, these can be stubbed/faked only in some applications. Overall, we conclude that the ability to stub or fake is not a factor of the presence (or absence) of checks, but rather of the semantics of individual system calls and applications.

### Impact on Performance and Resource Usage

An important concern when stubbing and faking system calls is whether doing so would have an effect on performance or resource usage. Both detrimental and _positive_ effects are undesirable, as unintended improvements on these metrics may skew comparisons with a full-fledged baseline. To study the question, we use Loupe's ability to record performance and resource usage metrics while performing its analysis. As described in Section 3.2, Loupe gathers performance metrics through user-defined scripts, and resource usage information (peak file descriptor and memory usage) through /proc. For the sake of conciseness, we provide detailed results for a subset of three representative, performance-focused applications: Nginx (web server), Redis (key-value store), and iPerf3 (TCP benchmark framework). Nginx is benchmarked with wrk [29] (HTTP requests/s), Redis with redis-benchmark [21] (SET requests/s), and iPerf3 with an official iPerf client [9] (TCP throughput). All numbers are provided as averages of 10 runs. Our results are visible in Table 2.
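A minimal sketch of this /proc-based sampling might look as follows (Linux-only; our illustration, not Loupe's actual implementation):

```python
import os

def sample_usage(pid: int):
    """Return (peak resident memory in kB, currently open file descriptors)."""
    with open(f"/proc/{pid}/status") as f:
        status = dict(line.split(":", 1) for line in f if ":" in line)
    peak_rss_kb = int(status["VmHWM"].split()[0])   # resident-set high-water mark
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))   # one entry per live descriptor
    return peak_rss_kb, open_fds

print(sample_usage(os.getpid()))
```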
_Impact on Performance._ For the majority of system calls, the variation in performance when stubbing or faking is within the error margin. For the applications considered here, 3/45 system calls trigger a performance change when faked or stubbed. For Nginx, stubbing/faking write increases performance as it prevents writing to access logs (something that test scripts do not check - access logs are usually disabled in performance-focused settings as they are written to once per request). It does not, however, prevent payloads from being written out, as this is done via writev (which, when stubbed or faked, prevents Nginx from answering requests correctly, and is detected by the test script). Still for Nginx, stubbing or faking rt_sigsuspend hurts performance, as it turns the master process' notification-based behavior into busy-waiting. None of these alters the proper functioning of Nginx's core features as tested by the Loupe test script. Conversely, in the Redis case, faking futex results in synchronization issues, manifesting as a performance degradation. This alters the core functioning of Redis, clearly indicating that faking futex is not a correct path to follow for compatibility, which matches intuitive expectations. As for iPerf3, no system call results in performance degradation when faked or stubbed. When such variations occur, Loupe notifies the user that further investigation is needed to understand the implications (e.g., on stability or scientific soundness) of stubbing or faking a particular OS feature for a given application. This further emphasizes the need for a tool like Loupe to avoid pitfalls which may cause debugging costs down the line, or skew comparisons with a full-fledged baseline.

_Impact on Resource Usage._ Similarly to performance, we find that faking or stubbing most system calls does not result in statistically significant variations in resource usage. For the three applications considered, 4/45 system calls result in memory usage variations, and 3/45 in file descriptor usage variations, with one (brk) being caused by the libc and thus common among all three applications. In the general case, as discussed earlier, system calls that allocate resources cannot be stubbed or faked: this is the case for memory allocation services such as mmap (9), but also for those that allocate file descriptors such as openat (257) (see Figure 4(d)). In particular cases, the claim is more nuanced; alternatives like open (2) do not need to be implemented (e.g., because openat is used instead, see Section 5.6). Similarly, brk _can_ be stubbed or faked in a significant number of cases: for instance, when the program exclusively uses mmap, and the only usage of brk is in the glibc initialization sequence, which is itself capable of falling back to mmap if brk does not function (at the cost of a slight memory usage increase, see Table 2). Another case is pipe2, which creates pipes at the process' demand. Stubbing or faking it results in pipes not being created, which in turn results in an observable reduction in file descriptor count. In the case of Redis, this breaks the persistence feature (which is often disabled in performance-focused experiments), but not the key-value store's core functionalities. The situation is different for APIs that free resources.
In general, munmap and close can be stubbed or faked without functional impact, though resource usage will increase. For Redis, faking or stubbing munmap and close leads to a 20% increase in memory usage, and an 8x increase in open file descriptors under a redis-benchmark workload (cf. Table 2). Still, although these features can be stubbed or faked without sacrificing stability (as long as resources suffice), we note that the incentives to do so are lower than for other API elements; if an algorithm was developed to allocate resources, it should not be a problem to develop one that frees them.

Lastly, similarly to performance, variations in resource usage turn out to be good indicators of instability caused by stubbing or faking. In the case of Nginx, faking clone results in the master process executing the worker event loop, which itself manifests as an increase in memory usage (likely because resources are left dangling). Although functional in practice, it is not a reliable path to take for compatibility and meaningful performance comparison. In the case of Redis, faking futex results in inconsistent synchronization, which itself translates into an increased number of allocated file descriptors (see Table 2). Beyond system calls that (de-)allocate resources, and those that indicate underlying instability, we identify two more classes of system calls which may impact resource usage (or performance):

* **Optimizing system calls:** by giving semantic indications to the kernel regarding, e.g., memory management policies, system calls such as madvise (28) should influence performance and resource usage. This behavior is not visible when faking/stubbing in Table 2: kernel hints are used rather sporadically in applications, and for those that use them (e.g., Redis), the kernel did not perform actions that impacted our metrics. Impact may be observable in other settings, e.g., multi-process scenarios.
* **System Limit Setters/Getters:** by getting/setting system defaults (e.g., max stack size, number of FDs), getter/setter system calls like prlimit64 (or parts of ioctl) may also result in resource usage or performance variations. For instance, with system defaults different from the ones in Table 2, stubbing prlimit64 in Redis results in 30% lower memory usage under a redis-benchmark workload, because the libc (stack size) and Redis (FD limits) default to values conservatively lower than the system limits.

_Impact on Stubbing and Faking Policy._ Overall, we stress the importance of evaluating the impact of stubbing and faking on performance metrics as part of the process of deciding what to support and how. Though most system calls do not impact performance metrics, some do: when the underlying reason is instability, the OS feature should never be faked; otherwise, whether or not to stub or fake should be an _explicit factor_ of the experimental setup and expectations on the OS prototype. It is critical that the (positive or negative) impact of stubbing and faking not be mistaken for that of the system's design. Overall, we encourage authors of future systems research works to explicitly list features that they stub or fake, for reproducibility and future analysis.

**Insight:** Stubbing/faking does not impact performance and resource usage in the general case. Still, there are edge cases which may or may not indicate correctness issues. Impact on either metric must call for special care when stubbing/faking.
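As a reading aid for the thresholds used in Table 2, a toy helper for flagging variations outside the error margin could look like this (our sketch; Loupe averages 10 runs per configuration, and Table 2 only reports deltas above 3%):

```python
def flag(baseline: float, variant: float, margin: float = 0.03) -> str:
    """Report a relative delta only if it exceeds the error margin."""
    delta = (variant - baseline) / baseline
    return f"{delta:+.0%}" if abs(delta) > margin else "-"

# e.g. Nginx requests/s when write is stubbed (+15% in Table 2) vs. noise:
print(flag(100_000, 115_000))  # +15%
print(flag(100_000, 101_000))  # -
```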
### Partial Implementation of System Calls

In the previous sections, we considered system calls as monolithic API elements. This consideration shows its limits when investigating vectored system calls (e.g., ioctl) or complex system calls like mmap (usable for memory allocation and file mapping, two very different purposes). To clarify this point, we use Loupe to determine the precise set of sub-system call features applications require. Our insights are twofold.

| App. | System call | Perf. impact | FD usage | Mem. usage | Explanation of stubbing/faking impact | Breaks... |
|---|---|---|---|---|---|---|
| Nginx | write | +15% | - | - | Access logs are not written anymore, increasing performance. | Access logging |
| Nginx | brk | - | - | +17% | Triggers a fallback to mmap in the glibc early allocator. | ∅ |
| Nginx | clone | - | - | +10% | Results in the master process executing the worker loop. | Core functioning |
| Nginx | rt_sigsuspend | -38% | - | - | Results in the master process polling (busy-waiting) for events. | ∅ |
| Redis | close | - | x8 | - | FDs are not closed anymore. | ∅¹ |
| Redis | munmap | - | - | +19% | Regions are not disposed of anymore. | ∅² |
| Redis | brk | - | - | +2% | Triggers a fallback to mmap in the glibc early allocator. | ∅ |
| Redis | sigprocmask | - | - | -15% | Prevents creation of jemalloc background threads, resulting in memory being freed synchronously and/or at an earlier point. | ∅ |
| Redis | futex | -66% | +94% | - | Inconsistent synchronization results in incorrect behavior. | Core functioning |
| Redis | pipe2 | - | -25% | - | Pipes are not created anymore, resulting in fewer FDs. | Persistence |
| iPerf3 | brk | - | - | +11% | Triggers a fallback to mmap in the glibc early allocator. | ∅ |

Table 2. Performance and resource usage (file descriptors: FD; memory usage) impact of stubbing and faking for Nginx, Redis, and iPerf3. Only system calls with impact outside of the error margin (>3%) in either category are displayed. "-" means _no impact_; +X% means _X% faster_ or _X% more_ resource usage; -X% means _X% slower_ or _X% less_ resource usage. ¹Within the maximum number of FD limits; core functioning is altered beyond this point. ²Within the limits of available memory.

First, applications execute surprisingly few features from complex or vectored system calls. For example, almost all applications require arch_prctl (158) (see Figure 4(d)). However, they are far from requiring a full implementation: in fact, in all applications that we considered, this system call was exclusively called by the libc, which requires one single feature (ARCH_SET_FS, out of 6 in total) related to thread local storage setup. The situation is similar for prlimit64 (302), required by many applications: out of 16 features, only 3 are used, RLIMIT_CORE, _NOFILE, and _STACK, the latter being used almost exclusively as part of the libc initialization. This is also the case for ioctl (16): with a benchmark load, Redis, weborf, and h2o use one single feature (TCGETS), Nginx two (FIONBIO and FIOASYNC), and Lighttpd none. All of them can be stubbed.
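To illustrate what such a partial implementation means in practice, here is a schematic Python sketch of an arch_prctl handler that supports only the single feature observed in our application set and stubs the rest (hypothetical handler; the ARCH_SET_FS value is from Linux's asm/prctl.h):

```python
import errno

ARCH_SET_FS = 0x1002   # from Linux's asm/prctl.h; the only feature needed here
tls = {}

def sys_arch_prctl(code: int, addr: int) -> int:
    if code == ARCH_SET_FS:
        tls["fs_base"] = addr   # stand-in for writing the FS base register
        return 0
    return -errno.EINVAL        # all other sub-features are stubbed

print(sys_arch_prctl(ARCH_SET_FS, 0x7F00_0000_0000), sys_arch_prctl(0x1003, 0))
```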
Second, when looking at required features of system calls, we find that certain system calls such as fcntl typically exhibit a mix of required and fakeable/stubbable features, and the required set is typically common among applications. For instance, F_SETFL is required to put file descriptors in non-blocking mode in all applications except Nginx, a critical operation for most codebases. On the other hand, F_SETFD is widely executed across applications but can always be stubbed, as it is used to enable _close-on-exec_ on file descriptors, a non-critical operation. In these cases, looking at required system calls at the granularity of the whole system call would make the situation appear worse than it is in practice.

**Insight:** Several complex system calls do not require a full implementation to support a large number of applications.

### Stability of System Call Usage Over Time

Once an OS prototype supports an application, how likely is it that, as the program evolves over time, additional or different system calls will be required, breaking the initial support? Here we study the stability of system call usage by applications and libcs.

_Evolution: C Standard Library._ We first study the libc, from which most system call invocations generally originate. We compiled Nginx v0.3.19 against an old version of glibc (2.3.2, from 2003) and a modern one (2.31, from 2020). Since we were unable to run Nginx 0.3.19 with glibc 2.3.2 in 64-bit mode, we compiled and ran this configuration in 32-bit. This is likely due to these versions featuring unstable AMD64 support (the first AMD64 CPUs were released in 2003 [42]). The results in Table 3 show that the number of used system calls is more or less unchanged, 48 vs. 51. Moreover, we see that most of the change in system call usage is caused by the deprecation of old system calls. Still, there is some evolution in the types of system calls invoked, which we classify into two categories. First, the recent libc uses a different version of some system calls due to a change of architecture (e.g., it uses pread64 instead of pread). Second, the recent libc uses additional system calls, e.g., arch_prctl to set up TLS. Setting aside the issue of supporting a new architecture (orthogonal to compatibility), we assume that it is the second category that would require supporting effort for updating a given compatibility layer as applications evolve. However, we consider this effort to be low: we only count 8 new system calls in 17 years for this libc/application combination.

_Evolution: Application._ We are now interested in seeing how the system call usage of an application changes over the years. For this experiment, we used a modern glibc/compiler. We explore the difference in system call usage of Nginx, Apache and Redis through the years and list the results in Figure 8. We observe that, although the number of Linux system calls has increased, all applications are using roughly the same number of system calls; the number of system calls that can be stubbed or faked also remains almost unchanged. In all, we find the usage of system calls by applications and libcs to be fairly stable over time. This is further encouragement to OS prototype developers: once you provide support for an application, you are likely to be able to keep it with minimal work for a long while.

**Insight:** Application and libc system call usage patterns tend to be stable over time: support is a one-time effort.
Figure 8. System call usage and capacity to be stubbed/faked for recent (2021) and older (2005-2010) application releases.

Table 3. System calls used by Nginx 0.3.19 compiled against glibc 2.3.2 / 32-bit (48 system calls) and glibc 2.31 / 64-bit (51 system calls).

### C Library Impact on System Call Usage

Typical applications perform the majority of their system call invocations through the C standard library (libc). Bypassing the libc using direct system call invocation happens only for functionalities rarely called by user code (e.g., futex) or newer system calls for which libcs do not provide a wrapper: we counted around 51 system calls (58 including removed/unimplemented system calls) that do not have a wrapper as of glibc 2.33. In this case, applications wishing to invoke them use the syscall function. Setting aside these special cases, we find that the libc implementation greatly influences the system call API usage of applications. This is due to two main factors: (1) the libc initialization sequence and (2) the choice of system call alternatives (e.g., openat vs. open).

_Libc Initialization Sequence._ The initialization sequence is the libc code executed from the program entry point until the user's main function is invoked. The system calls invoked by that code will, by construction, be present in any binary linked against that libc, and constitute the minimum set of system calls an OS should implement to support this libc. To study initialization sequences, we recorded the system call usage of a trivial application printing "Hello, world!" across two libcs, glibc (version 2.28) and musl (version 1.2.2), for a dynamically- and a statically-linked executable. Results in Table 4 show that the number and types of system calls executed vary: glibc's initialization sequence invokes 2.5x more system calls than musl's for dynamically compiled binaries, and 1.8x more for statically compiled programs. The system calls invoked also change: glibc's set is not a strict superset of musl's, and out of 18 system calls in total, only 6 are common to both libcs for dynamic binaries, 3 for static (and 3 overall).

_System Call Alternatives._ Some discrepancies are due to the libcs choosing different system call alternatives to perform the same task. For example, glibc uses write for printf, vs. writev for musl. Similarly, musl uses ioctl to check that the TTY is writable, while glibc uses fstat. Finally, glibc uses openat, read, mmap, and mprotect to map the libc into the address space, an operation that musl achieves by embedding the libc into the linker itself, avoiding these system calls entirely. Other differences are caused by libc-specific initialization and debugging features. For example, even in single-threaded programs, musl will call set_tid_address during TLS initialization, something that glibc does not.
Glibc, on the other hand, uses uname to ensure that the kernel is recent enough, readlink to expand $ORIGIN with statically compiled binaries, and access for a debugging feature; none of these are used by musl's initialization sequence.

## 6. Discussion: Pitfalls & Future Works

As discussed throughout this work, there are pitfalls to developing OS compatibility layers with dynamic analysis, stubbing, faking, and partial support techniques.

_Impact on Stability._ Dynamic analysis, stubbing, faking, and partial support techniques bring the concern of stability: _do we trade off correctness to reduce porting time?_ Loupe assumes that users are able to evaluate the functionality of application features they aim to support by specifying a set of tests (§3.2). The tool ensures that this set of tests can be passed reliably, over multiple runs, when applying stubbing, faking, and partial support techniques. Loupe can also ensure that performance, resource usage, and any other metric remain stable (§5.3). Under this assumption, stability issues outside users' target feature range are not in the problem scope of Loupe, or of our study. Still, perfect correctness cannot be guaranteed, and compatibility bugs may hide in incomplete or buggy tests, varying test environments, etc. We believe that these are reasonable trade-offs to be made in transitional development stages of a new OS.

_Impact on Evaluation Metrics._ Assuming stability, another concern remains: _do we trade off (or simply influence) performance, resource usage, or any other metric for porting time?_ This is most relevant as early OS prototypes must be able to compare their properties, in a sound manner, with those of full-fledged baseline OSes. We show that, although the majority of system calls do not influence performance metrics when stubbed, faked, or partially supported, there _are_ pitfalls: even when reliably passing tests, these techniques can result in visible performance or resource usage variations (§5.3). Loupe improves on the state of the art, which does not consider this problem, by evaluating these costs systematically and early, to provide strong evidence that achieved support does not impact chosen metrics. Still, it remains impossible to formally guarantee that these metrics will be unaffected in all cases. We believe that this too constitutes a reasonable trade-off in development stages.

| | glibc | musl |
|---|---|---|
| Dynamic binary | _28 system calls:_ execve (1x), brk (3x), arch_prctl (1x), exit_group (1x), **access (1x), openat (2x), fstat (3x), mmap (7x), close (2x), read (1x), mprotect (4x), munmap (1x)** | _11 system calls_, including **set_tid_address (1x)** |
| Static binary | _11 system calls:_ execve (1x), arch_prctl (1x), exit_group (1x), **brk (4x), fstat (1x), write (1x), uname (1x), readlink (1x)** | _6 system calls:_ execve (1x), brk (2x), arch_prctl (1x), exit_group (1x), writev (1x), **ioctl (1x), set_tid_address (1x)** |

Table 4. System call API usage of a hello world application across glibc (2.28) and musl (1.2.2). Apart from exit_group and write/writev, this set corresponds to the libc initialization sequence. Differing system calls are in bold.

Overall, dynamic analysis, stubbing, faking, and partial support should not be seen as end-goals for production-ready compatibility, but as a transitional, "necessary evil" in development phases.
The takeaway of this paper should not be that most of the system call API is irrelevant, or that static analysis is unsuitable for engineering compatibility layers; each corresponds to a distinct phase in the life cycle of a new OS. As we show, static analysis is not appropriate in earlier stages; however, its output should decisively be a target in later stages of development, and full support should eventually come to achieve high levels of correctness assurance. Looking forward, we plan to improve Loupe with support for other analysis metrics, such as identifying standard application-specific logs and error message formats, or network and file system usage statistics, to better detect silent faults and effects of stubbing, faking, and partial support techniques. We believe that there remain many interesting research opportunities in application analysis for compatibility that should be explored in future works. Future research avenues include exploring speeding up the analysis by transferring knowledge across applications, and more generally using machine learning techniques to identify patterns over the data set, at scale, and to generate application-specific workloads.

## 7. Related Work

_OS Compatibility Layers._ Many research and prototype OSes have implemented compatibility layers to transparently support legacy software. An early example (Kumar et al., 2017) presents a compatibility layer for Linux applications implemented in the K42 (Kumar et al., 2018) OS. Similarly to our work, the authors note that to be widely adopted, an OS must provide good support for existing applications, and that emulating the Linux API is the best way to achieve this goal without requiring modification of target applications. In another study (Kumar et al., 2018), researchers propose a POSIX compatibility layer for the Embassies (Zhu et al., 2019) system. This work presents the construction of the compatibility layer, which is realized in a fully ad-hoc way. As we demonstrate, this process can be highly optimized with Loupe. Still, the authors make some observations similar to ours, in particular the fact that some system calls are "failure-oblivious" (i.e., they can be stubbed) and others are "neutered" (they can be faked). Other works proposed compatibility layers for new monolithic kernels (Kumar et al., 2018), libOSes (Kumar et al., 2018), or micro-kernels (Kumar et al., 2018; Loupe et al., 2018), for web browsers (Kumar et al., 2018), for running applications within the Linux kernel (Kumar et al., 2018), as well as various OS interoperability layers for existing kernels (Kumar et al., 2018). To the best of our knowledge, all these compatibility layers have been developed in an organic way.

_Libc-Based Compatibility Layers._ Some works (Kumar et al., 2018) approach compatibility at the libc level, instead of the system call API. Though most system calls are performed through the libc, prior works have shown that interfacing at the libc level leads to weaker degrees of compatibility (Kumar et al., 2018), because many programs do issue system calls outside the libc (500+ ELF Debian 10 executables fall into that category (Kumar et al., 2018)). Thus, we focus on compatibility at the system call level.
_Linux & POSIX APIs Studies._ Past work studied the usage of the Linux (Kumar et al., 2018) and POSIX (Kumar et al., 2018) APIs by applications. Tsai et al. (Tsai et al., 2018) use binary static analysis to measure the system calls and pseudo files required by a large set of binaries from the Ubuntu 15.04 archive. Even for the most minimal Ubuntu installation, the study reports that 224 system calls, 208 ioctl/prctl/fcntl codes and 100+ pseudo files require support. Our results demonstrate that static binary analysis is overly pessimistic. Using dynamic analysis, Loupe shows that the amount of OS features required to run standard benchmarks or even full test suites is actually much lower. Another study (Kumar et al., 2018) leverages both static and dynamic analysis to measure applications' POSIX API usage. Unlike this work, the authors' goal is not to determine and optimize compatibility efforts, but to study the evolution of the POSIX interface and identify emerging/missing abstractions. Though the study provides valuable insights for building a compatibility layer at the POSIX (i.e., libc) level (Kumar et al., 2018), past studies showed that the Linux API (mainly system calls) provides a higher degree of compatibility (Kumar et al., 2018): the authors themselves (Kumar et al., 2018) note that many applications (e.g., Go apps) circumvent POSIX to use OS-specific APIs.

## 8. Conclusion

We propose Loupe, an efficient method to determine and prioritize the OS features new compatibility layers should implement to provide support for as many applications as possible, as early as possible. Applying Loupe to 100+ applications, we provide examples of support plans, demonstrate high engineering effort savings, and study our measurements in depth. A significant number of system calls identified as needed by previous works are actually not required for those applications to run. These results bring a message of hope regarding the level of compatibility a new OS must provide in order to support mainstream applications, and should provide encouragement to ongoing and future research OS development efforts.

## Acknowledgments

We thank the anonymous reviewers and our shepherd, Donald E. Porter, for their insights. This work was funded by a studentship from NEC Labs Europe, a Microsoft Research PhD Fellowship, UK's EPSRC grants EP/V012134/1 (UniFaaS), EP/V000225/1 (SCorCH), and the EPSRC/Innovate UK grant EP/X015610/1 (FlexCap), as well as EU H2020 grants 825377 (UNICORE), 871793 (ACCORDION) and 758815 (CORNET). UPB authors were supported by VMware gift funding.
2302.14722
Quasinormal modes of a holonomy corrected Schwarzschild black hole
We analyze the quasinormal modes (QNMs) of a recently obtained solution of a Schwarzschild black hole (BH) with corrections motivated by Loop Quantum Gravity (LQG). This spacetime is regular everywhere and presents the global structure of a wormhole, with a minimal surface whose radius depends on a LQG parameter. We focus on the investigation of massless scalar field perturbations over the spacetime. We compute the QNMs with the WKB approximation, as well as the continued fraction method. The QNM frequency orbits, for $l=0$ and $n>0$, where $l$ and $n$ are the multipole and overtone numbers, respectively, are self-intersecting, spiraling curves in the complex plane. These orbits accumulate to a fixed complex value corresponding to the QNMs of the extremal case. We obtain that, for small values of the LQG parameter, the overall damping decreases as we increase the LQG parameter. Moreover the spectrum of the quantum corrected black hole exhibits an oscillatory pattern, which might imply in the existence of QNMs with vanishing real part. This pattern suggests that the limit $n\rightarrow \infty$ for the real part of the QNMs is not well-defined, what differs from Schwarzschild's case. We also analyze the time-domain profiles for the scalar perturbations, showing that the LQG correction does not alter the Schwarzschild power-law tail. We compute the fundamental mode from the time profile by means of the Prony method, obtaining excellent agreement with the two previously mentioned methods.
Zeus S. Moreira, Haroldo C. D. Lima Junior, Luís C. B. Crispino, Carlos A. R. Herdeiro
2023-02-28T16:32:32Z
http://arxiv.org/abs/2302.14722v1
# Quasinormal modes of a holonomy corrected Schwarzschild black hole

###### Abstract

We analyze the quasinormal modes (QNMs) of a recently obtained solution of a Schwarzschild black hole (BH) with corrections motivated by Loop Quantum Gravity (LQG). This spacetime is regular everywhere and presents the global structure of a wormhole, with a minimal surface whose radius depends on a LQG parameter. We focus on the investigation of massless scalar field perturbations over the spacetime. We compute the QNMs with the WKB approximation, as well as the continued fraction method. The QNM frequency orbits, for \(l=0\) and \(n>0\), where \(l\) and \(n\) are the multipole and overtone numbers, respectively, are self-intersecting, spiraling curves in the complex plane. These orbits accumulate to a fixed complex value corresponding to the QNMs of the extremal case. We obtain that, for small values of the LQG parameter, the overall damping decreases as we increase the LQG parameter. Moreover, the spectrum of the quantum corrected black hole exhibits an oscillatory pattern, which might imply the existence of QNMs with vanishing real part. This pattern suggests that the limit \(n\to\infty\) for the real part of the QNMs is not well-defined, which differs from Schwarzschild's case. We also analyze the time-domain profiles for the scalar perturbations, showing that the LQG correction does not alter the Schwarzschild power-law tail. We compute the fundamental mode from the time profile by means of the Prony method, obtaining excellent agreement with the two previously mentioned methods.

## I Introduction

The detection of gravitational waves (GWs) marks the beginning of GW astronomy [1; 2] and creates great expectations for the future of gravitational physics research. Any orbiting pair of astrophysical objects produces GWs, but only those sufficiently compact and moving very rapidly can produce detectable signals for the current generation of GW detectors. This makes BH binaries ideal systems for detecting GWs. The collision of BHs can be divided into three stages: _(i) inspiral:_ the BHs orbit around each other, getting closer due to loss of energy through GWs; _(ii) merger:_ the actual collision of the two BHs; and _(iii) ringdown:_ the merged BH relaxes to its equilibrium form (widely believed to be a Kerr BH [3]). The GW signal produced by the binary carries a very characteristic signature [4; 5; 6], which in turn can reveal properties of the BH itself [7; 8; 9].

In a perturbed physical system, the modes of vibration associated with energy dissipation are called quasinormal modes (QNMs). Thus, the ringdown phase of the coalescence of two BHs is essentially characterized by the corresponding QNMs [10; 11]. The study of BH perturbations began with the work of Regge and Wheeler [12] and was further developed by Zerilli [13; 14]. The problem of finding the QNM frequencies was investigated for the first time, as a non-self-adjoint boundary problem, in a famous paper by Chandrasekhar and Detweiler [15]. Thus, we lose the nice properties of self-adjoint problems, such as completeness and normalizability of the eigenfunctions, and the spectrum becomes complex [16; 17]. The real part of an eigenfrequency is the standard oscillation frequency, whereas the imaginary part is related to the wave damping.
Starting in the 1960s, due to several astronomical discoveries related to pulsars, quasars and the cosmic background radiation, Einstein's theory of GR experienced a new series of experimental confirmations [18; 19]. More recently, in addition to the already mentioned detection of GWs, the shadow images of M87* and Sgr A* were obtained [20; 21]. GR had its birth at the beginning of the 20th century and is now enjoying a more mature and robust era, both theoretically and experimentally. Despite these achievements, most relativists believe that GR cannot be the final theory of gravity and should be replaced by some quantum theory. The very early universe [22], the interior of BHs [23] and the last stages of BH evaporation [24] are examples of physical scenarios where quantum effects play a fundamental role and GR no longer gives a precise description of the gravitational field. How to reconcile gravity with quantum mechanics is, to date, one of the biggest open questions in theoretical physics.

The idea of canonically quantizing gravity, taking as canonical variables the spatial metric and its conjugate momentum, led to some problems [25]. Since the constraint equations are non-polynomial functions of the canonical variables, their corresponding operator equations in the quantum formulation are not well-defined [26]. Due to Sen, Ashtekar and Barbero, a new set of variables was found, the Ashtekar-Barbero connection variables, such that the constraint equations were reduced to polynomial expressions [27; 28; 29]. By writing GR in terms of the Ashtekar-Barbero variables, it is possible to put the theory in a framework very similar to other quantum field theories, where quantization techniques have already been developed [30]. The early construction of Loop Quantum Gravity (LQG) was based on the quantization of GR, in terms of a smeared version of the connection variables, in a background independent fashion. Among the important results of LQG, we can mention the construction of singularity-free cosmological models [31], the quantization of the spherically symmetric vacuum spacetime [32], as well as the derivation of the Hawking-Bekenstein entropy [33].

Working within the full machinery of LQG is very challenging, and some effective models have proven useful in understanding what quantum gravity effects might look like. There are several works applying modifications to GR for cosmological models [34; 35] and also for spherically symmetric spacetimes, such as the Schwarzschild [36; 37; 38] and Reissner-Nordstrom solutions [39]. In Refs. [40; 41] an effective spherically symmetric spacetime is proposed, which is not singular and presents the global structure of a wormhole whose minimal surface is hidden by an event horizon. Here we calculate the scalar QNMs of this quantum corrected BH, investigating how its spectrum deviates from the well-known Schwarzschild case.

The remainder of this paper is organized as follows. In Sec. II we review some aspects of the solution obtained in Refs. [40; 41], highlighting its main properties. In Sec. III we investigate the dynamics of a massless scalar field over the quantum corrected spacetime and review the corresponding boundary problem of QNMs. In Sec. IV we revisit two methods for calculating QNM frequencies, namely the third order Wentzel-Kramers-Brillouin (WKB) approximation, as well as Leaver's continued fraction method. In Sec. V we exhibit a selection of our numerical results.
We first compare, as a consistency check, the third order WKB results and the ones obtained via continued fraction calculations. We also compute, with the Leaver method, the first 30 overtones for the modes \(l=0\) and \(l=1\). We present our final remarks in Sec. VI. We use natural units, such that \(c=G=\hbar=1\).

## II Effective quantum corrected Schwarzschild spacetime

The authors of Refs. [40; 41] reported the following line element:

\[ds^{2}=-f(r)dt^{2}+\left[\left(1-\frac{r_{0}}{r}\right)f(r)\right]^{-1}dr^{2}+r^{2}d\Omega^{2}, \tag{1}\]

where \(r_{0}<2M\) is a LQG parameter, \(f(r)\equiv 1-2M/r\) and \(d\Omega^{2}\) is the line element of the 2-sphere. This metric represents a static, spherically symmetric and asymptotically flat spacetime. The horizon is located at the hypersurface \(r=r_{h}=2M\), similarly to what we have in the Schwarzschild spacetime. Nonetheless, the quantity \(M\) cannot be simply interpreted as the mass of the BH. As pointed out in Ref. [41], the different geometric definitions of mass, namely the Komar, ADM and Misner-Sharp masses, need to be taken into account. These quantities are given by

\[M_{\rm K}=M\sqrt{1-\frac{r_{0}}{r}}, \tag{2a}\]
\[M_{\rm ADM}=M+\frac{r_{0}}{2}, \tag{2b}\]
\[M_{\rm MS}=M+\frac{r_{0}}{2}-\frac{Mr_{0}}{r}, \tag{2c}\]

where \(M_{\rm K}\), \(M_{\rm ADM}\) and \(M_{\rm MS}\) are the Komar, ADM and Misner-Sharp masses, respectively. The Komar and Misner-Sharp masses do not need to coincide, since the quantum corrected spacetime is not a solution of Einstein's equations [46]. However, in the limit that \(r\) goes to infinity, for spherically symmetric and asymptotically flat spacetimes, the ADM and Misner-Sharp masses must be equal [47], which is indeed the case. The BH parameters \(M\) and \(r_{0}\) can be redefined in a geometrically invariant way, according to

\[M=\lim_{r\to\infty}M_{\rm K}, \tag{3a}\]
\[r_{0}=2\lim_{r\to\infty}(M_{\rm MS}-M_{\rm K}). \tag{3b}\]

In FIG. 1 we display the Penrose diagram of the spacetime's maximal extension [41]. Region I stands for the asymptotically flat region in which \(r\in(r_{h},\infty)\). This patch has the usual conformal infinities, namely the timelike infinities \(i^{-}\) and \(i^{+}\), the null infinities \(\mathcal{J}^{-}\) and \(\mathcal{J}^{+}\), and the spatial infinity \(i^{0}\). Region II stands for the BH region and corresponds to \(r\in(r_{0},r_{h})\). The remaining regions III and IV, which cannot be covered by the coordinate system \((t,r,\theta,\varphi)\), are the white hole region and another asymptotically flat region, respectively. The blank bottom and upper regions with dashed contours are copies of the middle structure [41]. This spacetime has the global structure of a wormhole with minimal surface area \(4\pi r_{0}^{2}\) (see FIG. 1). The radius \(r_{0}\) defines the minimal spacelike hypersurface separating the trapped BH interior from the anti-trapped white hole region. Therefore, the effective quantum Schwarzschild spacetime, differently from the Schwarzschild BH, is regular everywhere, as can be verified by computing the curvature scalar of this spacetime.

## III Scalar perturbations

The dynamics of a massless scalar field \(\Phi\) is determined by the Klein-Gordon equation

\[\nabla_{\mu}\nabla^{\mu}\Phi=\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi\right)=0, \tag{4}\]

where \(g\) is the metric determinant and \(g^{\mu\nu}\) are the contravariant components of the metric tensor.
Due to the spherical and time translation symmetries, the scalar field admits the product decomposition

\[\Phi(x^{\mu})=\frac{\psi_{\omega l}(r)}{r}Y_{lm}(\theta,\varphi)e^{-i\omega t}, \tag{5}\]

where \(Y_{lm}(\theta,\varphi)\) are the spherical harmonics. Inserting the metric components given in Eq. (1), as well as the field decomposition given in Eq. (5), into Eq. (4), we obtain a Schrodinger-like equation for the radial part, given by

\[\frac{d^{2}\psi_{\omega l}}{dr_{*}^{2}}+\left(\omega^{2}-V_{l,r_{0}}[r(r_{*})]\right)\psi_{\omega l}=0, \tag{6}\]

where the effective potential \(V_{l,r_{0}}(r)\) is defined by

\[V_{l,r_{0}}(r)=f(r)\left(\frac{l(l+1)}{r^{2}}+\frac{4M+r_{0}}{2r^{3}}-\frac{3Mr_{0}}{r^{4}}\right), \tag{7}\]

and \(r_{*}\) is the tortoise coordinate:

\[dr_{*}=\frac{dr}{f(r)\sqrt{1-r_{0}/r}}. \tag{8}\]

The effective potential is illustrated in FIG. 2, where we see that the maximum value of the potential decreases as we increase the LQG parameter \(r_{0}\). To calculate the scalar QNMs of the loop quantum corrected Schwarzschild spacetime we have to solve Eq. (6), imposing the boundary conditions

\[\psi_{\omega l}(r_{*})\approx\begin{cases}e^{-i\omega r_{*}}\approx(r-2M)^{-\frac{2i\omega M}{\sqrt{1-r_{0}/(2M)}}},&r_{*}\rightarrow-\infty,\\ e^{+i\omega r_{*}}\approx e^{i\omega r}r^{2i\omega M+\frac{i\omega r_{0}}{2}},&r_{*}\rightarrow+\infty.\end{cases} \tag{9}\]

Eqs. (6) and (9) define an eigenvalue problem for \(\psi_{\omega l}\) with eigenvalue \(\omega\) in the domain \(r\in(2M,\infty)\). We expect the spectrum to be a countably infinite set \(\{\omega_{n}\,|\,n=0,1,...\}\), where \(n\) enumerates the eigenfrequencies in order of increasing imaginary part magnitude, the so-called overtones. In general, an expression for the spectrum cannot be written in closed analytical form, not even for Schwarzschild. Thus, it is common to implement approximate and numerical methods to treat the problem of QNMs. In the next section, we implement the third order WKB, continued fraction and Prony methods to compute the scalar eigenfrequencies of the loop quantum corrected Schwarzschild spacetime.

## IV Scalar QNM calculations

### WKB approximation

The first method we implement for the calculation of scalar QNMs is the third order WKB approximation. The WKB method is a semianalytic technique, first applied to BH scattering problems by Schutz and Will [48] and then improved by Iyer and Will [49]. For any barrier-type potential whose extremities are fixed (which is our case, see FIG. 2), this method can be applied and yields an analytic formula that approximates the QNM frequencies.

Figure 1: Penrose diagram representing the global structure of the spacetime proposed in Refs. [40; 41], which corresponds to a wormhole solution. The hypersurface \(r=r_{0}\) represents a transition surface between the BH and the white hole regions.

Figure 2: Effective potential given in Eq. (7) as a function of the tortoise coordinate, for various values of \(r_{0}\) and \(l=0\).
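For concreteness, the effective potential of Eq. (7) and the location and height of its barrier (the inputs of the WKB formula below) can be evaluated numerically; the following is a minimal sketch of ours (Python, in units where \(2M=r_{h}=1\)), not the authors' code:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

M = 0.5  # units with r_h = 2M = 1

def V(r, l, r0):
    # Effective potential of Eq. (7)
    f = 1.0 - 2.0 * M / r
    return f * (l * (l + 1) / r**2 + (4 * M + r0) / (2 * r**3) - 3 * M * r0 / r**4)

def rstar(r, r0, r_ref=3.0):
    # Tortoise coordinate of Eq. (8), up to an additive constant fixed at r_ref
    integrand = lambda x: 1.0 / ((1.0 - 2.0 * M / x) * np.sqrt(1.0 - r0 / x))
    return quad(integrand, r_ref, r)[0]

# Barrier peak for l = 0, r0/r_h = 0.3 (cf. FIG. 2)
res = minimize_scalar(lambda r: -V(r, 0, 0.3), bounds=(1.01, 10.0), method="bounded")
print(f"peak at r = {res.x:.4f} r_h, V0 = {-res.fun:.4f} r_h^-2")
```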
The third order WKB approximation is given by [49]:

\[\omega_{l,r_{0},n}^{2}\approx V_{0}+\sqrt{-2V_{0}^{\prime\prime}}\,\Lambda-i\left(n+\frac{1}{2}\right)\sqrt{-2V_{0}^{\prime\prime}}\,(1+\Omega), \tag{10}\]

with

\[\Lambda=\frac{1}{\sqrt{-2V_{0}^{\prime\prime}}}\left[\frac{1}{8}\left(\frac{V_{0}^{(4)}}{V_{0}^{\prime\prime}}\right)\left(\frac{1}{4}+\kappa^{2}\right)-\frac{1}{288}\left(\frac{V_{0}^{\prime\prime\prime}}{V_{0}^{\prime\prime}}\right)^{2}\left(7+60\kappa^{2}\right)\right], \tag{11a}\]

\[\begin{split}\Omega=\frac{1}{\left(-2V_{0}^{\prime\prime}\right)}&\left[\frac{5}{6912}\left(\frac{V_{0}^{\prime\prime\prime}}{V_{0}^{\prime\prime}}\right)^{4}\left(77+188\kappa^{2}\right)\right.\\ &-\frac{1}{384}\left(\frac{(V_{0}^{\prime\prime\prime})^{2}V_{0}^{(4)}}{(V_{0}^{\prime\prime})^{3}}\right)\left(51+100\kappa^{2}\right)\\ &+\frac{1}{2304}\left(\frac{V_{0}^{(4)}}{V_{0}^{\prime\prime}}\right)^{2}\left(67+68\kappa^{2}\right)\\ &+\frac{1}{288}\left(\frac{V_{0}^{\prime\prime\prime}V_{0}^{(5)}}{(V_{0}^{\prime\prime})^{2}}\right)\left(19+28\kappa^{2}\right)\\ &\left.-\frac{1}{288}\left(\frac{V_{0}^{(6)}}{V_{0}^{\prime\prime}}\right)\left(5+4\kappa^{2}\right)\right],\end{split} \tag{11b}\]

where \(V_{0}\) is the maximum value of \(V_{l,r_{0}}\), \(\kappa=n+1/2\), the primes correspond to first, second and third order derivatives, and the superscript in round brackets, \((i)\), denotes derivatives of fourth and higher order with respect to the tortoise coordinate. In this work, the WKB approximation is used mainly as a consistency check for the continued fraction method.

### Continued fraction method

One of the most accurate methods to calculate QNMs was introduced in BH physics by Leaver [50] and is called the continued fraction method. This method is based on finding an analytical solution of the radial equation as a power series satisfying the QNM boundary conditions. Thus, the QNM spectrum is determined by those values of the frequencies which make the series convergent on the entire domain. In order to apply the power series method, we first need to investigate the regular/irregular singular points of the ordinary differential equation (6) (see Ref. [51] for further details). The singularities of Eq. (6) are \(\{0,r_{0},2M,\infty\}\), where the singularity at infinity is irregular and all the others are regular. A power series solution around some singularity has a convergence radius which cannot be greater than the distance to the nearest neighboring singular point. Since the domain of the QNM eigenvalue problem is \((2M,\infty)\), we cannot find a well defined solution on the entire domain using a power series in \(r\). Therefore, we consider the map

\[r\mapsto\frac{r-2M}{r-r_{0}}. \tag{12}\]

Let \((0,r_{0},2M,\infty)\) be the ordered 4-tuple formed by the singularities of Eq. (6). This 4-tuple, according to Eq. (12), is mapped to \((2M/r_{0},\infty,0,1)\) (see FIG. 3). Moreover, the domain \((2M,\infty)\) is compactified into \((0,1)\). The singular point \(2M/r_{0}\) is always greater than \(1\), since \(0<r_{0}<2M\). Hence, in the new coordinate defined by Eq. (12), we can find a well defined analytical solution of Eq. (6) in the domain \((0,1)\), which corresponds to the entire domain of interest. We, therefore, may consider the solution of Eq.
(6) to be

\[\psi_{\omega l}=r(r-r_{0})^{2i\omega M+\frac{i\omega r_{0}}{2}-1}e^{i\omega r}\sum_{n=0}^{\infty}a_{n}\left(\frac{r-2M}{r-r_{0}}\right)^{\zeta+n}, \tag{13}\]

where

\[\zeta=\frac{-2i\omega M}{\sqrt{1-\frac{r_{0}}{2M}}} \tag{14}\]

is the characteristic exponent, obtained from the indicial equation [51], corresponding to the ingoing solution at the horizon. The functions multiplying the summation are chosen to satisfy the boundary conditions at infinity, as well as to simplify the recurrence relation. The sequence \((a_{n})_{n\in\mathbb{N}}\) is determined by a four-term recurrence relation defined by

\[\alpha_{0}a_{1}+\beta_{0}a_{0}=0, \tag{15a}\]
\[\alpha_{1}a_{2}+\beta_{1}a_{1}+\gamma_{1}a_{0}=0, \tag{15b}\]
\[\alpha_{n}a_{n+1}+\beta_{n}a_{n}+\gamma_{n}a_{n-1}+\delta_{n}a_{n-2}=0,\quad n=2,3,..., \tag{15c}\]

where the recurrence coefficients are given by

\[\alpha_{n}=-32i\sqrt{2}M^{5/2}(n+1)\omega(2M-r_{0})+8M(n+1)^{2}(2M-r_{0})^{3/2}, \tag{16a}\]

\[\begin{split}\beta_{n}=&\,64M^{4}\omega^{2}(2M-r_{0})^{1/2}+8\sqrt{2}M^{5/2}\omega(12M\omega+12in+5i)(2M-r_{0})\\ &+4M\left(-2l(l+1)+24M^{2}\omega^{2}+6iM(2n+1)\omega-n(6n+5)-2\right)(2M-r_{0})^{3/2}\\ &+4\sqrt{2}M^{3/2}\omega(4M\omega-4in-i)(2M-r_{0})^{2}+2(2n+1)(n+2iM\omega)(2M-r_{0})^{5/2},\end{split} \tag{16b}\]

\[\begin{split}\gamma_{n}=&-128M^{4}\omega^{2}(2M-r_{0})^{1/2}-16\sqrt{2}M^{5/2}\omega(12M\omega+6in-i)(2M-r_{0})\\ &+4M\left(2l(l+1)+n(-2-24iM\omega)+M\omega(-34M\omega+3i)+6n^{2}+1\right)(2M-r_{0})^{3/2}\\ &+8\sqrt{2}M^{3/2}\omega(6M\omega+4in-i)(2M-r_{0})^{2}\\ &+\left(-4l(l+1)+72M^{2}\omega^{2}+4iM(6n-1)\omega-8n^{2}+4n-2\right)(2M-r_{0})^{5/2}\\ &+8\sqrt{2}M^{3/2}\omega^{2}(2M-r_{0})^{3}+\omega(-2M\omega+4in-i)(2M-r_{0})^{7/2},\end{split} \tag{16c}\]

\[\begin{split}\delta_{n}=&\,64M^{4}\omega^{2}(2M-r_{0})^{1/2}+8\sqrt{2}M^{5/2}\omega(12M\omega+4in-3i)(2M-r_{0})\\ &+4M\left(3n(1+4iM\omega)+M\omega(10M\omega-9i)-2n^{2}-1\right)(2M-r_{0})^{3/2}\\ &-4\sqrt{2}M^{3/2}\omega(16M\omega+4in-3i)(2M-r_{0})^{2}\\ &+\left(n(-6-32iM\omega)+12M\omega(-5M\omega+2i)+4n^{2}+2\right)(2M-r_{0})^{5/2}\\ &+8\sqrt{2}M^{3/2}\omega^{2}(2M-r_{0})^{3}+\omega(14M\omega+4in-3i)(2M-r_{0})^{7/2}-\omega^{2}(2M-r_{0})^{9/2}.\end{split} \tag{16d}\]

Figure 3: Schematic representation of the map defined by Eq. (12). We assign the values taken by the singularities, \((0,r_{0},2M,\infty)\mapsto(2M/r_{0},\infty,0,1)\), and show how the domain \((2M,\infty)\) is mapped to \((0,1)\).

This recurrence relation is in agreement with Leaver's hypothesis, which says that a radial equation with one confluent singularity and three regular singularities generates a solution whose expansion coefficients obey a 4-term recurrence relation [52]. To calculate the QNMs from a 4-term recurrence relation, we first have to apply the Gaussian elimination scheme, defined by

\[\tilde{\alpha}_{n}\equiv\alpha_{n},\quad\tilde{\beta}_{n}\equiv\beta_{n},\quad\tilde{\gamma}_{n}\equiv\gamma_{n},\quad\text{for }n=0,1, \tag{17}\]

and

\[\tilde{\delta}_{n}\equiv 0,\quad\tilde{\alpha}_{n}\equiv\alpha_{n}, \tag{18a}\]
\[\tilde{\beta}_{n}\equiv\beta_{n}-\frac{\tilde{\alpha}_{n-1}\delta_{n}}{\tilde{\gamma}_{n-1}},\quad\tilde{\gamma}_{n}\equiv\gamma_{n}-\frac{\tilde{\beta}_{n-1}\delta_{n}}{\tilde{\gamma}_{n-1}},\quad\text{for }n\geq 2. \tag{18b}\]

The new recurrence coefficients obey a 3-term recurrence relation, given by

\[\tilde{\alpha}_{0}a_{1}+\tilde{\beta}_{0}a_{0}=0, \tag{19a}\]
\[\tilde{\alpha}_{n}a_{n+1}+\tilde{\beta}_{n}a_{n}+\tilde{\gamma}_{n}a_{n-1}=0,\quad n=1,2,.... \tag{19b}\]

The condition that the series defined in Eq.
(13) converges uniformly is given by [50]

\[0=\tilde{\beta}_{0}-\frac{\tilde{\alpha}_{0}\tilde{\gamma}_{1}}{\tilde{\beta}_{1}-\frac{\tilde{\alpha}_{1}\tilde{\gamma}_{2}}{\tilde{\beta}_{2}-\frac{\tilde{\alpha}_{2}\tilde{\gamma}_{3}}{\tilde{\beta}_{3}-\ldots}}}\equiv\tilde{\beta}_{0}-\frac{\tilde{\alpha}_{0}\tilde{\gamma}_{1}}{\tilde{\beta}_{1}-}\,\frac{\tilde{\alpha}_{1}\tilde{\gamma}_{2}}{\tilde{\beta}_{2}-}\,\frac{\tilde{\alpha}_{2}\tilde{\gamma}_{3}}{\tilde{\beta}_{3}-}\cdots. \tag{20}\]

Thus, the frequencies that make Eq. (20) true are precisely the QNM frequencies. The roots of Eq. (20) can be found numerically. The most stable root of the continued fraction defined in Eq. (20) is the fundamental mode. The \(n\)-th inversion of Eq. (20) is defined by

\[\tilde{\beta}_{n}-\frac{\tilde{\alpha}_{n-1}\tilde{\gamma}_{n}}{\tilde{\beta}_{n-1}-}\cdots-\frac{\tilde{\alpha}_{0}\tilde{\gamma}_{1}}{\tilde{\beta}_{0}}=\frac{\tilde{\alpha}_{n}\tilde{\gamma}_{n+1}}{\tilde{\beta}_{n+1}-}\,\frac{\tilde{\alpha}_{n+1}\tilde{\gamma}_{n+2}}{\tilde{\beta}_{n+2}-}\cdots, \tag{21}\]

and its most stable root is the \(n\)-th eigenfrequency.

### Prony method

We can also solve Eq. (4) without assuming the time dependence \(e^{-i\omega t}\) of Eq. (5). This leads to the partial differential equation (PDE)

\[\frac{\partial^{2}\Psi_{l}}{\partial r_{*}^{2}}-\frac{\partial^{2}\Psi_{l}}{\partial t^{2}}-V_{l,r_{0}}[r(r_{*})]\Psi_{l}=0, \tag{22}\]

where now \(\Psi_{l}\) is a function of the variables \((t,r_{*})\). We may solve Eq. (22) numerically, setting a Gaussian wave packet centered at \(r_{*}=0\) as our initial configuration for the field. The time evolution of the solution is characterized by three stages: _(i)_ a prompt response at early times, which is strongly determined by the chosen initial conditions of the field; _(ii)_ exponential decay at intermediate times, determined by the QNMs; and _(iii)_ power-law fall-off at late times, due to backscattering of the field in the tail of the potential. Once Eq. (22) is solved for some initial data configuration, one is able, by means of estimation methods, to construct an analytic approximation that fits the original solution. Here we use the Prony method to find an approximate Fourier decomposition, which allows us to calculate the fundamental mode. A detailed description of the Prony method can be found in Ref. [53].

## V Results

In this section we exhibit a selection of our results, obtained from the methods described in the previous sections. As a consistency check that the continued fraction method leads to the correct values of the eigenfrequencies, we first compare the QNMs calculated with Leaver's method and the WKB approximation. We compute the QNMs \(n=0,1,2\) for different values of the multipole number, \(l=0,1,2\). The results are exhibited in the top panel of FIG. 4. We define a color code for each value of \(n\), namely: green \(\leftrightarrow n=0\), blue \(\leftrightarrow n=1\) and red \(\leftrightarrow n=2\). The (green, blue and red) circles represent the QNMs calculated with the WKB method, while the solid lines were obtained by the continued fraction method. Both WKB and continued fraction calculations were performed for \(r_{0}/r_{h}\) ranging from 0 (Schwarzschild) to 0.99. The yellow circles, located at the top of each continuous line, represent the quasinormal frequencies of the Schwarzschild BH calculated using the WKB approximation.

Figure 4: _Top_: First three (\(n=0,1,2\)) QNM frequencies of the scalar field on the quantum corrected Schwarzschild spacetime.
The green plots correspond to \(n=0\), blue to \(n=1\) and red to \(n=2\). Circles represent the WKB calculations, for several values of the LQG parameter \(r_{0}\), beginning at \(r_{0}/r_{h}=0\) (yellow circles representing the Schwarzschild case) and ending at \(r_{0}/r_{h}=0.99\). The solid line shows the continued fraction calculation in the same range of parameters. _Bottom_: Computation of QNM frequencies for \(n=1\), \(l=0\) (bottom left panel) and \(n=2\), \(l=0\) (bottom right panel) for \(r_{0}/r_{h}\) ranging from 0 to 0.99, obtained using the continued fraction method with the Nollert improvement [55]. All the frequencies become less damped as \(r_{0}/r_{h}\) increases.

FIG. 4 shows that, as we increase the value of \(l\), the results obtained from the WKB and continued fraction methods converge to the same value, as expected. Nonetheless, even for \(l=0\), the WKB and continued fraction results are in very good agreement. From the continued fraction computations we also note that the curves in the complex plane, parametrized by \(r_{0}/r_{h}\), for \(n>0\) and \(l=0\), have a spiral-like shape. We display the curves for \(n=1\), \(l=0\) and \(n=2\), \(l=0\) in the left and right bottom panels of FIG. 4, respectively. We note that, in order to obtain the results shown in the bottom panel of FIG. 4, we applied the continued fraction method with the improvement proposed by Nollert [55]. The Nollert improvement is suitable for computing QNM frequencies with large imaginary part, hence it gives accurate numerical results when the LQG parameter is close to the extremal value. While the LQG parameter varies in the indicated range, the trajectory described in the complex plane moves away from the Schwarzschild QNMs and spirals towards some fixed complex value, which corresponds to the QNMs associated with the extremal case. A similar behavior was also found for the Reissner-Nordstrom BH [54]. We also note that these curves are self-intersecting. The existence of self-intersecting curves in the orbits of the QNM frequencies is related to the fact that, for different values of the LQG parameter \(r_{0}\), the BH may present the same frequency for some given \(n\).

### \(l=0\) Modes

We can fix \(l=0\) and compute the first 30 modes for several values of \(r_{0}/r_{h}\). The results, calculated with the continued fraction method, are displayed in FIG. 5. The small deviations from the Schwarzschild regime (\(r_{0}/r_{h}=0,\ 0.05,\ 0.1\)) are displayed in the top left panel of FIG. 5, where we obtain the famous Schwarzschild scalar spectrum, formed by two non-intersecting branches of QNM frequencies, with slight disturbances. There is a decrease in the damping, in accordance with FIG. 4. Nonetheless, the QNM frequencies for higher values of \(r_{0}/r_{h}\) are completely different from the Schwarzschild case. In the remaining panels of FIG. 5 we exhibit the spectrum near \(r_{0}/r_{h}=0.2,\ 0.3,\ 0.4,\ 0.5\). As the LQG parameter varies, the real part of the frequencies oscillates. We obtained frequencies with \(\text{Re}(\omega)=0\), e.g. the mode \(n=4\) for \(r_{0}/r_{h}=0.48\). However, the continued fraction method does not converge for \(\text{Re}(\omega)\to 0\), hence the existence of purely damped modes cannot be definitively established. We remark that frequencies with real part almost equal to zero can be found in Schwarzschild's QNM spectrum for the gravitational field.
### \(l=1\) Modes

We may now fix \(l=1\) and compute the first 30 modes for the same values of the LQG parameter of Sec. V.1. The QNMs were calculated using the continued fraction method and are displayed in FIG. 6. Once more, small holonomy corrections (\(r_{0}/r_{h}=0,\ 0.05,\ 0.1\)) lead to small disturbances of the Schwarzschild spectrum, with an overall decrease in the imaginary part of the QNM frequencies. The QNMs for higher values of the LQG parameter, namely near \(r_{0}/r_{h}=0.2,\ 0.3,\ 0.4,\ 0.5\), are displayed in the remaining panels of FIG. 6. As we increase the value of \(r_{0}/r_{h}\), the oscillatory pattern again appears in the spectrum of the quantum corrected BH. The first five overtones are also exhibited in Table 2.

The pattern exhibited in both FIGs. 5 and 6 may go on forever as \(n\) increases. However, such an analysis requires an asymptotic study of the QNMs that is beyond the scope of this paper. If this assertion is true, then the limit \(\lim_{n\to\infty}\text{Re}\,\omega\) does not exist, which would differ from the Schwarzschild case, for which it is known that \(2M\lim_{n\to\infty}\text{Re}\,\omega=\ln 3/4\pi\) [56].

\begin{table} \begin{tabular}{|c||c|c|} \hline \multicolumn{3}{|c|}{\(l=0\) (Leaver)} \\ \hline \(n\) & Schwarzschild & \(r_{0}/r_{h}=0.3\) \\ \hline \(0\) & \(0.2209-0.2097i\) & \(0.2099-0.1828i\) \\ \hline \(1\) & \(0.1722-0.6961i\) & \(0.1705-0.5996i\) \\ \hline \(2\) & \(0.1514-1.2021i\) & \(0.1518-1.0329i\) \\ \hline \(3\) & \(0.1408-1.7073i\) & \(0.1403-1.4657i\) \\ \hline \(4\) & \(0.1341-2.2112i\) & \(0.1307-1.8974i\) \\ \hline \end{tabular} \end{table} Table 1: First five overtones of scalar perturbations, expressed in \(r_{h}^{-1}\) units, calculated by the continued fraction method for \(l=0\). We consider the Schwarzschild BH, as well as the holonomy corrected Schwarzschild BH with \(r_{0}/r_{h}=0.3\).

\begin{table} \begin{tabular}{|c||c|c|} \hline \multicolumn{3}{|c|}{\(l=1\) (Leaver)} \\ \hline \(n\) & Schwarzschild & \(r_{0}/r_{h}=0.3\) \\ \hline \(0\) & \(0.5858-0.1953i\) & \(0.5824-0.1738i\) \\ \hline \(1\) & \(0.5288-0.6125i\) & \(0.5416-0.5395i\) \\ \hline \(2\) & \(0.4590-1.0802i\) & \(0.4892-0.9406i\) \\ \hline \(3\) & \(0.4065-1.5766i\) & \(0.4473-1.3640i\) \\ \hline \(4\) & \(0.3702-2.0815i\) & \(0.44172-1.7952i\) \\ \hline \end{tabular} \end{table} Table 2: First five overtones of scalar perturbations, expressed in \(r_{h}^{-1}\) units, calculated by the continued fraction method for \(l=1\). We considered the Schwarzschild BH, as well as the holonomy corrected Schwarzschild BH with \(r_{0}/r_{h}=0.3\).

Figure 5: First 30 QNM frequencies (\(n=0,1,...,29\)) of the scalar field on the quantum corrected Schwarzschild spacetime for \(l=0\). The top left panel shows the modes for \(r_{0}/r_{h}=0,\ 0.05,\ 0.1\), whereas the remaining panels exhibit the spectrum of the quantum corrected BH for values of the LQG parameter near \(r_{0}/r_{h}=0.2,\ 0.3,\ 0.4,\ 0.5\). All the spectra were calculated with the continued fraction method.

Figure 6: First 30 QNM frequencies (\(n=0,1,...,29\)) of the scalar field on the quantum corrected Schwarzschild spacetime for \(l=1\). We display in the top left panel the modes \(r_{0}/r_{h}=0,\ 0.05,\ 0.1\), whereas in the remaining panels we show the spectrum for higher values of the LQG parameter, namely near \(r_{0}/r_{h}=0.2,\ 0.3,\ 0.4,\ 0.5\). All spectra were calculated with the continued fraction method.

### Time domain profile

We may solve the time-dependent wave equation, Eq. (22), numerically; for that, we need to specify an initial condition. We consider the initial data as a Gaussian, according to \[\Psi_{l}(0,r_{*})=e^{-r_{*}^{2}/4};\qquad\partial_{t}\Psi_{l}(t,r_{*})|_{t=0}=0. \tag{23}\]
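A minimal sketch of a numerical scheme for Eq. (22) with the Gaussian data of Eq. (23) is given below, using a second-order leapfrog discretization in \((t,r_{*})\). The effective potential `V` is left as a user-supplied callable, and the grid parameters are illustrative; the paper does not specify its discretization, so this is an assumption of ours:

```python
import numpy as np

def evolve(V, r_max=200.0, dr=0.1, t_max=150.0, r_obs=10.0):
    """Leapfrog integration of Eq. (22), psi_tt = psi_rr - V(r_*) psi,
    starting from the Gaussian initial data of Eq. (23)."""
    r = np.arange(-r_max, r_max, dr)
    dt = 0.5 * dr                               # satisfies the CFL condition
    psi_old = np.exp(-r**2 / 4.0)               # Psi(0, r_*)
    lap = np.zeros_like(psi_old)
    lap[1:-1] = (psi_old[2:] - 2 * psi_old[1:-1] + psi_old[:-2]) / dr**2
    # First step from dPsi/dt(0) = 0, via a second-order Taylor expansion
    psi = psi_old + 0.5 * dt**2 * (lap - V(r) * psi_old)
    obs = [psi_old[np.argmin(np.abs(r - r_obs))]]
    for _ in range(int(t_max / dt)):
        lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dr**2
        psi_new = 2 * psi - psi_old + dt**2 * (lap - V(r) * psi)
        psi_new[0] = psi_new[-1] = 0.0          # fixed boundaries, placed far away
        psi_old, psi = psi, psi_new
        obs.append(psi[np.argmin(np.abs(r - r_obs))])
    return dt, np.array(obs)                    # feed |obs| to the Prony fit above
```

The time series recorded at the observer location can then be passed directly to a Prony fit such as the one sketched in Sec. IV to extract the fundamental mode.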
The chosen initial data does not play a significant role in the time profile of the wave function from intermediate times onwards. After a transient initial stage (highly dependent on the initial conditions), the time profile is dominated by the QNMs and then by the late-time tail decay.

In FIG. 7 we exhibit the logarithmic plot of the absolute value of the solution as a function of time. The tortoise coordinate is fixed at \(r_{*}/r_{h}=10\). We consider the cases \(l=0\) (left panel) and \(l=1\) (right panel). The logarithmic wave-form is calculated for the Schwarzschild case (darker blue) and for the holonomy corrected Schwarzschild case with \(r_{0}/r_{h}=0.3\) (lighter blue). We also include the respective Prony fittings (stars for Schwarzschild and inverted triangles for the quantum corrected Schwarzschild). The three phases described in Sec. IV.3 are clearly distinguished in FIG. 7. We highlight that the power-law tail developed at late times seems to be independent of the loop quantum correction. The fundamental modes obtained from the Prony method for \(l=0,1\) are shown in Table 3. These results can be compared with those of Tables 1 and 2. As we can see, both results are in excellent agreement.

## VI Discussion and conclusions

Computing QNMs of BHs is a long-standing task in BH physics. The first calculation of QNMs as a boundary problem was carried out in a paper by Chandrasekhar and Detweiler [15]. Later on, Leaver developed a simple but very powerful approach to this problem [50; 52]. Since then, the calculation of QNMs of different types of astrophysical objects has been carried out. Most of the main BH spacetimes already have their QNMs cataloged in several tables with great precision.

Simultaneously with the progress of BH perturbation theory, the search for a quantum theory of gravity has been strongly active. Among several possibilities, the theory of LQG has produced many interesting results, namely, the construction of singularity-free cosmological and BH solutions [31; 32] and the derivation of the Hawking-Bekenstein entropy [33]. These results might be the smoking guns of a complete and consistent theory of quantum gravity. Nevertheless, there is still a lot of work to be done until we can interpret all LQG results properly. Thus, many effective models have been studied, aiming to capture effects that one would expect to observe in the complete LQG theory.

We investigated the scalar QNMs of a quantum corrected Schwarzschild BH. We used standard methods of BH perturbation theory, namely, the third order WKB approximation, the continued fraction method (also known as Leaver's method) and the Prony method.
In order to perform a consistency check, we compared the numerical results computed through the three different methods and obtained excellent agreement within the regime of applicability of each method. We computed the QNMs for different values of the multipole number \(l\) and the overtone number \(n\). In particular, we obtained the first 30 overtones for \(l=0\) and for \(l=1\) using Leaver's method.

Our numerical results show that, for fixed \(l\) and \(n\), the quantum corrected Schwarzschild BH perturbations become less damped as we increase the LQG parameter \(r_{0}\). Moreover, for \(l=0\) and \(n>0\), the QNM frequency curves in the complex plane are self-intersecting, meaning that two different quantum corrected Schwarzschild BH configurations may have the same QNMs.

Figure 7: Plots of \(\ln|\Psi_{l}(t,r_{*}=10)|\) for \(l=0\) (left panel) and \(l=1\) (right panel). In both plots, we include the Schwarzschild case (darker blue), the quantum corrected Schwarzschild case with \(r_{0}/r_{h}=0.3\) (lighter blue) and also their associated fittings obtained from the Prony method (stars and inverted triangles, respectively).

\begin{table} \begin{tabular}{|c||c|c|} \hline \multicolumn{3}{|c|}{\(l=0\) (Prony)} \\ \hline \(n\) & Schwarzschild & \(r_{0}/r_{h}=0.3\) \\ \hline \(0\) & \(0.2209-0.2098i\) & \(0.2098-0.1828i\) \\ \hline \hline \multicolumn{3}{|c|}{\(l=1\) (Prony)} \\ \hline \(n\) & Schwarzschild & \(r_{0}/r_{h}=0.3\) \\ \hline \(0\) & \(0.5858-0.1953i\) & \(0.5823-0.1737i\) \\ \hline \end{tabular} \end{table} Table 3: Fundamental frequency of scalar perturbations, expressed in \((2M)^{-1}\) units, calculated by the Prony method for \(l=0,1\). We considered the Schwarzschild BH, as well as the holonomy corrected Schwarzschild BH with \(r_{0}/r_{h}=0.3\).

Furthermore, we obtained that for middle-to-high values of \(r_{0}/r_{h}\), the scalar QNMs of the quantum corrected Schwarzschild BH may have vanishing real part, i.e. the spectrum may admit purely decaying modes. We remark that purely decaying modes in a classical Schwarzschild BH exist solely for gravitational perturbations [50; 57].

The recent detection of GWs has deepened our understanding of the classical nature of gravity. It is possible that future generations of GW detectors, such as the LISA detector, will be able to probe the quantum nature of the gravitational field. In this work, we obtained that the QNM oscillations of a quantum-corrected BH can be very different from the Schwarzschild ones. Our results indicate that the story of the quantum nature of gravity may be heard in the sounds played by a BH.

**Note added:** As this paper was being completed, we became aware of Ref. [58], which also covers, albeit using different techniques, the scalar QNMs of the spacetime (1), but in a different range of the parameter space and for a more restrictive set of overtone numbers. In both cases, our analysis required specific techniques, such as Leaver's method with the Nollert improvement.

###### Acknowledgements.
We are grateful to Fundação Amazônia de Amparo a Estudos e Pesquisas (FAPESPA), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Finance Code 001, from Brazil, for partial financial support. ZM, HLJ and LC thank the University of Aveiro, in Portugal, for the kind hospitality during the completion of this work.
This work is supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020. The authors acknowledge support from the projects CERN/FIS-PAR/0027/2019, PTDC/FIS-AST/3041/2020, CERN/FIS-PAR/0024/2021 and 2022.04560.PTDC. This work has further been supported by the European Union's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 Grant No. FunFiCO-777740 and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 Grant No. NewFunFiCO-101086251.
2309.16247
PP-MeT: a Real-world Personalized Prompt based Meeting Transcription System
Speaker-attributed automatic speech recognition (SA-ASR) improves the accuracy and applicability of multi-speaker ASR systems in real-world scenarios by assigning speaker labels to transcribed texts. However, SA-ASR poses unique challenges due to factors such as speaker overlap, speaker variability, background noise, and reverberation. In this study, we propose the PP-MeT system, a real-world personalized prompt based meeting transcription system, which consists of a clustering system, target-speaker voice activity detection (TS-VAD), and TS-ASR. Specifically, we utilize target-speaker embeddings as prompts in the TS-VAD and TS-ASR modules of our proposed system. In contrast with previous systems, we fully leverage pre-trained models for system initialization, thereby bestowing our approach with heightened generalizability and precision. Experiments on the M2MeT2.0 Challenge dataset show that our system achieves a cp-CER of 11.27% on the test set, ranking first in both fixed and open training conditions.
Xiang Lyu, Yuhang Cao, Qing Wang, Jingjing Yin, Yuguang Yang, Pengpeng Zou, Yanni Hu, Heng Lu
2023-09-28T08:40:48Z
http://arxiv.org/abs/2309.16247v1
# PP-MeT: A Real-World Personalized Prompt Based Meeting Transcription System

###### Abstract

Speaker-attributed automatic speech recognition (SA-ASR) improves the accuracy and applicability of multi-speaker ASR systems in real-world scenarios by assigning speaker labels to transcribed texts. However, SA-ASR poses unique challenges due to factors such as speaker overlap, speaker variability, background noise, and reverberation. In this study, we propose the PP-MeT system, a real-world personalized prompt based meeting transcription system, which consists of a clustering system, target-speaker voice activity detection (TS-VAD), and TS-ASR. Specifically, we utilize target-speaker embeddings as prompts in the TS-VAD and TS-ASR modules of our proposed system. In contrast with previous systems, we fully leverage pre-trained models for system initialization, thereby bestowing our approach with heightened generalizability and precision. Experiments on the M2MeT2.0 Challenge dataset show that our system achieves a cp-CER of 11.27% on the test set, ranking first in both fixed and open training conditions.

Xiang Lyu, Yuhang Cao, Qing Wang, Jingjing Yin, Yuguang Yang, Pengpeng Zou, Yanni Hu, Heng Lu Ximalaya Inc., Shanghai, China

SA-ASR, TS-VAD, TS-ASR, personalized prompt, M2MeT2.0 Challenge

## 1 Introduction

The rapid advancements in deep learning have led to remarkable strides in automatic speech recognition (ASR), substantially enhancing its overall performance. Despite these achievements, ASR systems continue to face challenges in real-world far-field scenarios, such as meetings or home parties, where background noise, unavoidable reverberation, and overlapping speech from multiple speakers can significantly degrade their performance. In order to develop a robust ASR system in such challenging acoustic environments, numerous research studies have concentrated on multi-channel multi-party speech recognition and diarization within dinner party scenarios [1, 2]. The objective of the M2MeT2.0 challenge [3, 4] is to address the ASR task in multi-party meetings, which involves providing precise transcriptions and identifying the corresponding speakers. To advance the practical application of current multi-speaker speech recognition systems, the M2MeT2.0 Challenge evaluates the task of speaker-attributed ASR (SA-ASR). Additionally, the challenge includes two sub-tracks: a fixed training condition track and an open training condition track.

Speaker-attributed ASR (SA-ASR) poses several challenges due to the complexity of accurately attributing speech to specific speakers. The SA-ASR task improves the accuracy and applicability of multi-speaker ASR systems in real-world scenarios by assigning speaker labels to transcribed texts. Unlike traditional ASR systems, which transcribe speech without considering speaker identities, SA-ASR goes a step further by associating each recognized word or phrase with the corresponding speaker. SA-ASR faces unique challenges due to factors like speaker overlap, speaker variability, background noise, and reverberation. Overcoming these challenges involves developing advanced algorithms and techniques for speaker diarization, speech separation, and speaker recognition to accurately attribute spoken words to their respective speakers. The development of SA-ASR systems has the potential to improve the performance and usability of speech recognition in scenarios where multiple speakers are present, enabling applications that require speaker-specific information and analysis.
In this study, we present the PP-MeT system, a personalized-prompt based meeting transcription system designed to address the ASR task in multi-party meetings. Our approach comprises three essential components: a clustering system, target-speaker voice activity detection (TS-VAD), and target-speaker ASR (TS-ASR). To enhance the system's performance and applicability, we integrate target-speaker embeddings as prompts within the TS-VAD and TS-ASR modules. Leveraging pre-trained models during system initialization further empowers our approach, granting it superior generalizability and precision. In experiments conducted on the M2MeT2.0 dataset, our integrated PP-MeT system achieves a concatenated minimum-permutation character error rate (cp-CER) of only 11.27% on the test set, achieving the top position in both fixed and open training conditions. We also release our inference system with pre-trained models on our website1. Footnote 1: [https://github.com/XimalayaEverestIntelligentLab/M2MET2.0](https://github.com/XimalayaEverestIntelligentLab/M2MET2.0)

The rest of this paper is organized as follows. In Section 2, we detail the architecture of the PP-MeT system. Datasets and experimental setup are described in Section 3. Section 4 presents the experimental results on the M2MeT2.0 Challenge test set and our ablation study. Finally, we conclude in Section 5.

## 2 Proposed System Description

The overview of our proposed PP-MeT system for the M2MeT2.0 Challenge is shown in Figure 1.

Figure 1: The overview of our proposed PP-MeT system.

### Speaker Embedding System

As M2MeT2.0 encourages participants to use pre-trained models, we use two pre-trained models2 from the Wespeaker toolkit [5, 6]. One is ResNet34 from the CN-Celeb example, and the other is ResNet34-LM, which is obtained by further training ResNet34 with a large-margin technique. We also train a ResNet34 model with the Speechbrain toolkit3 to introduce diversity into our speaker embedding models. We refer to these three speaker embedding models as SV-1/2/3 and to the corresponding personalized prompts as Prompt-1/2/3 for simplicity. Footnote 2: [https://github.com/wenet-e2e/wespeaker/blob/master/docs/pretrained.md](https://github.com/wenet-e2e/wespeaker/blob/master/docs/pretrained.md) Footnote 3: [https://github.com/speechbrain/speechbrain/tree/develop](https://github.com/speechbrain/speechbrain/tree/develop)

### Clustering System

Before proceeding to the TS-VAD and TS-ASR systems, we need to estimate the number of speakers and initialize the personalized prompts using a clustering algorithm. First, we extract voiced speech segments based on the VAD results for each session. Then we split each segment into subsegments using a fixed 3s window size and 1.5s window shift. After that, we use the speaker embedding model to extract an embedding for each subsegment. Finally, we feed the L2-normalized embeddings into the clustering algorithm and obtain the number of speakers for each session, as well as the label for each subsegment. We use the DOVER-Lap toolkit [7]4 to merge clustering results from different channels and speaker embedding models. We compare Auto-tuning Spectral Clustering with Normalized Eigen Gap (NME-SC) [8] with the Agglomerative Hierarchical Clustering (AHC) algorithm. As NME-SC outperforms AHC by a large margin, we use the NME-SC results to initialize the personalized prompts. Footnote 4: [https://github.com/desh2608/dover-lap](https://github.com/desh2608/dover-lap)

After obtaining the clustering result, for each speaker, we extract speech that contains only the targeted speaker as personalized speech. Then we repeat the speaker embedding extraction steps over the personalized speech and use the mean-pooled L2-normalized speaker embedding as the personalized prompt.
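As a schematic illustration of this prompt-initialization step, the sketch below computes personalized prompts from clustered subsegment embeddings. The `embed` callable stands in for SV-1/2/3, and for simplicity the cluster-assigned subsegments approximate the personalized speech; all names here are ours, not from the released code:

```python
import numpy as np

def subsegment(wav, sr, win=3.0, hop=1.5):
    """Split a voiced segment into fixed 3 s windows with a 1.5 s shift."""
    w, h = int(win * sr), int(hop * sr)
    return [wav[i:i + w] for i in range(0, max(len(wav) - w, 1), h)]

def personalized_prompts(segments, labels, embed):
    """Mean-pooled, L2-normalized prompt per clustered speaker.

    segments : list of waveforms (the subsegments above)
    labels   : cluster label per subsegment (e.g. from NME-SC)
    embed    : callable mapping a waveform to a 256-dim embedding
    """
    E = np.stack([embed(s) for s in segments])
    E /= np.linalg.norm(E, axis=1, keepdims=True)      # L2-normalize
    prompts = {}
    for spk in np.unique(labels):
        p = E[np.asarray(labels) == spk].mean(axis=0)  # mean-pool
        prompts[spk] = p / np.linalg.norm(p)
    return prompts
```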
### TS-VAD System

As the clustering system cannot handle overlapping speech, it results in a high miss error in multi-party meeting scenarios. To further reduce the DER, we use the TS-VAD system to give a more accurate estimate of the speaker labels. We use a ResNet34 model as the backbone of our TS-VAD system, the same as that of the speaker embedding model. First, we extract the pooling layer input as frame-level speaker embeddings. Then we perform stats-pooling with a 3-second stride to extract frame-level mean and std features, and concatenate them with the original frame-level speaker embeddings. We apply mean-pooling and attention-pooling to the frame-level speaker embeddings and the personalized prompts, respectively. After that, we use a conformer decoder layer to explore the relationship between the frame-level speaker embeddings and the personalized prompts: the frame-level speaker embedding features are fed as the conformer decoder input, and each personalized prompt as the decoder memory. Finally, we concatenate the conformer decoder layer outputs and use a BiLSTM layer to explore the relationship among the speakers. The BiLSTM output is fed into a fully-connected (FC) layer with a sigmoid activation function to generate the final TS-VAD probabilities [9, 10]. The detailed TS-VAD model structure is shown in Figure 2.

Figure 2: TS-VAD model structure

### TS-ASR System

Far-field ASR poses a greater challenge than ASR of speech captured by a close-proximity microphone due to the degraded quality of the signal. To address this, we apply speech enhancement, with two key components. First, we employ a dereverberation method based on weighted prediction error (WPE) [11] to mitigate the effects of late reverberation; in the challenge, we utilize an accelerated GPU version of WPE with the following parameters: taps=12, delay=2, iterations=3. Second, to further attenuate late reverberation and minimize noise interference, the weighted delay-and-sum acoustic beamforming (BeamformIt) method [12] is employed.

As M2MeT2.0 requires participants to give a transcription for each speaker, we upgrade the traditional ASR model into a TS-ASR system with a personalized prompt module, which enables it to yield different transcriptions given different personalized prompts [13, 14]. We feed the personalized prompt into an FC layer and take the Hadamard product with the output of the first layer of both the ASR encoder and decoder. As our TS-ASR model makes little modification to the traditional ASR model, we can easily adapt a pre-trained ASR model into a TS-ASR model. We use the Unified-Conformer [15] model pretrained on WenetSpeech5 from [16] as the TS-ASR model backbone. The detailed TS-ASR model structure is shown in Figure 3. Footnote 5: [https://github.com/wenet-e2e/wenet/blob/main/docs/pretrained_models.md](https://github.com/wenet-e2e/wenet/blob/main/docs/pretrained_models.md)

Figure 3: TS-ASR model structure
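To make the prompt-conditioning mechanism concrete, here is a minimal PyTorch sketch of the module described above, i.e., an FC projection of the 256-dim prompt followed by a Hadamard product with a layer's hidden output. Module and dimension names are our own illustrative choices; the released implementation may differ:

```python
import torch.nn as nn

class PromptConditioning(nn.Module):
    """Project a speaker prompt and gate hidden states with it (Hadamard)."""
    def __init__(self, prompt_dim=256, hidden_dim=512):
        super().__init__()
        self.proj = nn.Linear(prompt_dim, hidden_dim)

    def forward(self, hidden, prompt):
        # hidden: (batch, time, hidden_dim) -- output of the first encoder
        # (or decoder) layer; prompt: (batch, prompt_dim)
        gate = self.proj(prompt).unsqueeze(1)   # (batch, 1, hidden_dim)
        return hidden * gate                    # element-wise product

# Usage: h = first_encoder_layer(x); h = PromptConditioning()(h, prompt)
```

Because the conditioning is a single projection plus an element-wise product, the rest of a pre-trained ASR model can be reused unchanged, which is what makes the fast adaptation described above possible.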
## 3 Experimental Setup

### Datasets

The original M2MeT1.0 dataset [3] contains 118.75 hours of speech data in total. The dataset is divided into 104.75 hours for training, 4 hours for development (denoted as Dev 1.0), and 10 hours as a test set (denoted as Test 1.0) for scoring and ranking in the M2MeT1.0 Challenge. Test 1.0 is used as the development set in the M2MeT2.0 Challenge. M2MeT2.0 uses a new 10-hour dataset (denoted as Test 2.0) as the test set. AISHELL-4 [17] is a real-recorded Mandarin speech dataset collected with an 8-channel circular microphone array for speech processing in conference scenarios. This dataset consists of 211 recorded meeting sessions, each containing 4 to 8 speakers, with a total length of 120 hours, aiming to bridge advanced research on multi-speaker processing and practical application scenarios. CN-Celeb [18] is a large-scale speaker recognition dataset collected 'in the wild'. This dataset contains more than 130,000 utterances from 1,000 Chinese celebrities, and covers 11 different genres in the real world. Both the M2MeT and AISHELL-4 datasets are far-field multi-channel datasets, while the CN-Celeb dataset is a near-field dataset.

Figure 4 shows the data preparation. By oracle VAD, the non-overlap speech of each speaker is obtained from both near-field and far-field data. Then the personalized prompt is extracted. The M2MeT dataset is processed according to the given prior information into continuous voiced speech. All far-field multi-channel datasets are pre-processed to generate enhanced data by WPE and BF. The original far-field 8-channel data and the enhanced data compose the speech of each speaker, which is used in the subsequent training process.

Figure 4: Data preparation before training.

The data flow of each training process is shown in Figure 5. The near-field data is processed into 3-second uniform segments and used in speaker embedding training. In TS-VAD model training, the continuous voiced speech and non-overlap speech with online augmentation are processed into 16-second uniform segments, and the target-speaker embedding is used as a prompt. Moreover, the speech segment of each speaker and the personalized prompt are used in TS-ASR model training.

Figure 5: Data flow in each training process.

### System Setup

For all systems, we use 80-dimension log-mel filter bank (Fbank) features as input. The Fbank features are extracted using a 25ms window length and 10ms window shift.

#### 3.2.1 Speaker embedding system

We use CN-Celeb data [18] to train our speaker embedding models and split each utterance into 3s uniform-length segments. When iterating over all segments, we introduce diversity by randomly offsetting the start frame of the segments from -1.5s to 1.5s. All three speaker embedding models are trained using the AAM-softmax loss [19] and generate 256-dimension speaker embeddings as output. We use a cyclical learning rate policy to dynamically adjust the learning rate over 16 epochs.

#### 3.2.2 TS-VAD system

We use M2MeT2.0 training data and AISHELL-4 data for training. For each session, first, we extract and combine all voiced speech as our real training data. Then, for each speaker, we extract and combine speech that contains only the target speaker as personalized speech. Finally, we initialize Prompt-1/2/3 using the personalized speech. If the number of speakers is less than 4, we pad Prompt-1/2/3 with zero vectors. During training, we split the real training data into 16s segments and iterate over each segment. We also perform an online data simulation by choosing personalized speech from random speakers to fill up the voiced regions of the real data [20]. It is important that the randomly chosen speakers come from the same session; otherwise, the model may learn the background-noise signature of each session rather than the essential differences between speakers.
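A minimal sketch of this online overlap simulation might look as follows. It is a simplification under our own assumptions (uniform sampling of same-session speakers and simple additive mixing) rather than the exact recipe of [20]:

```python
import random
import numpy as np

def simulate_overlap(real_seg, session_clips, active_spk, max_extra=2):
    """Fill the voiced region of a real training segment with personalized
    speech from other speakers of the *same* session, so the model learns
    speaker identity rather than per-session background noise.

    session_clips: {speaker: [personalized waveforms]} for one session."""
    mix = real_seg.copy()
    labels = {active_spk: np.ones(len(mix), dtype=bool)}
    others = [s for s in session_clips if s != active_spk]
    for spk in random.sample(others, k=min(max_extra, len(others))):
        clip = random.choice(session_clips[spk])[: len(mix)]
        start = random.randrange(0, len(mix) - len(clip) + 1)
        mix[start:start + len(clip)] += clip       # additive overlap
        lab = np.zeros(len(mix), dtype=bool)
        lab[start:start + len(clip)] = True
        labels[spk] = lab                          # frame-level TS-VAD targets
    return mix, labels
```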
We train three TS-VAD models based on SV-1/2/3 and Prompt-1/2/3. For all TS-VAD models, we use 2 layers, 256-dimension input, 512-dimension hidden size, and 8 heads for the conformer decoder setup. We use 2 layers, 1024-dimension input, and 512 hidden dimensions for the BiLSTM setup. TS-VAD training consists of three key stages. In stage 1, we copy the pre-trained speaker embedding parameters into the TS-VAD model, freeze the backbone and train the model using real and simulated data until convergence with a 1e-3 learning rate. In stage 2, we train the whole model using real and simulated data until convergence with a 1e-4 learning rate. In stage 3, we finetune the whole model using only real data with a 1e-5 learning rate. We choose the model with the lowest DER on Test 1.0 for decoding. During TS-VAD decoding, we initialize Prompt-1/2/3 from the clustering system. We can iterate the TS-VAD system by re-initializing Prompt-1/2/3 using the TS-VAD system outputs.

#### 3.2.3 TS-ASR system

We use the WeNet toolkit and its pre-trained Unified-Conformer model on WenetSpeech as the backbone. Since the M2MeT2.0 and AISHELL-4 training data comprise multiple channels, we feed the model, on the one hand, raw mean-pooled data and, on the other hand, enhanced single-channel data. Additionally, we incorporate speed augmentation during training. It is important to note that when the audio speed is altered, the personalized prompt should be re-extracted from the corresponding speed-perturbed audio. We also train three TS-ASR systems based on Prompt-1/2/3. For all TS-ASR models, we use a 12-layer conformer encoder with 512-dimension output, 2048-dimension linear units, and 8 attention heads. We use a 3-layer bi-transformer decoder with 2048-dimension linear units and 8 attention heads. For the personalized prompt module, we feed the 256-dimension personalized prompt into an FC layer, project it into a 512-dimension vector and take a Hadamard product with the first-layer output of both the encoder and decoder. TS-ASR training also consists of three key stages. In stage 1, we freeze the Unified-Conformer backbone and only train the personalized prompt module using raw and enhanced data. In stage 2, we train the whole model with a 1e-4 learning rate. In stage 3, we finetune the whole model with a 1e-5 learning rate using enhanced data.

## 4 Experimental Results

### Results on M2MeT2.0 Challenge

The M2MeT2.0 challenge uses the concatenated minimum-permutation character error rate (cp-CER) as the evaluation metric. It computes the minimum CER over all speaker permutations, which requires the system to give both the correct transcription and the correct speaker label. The calculation of cp-CER is divided into three steps. First, recognition results and reference transcriptions belonging to the same speaker are concatenated on the timeline within a session. Second, the character error rate (CER) of all permutations of speakers is calculated. Finally, the lowest CER is selected as the cp-CER.

Table 1 presents the cp-CER results of the official baseline and each competition system. Our system achieves 15.05%, 16.84%, and 11.27% cp-CER on Dev 1.0, Test 1.0, and Test 2.0, respectively. Note that the cp-CER on Dev 1.0 and Test 1.0 is achieved using the oracle diarization result. We can observe that our PP-MeT model gives better results than the official baseline, achieving up to a 30.28% absolute cp-CER improvement due to the enhanced dataset and advanced model architectures, and achieving first place in the challenge.

\begin{table} \begin{tabular}{c c c c} \hline \hline System & Dev 1.0 & Test 1.0 & Test 2.0 \\ \hline **PP-MeT (Rank 1st)** & **15.05** & **16.84** & **11.27** \\ Rank 2nd Team & – & – & 18.64 \\ Rank 3rd Team & – & – & 22.83 \\ Rank 4th Team & – & – & 23.51 \\ Rank 5th Team & – & – & 24.82 \\ Official Baseline & 47.4 & 52.57 & 41.55 \\ \hline \hline \end{tabular} \end{table} Table 1: The cp-CER (%)\(\downarrow\) results of each competition system on the M2MeT Dev 1.0, Test 1.0, and Test 2.0.
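The three-step cp-CER computation can be summarized with the following sketch. The `cer` helper is a generic edit-distance implementation of ours, and the brute-force search over permutations is a simplification that is feasible for the handful of speakers per session considered here (real scoring tools also charge unmatched hypothesis speakers):

```python
from itertools import permutations

def cer(hyp, ref):
    """Character error rate via the standard edit distance."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(ref) + 1)]
         for i in range(len(hyp) + 1)]
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1]))
    return d[-1][-1] / max(len(ref), 1)

def cp_cer(hyp_spk, ref_spk):
    """hyp_spk/ref_spk: {speaker: [utterances in time order]} for one session.
    Concatenate per speaker, then take the minimum CER over permutations."""
    refs = ["".join(u) for u in ref_spk.values()]
    hyps = ["".join(u) for u in hyp_spk.values()]
    hyps += [""] * max(len(refs) - len(hyps), 0)   # pad if speaker counts differ
    total = sum(map(len, refs))
    return min(sum(cer(h, r) * len(r) for h, r in zip(perm, refs)) / total
               for perm in permutations(hyps, len(refs)))
```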
### Ablation Study

We conduct a detailed ablation study to better understand each system's contribution to the final cp-CER, and the significance of pre-trained models.

#### 4.2.1 Clustering System

As the clustering system provides the estimate of the number of speakers and the rough speaker labels, its performance determines the upper limit of the whole PP-MeT system. In Table 2, we study the impact of different speaker embedding models and clustering algorithms in the clustering system. SV-1/2/3 achieve 7.13%, 6.49%, and 7.06% EER on the CN-Celeb dev trials, respectively. The threshold for AHC clustering is tuned on Dev 1.0. Results show that, for each model and channel, NME-SC outperforms AHC significantly. DOVER-Lap makes the clustering result more stable by leveraging clustering results from different channels and models. As the accuracy of the speaker embedding model directly affects the quality of the clustering, the DER correlates clearly with speaker embedding performance. The lowest DER is achieved by SV-2, which also achieves the lowest EER on the CN-Celeb trials.

#### 4.2.2 TS-VAD system

In Table 3, we study the impact of the pre-trained speaker embedding model and different model architectures on the TS-VAD system. Results show that the pre-trained model contributes heavily to the performance of the TS-VAD system. If the TS-VAD backbone parameters are randomly initialized, it only achieves 13.28% DER on Test 1.0, which is only slightly better than the clustering system. Also, the TS-VAD backbone should match that of the personalized prompt: if we initialize the TS-VAD backbone parameters using a pre-trained ECAPA-TDNN speaker embedding model and train with Prompt-1, it achieves 7.68% DER, which is much worse than its counterpart using a matched speaker embedding model and prompt. The above results demonstrate the importance of pre-trained models in the TS-VAD system; using a matched speaker embedding model for initialization and as the personalized prompt makes it easier to explore the relationship between the frame-level speaker embeddings and the personalized prompts. We can also observe that the DER drops moderately if we iterate the TS-VAD system by refining Prompt-1/2/3 using the TS-VAD system output.

#### 4.2.3 TS-ASR System

In Table 4, we study the impact of pre-trained models and personalized prompts on the TS-ASR system. First, we try to finetune the pre-trained Unified-Conformer ASR model directly, without any structural modification. Results show that the pre-trained ASR model achieves 32.63% and 35.89% cp-CER on Dev 1.0 and Test 1.0. After finetuning the model on the M2MeT2.0 and AISHELL-4 data, the cp-CER drops to 22.55% and 26.43%, respectively. However, this cp-CER improvement is largely due to the model's performance on non-overlapping speech; the cp-CER fails to decrease further because the traditional ASR model cannot handle overlapping speech. Then, we try to train the TS-ASR model from scratch with Prompt-1. However, the TS-ASR model with a Unified-Conformer backbone fails to converge. This demonstrates the necessity of a pre-trained ASR backbone in our TS-ASR system.
Finally, we train three TS-ASR models based on Prompt-1/2/3. The cp-CER of the TS-ASR model with a pre-trained ASR backbone and Prompt-1/2/3 drops dramatically on both Dev 1.0 and Test 1.0. The results show that the pre-trained ASR model with Prompt-2 achieves the lowest cp-CER, which means that the performance of the pre-trained speaker embedding model also affects the performance of TS-ASR on overlapping speech. We also try to finetune the TS-ASR model further using LF-MMI with the k2 toolkit6, and to introduce LM information by decoding with an HLG graph. However, the cp-CER fails to drop on both Dev 1.0 and Test 1.0. This is because, in the multi-party meeting scenario, the transcription content across sessions is largely unrelated, so external LM information cannot help decrease the cp-CER. Footnote 6: [https://github.com/k2-fsa/k2](https://github.com/k2-fsa/k2)

In Table 4, the _Test 1.0_ cp-CER is calculated using segments and prompts from the TS-VAD system. The gap between the cp-CER of _Test 1.0_ and Test 1.0 reflects the degradation introduced by the TS-VAD system, which is approximately 2%. We obtain the final results by fusing the results of each system with the SCTK rover toolkit7. Footnote 7: [https://github.com/usnistgov/SCTK](https://github.com/usnistgov/SCTK)

## 5 Conclusion

In this paper, we present our PP-MeT system for the Multi-channel Multi-party Meeting Transcription Challenge 2.0 (M2MeT2.0) to address the ASR task in a multi-party meeting scenario. Compared with conventional systems, we incorporate target-speaker embeddings as personalized prompts in both the TS-VAD and TS-ASR stages. Moreover, to further enhance the system's robustness and reduce the training cost, pre-trained models are used in our system's initialization, enabling fast adaptation across all modules. Experimental results show that the proposed system outperforms conventional systems by a large margin. In future work, we plan to explore the potential of expanding personalized prompts along the time axis. Additionally, we aim to enhance the TS-ASR model by jointly training the speaker embedding module with the ASR backbone, further improving its performance.
2309.08079
MPCGPU: Real-Time Nonlinear Model Predictive Control through Preconditioned Conjugate Gradient on the GPU
Nonlinear Model Predictive Control (NMPC) is a state-of-the-art approach for locomotion and manipulation which leverages trajectory optimization at each control step. While the performance of this approach is computationally bounded, implementations of direct trajectory optimization that use iterative methods to solve the underlying moderately-large and sparse linear systems, are a natural fit for parallel hardware acceleration. In this work, we introduce MPCGPU, a GPU-accelerated, real-time NMPC solver that leverages an accelerated preconditioned conjugate gradient (PCG) linear system solver at its core. We show that MPCGPU increases the scalability and real-time performance of NMPC, solving larger problems, at faster rates. In particular, for tracking tasks using the Kuka IIWA manipulator, MPCGPU is able to scale to kilohertz control rates with trajectories as long as 512 knot points. This is driven by a custom PCG solver which outperforms state-of-the-art, CPU-based, linear system solvers by at least 10x for a majority of solves and 3.6x on average.
Emre Adabag, Miloni Atal, William Gerard, Brian Plancher
2023-09-15T00:39:56Z
http://arxiv.org/abs/2309.08079v2
# MPCGPU: Real-Time Nonlinear Model Predictive Control through Preconditioned Conjugate Gradient on the GPU

###### Abstract

Nonlinear Model Predictive Control (NMPC) is a state-of-the-art approach for locomotion and manipulation which leverages trajectory optimization at each control step. While the performance of this approach is computationally bounded, implementations of direct trajectory optimization that use iterative methods to solve the underlying moderately-large and sparse linear systems are a natural fit for parallel hardware acceleration. In this work, we introduce MPCGPU, a GPU-accelerated, real-time NMPC solver that leverages an accelerated preconditioned conjugate gradient (PCG) linear system solver at its core. We show that MPCGPU increases the scalability and real-time performance of NMPC, solving larger problems, at faster rates. In particular, for tracking tasks using the Kuka IIWA manipulator, MPCGPU is able to scale to kilohertz control rates with trajectories as long as 512 knot points. This is driven by a custom PCG solver which outperforms state-of-the-art, CPU-based, linear system solvers by at least 10x for a majority of solves and 3.6x on average.

## I Introduction

Nonlinear Model Predictive Control (NMPC) is a feedback control strategy which repeatedly solves finite horizon optimal control problems (OCPs) in real time, enabling robots to adapt to changes in their environment. This approach has seen great recent success in applications to both locomotion and manipulation [1, 2, 3, 4, 5].

Most implementations of NMPC leverage trajectory optimization [6] to solve the underlying optimal control problems. Two popular classes of these algorithms are shooting methods and direct methods. Shooting methods parameterize only the input trajectory and use Bellman's optimality principle [7] to iteratively solve a sequence of smaller optimization problems [8, 9]. Direct methods explicitly represent the states, controls, dynamics, and any additional constraints, leading to moderately-large nonlinear programs with structured sparsity patterns [10].

There has been historical interest in parallel strategies [11] for solving trajectory optimization problems. This is growing increasingly important with the impending end of Moore's Law and the end of Dennard Scaling, which have led to a utilization wall that limits the performance a single CPU chip can deliver [12, 13]. Several more recent efforts have shown that significant computational benefits are possible by exploiting the natural parallelism in the computation of the (gradients of the) dynamics and cost functions on GPUs and FPGAs [14, 15, 16, 17, 18, 19, 20, 21]. However, multiple-shooting and consensus approaches to computing trajectory updates at each algorithmic iteration [22, 23, 24, 25] have only seen modest gains when implemented on alternative hardware platforms [26, 27].

On the other hand, direct methods naturally expose more parallelism that can be exploited through hardware acceleration. Importantly, these approaches are computationally dominated by the solutions of moderately-large and sparse linear systems. Iterative methods, like the Preconditioned Conjugate Gradient (PCG) algorithm [28], are particularly well-suited for parallel solutions of linear systems, as they are computationally dominated by matrix-vector products and vector reductions [29, 30], and have shown past success in outperforming state-of-the-art CPU implementations for solving very-large linear systems on GPUs [31, 32].
In this work, we introduce MPCGPU, a GPU-accelerated, real-time NMPC solver that exploits the structured sparsity and the natural parallelism in direct trajectory optimization (see Figure 1). At our solver's core is a custom, accelerated implementation of PCG tuned for the Schur complement of the KKT systems of trajectory optimization problems. We show that MPCGPU increases the scalability and real-time performance of NMPC, solving larger problems, at faster rates. In particular, for tracking tasks using the Kuka IIWA manipulator, MPCGPU is able to scale to kilohertz rates with trajectories as long as 512 knot points. This is driven by a custom, GPU-accelerated, PCG solver which outperforms state-of-the-art, CPU-based, linear system solvers by at least 10x for a majority of solves and 3.6x on average. We release our software and experiments open source at: [https://github.com/a2r-lab/MPCGPU](https://github.com/a2r-lab/MPCGPU).

Fig. 1: At each control step, MPCGPU uses a parallel construction of the Schur complement of the KKT system, a parallel-friendly PCG solver, and a parallel line search to provide real-time performance through GPU acceleration.

## II Related Work

There has been a significant amount of prior work developing general purpose sparse linear system solvers on the GPU, both using factorization-based approaches [33, 34, 35, 36, 37, 38, 39, 40, 41, 42], as well as iterative methods [29, 32, 43, 44, 45, 46, 47, 48]. There has also been work developing and implementing Block-Cyclic-Reduction and other tree-structured methods that are optimized for block-tridiagonal systems [49, 50, 51, 52]. These general purpose approaches have found speedups through GPU usage, but only once the problem size grows to more than tens if not hundreds of thousands of variables, or for instances of (linear) power-flow problems [32, 39, 40, 41, 42, 53]. As such, these general purpose solvers are not performant for most trajectory optimization problems (e.g., our examples in Section V have 448 to 7,168 variables).

For the nonlinear trajectory optimization problem, evolutionary, particle-swarm, Monte-Carlo, and other sampling based approaches have been implemented on GPUs [54, 55, 56, 57, 58, 59, 60, 61, 62]. Most prior work on gradient-based parallel nonlinear trajectory optimization has been fully confined to the CPU [63, 24, 22, 23, 25, 64], relied on the CPU for many of the computations [65, 66], focused only on the problem of optimizing BLAS functions on the GPU [67, 68], or was limited to GPU acceleration of the naturally parallel (gradients of the) dynamics and cost functions [14, 16, 18, 19]. There are two existing lines of work which fully implemented gradient-based nonlinear trajectory optimization on the GPU. The first leveraged shooting based methods and found them to not expose much natural parallelism, limiting their performance [26, 27]. The second used a Block-Cyclic-Reduction-based direct method to exploit the particular structure exposed by position-based dynamics [15]. This work adds to the literature by designing a GPU-accelerated, gradient-based, direct trajectory optimization solver for standard reduced-coordinate dynamics [69] leveraging a custom parallel PCG solver at its core.
## III Background

### _Direct Trajectory Optimization_

Trajectory optimization [6], also known as numerical optimal control, solves an (often) nonlinear optimization problem to compute a robot's optimal path through an environment as a series of states (x \(\in\mathbb{R}^{n}\)) and controls (u \(\in\mathbb{R}^{m}\)). These problems assume a discrete-time dynamical system, \[x_{k+1}=f(x_{k},u_{k},h),\quad x_{0}=x_{s}, \tag{1}\] with a timestep \(h\), and minimize an additive cost function, \[J(X,U)=\ell_{f}(x_{N})+\sum_{k=0}^{N-1}\ell(x_{k},u_{k}). \tag{2}\]

Direct methods for trajectory optimization form a moderately-large and sparse nonlinear program. While there are a variety of algorithmic approaches used to solve these problems, most methods can be reduced to a three-step process which is repeated until convergence [10, 70, 71].

_Step 1:_ Form a quadratic approximation around a nominal trajectory (\(x,u\)), resulting in the quadratic program (QP): \[\min_{\begin{subarray}{c}\delta x_{0:N}\\ \delta u_{0:N-1}\end{subarray}}\frac{1}{2}\delta x_{N}^{T}Q_{N}\delta x_{N}+q_{N}^{T}\delta x_{N}+\sum_{k=0}^{N-1}\frac{1}{2}\delta x_{k}^{T}Q\delta x_{k}+q^{T}\delta x_{k}+\frac{1}{2}\delta u_{k}^{T}R\delta u_{k}+r^{T}\delta u_{k} \tag{3}\] \[\text{s.t.}\quad\delta x_{k+1}-A_{k}\delta x_{k}-B_{k}\delta u_{k}=0\quad\forall k\in[0,N),\qquad\delta x_{0}=x_{s}-x_{0}\]

_Step 2:_ Compute \(\delta X^{*},\delta U^{*}\) by solving the KKT system: \[\begin{bmatrix}G&C^{T}\\ C&0\end{bmatrix}\begin{bmatrix}-\delta Z\\ \lambda\end{bmatrix}=\begin{bmatrix}g\\ c\end{bmatrix} \tag{4}\] where: \[\delta z_{k}=\begin{bmatrix}\delta x_{k}&\delta u_{k}\end{bmatrix}^{T}\qquad\delta z_{N}=\delta x_{N}\] \[G=\begin{bmatrix}Q_{0}&&&\\ &R_{0}&&\\ &&\ddots&\\ &&&Q_{N}\end{bmatrix}\qquad g=\begin{bmatrix}q_{0}&r_{0}&q_{1}&r_{1}&\ldots&q_{N}\end{bmatrix}^{T}\] \[C=\begin{bmatrix}I&&&&\\ -A_{0}&-B_{0}&I&&\\ &&\ddots&&\\ &&-A_{N-1}&-B_{N-1}&I\end{bmatrix}\] \[e_{k}=x_{k+1}-f(x_{k},u_{k})\qquad c=\begin{bmatrix}x_{s}-x_{0}&e_{0}&e_{1}&\ldots&e_{N-1}\end{bmatrix}^{T}\]

_Step 3:_ Apply the update step, \(\delta X^{*},\delta U^{*}\), while ensuring descent on the original nonlinear problem through the use of a merit-function and a trust-region or line-search.

### _The Schur Complement Method_

As the KKT system (Equation 4) is a saddle point system, one approach is to solve it through a two step process by forming the symmetric positive definite _Schur Complement_, \(S\), which can be used to solve for \(\lambda^{*}\) and then \(\delta z^{*}\): \[\begin{split} S&=-CG^{-1}C^{T}\qquad\gamma=c-CG^{-1}g\\ S\lambda^{*}&=\gamma\qquad\delta z^{*}=G^{-1}(g-C^{T}\lambda^{*})\end{split} \tag{5}\] By defining the variables \(\theta\), \(\phi\), and \(\zeta\), \[\begin{split}\theta_{k}&=A_{k}Q_{k}^{-1}A_{k}^{T}+B_{k}R_{k}^{-1}B_{k}^{T}+Q_{k+1}^{-1}\\ \phi_{k}&=-A_{k}Q_{k}^{-1}\\ \zeta_{k}&=-A_{k}Q_{k}^{-1}q_{k}-B_{k}R_{k}^{-1}r_{k}+Q_{k+1}^{-1}q_{k+1},\end{split} \tag{6}\] \(S\) and \(\gamma\) take the form: \[\begin{split} S&=\begin{bmatrix}Q_{0}^{-1}&\phi_{0}^{T}\\ \phi_{0}&\theta_{0}&\phi_{1}^{T}\\ &\ddots&\phi_{N-2}&\theta_{N-2}&\phi_{N-1}^{T}\\ &&\phi_{N-1}&\theta_{N-1}\end{bmatrix}\\ \gamma&=c+\begin{bmatrix}Q_{0}^{-1}q_{0}&\zeta_{0}&\zeta_{1}&\ldots&\zeta_{N-1}\end{bmatrix}^{T}\end{split} \tag{7}\]

### _Iterative Methods_

Iterative methods solve the problem \(S\lambda^{*}=\gamma\) for a given \(S\) and \(\gamma\) by iteratively refining an estimate for \(\lambda\) up to some tolerance \(\epsilon\). The most popular of these methods is the conjugate gradient (CG) algorithm, which has been used for state-of-the-art results on large-scale optimization problems on the GPU [53, 72]. The convergence rate of CG is directly related to the spread of the eigenvalues of \(S\). Thus, a preconditioning matrix \(\Phi\approx S\) is often applied to instead solve the equivalent problem with better numerical properties: \(\Phi^{-1}S\lambda^{*}=\Phi^{-1}\gamma\). To do so, the preconditioned conjugate gradient (PCG) algorithm leverages matrix-vector products with \(S\) and \(\Phi^{-1}\), as well as vector reductions, both parallel friendly operations (see Algorithm 1).
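For reference, a minimal dense-algebra version of PCG might look as follows in Python; the GPU implementation in Section IV follows the same recurrence, just blocked and parallelized. This sketch is ours, for exposition only:

```python
import numpy as np

def pcg(S, gamma, Phi_inv, lam, eps=1e-8, max_iter=100):
    """Solve S lam = gamma with preconditioner Phi_inv ~= S^{-1},
    warm-started from the supplied lam."""
    r = gamma - S @ lam
    r_tilde = Phi_inv @ r
    p = r_tilde.copy()
    eta = r @ r_tilde
    for _ in range(max_iter):
        Sp = S @ p                      # dominant cost: matrix-vector product
        alpha = eta / (p @ Sp)
        lam = lam + alpha * p
        r = r - alpha * Sp
        r_tilde = Phi_inv @ r
        eta_new = r @ r_tilde
        if eta_new < eps:               # preconditioned residual test
            break
        p = r_tilde + (eta_new / eta) * p
        eta = eta_new
    return lam
```

Note that every step is either a matrix-vector product or a dot-product reduction, which is precisely what makes the algorithm amenable to the blocked GPU implementation described next.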
### _Graphics Processing Units (GPUs)_

Compared to a multi-core CPU, a GPU is a larger set of simpler processors, optimized for parallel execution of identical instructions. GPUs are best at computing regular and separable computations over large data sets, with limited synchronization (e.g., large matrix multiplication) [73]. Our work uses NVIDIA's CUDA extensions to C++ [74].

## IV The MPCGPU Solver

In this section we describe the design of the MPCGPU solver, which exploits the sparsity and natural parallelism found in direct trajectory optimization algorithms and iterative linear system solvers. To further promote efficient GPU acceleration, unlike generic approaches, which require a kernel launch and CPU-GPU synchronization for each matrix operation, MPCGPU uses only three kernels that are asynchronously queued, resulting in only a single CPU-GPU synchronization. We also only transfer the initial and final values between the CPU and GPU to reduce I/O overheads.

As shown in Figure 2, our approach can be broken down into a three-step process. At each control step we first compute each block row of the Schur complement system, \(S\) and \(\gamma\), as well as our preconditioner, \(\Phi^{-1}\), in parallel by taking advantage of the structured sparsity of those matrices. Next we use our custom GPU-optimized, warm-started, PCG solver, GBD-PCG (Algorithm 2), to compute the optimal Lagrange multipliers, \(\lambda^{*}\), and reconstruct the optimal trajectory update, \(\delta X^{*},\delta U^{*}\). Finally we leverage a parallel line search to compute the final trajectory \(X,U\), which we send to the (simulated) robot for execution, and simultaneously measure the current state of the (simulated) robot to begin our next control step. In the remainder of this section we provide further details on our approach. Our open-source implementation can be found at: [https://github.com/a2r-lab/MPCGPU](https://github.com/a2r-lab/MPCGPU).

### _Parallel Computation of \(S,\gamma\), and \(\Phi^{-1}\)_

To efficiently compute \(S,\gamma\), and \(\Phi^{-1}\) on the GPU, we need to find a naturally parallel approach to form the values, as well as an efficient data storage format that minimizes overheads. We also need to find an effective preconditioner that is parallel-friendly in its computation.

We first leverage the block-tridiagonal structure of the Schur complement, \(S\), as shown in Equation 7, which is for the most part independent across timesteps, \(k\), for each block-row. This pattern also extends to each block-row of the \(\gamma\) vector. To further remove the need for synchronizations, for each \(k\), we also compute the only cross-timestep quantities, \(Q_{k+1}\) and \(q_{k+1}\). While this results in those terms being computed twice, it still proves to be more efficient than forcing a synchronization point between all block-rows. To ensure efficient computation of the underlying dynamics and kinematic quantities, we leverage the GRiD library, which was shown to outperform state-of-the-art CPU libraries even when taking into account I/O overheads [18]. We further parallelize across and within the many small matrix inversions and matrix multiplications within each parallel block-row computation. Leveraging best practices [16, 18], we also group together the various types of mathematical operations, storing intermediate values in shared memory, and re-ordering computations where needed.
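As a plain-numpy illustration of the per-block-row computation (Eqs. 6 and 7), each block row of \(S\) and \(\gamma\) can be formed independently as below; on the GPU each such computation maps to one thread block. The dense linear algebra here is a CPU sketch of ours, not the CUDA kernel itself:

```python
import numpy as np

def schur_block_row(k, A, B, Q, R, q, r, c):
    """Block row k of S (diagonal theta_k, off-diagonal phi_k) and the
    corresponding entry of gamma, per Eqs. (6)-(7)."""
    Qk_inv, Qk1_inv = np.linalg.inv(Q[k]), np.linalg.inv(Q[k + 1])
    Rk_inv = np.linalg.inv(R[k])
    theta = A[k] @ Qk_inv @ A[k].T + B[k] @ Rk_inv @ B[k].T + Qk1_inv
    phi = -A[k] @ Qk_inv                     # left off-diagonal block
    zeta = -A[k] @ Qk_inv @ q[k] - B[k] @ Rk_inv @ r[k] + Qk1_inv @ q[k + 1]
    gamma_k = c[k + 1] + zeta                # c holds [x_s - x_0, e_0, e_1, ...]
    return theta, phi, gamma_k
```

Note that each call inverts \(Q_{k+1}\) itself rather than waiting for a neighboring block row to share it, mirroring the redundant-compute-over-synchronization trade-off described above.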
To ensure efficient computation of the underlying dynamics and kinematic quantities, we leverage the GRiD library, which was shown to outperform state-of-the-art CPU libraries even when taking into account I/O overheads [18]. We further parallelize across and within the many small matrix inversions and matrix multiplications within each parallel block-row computation. Leveraging best practices [16, 18], we also group together the various types of mathematical operations, storing intermediate values in shared memory, and re-ordering computations where needed. We leverage the Symmetric Stair Preconditioner [30, 75] which is a parallel-friendly preconditioner optimized for block-tridiagonal systems wich has an analytical inverse and results in the block-tridiagonal matrix: \[\Phi^{-1}=\begin{bmatrix}Q_{0}&-Q_{0}\phi_{0}^{T}\theta_{0}^{-1}\\ -\theta_{0}^{-1}\phi_{0}Q_{0}^{-1}&\theta_{0}^{-1}&-\theta_{0}^{-1}\phi_{1}^{ T}\theta_{1}^{-1}\\ &-\theta_{1}^{-1}\phi_{1}\theta_{0}^{-1}&\theta_{1}^{-1}\\ &&\ddots\end{bmatrix} \tag{8}\] The structure of \(\Phi^{-1}\) also permits mostly parallel computation as only the values of each \(\theta_{k}^{-1}\) need to be shared across timesteps. Our approach thus requires only a single global synchronization across blocks, and allows us to store the block-tridiagonal \(S\) and \(\Phi^{-1}\) matrices in a custom, compressed, dense format for increased IO bandwidth and memory efficiency. ### _GPU Parallel PCG for Block-Tridiagonal Systems_ The core of our solver is a custom GPU Parallel PCG implementation specifically optimized for block-tridiagonal systems, GBD-PCG (Algorithm 2). That is, we leverage the sparsity structure of \(S\) and \(\Phi^{-1}\) to maximize cache usage and natural parallelism resulting in a refactored, low-latency implementation with minimal synchronizations. These optimizations can be leveraged in the most computationally expensive part of the algorithm, the large matrix-vector products in lines 5, 6, and 8 of Algorithm 1, as each element of the product depends on at most \(3n_{b}\) elements from the matrix and \(3n_{b}\) elements from the vector, where \(n_{b}\) is the block dimension. We exploit this by grouping threads that access similar elements into thread blocks and storing \(S\), \(\Phi^{-1}\), and all PCG iterates concurrently in shared (cache) memory on the GPU. We also operate as many steps of the algorithm fully in parallel as possible between the thread synchronizations needed for the parallel reductions of scalar values on lines 6, 12, and 21 of Algorithm 2. Similarly, we only use device memory (RAM) for those scalar reductions and for the values of \(p\) and \(r\) that need to be shared between blocks on lines 9 and 18 of Algorithm 2. This means that the choice of a sparse preconditioner not only enables its efficient computation and memory storage, but also reduces the number of synchronizations and amount of memory that needs to be shared through RAM during each PCG iterate. This holistic co-design across algorithm stages is part of the reason why MPCGPU is so performant. Finally, we warm-start the values for \(\lambda\) based on the previous solve which we found greatly increased overall performance by reducing the number of PCG iterations needed for convergence. 
```
 1: for block \(b=0:N\) in parallel do
 2:   \(r_{b}=\gamma_{b}-S_{b}\lambda_{b-1:b+1}\)
 3:   Load \(r_{b-1}\), \(r_{b+1}\)
 4:   \(\tilde{r}_{b},p_{b}=\Phi_{b}^{-1}r_{b-1:b+1}\)
 5:   \(\eta_{b}=r_{b}^{T}\tilde{r}_{b}\)
 6: \(\eta=\textbf{ParallelReduce}(\eta_{b})\)
 7: for iter \(i=1:\text{max\_iter}\) do
 8:   for block \(b=0:N\) in parallel do
 9:     Load \(p_{b-1}\), \(p_{b+1}\)
10:     \(\Upsilon_{b}=S_{b}p_{b-1:b+1}\)
11:     \(\upsilon_{b}=p_{b}\Upsilon_{b}\)
12:   \(\upsilon=\textbf{ParallelReduce}(\upsilon_{b})\)
13:   for block \(b=0:N\) in parallel do
14:     \(\alpha=\eta/\upsilon\)
15:     \(\lambda_{b}=\lambda_{b}+\alpha p_{b}\)
16:     \(r_{b}=r_{b}-\alpha\Upsilon_{b}\)
17:   for block \(b=0:N\) in parallel do
18:     Load \(r_{b-1}\), \(r_{b+1}\)
19:     \(\tilde{r}_{b}=\Phi_{b}^{-1}r_{b}\)
20:     \(\eta_{b}^{\prime}=r_{b}^{T}\tilde{r}_{b}\)
21:   \(\eta^{\prime}=\textbf{ParallelReduce}(\eta_{b}^{\prime})\)
22:   if \(\eta^{\prime}<\epsilon\) then return \(\lambda\)
23:   for block \(b=0:N\) in parallel do
24:     \(\beta=\eta^{\prime}/\eta\)
25:     \(p_{b}=\tilde{r}_{b}+\beta p_{b}\)
26:   \(\eta=\eta^{\prime}\)
27: return \(\lambda\)
```
**Algorithm 2** GPU Parallel PCG for Block-Tridiagonal Systems (GBD-PCG) (\(S,\Phi^{-1},\gamma,\lambda,\epsilon\)) \(\rightarrow\) \(\lambda^{*}\)

### _Parallel Line Search_

We leverage a parallel line search, computing all possible iterates for \(\alpha\in\mathbb{A}\) in parallel and selecting the iterate with the best value according to its L1 merit function [10]: \[M(Z;\mu)=J(Z)+\mu|c|. \tag{9}\] This allows MPCGPU to evaluate all possible line search iterates in the same amount of time as it would take the GPU to compute a single iterate under a standard backtracking implementation. Importantly, this approach not only reduces the latency of this algorithmic step, but has also been shown to improve the convergence of NMPC on similar whole-body trajectory tracking problems [26].

Fig. 2: A high level overview of MPCGPU which: 1) in parallel on the GPU computes \(S\), \(\gamma\), and \(\Phi^{-1}\) and stores those values in an optimized dense format, 2) uses our GBD-PCG solver to compute \(\lambda^{*}\) and reconstructs \(\delta X^{*},\delta U^{*}\) through GPU-friendly matrix-vector multiplications and vector reductions, and 3) leverages a parallel line search to compute the final trajectory, \(X,U\). This trajectory is then passed to the (simulated) robot and the current state of the (simulated) robot is measured and fed back into our solver which is run again, warm-started with our last solution.
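A schematic version of this parallel line search, written here in Python for clarity, evaluates every candidate step at once, just as the GPU batches one candidate per thread block. The `J` and `constraint_norm` callables and the geometric step-size grid are placeholders of ours:

```python
import numpy as np

def parallel_line_search(Z, dZ, J, constraint_norm, mu=10.0):
    """Evaluate all candidate steps at once and keep the best one under
    the L1 merit function M(Z; mu) = J(Z) + mu*|c| of Eq. (9)."""
    alphas = 2.0 ** -np.arange(8)               # candidate steps, one per block
    candidates = [Z + a * dZ for a in alphas]
    merits = [J(C) + mu * constraint_norm(C) for C in candidates]
    best = int(np.argmin(merits))
    return candidates[best], alphas[best]
```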
Our performance analysis is drawn from 100 NMPC trials of a 10 second, 5 goal, pick-and-place circuit for a simulated Kuka IIWA-14 (see Figure 3). Each NMPC trial consists of thousands of linear system solves, which are needed for the many iterations of the underlying trajectory optimization problem for end-effector position tracking solved at each control step. All hyperparameter values can be found in our open-source code and resulted in an average tracking error of \(\sim\)10cm, providing performance similar to previous experiments with GPU accelerated NMPC [27]. In particular, we note that all solvers used the same quadratic cost functions for each number of knot points, and solver-specific hyperparameter values were independently tuned for maximal performance. ### _Linear System Solver Performance_ We evaluate the performance of our underlying GBD-PCG linear system solver over its thousands of solves during each of our 100 NMPC trials running at a 500Hz control rate and compare its performance to the state-of-the-art CPU-based QDLDL solver [76] operating in the same context. **Average Solve Time:** Our results show that our GPU based solver outperforms QDLDL across most problem sizes and is only marginally slower at the smallest problem size, obtaining as much as a 3.6x average speedup (see Figure 4). This is driven by the GPU's ability to leverage large scale parallelism to gracefully scale to larger problem sizes. We note that the speedup plateaus at 256 knot points as we begin to run out of hardware resources on our specific GPU. These results show that unlike generic approaches [53, 32], our domain-specific co-design approach enables the GPU to outperform the CPU even on moderately-sized linear systems (our experiments range from only 448 to 7,168 variables). **Solve Time Distribution:** QDLDL presents a uni-modal timing distribution with almost all solves occurring between 280 and 305 \(\mu\)s. GBD-PCG, on the other hand, presents a bi-modal timing distribution clustered both much faster and a little slower than QDLDL. For example, for \(\epsilon=1e^{-4}\), 65% of GBD-PCG solves are \(\geq\)10x faster than the fastest QDLDL solve, and the slowest GBD-PCG solve is only 2.5x slower than the slowest QDLDL solve (with only 10% of solves \(\geq\)2x slower). Furthermore, while all values of \(\epsilon\) shown in the plot were able to successfully track the target trajectory, the less exact a solve was required, the more of the distribution mass shifted to being \(\geq\)10x faster than QDLDL (65%, 52%, 20% for \(1e^{-4},5e^{-5},1e^{-5}\) respectively). However, when \(\epsilon\) was reduced even further, our entire NMPC controller was unable to accurately track our target trajectory. These results present interesting directions for future work to find ways to eliminate the second slower mode of the solve time distribution while ensuring robust NMPC convergence. ### _End-to-End NMPC Performance_ To validate efficacy for use in NMPC for robotics applications, we also demonstrate the impact of our approach on the number of iterations of MPCGPU we could achieve at each control step for varying control rates and trajectory lengths.
Figure 6 shows the resulting number of average trajectory optimization solver iterations we can compute while meeting the specified control rates and trajectory lengths using both our GBD-PCG solver as well as QDLDL to solve the thousands of underlying linear systems.1 Footnote 1: We note that in the QDLDL case, as the NMPC loop is running on the GPU, data needs to be copied onto the host before executing the solve and converted into the sparse CSR matrix format. To ensure fair comparisons and avoid overheads for unnecessary data transfers and transformations, we implemented a variant of our parallel Schur complement computation which directly stores data in the CSR format expected by QDLDL. Regardless of the linear system solver, our GPU-first approach, with both fast parallel construction of the Schur complement and fast parallel computation of the line search, enables trajectories as long as \(128\) knot points to operate at a \(1\)kHz control rate, and achieve at least \(4\) iterations at a \(500\)Hz control rate, for a per-iteration rate of \(2\)kHz. Furthermore, similar to what we witness in the case of average linear system solve times, as the problem gets larger and the control rate increases, our fully GPU-based approach is increasingly performant. Highlights include our approach's ability to scale to \(512\) knot points at a \(1\)kHz control rate and execute \(8\) iterations for 128 knot points at a \(500\)Hz control rate, for a per-iteration rate of \(4\)kHz. These results compare favorably to previously reported results in the literature of about \(500\)Hz to \(1\)kHz per-iteration rates for trajectories of \(30\) to \(120\) knot points using state-of-the-art CPU-based [77, 78] and GPU-based [26] solvers for similar NMPC tasks. As such, our GPU-first approach opens up the possibility for our NMPC solver to either leverage longer horizon trajectories, produce more optimal solutions at each control step, run at faster control rates, or include some combination of those highly beneficial traits. ## VI Conclusion and Future Work In this work, we introduce MPCGPU, a GPU-accelerated, real-time NMPC solver built around a parallel PCG solver. MPCGPU exploits the structured sparsity and natural parallelism in both direct trajectory optimization algorithms and iterative linear system solvers. Our experiments show that our approach is able to scale NMPC to larger problems, and operate it at faster rates, than is possible with existing state-of-the-art solvers. In particular, for tracking tasks using the Kuka IIWA manipulator, MPCGPU is able to scale to kilohertz control rates with trajectories as long as 512 knot points. For this problem, our GPU-based PCG solver outperforms a state-of-the-art CPU-based linear system solver by as much as 10x for a majority of solves and 3.6x on average. There are many promising directions for future work to improve the functionality and usability of our approach. Most importantly, like all iterative methods, our GPU-based PCG solver exhibits variability in its solve times. Future work which learns when to leverage iterative methods vs. factorization-based methods, or which learns dynamic values for hyperparameters to reduce the worst-case runtimes, without sacrificing overall NMPC robustness, would greatly improve average-case performance. Furthermore, it would be interesting to explore the performance implications of adding additional constraints either through expanding the KKT system or through augmented Lagrangian methods [10, 79]. 
Finally, we would like to evaluate our approach on physical robots at the edge using low-power GPU platforms such as the NVIDIA Jetson [80]. Fig. 6: Average number of trajectory optimization iterations of MPCGPU executed at each control step for varying control rates and trajectory lengths. Our GPU-first approach to Schur complement construction and our parallel line search enables high control rates and long trajectories regardless of the underlying solver. However, the improved scalability of GBD-PCG enables our approach to scale to \(512\) knot points at \(1\)kHz and execute \(8\) iterations for 128 knot points at \(500\)Hz, for a per-iteration rate of \(4\)kHz.
2309.10636
Partition regularity of Pythagorean pairs
We address a core partition regularity problem in Ramsey theory by proving that every finite coloring of the positive integers contains monochromatic Pythagorean pairs, i.e., $x,y\in \mathbb{N}$ such that $x^2\pm y^2=z^2$ for some $z\in \mathbb{N}$. We also show that partitions generated by level sets of multiplicative functions taking finitely many values always contain Pythagorean triples. Our proofs combine known Gowers uniformity properties of aperiodic multiplicative functions with a novel and rather flexible approach based on concentration estimates of multiplicative functions.
Nikos Frantzikinakis, Oleksiy Klurman, Joel Moreira
2023-09-19T14:19:48Z
http://arxiv.org/abs/2309.10636v3
# Partition regularity of Pythagorean pairs ###### Abstract. We address a core partition regularity problem in Ramsey theory by proving that every finite coloring of the positive integers contains monochromatic Pythagorean pairs, i.e., \(x,y\in\mathbb{N}\) such that \(x^{2}\pm y^{2}=z^{2}\) for some \(z\in\mathbb{N}\). We also show that partitions generated by level sets of multiplicative functions taking finitely many values always contain Pythagorean triples. Our proofs combine known Gowers uniformity properties of aperiodic multiplicative functions with a novel and rather flexible approach based on concentration estimates of multiplicative functions. Key words and phrases:Partition regularity, Pythagorean triples, multiplicative functions, concentration inequalities, Gowers uniformity 2020 Mathematics Subject Classification: Primary: 05D10; Secondary: 11N37, 11B30, 37A44 The first author was supported by the Research Grant-ELIDEK HFRI-FM17-1684. ## 1. Introduction ### Motivation \(\bullet\) In fact, Theorem 1.2 implies that all four elements \(x,y\) and \(y^{\prime},z^{\prime}\) in part (i) and (ii) respectively can be taken to be of the same color. \(\bullet\) We can also extend [21, Theorem 2.7], covering more general homogeneous equations of the form \(p(x,y,z)=ax^{2}+by^{2}+cz^{2}+dxy+exz+fyz=0\) where \(a,b,c,d,e,f\in\mathbb{Z}\). Our method allows us to show that if \(e^{2}-4ac\) and \(f^{2}-4bc\) are non-zero squares, then for every finite coloring of the integers there exist distinct monochromatic \(x,y\) and an integer \(z\) such that \(p(x,y,z)=0\).1 In contrast, [21, Theorem 2.7] assumes in addition that \((e+f)^{2}-4c(a+b+d)\) is a non-zero square. Footnote 1: Arguing as in Step 2 of [21, Appendix C] we get parametrizations for \(x,y\) of the form covered in Section 1.5. \(\bullet\) The assumption that \(a,b,c\in\mathbb{N}\) are all squares is not sufficient for partition regularity of the equation \(ax^{2}+by^{2}=cz^{2}\). For example, the equation \(x^{2}+y^{2}=4z^{2}\) is not partition regular, so in this case our result is optimal, as only pairs and not triples can be partition regular. See Section 1.6 for more details and conjectural necessary and sufficient conditions for partition regularity of such equations. We establish a stronger density version of these partition regularity results. It is clear that the set of odd numbers, which has additive density \(1/2\), does not contain integers \(x,y\) such that \(x^{2}+y^{2}=z^{2}\) for some \(z\in\mathbb{N}\), ruling out a potential density version using additive density. On the other hand, since the equation \(x^{2}+y^{2}=z^{2}\) is homogeneous, the set of solutions is invariant under dilations, and using a dilation-invariant notion of density turns out to be more fruitful. To this end, we recall some standard notions. A _multiplicative Folner sequence in \(\mathbb{N}\)_ is a sequence \(\Phi=(\Phi_{K})_{K=1}^{\infty}\) of finite subsets of \(\mathbb{N}\) asymptotically invariant under dilation, in the sense that \[\forall x\in\mathbb{N},\qquad\lim_{K\to\infty}\frac{\big{|}\Phi_{K}\cap(x\cdot\Phi_{K})\big{|}}{|\Phi_{K}|}=1.\] An example of a multiplicative Folner sequence is given by (2.12).
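As a quick numerical illustration of this definition, the following Python sketch (ours; the concrete "box" sets used here are for demonstration and are not the sequence (2.12) used later) checks dilation-almost-invariance for a dilate \(x\) generated by the primes 2, 3, 5. To obtain invariance for every \(x\in\mathbb{N}\) one also lets the set of primes grow with \(K\), as in (2.12).

```python
from itertools import product

def phi(K, primes=(2, 3, 5)):
    """The box { prod p_i**a_i : 0 <= a_i <= K } over the given primes."""
    out = set()
    for exps in product(range(K + 1), repeat=len(primes)):
        n = 1
        for p, a in zip(primes, exps):
            n *= p ** a
        out.add(n)
    return out

x = 6  # any x whose prime factors lie in {2, 3, 5}
for K in (5, 10, 20, 40):
    P = phi(K)
    ratio = len(P & {x * n for n in P}) / len(P)
    print(K, ratio)  # ratio = (K/(K+1))**2 here, tending to 1 as K grows
```

For \(x=6=2\cdot 3\) the overlap ratio is exactly \((K/(K+1))^{2}\), since dilation by 6 only pushes the top layer of exponents of 2 and 3 out of the box.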
The _upper multiplicative density_ of a set \(\Lambda\subset\mathbb{N}\) with respect to a multiplicative Folner sequence \(\Phi=(\Phi_{K})_{K=1}^{\infty}\) is the quantity \[\bar{d}_{\Phi}(\Lambda):=\limsup_{K\to\infty}\frac{\big{|}\Phi_{K}\cap\Lambda \big{|}}{|\Phi_{K}|},\] and we write \(d_{\Phi}(\Lambda)\) if the previous limit exists. We say that \(\Lambda\subset\mathbb{N}\) has _positive multiplicative density_ (or, more precisely, positive upper Banach density with respect to multiplication) if \(\bar{d}_{\Phi}(\Lambda)>0\) for some multiplicative Folner sequence \(\Phi\). A finite coloring of \(\mathbb{N}\) always contains a monochromatic cell with positive multiplicative density, thus, the next result strengthens Theorem 1.1. **Theorem 1.2**.: _Let \(a,b,c\in\mathbb{N}\) be squares. Then for every \(\Lambda\subset\mathbb{N}\) with positive multiplicative density, there exist_ * _distinct_ \(x,y\in\Lambda\) _and_ \(z\in\mathbb{N}\) _such that_ \(ax^{2}+by^{2}=cz^{2}\)_._ * _distinct_ \(y,z\in\Lambda\) _and_ \(x\in\mathbb{N}\) _such that_ \(ax^{2}+by^{2}=cz^{2}\)_._ **Remarks**.: \(\bullet\) In fact, we prove the following stronger property: If \(\bar{d}_{\Phi}(\Lambda)>0\), then there exist a sub-sequence \(\Psi\) of \(\Phi\) and distinct \(x,y\in\mathbb{N}\) such that \(ax^{2}+by^{2}=cz^{2}\) for some \(z\in\mathbb{N}\), and \[d_{\Psi}\big{(}(x^{-1}\Lambda)\cap(y^{-1}\Lambda)\big{)}>0.\] A similar statement also holds with the roles of \(x\) and \(z\) reversed. \(\bullet\) If \(a+b\neq c\), it is not true that every \(\Lambda\subset\mathbb{N}\) with positive multiplicative density contains \(x,y,z\) such that \(ax^{2}+by^{2}=cz^{2}\). To see this when \(a=b=c=1\) (the argument is similar whenever \(a+b\neq c\)), let \(\Phi\) be any multiplicative Folner sequence and \(\alpha\) be an irrational such that the sequence \((n^{2}\alpha)\) is equidistributed (mod 1) with respect to a subsequence \(\Phi^{\prime}\) of \(\Phi\) (such an \(\alpha\) and \(\Phi^{\prime}\) exist by the ergodicity of the multiplicative action \(T_{n}x=n^{2}x\), \(n\in\mathbb{N}\), defined on \(\mathbb{T}\) with its Haar measure). Let \(\Lambda:=\{n\in\mathbb{N}\colon\{n^{2}\alpha\}\in[1/5,2/5)\}\), which has positive upper density with respect to \(\Phi^{\prime}\). If \(x,y,z\in\Lambda\), then \(\{(x^{2}+y^{2})\alpha\}\in[2/5,4/5)\) and \(\{z^{2}\alpha\}\in[1/5,2/5)\), hence we cannot have \(x^{2}+y^{2}=z^{2}\). This example was shown to us by V. Bergelson. We remark that the previous results also resolve the first part of Problem 3 in [21] and also Problem 6 in [21]. The latter implies that the starting point in Sarkozy's theorem [43] (or the variant in [33] dealing with the equation \(x+y=n^{2}\)) can be taken to be a square: **Corollary 1.3**.: _For every finite coloring of \(\mathbb{N}\) there exist_ 1. _distinct_ \(m,n\in\mathbb{N}\) _such that the integers_ \(m^{2}\) _and_ \(m^{2}+n^{2}\) _have the same color._ 2. _distinct_ \(m,n\in\mathbb{N}\) _such that the integers_ \(m^{2}\) _and_ \(n^{2}-m^{2}\) _have the same color._ To prove part (i), let \(C_{1},\ldots,C_{k}\) be a coloring of \(\mathbb{N}\). Using part (ii) of Theorem 1.1 for the coloring \(C^{\prime}_{i}:=\{n\in\mathbb{N}\colon n^{2}\in C_{i}\}\), \(i=1,\ldots,k\), we deduce that there exist \(i_{0}\in\{1,\ldots,k\}\) and \(x,z\in C^{\prime}_{i_{0}}\) such that \(x^{2}+y^{2}=z^{2}\). Then \(x^{2},z^{2}\in C_{i_{0}}\). Letting \(m:=x\) and \(n:=y\) we get that \(m^{2},m^{2}+n^{2}\in C_{i_{0}}\). 
The proof of part (ii) is similar and uses part (i) of Theorem 1.1. A coloring \(C_{1},\ldots,C_{k}\) of the squares induces a coloring \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) of \(\mathbb{N}\) in the natural way: \(C^{\prime}_{i}:=\{n\in\mathbb{N}\colon n^{2}\in C_{i}\}\), \(i=1,\ldots,k\). Applying Theorem 1.1 for the induced coloring we deduce the following result: **Corollary 1.4**.: _For every finite coloring of the squares there exist_ 1. _distinct squares_ \(x,y\) _with the same color such that_ \(x+y\) _is a square._ 2. _distinct squares_ \(x,y\) _with the same color such that_ \(x-y\) _is a square._ ### Pythagorean triples on level sets of multiplicative functions Our second objective is to lend support to the hypothesis that Pythagorean triples are partition regular by proving that the level sets of multiplicative functions that take finitely many values always include Pythagorean triples. Since the equation \(x^{2}+y^{2}=z^{2}\) is homogeneous, one might expect that a presumed counterexample to partition regularity would have "multiplicative structure", so Theorem 1.5 below addresses the most obvious possibilities. We also remark that Rado's theorem implies that a given linear system of equations is partition regular as soon as it has monochromatic solutions in every coloring realized using a (finitely valued) completely multiplicative function; but of course this result does not apply to the Pythagorean equation. **Theorem 1.5**.: _Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a completely multiplicative function that takes finitely many values. Then there exist distinct \(x,y,z\in\mathbb{N}\) such that_ \[x^{2}+y^{2}=z^{2}\quad\text{and}\quad f(x)=f(y)=f(z)=1.\] **Remarks**.: \(\bullet\) There is nothing special about the value \(1\) in Theorem 1.5. If \(\zeta\in\mathbb{S}^{1}\) is any other number in the range of \(f\), then since the equation \(x^{2}+y^{2}=z^{2}\) is invariant under dilations of the variables \(x,y,z\), we get that there exist distinct \(x,y,z\in\mathbb{N}\), such that \[x^{2}+y^{2}=z^{2}\quad\text{and}\quad f(x)=f(y)=f(z)=\zeta.\] \(\bullet\) With a bit more effort we can extend Theorem 1.5 to cover more general equations of the form \[ax^{2}+by^{2}=cz^{2} \tag{1.1}\] where \(a,b,c\in\mathbb{N}\) are squares and we have either \(a=c\), or \(b=c\), or \(a+b=c\). We outline the additional steps needed to be taken to prove such a result in Section 8.3. Note that having one of these three identities satisfied is a necessary condition for the partition regularity of (1.1). For more details and related problems see the discussion in Section 1.6. Related linear equations \(ax+by=cz\) on the level sets of completely multiplicative functions \(f:\mathbb{N}\to\{-1,1\}\) have been studied in the works of Brudern [9] and more recently by de la Breteche and Granville [7]. One consequence of such results [7, Corollary 2], is that the number of Pythagorean triples \((x,y,z)\) modulo any prime \(p\geq 3\), that is, solutions to \(x+y=z\) where \(x,y,z\leq N<p\) are quadratic residues, is at least \(\frac{1}{2}(k^{\prime}+o_{N\to\infty}(1))N^{2}\) where \(k^{\prime}=.005044...\) is a sharp constant. ### Parametric reformulation of the main results To prove our main results it is convenient to restate them using solutions of (1.1) in parametric form. Our assumptions give that \(a=a_{0}^{2}\), \(b=b_{0}^{2}\), \(c=c_{0}^{2}\) for some \(a_{0},b_{0},c_{0}\in\mathbb{N}\). 
Then a simple computation shows that the following are solutions of \(ax^{2}+by^{2}=cz^{2}\): \[x=k\,\ell_{1}\,(m^{2}-n^{2}),\quad y=k\,\ell_{2}\,mn,\quad z=k\,\ell_{3}\,(m^{2}+n^{2}),\quad m,n\in\mathbb{N},\] where \(\ell_{1}:=a_{0}bc\), \(\ell_{2}:=2ab_{0}c\), \(\ell_{3}:=abc_{0}\). So in order to prove Theorem 1.2 it suffices to establish the following result. **Theorem 1.6**.: _Suppose that \(\Lambda\subset\mathbb{N}\) satisfies \(\bar{d}_{\Phi}(\Lambda)>0\) for some multiplicative Folner sequence \(\Phi\). Then for every \(\ell,\ell^{\prime}\in\mathbb{N}\) there exist_ * \(m,n\in\mathbb{N}\) _with_ \(m>n\) _such that_ \(\ell\,(m^{2}-n^{2})\) _and_ \(\ell^{\prime}\,mn\) _are distinct and_ \[\bar{d}_{\Phi}\big{(}(\ell\,(m^{2}-n^{2}))^{-1}\Lambda\cap(\ell^{\prime}\,mn)^{-1}\Lambda\big{)}>0.\] * \(m,n\in\mathbb{N}\) _such that_ \(\ell\,(m^{2}+n^{2})\) _and_ \(\ell^{\prime}\,mn\) _are distinct and_ \[\bar{d}_{\Phi}\big{(}(\ell\,(m^{2}+n^{2}))^{-1}\Lambda\cap(\ell^{\prime}\,mn)^{-1}\Lambda\big{)}>0.\] **Remark**.: Since \(2(m^{2}+n^{2})=(m+n)^{2}+(m-n)^{2}\) and \(4mn=(m+n)^{2}-(m-n)^{2}\), applying (ii) with \(2\ell\) in place of \(\ell\) and \(4\ell^{\prime}\) in place of \(\ell^{\prime}\), we can add * \(m,n\in\mathbb{N}\) such that \(\ell\,(m^{2}+n^{2})\) and \(\ell^{\prime}\,(m^{2}-n^{2})\) are distinct and \[\bar{d}_{\Phi}\big{(}(\ell\,(m^{2}+n^{2}))^{-1}\Lambda\cap(\ell^{\prime}\,(m^{2}-n^{2}))^{-1}\Lambda\big{)}>0.\] In order to prove Theorem 1.5, it suffices to establish the following result. **Theorem 1.7**.: _Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a completely multiplicative function that takes finitely many values. Then there exist \(k,m,n\in\mathbb{N}\), with \(m>n\), such that the integers \(m^{2}-n^{2}\), \(2mn\), \(m^{2}+n^{2}\) are distinct and_ \[f(k\,(m^{2}-n^{2}))=f(k\,2mn)=f(k\,(m^{2}+n^{2}))=1. \tag{1.2}\] ### Other results Our methodology is flexible enough to allow us to handle a variety of other dilation-invariant pairs. We record a few cases next. #### 1.5.1. A question from [16] The next result is related to [16, Question 7.1]. It is only here that we use logarithmic averages \[\mathbb{E}_{m,n\in[N]}^{\log}:=\frac{1}{(\log N)^{2}}\sum_{m,n\in[N]}\frac{1}{mn}\] in order to have access to a result from [47]. **Theorem 1.8**.: _Suppose that \(\Lambda\subset\mathbb{N}\) satisfies \(\bar{d}_{\Phi}(\Lambda)>0\) for some multiplicative Folner sequence \(\Phi\). Then_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}^{\log}\bar{d}_{\Phi}((n^{2}+n)^{-1}\Lambda\cap(m^{2})^{-1}\Lambda)>0.\] **Remark**.: Our method also implies the following ergodic version of the previous result, as posed in [16], using Cesaro instead of logarithmic averages: If \((T_{g})_{g\in\mathbb{N}}\) is a measure-preserving action of \((\mathbb{N},\times)\) on a probability space \((X,\mu)\) and \(A\subset X\) is measurable with \(\mu(A)>0\), then \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}^{\log}\,\mu(T_{n^{2}+n}^{-1}A\cap T_{m^{2}}^{-1}A)>0.\] This follows from property (6.15) that we prove below. Our argument also allows us to replace \(n^{2}+n\) and \(m^{2}\) by \(n^{2}+an\) and \(m^{r}\) respectively, where \(r\in\mathbb{N}\) and \(a\) is a non-zero integer. The proof of Theorem 1.8 follows closely the argument used to prove part (ii) of Theorem 2.2. We will outline this argument in Section 6.2. #### 1.5.2. General linear forms
We can also prove variants of Theorem 1.6 that cover more general patterns of the form \[(k\,L_{1}(m,n)\cdot L_{2}(m,n),k\,L_{3}(m,n)\cdot L_{4}(m,n)),\] where \(L_{i}(m,n)=a_{i}m+b_{i}n\) for some \(a_{i}\in\mathbb{N}\), \(b_{i}\in\mathbb{Z}\), \(i=1,2,3,4\), and at least one of the forms, say \(L_{4}(m,n)\), is not a rational multiple of the others. Suppose we want to show, under the previous assumptions, that if \(\Lambda\subset\mathbb{Z}\) satisfies \(\bar{d}_{\Phi}(\Lambda)>0\) for some multiplicative Folner sequence \(\Phi\), then there exist \(m,n\in\mathbb{Z}\) such that \(L_{1}(m,n)\cdot L_{2}(m,n)\) and \(L_{3}(m,n)\cdot L_{4}(m,n)\) are distinct integers and satisfy \[\bar{d}_{\Phi}\big{(}(L_{1}(m,n)\cdot L_{2}(m,n))^{-1}\Lambda\cap(L_{3}(m,n)\cdot L_{4}(m,n))^{-1}\Lambda\big{)}>0.\] Without loss of generality we can assume that \(b_{4}\neq 0\). By making the substitution \(m\mapsto b_{4}\,m\) and \(n\mapsto n-a_{4}\,m\) (an operation that preserves our assumptions about the forms \(L_{i}\)) we can assume that \(a_{4}=0\). Since the form \(L_{4}\) is not a rational multiple of \(L_{i}\) for \(i=1,2,3\), we have \(a_{i}\neq 0\) for \(i=1,2,3\). We do another substitution \(n\mapsto a_{1}\,a_{2}\,a_{3}\,n\). We then factor out \(a_{i}\) from the linear form \(L_{i}\) for \(i=1,2,3\). We see that it is sufficient to consider the case where the \(L_{i}\) are integer multiples of forms satisfying \(a_{1}=a_{2}=a_{3}=1\) and \(a_{4}=0\), \(b_{4}\neq 0\). Making a last substitution \(m\mapsto m-b_{3}\,n\), we get that it suffices to prove that \[\bar{d}_{\Phi}\big{(}(\ell\,(m+an)\cdot(m+bn))^{-1}\Lambda\cap(\ell^{\prime}\,m\,n)^{-1}\Lambda\big{)}>0\] whenever \(\ell,\ell^{\prime}\in\mathbb{N}\) and \(a,b\in\mathbb{Z}\). This case can be covered by repeating the argument used to prove Theorem 1.6 (which covers the case \(a=1,b=-1\)) without any essential change. #### 1.5.3. More general expressions and averages The methods used to establish part (ii) of Theorem 1.6 would also allow us to cover patterns of the form \[\Big{(}k\,(m^{2}+n^{2})^{r}\prod_{i=1}^{l}L_{i}(m,n),\ k\,\prod_{i=1}^{l^{\prime}}L_{i}^{\prime}(m,n)\Big{)},\] where \(k\in\mathbb{N}\), \(l,l^{\prime},r\in\mathbb{Z}_{+}\) are such that \(|l|+|l^{\prime}|>0\),2 and at least one of the linear forms \(L_{i},L_{i}^{\prime}\) is not a rational multiple of the others. It should also be possible to cover variants of Theorem 2.2 below in which the averages over squares \(\mathbb{E}_{m,n\in[N]}\) are replaced by averages over discs, i.e., \(\mathbb{E}_{m^{2}+n^{2}\leq N}\). However, we do not pursue these directions here. Footnote 2: The case \(l=l^{\prime}=0\) is covered in [16, Theorem 1.5]. ### Further directions Our approach opens the way for studying several other compelling partition regularity problems that were previously considered intractable. We note here some promising directions. A result of Rado [41] implies that if \(a,b,c\in\mathbb{N}\), then the linear equation \(ax+by=cz\) is partition regular if and only if either \(a,b\), or \(a+b\) equals \(c\), in which case we say that the triple \((a,b,c)\) _satisfies Rado's condition_. It follows that a necessary condition for the partition regularity of the equation (1.1) is that the triple \((a,b,c)\) satisfies Rado's condition. Perhaps this condition is also sufficient, but very little is known in this direction; in fact, there is no triple \((a,b,c)\) for which the partition regularity of (1.1) is currently known.
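Rado's condition itself is purely arithmetic and trivial to test; the following sketch (ours, for illustration) encodes it and rechecks the triples appearing in this section.

```python
def satisfies_rado(a, b, c):
    """Rado's condition for a*x + b*y = c*z: one of a, b, a+b equals c."""
    return c in (a, b, a + b)

# Necessary for partition regularity of a*x^2 + b*y^2 = c*z^2:
print(satisfies_rado(1, 1, 1))  # True  (Pythagorean equation, a = c)
print(satisfies_rado(1, 1, 2))  # True  (x^2 + y^2 = 2z^2, a + b = c)
print(satisfies_rado(1, 2, 1))  # True  (x^2 + 2y^2 = z^2, a = c)
print(satisfies_rado(1, 1, 4))  # False (x^2 + y^2 = 4z^2, not partition regular)
```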
We state a related problem of intermediate difficulty along the lines of Theorem 1.5. **Problem 1**.: _Suppose that the triple \((a,b,c)\) satisfies Rado's condition. Then for any completely multiplicative function \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) taking finitely many values, there exist distinct \(x,y,z\in\mathbb{N}\), such that_ \[ax^{2}+by^{2}=cz^{2}\quad\text{and}\quad f(x)=f(y)=f(z)=1.\] Theorem 1.5 solves this problem when \(a=b=c=1\), and, as we mentioned in the second remark following the theorem, a similar argument applies to triples that satisfy Rado's condition and consist of squares. It would be interesting to solve Problem 1 for some other triples such as \((1,1,2)\) and \((1,2,1)\). The first one corresponds to the equation \[x^{2}+y^{2}=2z^{2},\] which was conjectured to be partition regular by Gyarmati and Ruzsa [29] and has parametric solutions of the form \[x=k\,(m^{2}-n^{2}+2mn),\,y=k\,(m^{2}-n^{2}-2mn),\,z=k\,(m^{2}+n^{2}).\] The second one corresponds to the equation \[x^{2}+2y^{2}=z^{2}\] with parametric solutions of the form \[x=k\,(m^{2}-2n^{2}),\,y=k\,(2mn),\,z=k\,(m^{2}+2n^{2}).\] Both parametrizations involve at least two quadratic forms that do not factor into products of linear forms. This is a problem for our method, since a useful variant of Proposition 2.15 is not known in this case, not even if \(f_{1},f_{2},f_{3}\) are all equal to the Liouville function. Another interesting problem is to relax the conditions on the coefficients \(a,b,c\) in Theorem 1.1. We mention two representative problems that seem quite challenging. **Problem 2**.: _Show that for every finite coloring of \(\mathbb{N}\) there exist_ * _distinct_ \(x,y\in\mathbb{N}\) _with the same color and_ \(z\in\mathbb{N}\) _such that_ \(x^{2}+y^{2}=2z^{2}\)_._ * _distinct_ \(x,y\in\mathbb{N}\) _with the same color and_ \(z\in\mathbb{N}\) _such that_ \(x^{2}+2y^{2}=z^{2}\)_._ _Show also similar properties with the roles of the variables \(y\) and \(z\) or \(x\) and \(z\) reversed._ **Remark**.: More generally, we believe that if for \(a,b,c\in\mathbb{N}\) at least one of the integers \(ac,bc,(a+b)c\) is a square, then for every finite coloring of the integers there exist distinct \(x,y\in\mathbb{N}\) with the same color and \(z\in\mathbb{N}\) such that \(ax^{2}+by^{2}=cz^{2}\). Theorem 1.1 verifies this if both \(ac\) and \(bc\) are squares. We also expect that if at least one of the integers \(bc\), \((c-a)b\) is a square, then for every finite coloring of the integers there exist distinct \(x,z\in\mathbb{N}\) with the same color and \(y\in\mathbb{N}\) such that \(ax^{2}+by^{2}=cz^{2}\). It may also be that stronger density regularity results hold, as in Theorem 1.2 and Theorem 1.6. The broader issue is to find conditions for the polynomials \(P,Q\in\mathbb{Z}[m,n]\) such that the following holds: If \(\Lambda\subset\mathbb{N}\) satisfies \(\bar{d}_{\Phi}(\Lambda)>0\) for some multiplicative Folner sequence \(\Phi\), then there exist \(m,n\in\mathbb{N}\) such that the integers \(P(m,n)\) and \(Q(m,n)\) are positive, distinct, and \[\bar{d}_{\Phi}\big{(}(P(m,n))^{-1}\Lambda\cap(Q(m,n))^{-1}\Lambda\big{)}>0.\] Equivalently, using the terminology from [16], the problem is to determine for which polynomials \(P,Q\in\mathbb{Z}[m,n]\) we have that \(\{P(m,n)/Q(m,n)\colon m,n\in\mathbb{N}\}\) is a set of measurable multiplicative recurrence.
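All three parametric families quoted in this section are easy to check symbolically; the following sympy sketch (ours, for illustration) verifies the family from Section 1.3 for square \(a,b,c\) together with the two families above.

```python
from sympy import symbols, expand

k, m, n, a0, b0, c0 = symbols('k m n a0 b0 c0')
a, b, c = a0**2, b0**2, c0**2

# Section 1.3: a*x^2 + b*y^2 = c*z^2 for square a, b, c.
x = k * a0*b*c * (m**2 - n**2)        # l1 = a0*b*c
y = k * 2*a*b0*c * m*n                # l2 = 2*a*b0*c
z = k * a*b*c0 * (m**2 + n**2)        # l3 = a*b*c0
assert expand(a*x**2 + b*y**2 - c*z**2) == 0

# x^2 + y^2 = 2*z^2 (the Gyarmati-Ruzsa equation).
x, y, z = k*(m**2 - n**2 + 2*m*n), k*(m**2 - n**2 - 2*m*n), k*(m**2 + n**2)
assert expand(x**2 + y**2 - 2*z**2) == 0

# x^2 + 2*y^2 = z^2.
x, y, z = k*(m**2 - 2*n**2), k*2*m*n, k*(m**2 + 2*n**2)
assert expand(x**2 + 2*y**2 - z**2) == 0
print("all three parametrizations check out")
```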
### Notation We let \(\mathbb{N}:=\{1,2,\ldots\}\), \(\mathbb{Z}_{+}:=\{0,1,2,\ldots\}\), \(\mathbb{R}_{+}:=[0,+\infty)\), \(\mathbb{S}^{1}\) be the unit circle, and \(\mathbb{U}\) be the closed complex unit disk. With \(\mathbb{P}\) we denote the set of primes and throughout we use the letter \(p\) to denote primes. For \(t\in\mathbb{R}\), we let \(e(t):=e^{2\pi it}\). For \(z\in\mathbb{C}\), with \(\Re(z)\), \(\operatorname{Im}(z)\) we denote the real and imaginary parts of \(z\) respectively. For \(N\in\mathbb{N}\), we let \([N]:=\{1,\ldots,N\}\). We often denote sequences \(a\colon\mathbb{N}\to\mathbb{U}\) by \((a(n))\), instead of \((a(n))_{n\in\mathbb{N}}\). If \(A\) is a finite non-empty subset of the integers and \(a\colon A\to\mathbb{C}\), we let \[\mathbb{E}_{n\in A}\,a(n):=\frac{1}{|A|}\sum_{n\in A}\,a(n).\] We write \(a(n)\ll b(n)\) if for some \(C>0\) we have \(a(n)\leq C\,b(n)\) for every \(n\in\mathbb{N}\). Throughout this article, the letter \(f\) is typically used for multiplicative functions and the letter \(\chi\) for Dirichlet characters. ### Acknowledgement Part of this research was conducted while the authors were at the Institute for Advanced Study in Princeton during parts of the 2022/23 academic year. We are grateful to the Institute for its hospitality and support and acknowledge the NSF grant DMS-1926686. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising from this submission. ## 2. Roadmap to the proofs This section outlines how we prove our main results in their parametric reformulation, which is given in Theorems 1.6 and 1.7. For various facts and notions concerning multiplicative functions we refer the reader to Section 3.3. ### Reduction of Theorem 1.6 to a positivity property for multiplicative functions We first use a version of the Furstenberg correspondence principle (see [4]) to reformulate the results in an ergodic language. **Theorem 2.1**.: _Let \(\ell,\ell^{\prime}\in\mathbb{N}\), let \(T=(T_{n})_{n\in\mathbb{N}}\) be a measure preserving action of \((\mathbb{N},\times)\) on a probability space \((X,\mu)\),3 and let \(A\subset X\) be measurable with \(\mu(A)>0\). Then there exist_ Footnote 3: Meaning, \(T_{n}\colon X\to X\), \(n\in\mathbb{N}\), are invertible measure preserving transformations such that \(T_{1}:=\operatorname{id}\) and \(T_{mn}=T_{m}\circ T_{n}\) for every \(m,n\in\mathbb{N}\) * \(m,n\in\mathbb{N}\) _with_ \(m>n\) _such that_ \(\ell\,(m^{2}-n^{2})\) _and_ \(\ell^{\prime}\,mn\) _are distinct and_ \[\mu(T_{\ell(m^{2}-n^{2})}^{-1}A\cap T_{\ell^{\prime}mn}^{-1}A)>0. \tag{2.1}\] * \(m,n\in\mathbb{N}\) _such that_ \(\ell\,(m^{2}+n^{2})\) _and_ \(\ell^{\prime}\,mn\) _are distinct and_ \[\mu(T_{\ell(m^{2}+n^{2})}^{-1}A\cap T_{\ell^{\prime}mn}^{-1}A)>0. \tag{2.2}\] _In fact, the set of \(m,n\in\mathbb{N}\) for which (2.1) and (2.2) hold has positive lower density._ **Remarks**.: \(\bullet\) The reduction to the previous multiple recurrence statement is merely a convenience. It facilitates a further reduction to a positivity property for completely multiplicative functions that we describe in Theorem 2.2. Alternatively, one could carry out this last reduction directly, as in [21, Section 10.2].
\(\bullet\) Using the terminology from [16], Theorem 2.1 can be rephrased as saying that for every \(\ell,\ell^{\prime}\in\mathbb{N}\) both subsets of \(\mathbb{Q}^{>0}\) \[\left\{\ell(m^{2}-n^{2})/(\ell^{\prime}mn)\colon m,n\in\mathbb{N},\ m>n\right\}\text{ and }\left\{\ell(m^{2}+n^{2})/(\ell^{\prime}mn)\colon m,n\in\mathbb{N}\right\}\] are sets of measurable multiplicative recurrence. A function \(f\colon\mathbb{N}\to\mathbb{U}\), where \(\mathbb{U}\) is the complex unit disk, is called _multiplicative_ if \[f(mn)=f(m)\cdot f(n)\quad\text{ whenever }(m,n)=1.\] It is called _completely multiplicative_ if the previous equation holds for all \(m,n\in\mathbb{N}\). Let \[\mathcal{M}:=\{f\colon\mathbb{N}\to\mathbb{S}^{1}\text{ is a completely multiplicative function}\}.\] Throughout, we assume that \(\mathcal{M}\) is equipped with the topology of pointwise convergence. It easily follows that \(\mathcal{M}\) is a metrizable compact space with this topology. We can identify \(\mathcal{M}\) with the Pontryagin dual of the (discrete) group of positive rational numbers under multiplication. Note that the map \(r/s\mapsto\mu(T_{r}^{-1}A\cap T_{s}^{-1}A)\), \(r,s\in\mathbb{N}\), from \((\mathbb{Q}_{+},\times)\) to \([0,1]\) is well defined and positive definite. Using a theorem of Bochner-Herglotz, we get that there exists a finite Borel measure \(\sigma\) on \(\mathcal{M}\) such that \(\sigma(\{1\})>0\) (in fact, \(\sigma(\{1\})\geq\delta^{2}\), where \(\delta=\mu(A)\)) and for every \(r,s\in\mathbb{N}\), \[\int_{\mathcal{M}}f(r)\cdot\overline{f(s)}\,d\sigma(f)=\mu(T_{r}^{-1}A\cap T_{s}^{-1}A).\] In particular, we have \[\mu(T_{\ell(m^{2}-n^{2})}^{-1}A\cap T_{\ell^{\prime}mn}^{-1}A)=\int_{\mathcal{M}}f(\ell(m^{2}-n^{2}))\cdot\overline{f(\ell^{\prime}mn)}\,d\sigma(f)\] for every \(m,n\in\mathbb{N}\) with \(m>n\), and \[\mu(T_{\ell(m^{2}+n^{2})}^{-1}A\cap T_{\ell^{\prime}mn}^{-1}A)=\int_{\mathcal{M}}f(\ell(m^{2}+n^{2}))\cdot\overline{f(\ell^{\prime}mn)}\,d\sigma(f)\] for every \(m,n\in\mathbb{N}\). Therefore, Theorem 2.1 follows from the following result. **Theorem 2.2**.: _Let \(\sigma\) be a positive bounded measure on \(\mathcal{M}\) such that \(\sigma(\{1\})>0\) and_ \[\int_{\mathcal{M}}f(r)\cdot\overline{f(s)}\,d\sigma(f)\geq 0\quad\text{for every }r,s\in\mathbb{N}. \tag{2.3}\] _Then for every \(\ell,\ell^{\prime}\in\mathbb{N}\)_ * _we have_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\int_{\mathcal{M}}f(\ell(m^{2}-n^{2}))\cdot\overline{f(\ell^{\prime}mn)}\,d\sigma(f)>0. \tag{2.4}\] * _we have_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}}f(\ell(m^{2}+n^{2}))\cdot\overline{f(\ell^{\prime}mn)}\,d\sigma(f)>0. \tag{2.5}\] **Remark**.: The limit in (2.4) exists by [20, Theorem 1.4] and the bounded convergence theorem;4 however, the limit in (2.5) may not always exist. Footnote 4: The statement of [20, Theorem 1.4] does not have the restriction \(m>n\) in the averaging, but the argument used there also covers this case without essential changes. The reduction up to this point is similar to that in [21]. The methods in [21] were only able to address a variant of (i) in which the expressions under the integral were products of linear factors and were "pairing up" when \(n=0\) and becoming non-negative.5 This positivity property is not shared by the expressions in (2.4) (and (2.5)), which is the main reason why it was not possible to deal with Pythagorean pairs in [21].
To overcome this obstacle, we do not use a decomposition result that covers all elements of \(\mathcal{M}\) simultaneously (as was the case in [21]), but rather work separately with aperiodic and pretentious multiplicative functions. In particular, coupled with some measurability properties, this allows us to exploit the uniform concentration estimates of Propositions 2.5 and 2.11, which are not shared by all elements of \(\mathcal{M}\). We shall give a brief outline of our approach in the next subsections. ### Proof plan for part (i) of Theorem 2.2 We prove Theorem 2.2 by taking an average over the grid \[\{(Qm+1,Qn):m,n\in\mathbb{N}\},\] where \(Q\in\mathbb{N}\) is chosen depending only on \(\sigma\). In view of (2.3) it suffices to prove positivity in (2.4) when the average is taken along this subset of pairs. With \(\ell,\ell^{\prime}\in\mathbb{N}\) fixed, we introduce the following notation: for \(\delta>0\), \(f\in\mathcal{M}\), and \(Q,m,n\in\mathbb{N}\), let \[A_{\delta}(f,Q;m,n):=w_{\delta}(m,n)\cdot f\big{(}\ell\left((Qm+1)^{2}-(Qn)^{2}\right)\big{)}\cdot\overline{f\big{(}\ell^{\prime}\left(Qm+1\right)Qn\big{)}}, \tag{2.6}\] where \(w_{\delta}:\mathbb{N}^{2}\to[0,1]\) is the weight defined in (3.2) of Lemma 3.3 for reasons that will become clear in a moment (at a first reading the reader could just take \(w_{\delta}=1\)). Then part (i) of Theorem 2.2 follows immediately from the next result, the fact that \(0\leq w_{\delta}(m,n)\leq 1\), and the positivity property (2.3). **Theorem 2.3**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}\) such that \(\sigma(\{1\})>0\). Then there exist \(\delta_{0}>0\) and \(Q_{0}\in\mathbb{N}\) such that_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}}A_{\delta_{0}}(f,Q_{0};m,n)\,d\sigma(f)>0. \tag{2.7}\] **Remark**.: The values of \(\delta_{0}>0\) and \(Q_{0}\in\mathbb{N}\) depend on \(\sigma\) but not on \(\ell,\ell^{\prime}\). To analyse the limit in (2.7) we use the theory of completely multiplicative functions. When \(f\) is aperiodic, the mean values of \(A_{\delta}(f,Q;m,n)\) vanish for every \(Q\). This is a consequence of the following result, which in turn follows from results in [21] (see also [38] for related work); we explain how later on. **Proposition 2.4**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be an aperiodic completely multiplicative function. Then for every \(\delta>0\) and \(Q\in\mathbb{N}\) we have_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,A_{\delta}(f,Q;m,n)=0. \tag{2.8}\] _Furthermore, for every completely multiplicative function \(f\colon\mathbb{N}\to\mathbb{U}\) the previous limit exists._ Let \[\mathcal{M}_{p}=\{f\colon\mathbb{N}\to\mathbb{S}^{1}\colon f\,\text{ is a pretentious completely multiplicative function}\}; \tag{2.9}\] we show in Lemma 3.6 that \(\mathcal{M}_{p}\) is a Borel subset of \(\mathcal{M}\). It follows from Proposition 2.4 and the bounded convergence theorem that in order to establish (2.7) it suffices to show that there exist \(\delta_{0}>0\) and \(Q_{0}\in\mathbb{N}\) such that \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{p}}A_{\delta_{0}}(f,Q_{0};m,n)\,d\sigma(f)>0. \tag{2.10}\] If \(f\) is pretentious, then it "pretends" to be a twisted Dirichlet character, and thus exhibits some periodicity. We exploit this periodicity by choosing a highly divisible \(Q\) for which the averages of \(A_{\delta}(f,Q;m,n)\) take a much simpler form.
More precisely, we make use of the following concentration estimate, which is an immediate consequence of [35, Lemma 2.5]. **Proposition 2.5**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be a multiplicative function such that \(f\sim\chi\cdot n^{it}\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\) with period \(q\). Let also \(\Phi_{K}\) be as in (2.12) and suppose that \(K\) is large enough so that \(q\) divides all elements of \(\Phi_{K}\). Then_ \[\limsup_{N\to\infty}\max_{Q\in\Phi_{K}}\mathbb{E}_{n\in[N]}|f(Qn+1)-(Qn)^{it}\cdot\exp\big{(}F_{N}(f,K)\big{)}|\ll\mathbb{D}(f,\chi\cdot n^{it};K,\infty)+K^{-1/2},\] _where the implicit constant is absolute and_ \[F_{N}(f,K):=\sum_{K<p\leq N}\frac{1}{p}\,\big{(}f(p)\cdot\overline{\chi(p)}\cdot p^{-it}-1\big{)}. \tag{2.11}\] **Remarks**.: \(\bullet\) It is important for our argument that the implicit constant is independent of \(K\) and the quantity \(F_{N}(f,K)\) does not depend on \(Q\) as long as \(Q\in\Phi_{K}\) and \(q\mid Q\). \(\bullet\) It follows from [35, Lemma 2.5] that \(\max_{Q\in\Phi_{K}}\) can be replaced by \(\sup_{Q\in\Psi_{K}}\) where \(\Psi_{K}:=\{q\cdot\prod_{p\leq K}p^{a_{p}}\colon a_{p}\in\mathbb{N}\}\). \(\bullet\) If \(f\sim\chi\cdot n^{it}\), then the sequence \(A(N):=\sum_{1<p\leq N}\frac{1}{p}\,\big{|}1-f(p)\cdot\overline{\chi(p)}\cdot p^{-it}\big{|}\), \(N\in\mathbb{N}\), is slowly varying, in the sense that for a fixed pretentious \(f\) we have for every \(c\in(0,1)\) that \(\lim_{N\to\infty}\sup_{n\in[N^{c},N]}|A(n)-A(N)|=0\).6 Keeping this in mind, if we use partial summation on the interval \([N^{c},N]\) and then let \(c\to 0\), we deduce that the main estimate of Proposition 2.5 still holds if we replace \(\mathbb{E}_{n\in[N]}\) with \(\mathbb{E}_{n\in[N]}^{\log}\). Footnote 6: If \(a_{p}:=1-f(p)\cdot\overline{\chi(p)}\cdot p^{-it}\), \(p\in\mathbb{P}\), we note that \(\sup_{n\in[N^{c},N]}|A(n)-A(N)|\leq(B_{N}\cdot C_{N})^{1/2}\), where \(B_{N}:=\sum_{p\in[N^{c},N]}\frac{|a_{p}|^{2}}{p}\), \(C_{N}:=\sum_{p\in[N^{c},N]}\frac{1}{p}\), \(N\in\mathbb{N}\). The sequence \(C_{N}\) is bounded and \(\lim_{N\to\infty}B_{N}=0\) because \(\sum_{p\in\mathbb{P}}\frac{|a_{p}|^{2}}{p}<+\infty\). In order to establish (2.10) we divide the integral into two parts. The first is supported on multiplicative functions other than the Archimedean characters \((n^{it})_{n\in\mathbb{N}}\), \(t\in\mathbb{R}\), in which case we show using Proposition 2.5 that for a highly divisible \(Q_{0}\) the contribution is essentially non-negative. The second is supported on Archimedean characters. We show that this part is positive using our assumption \(\sigma(\{1\})>0\) and by taking \(\delta_{0}\) small enough so that the weight \(w_{\delta_{0}}\) neutralizes the effect of the Archimedean characters that are different from \(1\). To carry out the first part, the key idea is to average over "multiplicatively large" values of \(Q\). More precisely, for each \(K\in\mathbb{N}\) let \[\Phi_{K}:=\Big{\{}\prod_{p\leq K}p^{a_{p}}\colon K<a_{p}\leq 2K\Big{\}}. \tag{2.12}\] The sequence \((\Phi_{K})\) is a multiplicative Folner sequence with the property that, for every \(q\in\mathbb{N}\), as soon as \(K\) is large enough, every \(Q\in\Phi_{K}\) is divisible by \(q\). It also has the property that for every \(Q\in\Phi_{K}\) and every prime \(p\in\mathbb{P}\), we have \(p|Q\) if and only if \(p\leq K\). Let also \[\mathcal{A}:=\{(n^{it})_{n\in\mathbb{N}}\colon t\in\mathbb{R}\}
\tag{2.13}\] Note that \(\mathcal{A}\) is a Borel subset of \(\mathcal{M}\) since it is a countable union of compact sets (we caution the reader that \(\mathcal{A}\) is not closed with the topology of pointwise convergence, in fact, it is dense in \(\mathcal{M}\)). The most important step in establishing property (2.10) is the following fact: **Lemma 2.6**.: _Let \(f\in\mathcal{M}_{p}\setminus\mathcal{A}\), \(\delta>0\), \(\ell,\ell^{\prime}\in\mathbb{N}\), and \(\Phi_{K}\) be as in (2.12). Then_ \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\lim_{N\to\infty}\mathbb{E}_{m,n\in[ N]}A_{\delta}(f,Q;m,n)=0.\] _(Note that the inner limit exists by Proposition 2.4.)_ Roughly, to prove Lemma 2.6 we use the concentration estimate of Proposition 2.5 to deduce that for \(Q\in\Phi_{K}\) the average \(\mathbb{E}_{m,n\in[N]}A_{\delta}(f,Q;m,n)\) is asymptotically equal to \(C_{\ell,\ell^{\prime}}(K)\cdot\overline{f(Q)}\cdot Q^{it}\) for some \(C_{\ell,\ell^{\prime}}(K)\in\mathbb{U}\) and \(t\in\mathbb{R}\). Since \(f\not\in\mathcal{A}\), by Lemma 3.2 the average of the last expression, taken over \(Q\in\Phi_{K}\), converges to \(0\) as \(K\to\infty\). Using the previous result, the fact that the limit \(\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,A_{\delta}(f,Q;m,n)\) exists (by Proposition 2.4), and applying the bounded convergence theorem twice, we deduce: **Corollary 2.7**.: _Let \((\Phi_{K})\) and \(\mathcal{A}\) be defined by (2.12) and (2.13) respectively. Let also \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\). Then for every \(\delta>0\) we have_ \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\lim_{N\to\infty}\mathbb{E}_{m,n\in[ N]}\,\int_{\mathcal{M}_{p}\setminus\mathcal{A}}A_{\delta}(f,Q;m,n)\,d\sigma(f)=0.\] We are left to study the part of the integral supported on \(\mathcal{A}\). For such functions the limits \(\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}A_{\delta}(f,Q;m,n)\) do not depend on \(Q\), and so the previous argument will not help. It is the presence of the weight \(w_{\delta}\) that will allow us to prove the following: **Lemma 2.8**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}\) such that \(\sigma(\{1\})>0\) and \(\mathcal{A}\) be as in (2.13). Then there exist \(\delta_{0},\rho_{0}>0\), depending only on \(\sigma\), such that_ \[\liminf_{N\to\infty}\,\inf_{Q\in\mathbb{N}}\Re\Big{(}\mathbb{E}_{m,n\in[N]} \int_{\mathcal{A}}A_{\delta_{0}}(f,Q;m,n)\,d\sigma(f)\Big{)}\geq\rho_{0}. \tag{2.14}\] **Remark**.: The weight \(w_{\delta}(m,n)\) is introduced to force positivity in this case, since for some choices of \(\ell,\ell^{\prime}\) and measures \(\sigma\), the unweighted expressions have negative real parts. However, rather miraculously, if \(\ell=1\) and \(\ell^{\prime}=2\) (which is the case to consider for Pythagorean pairs), we get positivity even in the unweighted case, and a somewhat simpler argument applies. We do not pursue this approach here though because it lacks generality. Finally, we will see how the previous results allow us to reach our goal, which is to prove Theorem 2.3, thus completing the proof of part (i) of Theorems 1.2 and 2.2. 
Proof of Theorem 2.3 assuming Proposition 2.4, Corollary 2.7, and Lemma 2.8.: By combining Corollary 2.7 and Lemma 2.8, we deduce that there exist \(\delta_{0},\rho_{0}>0\), depending only on \(\sigma\), such that \[\liminf_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{p}}A_{\delta_{0}}(f,Q;m,n)\,d\sigma(f)\geq\rho_{0}.\] (There is no need to take the real part of this expression since it is real.) From this we immediately deduce that (2.10) holds for some \(Q_{0}\in\mathbb{N}\). As we also explained before, this fact, together with Proposition 2.4, implies (2.7) via the bounded convergence theorem, completing the proof. To establish Theorem 2.3, it remains to prove Proposition 2.4, Lemma 2.6 (Corollary 2.7 is an immediate consequence), and Lemma 2.8. We do this in Section 4. ### Proof plan for part (ii) of Theorem 2.2 The general strategy is similar to that used to prove part (i) of Theorem 2.2, but there are two major differences. The first is the required concentration estimate, which is given in Proposition 2.11 below. Unlike Proposition 2.5, this result is new and of independent interest, and its proof occupies a considerable portion of the argument. The second difference is that the limit in (2.5) may not exist, which causes additional technical problems. We introduce the following notation: for \(\delta>0\), \(f\in\mathcal{M}\), and \(Q,m,n\in\mathbb{N}\), let \[B_{\delta}(f,Q;m,n):=\tilde{w}_{\delta}(m,n)\cdot f\big{(}\ell\big{(}(Qm+1)^{2}+(Qn)^{2}\big{)}\big{)}\cdot\overline{f\big{(}\ell^{\prime}\,(Qm+1)(Qn)\big{)}}, \tag{2.15}\] where \(\tilde{w}_{\delta}(m,n)\) is the weight defined in (3.3) of Lemma 3.3. Arguing as before, we get that part (ii) of Theorem 2.2 follows from the following positivity property. **Theorem 2.9**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}\) such that \(\sigma(\{1\})>0\) and (2.3) holds. Then there exists \(\delta_{0}>0\) such that_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}}B_{\delta_{0}}(f,1;m,n)\,d\sigma(f)>0. \tag{2.16}\] **Remark**.: Initially, we only establish a slight variant of this, in which \(1\) is replaced by \(Q_{N}\), where \(Q_{N}\) varies with \(N\) but takes values in a fixed finite set. We explain at the end of this subsection how this seemingly weaker positivity property implies (2.16). Again, to analyse the limit in (2.16), we use the theory of completely multiplicative functions. If \(f\) is aperiodic, we have the following result, which we will deduce from the results in [21]. **Proposition 2.10**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be an aperiodic multiplicative function. Then for every \(\delta>0\) and \(Q\in\mathbb{N}\) we have_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,B_{\delta}(f,Q;m,n)=0. \tag{2.17}\] **Remark**.: It follows that (2.17) also holds even if \(Q\) depends on \(N\), but its values are taken from a finite subset of \(\mathbb{N}\). If \(f\) is pretentious we will crucially use the following concentration estimate (which is a direct consequence of a more general result proved in Section 5) to analyse the average (2.16). **Proposition 2.11**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be a multiplicative function such that \(f\sim\chi\cdot n^{it}\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\) with period \(q\).
Let also \(\Phi_{K}\) be as in (2.12) and suppose that \(K\) is large enough so that, say, \(\mathbb{D}_{1}(f,\chi\cdot n^{it};K,\infty)\leq 1\) and \(q\) divides all elements of \(\Phi_{K}\). Then_ \[\limsup_{N\to\infty}\max_{Q\in\Phi_{K}}\mathbb{E}_{m,n\in[N]}\big{|}f\big{(}(Qm+1)^{2}+(Qn)^{2}\big{)}-Q^{2it}\cdot(m^{2}+n^{2})^{it}\cdot\exp\big{(}G_{N}(f,K)\big{)}\big{|}\ll\mathbb{D}_{1}(f,\chi\cdot n^{it};K,\infty)+K^{-1/2},\] _where the implicit constant is absolute and_ \[G_{N}(f,K):=2\sum_{\begin{subarray}{c}K<p\leq N,\\ p\equiv 1\ (\text{mod }4)\end{subarray}}\frac{1}{p}\,(f(p)\cdot\overline{\chi(p)}\cdot p^{-it}-1), \tag{2.18}\] \[\mathbb{D}_{1}(f,\chi\cdot n^{it};K,\infty)^{2}:=\sum_{\begin{subarray}{c}K<p,\\ p\equiv 1\ (\text{mod }4)\end{subarray}}\frac{1}{p}\,(1-\Re(f(p)\cdot\overline{\chi(p)}\cdot p^{-it})).\] **Remarks**.: \(\bullet\) It is important for our argument that the implicit constant does not depend on \(K\) and that \(\exp\big{(}G_{N}(f,K)\big{)}\) is the same for all \(Q\in\Phi_{K}\) that are divisible by \(q\). It is also important for our applications that we get some uniformity over the \(Q\in\Phi_{K}\). \(\bullet\) Using Proposition 5.1 below, we can also replace the \(\max_{Q\in\Phi_{K}}\) with \(\sup_{Q\in\Psi_{K}}\), where \(\Psi_{K}:=\{q\cdot\prod_{p\leq K}p^{a_{p}}\colon a_{p}\in\mathbb{N}\}\). \(\bullet\) With future applications in mind, we prove a somewhat more general and quantitatively more explicit variant, see Proposition 5.1 below. As in the proof of Theorem 2.3, in order to prove Theorem 2.9 we split the integral into two parts, one that is supported on Archimedean characters and the other on its complement. To handle the second part, we use the following result, which is proved using Proposition 2.11 and can be compared to Corollary 2.7. Again, taking multiplicative averages over the variable \(Q\) is a key maneuver, but the non-convergence of the averages \(\mathbb{E}_{m,n\in[N]}\,B_{\delta}(f,Q;m,n)\) causes considerable technical difficulties in our proofs. **Proposition 2.12**.: _Let \((\Phi_{K})\), \(\mathcal{A}\), \(B_{\delta}(f,Q;m,n)\) be defined by (2.12), (2.13), (2.15), respectively, and \(\delta>0\). Let also \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\). Then_ \[\lim_{K\to\infty}\limsup_{N\to\infty}\Big{|}\mathbb{E}_{Q\in\Phi_{K}}\,\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{p}\setminus\mathcal{A}}\,B_{\delta}(f,Q;m,n)\,d\sigma(f)\Big{|}=0.\] **Remark**.: Unlike the case of Corollary 2.7, we cannot pass the limit over \(N\) inside the average over \(Q\). This will cause some minor problems in our later analysis, which we will overcome by using the positivity property (2.3) of the measure \(\sigma\) (this is why this positivity property is used in the statement of Theorem 2.9 but not in Theorem 2.3). We are left to study the contribution of the set \(\mathcal{A}\) of Archimedean characters, in which case the presence of the weight \(\tilde{w}_{\delta}\) allows us to establish positivity by taking \(\delta\) small enough. **Lemma 2.13**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\) such that \(\sigma(\{1\})>0\) and \(\mathcal{A}\) be as in (2.13).
Then there exist \(\delta_{0},\rho_{0}>0\), depending only on \(\sigma\), such that_ \[\liminf_{N\to\infty}\inf_{Q\in\mathbb{N}}\Re\Big{(}\mathbb{E}_{m,n\in[N]} \int_{\mathcal{A}}B_{\delta_{0}}(f,Q;m,n)\,d\sigma(f)\Big{)}\geq\rho_{0}.\] We conclude this section by noting how the previous results allow us to reach our goal, which is to prove Theorem 2.9, thus completing the proof of part (ii) of Theorems 1.2 and 2.2. Proof of Theorem 2.9 assuming Proposition 2.10, Proposition 2.12, and Lemma 2.13.: We start by combining Proposition 2.12 and Lemma 2.13. We deduce that there exist \(\delta_{0},\rho_{0}>0\), depending only on \(\sigma\), such that \[\liminf_{K\to\infty}\liminf_{N\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\Re\Big{(} \mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{p}}B_{\delta_{0}}(f,Q;m,n)\,d\sigma(f )\Big{)}\geq 2\rho_{0}.\] In this case, it is a little bit tricky to deduce that (2.16) holds. We do it as follows. The last estimate implies that there exist \(K_{0}\in\mathbb{N}\) and \(Q_{N}\in\Phi_{K_{0}}\), \(N\in\mathbb{N}\), such that \[\liminf_{N\to\infty}\Re\Big{(}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{p}}B_{ \delta_{0}}(f,Q_{N};m,n)\,d\sigma(f)\Big{)}\geq\rho_{0}.\] Note that since \(Q_{N}\) belongs to a finite set, Proposition 2.10 implies that in the last expression we can replace \(\mathcal{M}_{p}\) with \(\mathcal{M}\). Hence, \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}}B_{\delta_{0}}(f, Q_{N};m,n)\,d\sigma(f)\geq\rho_{0}.\] (The real part is no longer needed since the last expression is known to be real by (2.3).) Since \(Q_{N}\) takes values in a finite set with upper bound say \(Q_{0}\), and since by the positivity property (2.3) we have \(\int B_{\delta_{0}}(f,Q_{N};m,n)\,d\sigma(f)\geq 0\) for every \(m,n\in\mathbb{N}\), we deduce that \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}}B_{\delta_{0}}(f,1;m,n)\,d\sigma(f)\geq\rho_{0}/(Q_{0})^{2}.\] This establishes (2.16) and ends the proof. In order to establish Theorem 2.9, it remains to prove Proposition 2.10, Proposition 2.12, and Lemma 2.13. We do this in Section 6, after having established Proposition 2.11 in Section 5, which is crucially used in the proof of Proposition 2.12. ### Proof plan of Theorem 1.7 For notational convenience, when we write \(\mathbb{E}_{k\in\mathbb{N}}^{*}\) in the following statements, we mean the limit \(\lim_{K\to\infty}\mathbb{E}_{k\in\Phi_{K}}\), where \((\Phi_{K})\) is an arbitrary multiplicative Folner sequence, chosen so that all the limits in the following statements exist. Since our setting will always involve a countable collection of limits, such a Folner sequence always exists and can be taken as a subsequence of any given multiplicative Folner sequence. Our argument is divided into two parts. In the first part we reduce the problem to a positivity property of pretentious multiplicative functions and in the second part we verify this positivity property. To carry out the first part, we note that to prove Theorem 1.7, it is only necessary to establish the subsequent averaged version. **Theorem 2.14**.: _Suppose that the completely multiplicative function \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) takes finitely many values and \(F:=\mathbf{1}_{\{1\}}\). Then_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*} F(f(k\,(m^{2}-n^{2})))\cdot F(f(k\,2mn))\cdot F(f(k\,(m^{2}+n^{2})))>0. \tag{2.19}\] **Remark**.: The "multiplicative average" \(\mathbb{E}_{k\in\mathbb{N}}^{*}\) is needed in our analysis to "clear out" some unwanted terms. 
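For a concrete feel for the quantity being averaged in (2.19), one can search by brute force after fixing a specific finite-valued completely multiplicative function; the sketch below (ours, for illustration) uses the Liouville function and looks for \(k,m,n\) whose three scaled legs all take the value 1.

```python
from sympy import factorint

def liouville(n):
    """(-1)^Omega(n): completely multiplicative with value -1 at every prime."""
    return (-1) ** sum(factorint(n).values())

hits = [(k, m, n)
        for k in range(1, 6)
        for m in range(2, 30)
        for n in range(1, m)
        if liouville(k * (m*m - n*n)) == liouville(2 * k * m * n)
                                      == liouville(k * (m*m + n*n)) == 1]
print(len(hits), hits[:3])  # first hit (1, 5, 1): legs 24, 10, 26, all even Omega
```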
We write \(f=gh\), where \(g\) has aperiodicity properties and \(h\) is pretentious (see Lemma 7.3 for the exact statement). Since \(f\) is finite-valued, it follows that \(g\) takes values in \(d\)-th roots of unity for some \(d\in\mathbb{N}\), hence we have \[F\circ g=\mathbf{1}_{g=1}=\mathbb{E}_{0\leq j<d}\,g^{j}.\] We use the previous facts to analyse the average in (2.19). The aperiodic part is covered by the next result, which is a direct consequence of [21, Theorem 9.7]. **Proposition 2.15**.: _Let \(f_{1},f_{2},f_{3}\colon\mathbb{N}\to\mathbb{U}\) be completely multiplicative functions and suppose that either \(f_{1}\) or \(f_{2}\) is aperiodic. Then_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,f_{1}(m^{2}-n^{2})\cdot f_{2}(mn)\cdot f_{3}(m^{2}+n^{2})=0.\] Combining the above with some technical maneuvering, we get the following reduction, which completes the first part needed to prove Theorem 2.14. **Proposition 2.16**.: _Suppose that for every finite-valued completely multiplicative function \(h\colon\mathbb{N}\to\mathbb{S}^{1}\), with \(h\sim 1\), and modified Dirichlet character \(\tilde{\chi}\colon\mathbb{N}\to\mathbb{S}^{1}\), we have_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\;A(k\,(m^{2}-n^{2}))\cdot A(k\,2mn)\cdot A(k\,(m^{2}+n^{2}))>0,\] _where_ \[A(n):=F(h(n))\cdot F(\tilde{\chi}(n)),\quad n\in\mathbb{N},\quad F:=\mathbf{1}_{\{1\}}.\] _Then for every finite-valued completely multiplicative function \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) we have_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,F(f(k\,(m^{2}-n^{2})))\cdot F(f(k\,2mn))\cdot F(f(k\,(m^{2}+n^{2})))>0.\] Therefore, it remains to verify the assumption of this result. For this purpose, we will make crucial use of the following concentration estimates, which easily follow from Propositions 2.5 and 2.11, as we will see later. **Corollary 2.17**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be a finite-valued multiplicative function such that \(f\sim\chi\) for some Dirichlet character \(\chi\) with period \(q\). Then for every \(\varepsilon>0\) there exists \(Q_{0}=Q_{0}(f,\varepsilon)\in\mathbb{N}\) such that the following holds:_ * _For all_ \(Q\in\mathbb{N}\) _such that_ \(Q_{0}\mid Q\) _we have_ \[\limsup_{N\to\infty}\mathbb{E}_{n\in[N]}\big{|}f(Qn+1)-1\big{|}\ll\varepsilon,\] _where the implicit constant is absolute._ * _For all_ \(Q\in\mathbb{N}\) _such that_ \(Q_{0}\mid Q\) _we have_ \[\limsup_{N\to\infty}\mathbb{E}_{m,n\in[N]}\big{|}f\big{(}(Qm+1)^{2}+(Qn)^{2}\big{)}-1\big{|}\ll\varepsilon,\] _where the implicit constant is absolute._ Finally, using the previous concentration estimates and the key maneuver of taking multiplicative averages over \(Q\in\mathbb{N}\), which was also a crucial element in the proof of Theorem 2.2, we verify the assumptions of Proposition 2.16. **Proposition 2.18**.: _Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a finite-valued pretentious multiplicative function and \(\tilde{\chi}\colon\mathbb{N}\to\mathbb{S}^{1}\) be a modified Dirichlet character. Then_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\;A(k\,(m^{2}-n^{2}))\cdot A(k\,2mn)\cdot A(k\,(m^{2}+n^{2}))>0,\] _where_ \[A(n):=F(f(n))\cdot F(\tilde{\chi}(n)),\quad n\in\mathbb{N},\quad F:=\mathbf{1}_{\{1\}}.\] Thus, to prove Theorem 2.14 it remains to verify Propositions 2.16 and 2.18.
We do this in Sections 7 and 8 (the other results mentioned in this subsection are needed in the proofs of these two results and will also be verified). ## 3. Background and preparation ### Some elementary facts We will use the following elementary property. **Lemma 3.1**.: _Let \(a\colon\mathbb{Z}\to\mathbb{U}\) be an even sequence and \(l_{1},l_{2}\in\mathbb{Z}\), not both of them \(0\). Suppose that for some \(\varepsilon>0\) and for some sequence \(L_{N}\colon\mathbb{N}\to\mathbb{U}\) we have_ \[\limsup_{N\to\infty}\mathbb{E}_{n\in[N]}|a(n)-L_{N}|\leq\varepsilon.\] _Then_ \[\limsup_{N\to\infty}\mathbb{E}_{m,n\in[N]}|a(l_{1}m+l_{2}n)-L_{lN}|\leq 2l\cdot\varepsilon\] _where \(l:=|l_{1}|+|l_{2}|\)._ Proof.: We have \[\mathbb{E}_{m,n\in[N]}|a(l_{1}m+l_{2}n)-L_{lN}|\leq\frac{1}{N^{2}}\sum_{|k|\leq l N }w_{N}(k)\,|a(k)-L_{lN}|, \tag{3.1}\] where for \(k\in\mathbb{Z}\) we let \[w_{N}(k):=|m,n\in[N]\colon l_{1}m+l_{2}n=k|.\] For every \(k\in\mathbb{Z}\) and \(m\in[N]\) there exists at most one \(n\in[N]\) for which \(l_{1}m+l_{2}n=k\), hence \(|w_{N}(k)|\leq N\) for every \(k\in\mathbb{Z}\). Since \(a\) is even, we deduce that the right hand side in (3.1) is bounded by \[2\,l\cdot\mathbb{E}_{k\in[lN]}|a(k)-L_{lN}|.\] The asserted estimate now follows from this and our assumption. The next well-known property of multiplicative functions will also be used several times. **Lemma 3.2**.: _Let \((\Phi_{K})\) be a multiplicative Folner sequence. If \(f\colon\mathbb{N}\to\mathbb{U}\) is a completely multiplicative function and \(f\neq 1\), then_ \[\lim_{K\to\infty}\mathbb{E}_{n\in\Phi_{K}}\,f(n)=0.\] Proof.: Since \(f\neq 1\) there exists \(p\in\mathbb{P}\) such that \(f(p)\neq 1\). By the definition of \(\Phi_{K}\) we have \[\lim_{K\to\infty}\frac{|\Phi_{K}\cap(p\cdot\Phi_{K})|}{|\Phi_{K}|}=1.\] From this and the fact that \(f(pn)=f(p)\cdot f(n)\) we get \[\mathbb{E}_{n\in\Phi_{K}}\,f(n)=\mathbb{E}_{n\in p\Phi_{K}}\,f(n)+o_{K\to \infty}(1)=f(p)\cdot\mathbb{E}_{n\in\Phi_{K}}\,f(n)+o_{K\to\infty}(1)\] Since \(f(p)\neq 1\), we deduce that \(\mathbb{E}_{n\in\Phi_{K}}\,f(n)=o_{K\to\infty}(1)\) ### Some useful weights In the proof of Theorems 1.1 and 1.2 we will utilize weighted averages. The weights are employed to ensure that the averages \(\mathbb{E}_{m,n\in[N]}\,A_{\delta}(f,Q,m,n)\) and \(\mathbb{E}_{m,n\in[N]}\,B_{\delta}(f,Q,m,n)\), where \(A_{\delta},B_{\delta}\) are as in (2.6), (2.15) respectively, have a positive real part if \(f\) is an Archimedean character and \(\delta\) is sufficiently small. We will now define these weights. If \(\delta\in(0,1/2)\), we consider the circular arc with center \(1\) given by \[I_{\delta}:=\{e(\phi)\colon\phi\in(-\delta,\delta)\}.\] **Lemma 3.3**.: _For every \(\delta\in(0,1/2)\) let \(F_{\delta}\colon\mathbb{S}^{1}\to[0,1]\) be the trapezoid function that is equal to \(1\) on the arc \(I_{\delta/2}\) and \(0\) outside the arc \(I_{\delta}\). Let also_ \[w_{\delta}(m,n):=F_{\delta}\big{(}(\ell(m^{2}-n^{2}))^{i}\cdot(\ell^{\prime}mn )^{-i}\big{)}\cdot\mathbf{1}_{m>n},\quad m,n\in\mathbb{N}, \tag{3.2}\] _and_ \[\tilde{w}_{\delta}(m,n):=F_{\delta}\big{(}(\ell(m^{2}+n^{2}))^{i}\cdot(\ell^{ \prime}mn)^{-i}\big{)},\quad m,n\in\mathbb{N}. \tag{3.3}\] _Then_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,w_{\delta}(m,n)>0\ \ \text{and}\ \ \lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,\tilde{w}_{\delta}(m,n)>0.\] **Remark**.: We opted for a continuous function for \(F_{\delta}\) instead of an indicator function, to make it easier to prove Propositions 4.1 and 6.1 later on. 
Proof.: We first cover the weight in (3.2). Note that the limit we want to evaluate is equal to \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,F_{\delta}\big{(}(\ell((m/N)^{2}-(n/N )^{2}))^{i}\cdot(\ell^{\prime}(m/N)\cdot(n/N))^{-i}\big{)}\cdot\mathbf{1}_{m/ N>n/N}.\] Let \(\tilde{F}_{\delta}\colon[0,1]\times[0,1]\to[0,1]\) be given by \[\tilde{F}_{\delta}(x,y):=F_{\delta}\big{(}(\ell(x^{2}-y^{2}))^{i}\cdot(\ell^{ \prime}xy)^{-i}\big{)}\cdot\mathbf{1}_{x>y},\quad x,y\in[0,1].\] Then \(\tilde{F}_{\delta}\) is Riemann integrable on \([0,1]\times[0,1]\) as it is bounded and continuous except for a set of Lebesgue measure \(0\). Hence, the limit we aim to compute exists and is equal to the Riemann integral \[\int_{0}^{1}\int_{0}^{1}\tilde{F}_{\delta}(x,y)\,dx\,dy>0.\] The positivity follows as \(\tilde{F}_{\delta}\) is non-negative and does not vanish almost everywhere. To verify the non-vanishing property, note that if \(x,y\in[0,1]\) are such that \(x\sim ay\) where \(a:=\frac{\ell^{\prime}+\sqrt{(\ell^{\prime})^{2}+4\ell^{2}}}{2\ell}>1\), then \(x>y\) and \(\ell(x^{2}-y^{2})\sim\ell^{\prime}xy\), and as a consequence \((\ell(x^{2}-y^{2}))^{i}\cdot(\ell^{\prime}xy)^{-i}\sim 1\). Hence, \(\tilde{F}_{\delta}(x,y)\geq F_{\delta}(1)-1/2=1/2\) on a neighborhood of the line \(x\sim ay\) that lies within the unit square, proving that \(\tilde{F}_{\delta}\) does not vanish almost everywhere. This completes the proof for the weight (3.2). The argument for the second weight (3.3) is very similar, so we only summarize it. Let \(\tilde{F}_{\delta}\colon[0,1]\times[0,1]\to[0,1]\) be given by \[\tilde{F}_{\delta}(x,y):=F_{\delta}\big{(}(\ell(x^{2}+y^{2}))^{i}\cdot(\ell^{ \prime}xy)^{-i}\big{)}\cdot\mathbf{1}_{(0,1]\times(0,1]}(x,y).\] Then the limit we want to evaluate exists and is equal to the Riemann integral \[\int_{0}^{1}\int_{0}^{1}\tilde{F}_{\delta}(x,y)\,dx\,dy.\] The integral is positive because \(\tilde{F}_{\delta}\) is non-negative and does not vanish almost everywhere. To verify the non-vanishing property, we argue as follows. Pick \(k\in\mathbb{Z}_{+}\) such that \(b:=\ell^{\prime}/\ell\cdot e^{2k\pi}>2\) and let \(a:=\frac{b+\sqrt{b^{2}-4}}{2}\). If \(x,y\in[0,1]\) are such that \(x\sim ay\), then \(x>y\) and \(\ell(x^{2}+y^{2})\sim e^{2k\pi}\,\ell^{\prime}xy\), and as a consequence \(x>y\) and \((\ell(x^{2}+y^{2}))^{i}\cdot(\ell^{\prime}xy)^{-i}\sim e^{2k\pi i}=1\) ### Multiplicative functions We record here some basic notions and facts about multiplicative functions that will be used throughout the article. #### 3.3.1. Dirichlet characters A _Dirichlet character_\(\chi\) is a periodic completely multiplicative function, and is often thought of as a multiplicative function on \(\mathbb{Z}_{m}\) for some \(m\in\mathbb{N}\). In this case, \(\chi\) takes the value \(0\) on integers that are not coprime to \(m\), and takes values on \(\phi(m)\)-roots of unity on all other integers, where \(\phi\) is the Euler totient function. If \(\chi\) is a Dirichlet character, we define the _modified Dirichlet character_\(\tilde{\chi}\colon\mathbb{N}\to\mathbb{S}^{1}\) to be the completely multiplicative function satisfying \[\tilde{\chi}(p):=\begin{cases}\chi(p),&\quad\text{if }\chi(p)\neq 0\\ 1,&\quad\text{if }\chi(p)=0.\end{cases}\] We note in passing that modified Dirichlet characters \(\tilde{\chi}\), which can be seen as finite colorings of \(\mathbb{N}\), are precisely the colorings that appear in Rado's theorem when showing that certain systems of linear equations are not partition regular. 
In particular, a system of linear equations is partition regular if and only if it has a monochromatic solution in any coloring realized by a modified Dirichlet character. #### 3.3.2. Distance between multiplicative functions Following Granville and Soundararajan [25, 27], in this and the next subsection, we define a distance and a related notion of pretentiousness between multiplicative functions. If \(f,g\colon\mathbb{N}\to\mathbb{U}\) are multiplicative functions and \(x,y\in\mathbb{R}_{+}\) with \(x<y\) we let \[\mathbb{D}(f,g;x,y)^{2}:=\sum_{x<p\leq y}\frac{1}{p}\,(1-\Re(f(p)\cdot\overline {g(p)})). \tag{3.4}\] We also let \[\mathbb{D}(f,g)^{2}:=\sum_{p\in\mathbb{P}}\frac{1}{p}\,(1-\Re(f(p)\cdot \overline{g(p)})). \tag{3.5}\] Note that if \(|f|=|g|=1\), then \[\mathbb{D}(f,g)^{2}=\frac{1}{2}\cdot\sum_{p\in\mathbb{P}}\frac{1}{p}\,|f(p)-g (p)|^{2}.\] It can be shown (see [26] or [27, Section 4.1]) that \(\mathbb{D}\) satisfies the triangle inequality \[\mathbb{D}(f,g)\leq\mathbb{D}(f,h)+\mathbb{D}(h,g)\] for all \(f,g,h\colon\mathbb{P}\to\mathbb{U}\). Also, for all \(f_{1},f_{2},g_{1},g_{2}\colon\mathbb{P}\to\mathbb{U}\), we have (see [25, Lemma 3.1]) \[\mathbb{D}(f_{1}f_{2},g_{1}g_{2})\leq\mathbb{D}(f_{1},g_{1})+\mathbb{D}(f_{2}, g_{2}). \tag{3.6}\] #### 3.3.3. Pretentious multiplicative functions If \(f,g\colon\mathbb{N}\to\mathbb{U}\) are multiplicative functions, we say that \(f\)_pretends to be \(g\)_, and write \(f\sim g\), if \(\mathbb{D}(f,g)<+\infty\). It follows from (3.6) that if \(f_{1}\sim g_{1}\) and \(f_{2}\sim g_{2}\), then \(f_{1}f_{2}\sim g_{1}g_{2}\). We say that \(f\) is _pretentious,_ if \(f\sim n^{it}\cdot\chi\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\), in which case \[\sum_{p\in\mathbb{P}}\frac{1}{p}\,(1-\Re(f(p)\cdot\overline{\chi(p)}\cdot p^{ -it}))<+\infty.\] The value of \(t\) is uniquely determined; this follows from (3.6) and the fact that \(n^{it}\not\sim\chi\) for every non-zero \(t\in\mathbb{R}\) and Dirichlet character \(\chi\) (see for example [27, Corollary 11.4]). Although real valued or finite-valued multiplicative functions always have a mean value, we caution the reader that this is not the case for general multiplicative functions with values on the unit circle. For example, we have \[\mathbb{E}_{n\in[N]}\,n^{it}=N^{it}/(1+it)+o_{N}(1),\] so we have non-convergent means when \(t\neq 0\). But even multiplicative functions satisfying \(f\sim 1\) can have non-convergent means. In particular, if \(f\sim 1\) is a completely multiplicative function, then it is known (see for example [17, Theorems 6.2]) that there exists \(c\neq 0\) such that \[\mathbb{E}_{n\in[N]}\,f(n)=c\cdot e(A(N))+o_{N}(1)\] where \(A(N):=\sum_{p\leq N}\frac{1}{p}\;\mathrm{Im}(f(p))\), \(N\in\mathbb{N}\). Hence, we have non-convergent means when, for example, \[\sum_{p\in\mathbb{P}}\frac{1}{p}\;\mathrm{Im}(f(p))=+\infty,\] which is the case if \(f(p):=e(1/\log\log p)\), \(p\in\mathbb{P}\). This oscillatory behavior of the mean values of some complex-valued multiplicative functions has to be taken into account and will cause problems in the proofs of some of our main results. Finally, we record an observation that will only be used in the proof of Theorem 1.5. **Lemma 3.4**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be a pretentious finite-valued multiplicative function. Then \(f\sim\chi\) for some Dirichlet character \(\chi\) and_ \[\sum_{p\in\mathbb{P}}\frac{1}{p}|1-f(p)\cdot\overline{\chi(p)}|<+\infty. 
\tag{3.7}\] **Remark**.: It can be shown using (3.7) that finite-valued pretentious multiplicative functions always have convergent means. Proof.: Since \(f\) is pretentious we have \(f\sim n^{it}\cdot\chi\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\). Then \(\mathbb{D}(n^{it},g)<+\infty\) where \(g:=f\cdot\overline{\chi}\) is a finite-valued multiplicative function. This can only happen if \(t=0\) (hence, \(f\sim\chi\)) and \[\sum_{p\in\mathbb{P}}\frac{1}{p}|\,\mathrm{Im}(f(p)\cdot\overline{\chi(p)})|<+\infty.\] If we combine this with \(\mathbb{D}(f,\chi)<+\infty\), we deduce that (3.7) holds. #### 3.3.4. Aperiodic multiplicative functions We say that a multiplicative function \(f\colon\mathbb{N}\to\mathbb{U}\) is _aperiodic_ if for every \(a,b\in\mathbb{N}\), \[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\,f(an+b)=0.\] The following well known result of Daboussi-Delange [13, Corollary 1] states that a multiplicative function is aperiodic if and only if it is non-pretentious. **Lemma 3.5**.: _Let \(f\in\mathcal{M}\). Then either \(f\sim\chi\cdot n^{it}\) for some Dirichlet character \(\chi\) and \(t\in\mathbb{R}\), or \(f\) is aperiodic._ In our arguments we typically distinguish two cases. One where a multiplicative function is aperiodic, then we show that the expressions we are interested in vanish. The complementary one where the multiplicative function is pretentious is treated using concentration estimates. ### Some Borel measurability results Recall that \(\mathcal{M}\) is equipped with the topology of pointwise convergence. In the proof of Theorem 2.2 we require certain Borel measurability properties of subsets of \(\mathcal{M}\) and related maps. The second property proved below will only be used in the proof of part (ii) of Theorem 2.2. Recall that if \(f\) is pretentious, then there exist a unique \(t=t_{f}\in\mathbb{R}\) and a Dirichlet character \(\chi\) such that \(f\sim n^{it}\cdot\chi\). **Lemma 3.6**.: 1. _The set_ \(\mathcal{M}_{p}\) _of pretentious completely multiplicative functions is Borel._ 2. _The map_ \(f\mapsto t_{f}\) _from_ \(\mathcal{M}_{p}\) _to_ \(\mathbb{R}\) _is Borel measurable._ Proof.: We prove (i). For \(a,b\in\mathbb{N}\) we let \(M_{a,b}\) be the set of \(f\in\mathcal{M}\) such that \[\limsup_{N\to\infty}|\mathbb{E}_{n\in[N]}\,f(an+b)|>0.\] Clearly \(M_{a,b}\) is a Borel subset of \(\mathcal{M}\). By Lemma 3.5 we have \(\mathcal{M}_{p}=\bigcup_{a,b\in\mathbb{N}}M_{a,b}\) and the result follows. We prove (ii). By [32, Theorem 14.12], it suffices to show that the graph \[\Gamma:=\{(f,t_{f})\in\mathcal{M}_{p}\times\mathbb{R}\colon f\in\mathcal{M}_{ p}\}\] is a Borel subset of \(\mathcal{M}_{p}\times\mathbb{R}\). If \(\chi_{k}\), \(k\in\mathbb{N}\), is an enumeration of all Dirichlet characters, and \[\Gamma_{k}:=\{(f,t_{f})\in\mathcal{M}_{p}\times\mathbb{R}\colon f\sim n^{it_{ f}}\cdot\chi_{k}\},\] then \[\Gamma=\bigcup_{k\in\mathbb{N}}\Gamma_{k}.\] Hence, it suffices to show that for every \(k\in\mathbb{N}\) the set \(\Gamma_{k}\) is Borel. Note that \[\Gamma_{k}:=\{(f,t)\in\mathcal{M}_{p}\times\mathbb{R}\colon\mathbb{D}(f,n^{it} \cdot\chi_{k})<\infty\}.\] Since for \(k\in\mathbb{N}\) the map \((f,t)\mapsto\mathbb{D}(f,n^{it}\cdot\chi_{k})\) is clearly Borel, the set \(\Gamma_{k}\) is Borel. This completes the proof. ## 4. Type I Pythagorean pairs As explained in Section 2.2, in order to complete the proof of Theorem 2.3 (and thus of part (i) of Theorem 2.2) it remains to prove Proposition 2.4, Lemma 2.6, and Lemma 2.8. 
We do this in this section. We start with Proposition 2.4, which we state here in an equivalent form. **Proposition 4.1**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be an aperiodic completely multiplicative function, let \(\ell,\ell^{\prime},Q\in\mathbb{N}\) and \(\delta>0\). Then, with \(w_{\delta}:\mathbb{N}^{2}\to[0,1]\) described by (3.2), we have_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,w_{\delta}(m,n)\cdot f(\ell(Qm+1)^{ 2}-(Qn)^{2})\cdot\overline{f(\ell^{\prime}(Qm+1)(Qn))}=0. \tag{4.1}\] _Furthermore, the limit in (4.1) exists for all multiplicative functions \(f\colon\mathbb{N}\to\mathbb{U}\)._ Proof.: Recall that \[w_{\delta}(m,n):=F_{\delta}\big{(}(\ell(m^{2}-n^{2}))^{i}\cdot(\ell^{\prime} mn)^{-i}\big{)}\cdot\mathbf{1}_{m>n},\quad m,n\in\mathbb{N},\] where \(F_{\delta}\colon\mathbb{S}^{1}\to[0,1]\) is the continuous function defined in Lemma 3.3. Since \(F_{\delta}\) can be approximated uniformly by trigonometric polynomials, using linearity we deduce that it suffices to verify (4.1) with \(w_{\delta}(m,n)\) replaced by \((m^{2}-n^{2})^{ki}\cdot(mn)^{-ki}\) for arbitrary \(k\in\mathbb{Z}\). Furthermore, since \(\lim_{n\to\infty}(\log(Qn+1)-\log(Qn))=0\), the limit in (4.1) remains unchanged if we replace \((m^{2}-n^{2})^{ki}\cdot(mn)^{-ki}\cdot\mathbf{1}_{m>n}\) with \(((Qm+1)^{2}-(Qn)^{2})^{ki}\cdot((Qm+1)(Qn))^{-ki}\cdot\mathbf{1}_{m>n}\) (after we omit \(+1\) the \(Q\)'s are going to cancel because of the conjugate). Hence, in order to establish (4.1) it suffices to show that for every \(k\in\mathbb{Z}\) we have \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,\mathbf{1}_{m>n}\cdot f_{k}((Qm+1)^{ 2}-(Qn)^{2})\cdot\overline{f_{k}((Qm+1)(Qn))}=0 \tag{4.2}\] where \(f_{k}(n):=f(n)\cdot n^{ki}\), \(n\in\mathbb{N}\). Since \(f\) is aperiodic, so is \(f_{k}\). Combining [21, Theorem 2.5] and [21, Lemma 9.6], we deduce that (4.2) holds, completing the proof. Finally, to prove convergence for all multiplicative functions, we argue as before, using the fact that convergence in the case \(w_{\delta}=1\) follows from [20, Theorem 1.4]. We note that although [20, Theorem 1.4] only covers the case without the weight \(\mathbf{1}_{m>n}\), exactly the same argument can be used to cover this weighted variant. Next we restate and prove Lemma 2.6. Recall that \(A_{\delta}\), \(\mathcal{M}_{p}\), and \(\mathcal{A}\) were defined in (2.6), (2.9), and (2.13) respectively. **Lemma 2.6**.: _Let \(f\in\mathcal{M}_{p}\setminus\mathcal{A}\), \(\delta>0\), \(\ell,\ell^{\prime}\in\mathbb{N}\), and let \(\Phi_{K}\) be the Folner set described in (2.12). Then_ \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\lim_{N\to\infty}\mathbb{E}_{m,n\in [N]}\,A_{\delta}(f,Q;m,n)=0. \tag{4.3}\] Proof.: Let \(\delta>0\) and \(f\in\mathcal{M}_{p}\setminus\mathcal{A}\). Then for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\) we have \[f(n)=n^{it}\cdot g(n),\quad\text{where $g\sim\chi$, $g\neq 1$}. \tag{4.4}\] For reasons that will become clear later, for \(\delta>0\) and \(Q\in\mathbb{N}\), let \[\tilde{L}_{\delta}(f,Q):=f(Q)\cdot Q^{-it}\cdot\lim_{N\to\infty}\mathbb{E}_{m, n\in[N]}\,A_{\delta}(f,Q;m,n). \tag{4.5}\] Note that the limit in the definition of \(\tilde{L}_{\delta}(f,Q)\) exists by the second part of Proposition 4.1. 
The idea to prove (4.3) is to show that \(\tilde{L}_{\delta}(f,Q)\) does not depend strongly on \(Q\) (it depends only on the prime factors of \(Q\)), so that, as a function of \(Q\) it is orthogonal to any non-trivial completely multiplicative function with respect to multiplicative averages. Since the left hand side of (4.3) is the correlation between \(\tilde{L}(f,Q)\) and the completely multiplicative function \(Q\mapsto f(Q)\cdot Q^{-it}\), which is non-trivial by (4.4), the conclusion will follow. Fix \(\varepsilon>0\) and take \(K_{0}=K_{0}(\varepsilon,f)\) so that \[\sum_{p\geq K_{0}}\frac{1}{p}(1-\Re(f(p)\cdot\overline{\chi(p)}\cdot p^{-it} ))+K_{0}^{-1/2}\leq\varepsilon.\] Using Proposition 2.5 (and noting that the function \(K\mapsto\mathbb{D}(f,\chi\cdot n^{it};K,N)\) is decreasing for any fixed \(f\) and \(N\)), it follows that for every \(N>K>K_{0}\) and \(Q\in\Phi_{K}\), \[\mathbb{E}_{n\in[N]}|f(Qn+1)-(Qn)^{it}\cdot\exp\big{(}F_{N}(f,K)\big{)}|\ll\varepsilon. \tag{4.6}\] Using this identity and Lemma 3.1 with \(a(n):=f(Qn+1)\cdot(Qn)^{-it}\) and \(l_{1}=1\), \(l_{2}=-1\), it follows that \[\limsup_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\Big{|}f(Q(m-n)+1)-\big{(}Q(m-n )\big{)}^{it}\exp\big{(}F_{2N}(f,K)\big{)}\Big{|}\ll\varepsilon. \tag{4.7}\] Using (4.6) and Lemma 3.1 with \(a(n):=f(Qn+1)\cdot(Qn)^{-it}\) and \(l_{1}=l_{2}=1\), it follows that \[\limsup_{N\to\infty}\mathbb{E}_{m,n\in[N]}\Big{|}f(Q(m+n)+1)-\big{(}Q(m+n) \big{)}^{it}\exp\big{(}F_{2N}(f,K)\big{)}\Big{|}\ll\varepsilon. \tag{4.8}\] Combining (4.6), (4.7), (4.8), and since all terms involved are \(1\)-bounded, we deduce that for every \(K>K_{0}\) and \(Q\in\Phi_{K}\), \[\limsup_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n} \Big{|}f\big{(}(Qm+1)^{2}-(Qn)^{2}\big{)}\cdot\overline{f(Qm+1)}\] \[-Q^{it}\cdot(m^{2}-n^{2})^{it}\cdot m^{-it}\cdot\exp\big{(}2F_{2N }(f,K)-F_{N}(f,K)\big{)}\Big{|}\ll\varepsilon.\] Multiplying by \(c_{\ell,\ell^{\prime}}\cdot w_{\delta}(m,n)\cdot\overline{f(Qn)}\cdot Q^{-it}\cdot f (Q)=c_{\ell,\ell^{\prime}}\cdot w_{\delta}(m,n)\cdot\overline{f(n)}\cdot Q^{-it}\), where \(c_{\ell,\ell^{\prime}}:=f(\ell)\cdot\overline{f(\ell^{\prime})}\), we deduce that \[\limsup_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\Big{|}A_{\delta}(f, Q;m,n)\cdot Q^{-it}\cdot f(Q)\\ -c_{\ell,\ell^{\prime}}\cdot w_{\delta}(m,n)\cdot(m^{2}-n^{2})^{ it}\cdot m^{-it}\cdot\overline{f(n)}\cdot\exp\big{(}2F_{2N}(f,K)-F_{N}(f,K) \big{)}\Big{|}\ll\varepsilon\] This implies that, for every \(K>K_{0}\) \[\limsup_{N\to\infty}\sup_{Q\in\Phi_{K}}\Big{|}\tilde{L}_{\delta}( f,Q)-\\ c_{\ell,\ell^{\prime}}\cdot\mathbb{E}_{m,n\in[N]}\,w_{\delta}(m,n) \cdot(m^{2}-n^{2})^{it}\cdot m^{-it}\cdot\overline{f(n)}\cdot\exp\big{(}2F_{2N }(f,K)-F_{N}(f,K)\big{)}\Big{|}\ll\varepsilon.\] Since the second term does not depend on \(Q\), we conclude that for every \(K>K_{0}\) and \(Q,Q^{\prime}\in\Phi_{K}\), \(\big{|}\tilde{L}_{\delta}(f,Q)-\tilde{L}_{\delta}(f,Q^{\prime})\big{|}\ll\varepsilon\). We can choose \(\varepsilon\) arbitrarily small by sending \(K\to\infty\), so it follows that \[\lim_{K\to\infty}\max_{Q,Q^{\prime}\in\Phi_{K}}\big{|}\tilde{L}_{\delta}(f,Q)- \tilde{L}_{\delta}(f,Q^{\prime})\big{|}=0.\] For \(K\in\mathbb{N}\), let \(Q_{K}\) be any element of \(\Phi_{K}\). 
From the last identity and (4.5) it follows that \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\lim_{N\to\infty}\mathbb{E}_{m,n\in[ N]}\,A_{\delta}(f,Q;m,n)=\lim_{K\to\infty}\tilde{L}_{\delta}(f,Q_{K})\cdot \mathbb{E}_{Q\in\Phi_{K}}\overline{f(Q)}\cdot Q^{it}.\] By (4.4) we have that \(Q\mapsto f(Q)\cdot Q^{-it}\) is a non-trivial multiplicative function, hence the last limit is zero by Lemma 3.2. This establishes (4.3) and completes the proof. Lastly, we restate and prove Lemma 2.8. **Lemma 2.8**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\) such that \(\sigma(\{1\})>0\) and let \(\mathcal{A}\) be as in (2.13). Then there exist \(\delta_{0},\rho_{0}>0\), depending only on \(\sigma\), such that_ \[\liminf_{N\to\infty}\inf_{Q\in\mathbb{N}}\Re\Big{(}\mathbb{E}_{m,n\in[N]}\int_ {\mathcal{A}}A_{\delta_{0}}(f,Q;m,n)\,d\sigma(f)\Big{)}\geq\rho_{0}. \tag{4.9}\] Proof.: Let \(a:=\sigma(\{1\})>0\) and for \(\delta>0\) let \[\mu_{\delta}:=\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,w_{\delta}(m,n).\] Note that by Lemma 3.3 we have \(\mu_{\delta}>0\). For \(T\in\mathbb{R}_{+}\) we consider the sets \[\mathcal{A}_{T}:=\{(n^{it})_{n\in\mathbb{N}}\colon t\in[-T,T]\}.\] These sets are closed and as a consequence Borel. Since \(\mathcal{A}_{N}\) increases to \(\mathcal{A}\) as \(N\to\infty\), and the Borel measure \(\sigma\) is finite, there exists \(T_{0}=T_{0}(\sigma)>0\) such that \[\sigma(\mathcal{A}\setminus\mathcal{A}_{T_{0}})\leq\frac{a}{2}. \tag{4.10}\] Note also that since \(\lim_{n\to\infty}\sup_{Q\in\mathbb{N}}|\log(Qn+1)-\log(Qn))|=0\), we have \[\lim_{N\to\infty}\sup_{f\in\mathcal{A}_{T_{0}},Q\in\mathbb{N}} \mathbb{E}_{m,n\in[N],m>n}\big{|}f((Qm+1)^{2}-(Qn)^{2}))\cdot\overline{f(\ell^{ \prime}(Qm+1)(Qn))}-\\ f(\ell(m^{2}-n^{2}))\cdot\overline{f(\ell^{\prime}mn)}\big{|}=0,\] and by the definition of \(w_{\delta}\) given in Lemma 3.3, we have \[\lim_{\delta\to 0^{+}}\limsup_{N\to\infty}\sup_{f\in\mathcal{A}_{T_{0}}}\big{|} \mathbb{E}_{m,n\in[N]}\,w_{\delta}(m,n)\cdot f(\ell(m^{2}-n^{2}))\cdot\overline {f(\ell^{\prime}mn)}-\mathbb{E}_{m,n\in[N]}\,w_{\delta}(m,n)\big{|}=0.\] We deduce from the last two identities that if \(\delta_{0}\) is small enough (depending only on \(T_{0}\) and hence only on \(\sigma\)), then for every \(Q\in\mathbb{N}\) we have \[\lim_{N\to\infty}\inf_{Q\in\mathbb{N}}\Re\Big{(}\mathbb{E}_{m,n\in[N]}\int_{ \mathcal{A}_{T_{0}}}A_{\delta_{0}}(f,Q;m,n)\,d\sigma(f)\Big{)}\geq\sigma( \mathcal{A}_{T_{0}})\cdot\mu_{\delta_{0}}\geq a\cdot\mu_{\delta_{0}},\] where we used that \(1\in\mathcal{M}_{T_{0}}\), hence \(\sigma(\mathcal{A}_{T_{0}})\geq\sigma(\{1\})=a\). On the other hand, using (4.10) and the triangle inequality, we get \[\lim_{N\to\infty}\sup_{Q\in\mathbb{N}}\Big{|}\mathbb{E}_{m,n\in[N]}\int_{ \mathcal{A}\setminus\mathcal{A}_{T_{0}}}A_{\delta_{0}}(f,Q;m,n)\,d\sigma(f) \Big{|}\leq\frac{a\cdot\mu_{\delta_{0}}}{2}.\] Combining the last two estimates we deduce that (4.9) holds with \(\rho_{0}:=\frac{a\cdot\mu_{\delta_{0}}}{2}\). ## 5. Nonlinear concentration estimates Our goal is to prove the concentration estimate of Proposition 2.11, which is a crucial ingredient in the proof of part (ii) of Theorem 1.2 and in the proof of Theorem 1.5. In fact, we will prove a more general and quantitatively more explicit statement with further applications in mind. 
Let \(f,g\colon\mathbb{N}\to\mathbb{U}\) be multiplicative functions and let \(\chi\) be a Dirichlet character and \(t\in\mathbb{R}.\) For every \(K_{0}\in\mathbb{N}\) we let \[G_{N}(f,K_{0}):=2\sum_{\begin{subarray}{c}K_{0}<p\leq N,\\ p\equiv 1\ (\mathrm{mod}\ 4)\end{subarray}}\frac{1}{p}\,(f(p)\cdot\overline{ \chi(p)}\cdot n^{-it}-1) \tag{5.1}\] and \[\mathbb{D}_{1}(f,g;x,y)^{2}:=\sum_{\begin{subarray}{c}x<p\leq y,\\ p\equiv 1\ (\mathrm{mod}\ 4)\end{subarray}}\frac{1}{p}\,(1-\Re(f(p)\cdot \overline{g(p)})). \tag{5.2}\] **Proposition 5.1**.: _Let \(K_{0},N\in\mathbb{N}\) and \(f\colon\mathbb{N}\to\mathbb{U}\) be a multiplicative function. Let also \(t\in\mathbb{R}\), \(\chi\) be a Dirichlet character with period \(q,\)\(Q=\prod_{p\leq K_{0}}p^{a_{p}}\) for some \(a_{p}\in\mathbb{N}\), and suppose that \(q\mid Q\). If \(N\) is large enough, depending only on \(Q\), then for all \(a,b\in\mathbb{Z}\) with \(-Q\leq a,b\leq Q\) and \((a^{2}+b^{2},Q)=1\) we have_ \[\mathbb{E}_{m,n\in[N]}\,\big{|}f\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)} -\chi(a^{2}+b^{2})\cdot Q^{2it}\cdot(m^{2}+n^{2})^{it}\cdot\exp\big{(}G_{N}(f, K_{0})\big{)}\big{|}\ll\] \[(\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(f,\chi\cdot n^{it};K_{0}, \sqrt{N})+Q^{2}\cdot\mathbb{D}_{1}(f,\chi\cdot n^{it};N,3Q^{2}N^{2})+Q\cdot \mathbb{D}_{1}(f,\chi\cdot n^{it};\sqrt{N},N)+K_{0}^{-1/2}, \tag{5.3}\] _where \(G_{N},\mathbb{D}_{1}\) are as in (5.1), (5.2), and the implicit constant is absolute._ **Remarks**.: \(\bullet\) Note that if \(f\sim\chi\cdot n^{it},\) we have \(\lim_{N\to\infty}\mathbb{D}(f,\chi\cdot n^{it};N,3Q^{2}N^{2})=0\) and \(\lim_{N\to\infty}\mathbb{D}_{1}(f,\chi\cdot n^{it};\sqrt{N},N)=0\). Hence, renaming \(K_{0}\) as \(K\), taking the max over all \(Q\in\Phi_{K}\), and then letting \(N\to\infty\) in (5.3) gives the estimate in Proposition 2.11. \(\bullet\) The averaging over both variables \(m,n\in\mathbb{N}\) is crucial for our argument. Indeed, by slightly modifying example of [34, Lemma 2.1] one can construct completely multiplicative functions (both pretentious and aperiodic) \(f:\mathbb{N}\to\{-1,1\}\), such that the averages \[\frac{1}{N}\sum_{n\in[N]}f(n^{2}+1)\] behave rather "erratically" and a similar concentration estimate fails. We leave the details to the interested reader. The proof is carried out in several steps, covering progressively more general settings. Throughout the argument we write \(p\mid\mid n\) if \(p\mid n\) but \(p^{2}\nmid n\). ### Preparatory counting arguments The following lemma will be used multiple times subsequently. **Lemma 5.2**.: _For \(Q,N\in\mathbb{N}\), \(a,b\in\mathbb{Z}\), and primes \(p,q\) such that \(p,q\equiv 1\pmod{4}\) and \((pq,Q)=1\), let_ \[w_{N,Q}(p,q):=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m,n\in[N],\\ p,q\,||\,(Qm+a)^{2}+(Qn+b)^{2}\end{subarray}}1. \tag{5.4}\] _Then_ \[w_{N,Q}(p,p)=\frac{2}{p}\Big{(}1-\frac{1}{p}\Big{)}^{2}+O\Big{(}\frac{1}{N} \Big{)}, \tag{5.5}\] _and if \(p\neq q\) we have_ \[w_{N,Q}(p,q)=\frac{4}{pq}\Big{(}1-\frac{1}{p}\Big{)}^{2}\Big{(}1-\frac{1}{q} \Big{)}^{2}+O\Big{(}\frac{1}{N}\Big{)}. \tag{5.6}\] **Remarks**.: \(\bullet\) It is the approximate identity \(w_{N,Q}(p,q)=w_{N,Q}(p,p)\cdot w_{N,Q}(q,q)+O\big{(}\frac{1}{N}\big{)}\) that, at the end, makes the concentration estimates work. \(\bullet\) Because of the \(O\big{(}\frac{1}{N}\big{)}\) errors these approximate identities will only be useful to us for sums that contain \(o(N)\) terms. Proof.: Throughout the discussion, we use \(\epsilon\) to designate a number in \(\{0,1,2,3,4\}\). 
We first establish (5.5). Let \(p\) satisfy the assumptions. Note first that if \(p\mid Qn+b\) and \(p\mid(Qm+a)^{2}+(Qn+b)^{2}\), then also \(p^{2}\mid(Qm+a)^{2}+(Qn+b)^{2}\), hence we get no contribution to the sum (5.4) in this case. So we can assume that \(p\nmid Qn+b\). Since \(p\equiv 1\pmod{4}\), the number \(-1\) is a quadratic residue \(\bmod{p}\), and we have exactly two solutions \(m\pmod{p}\) to the congruence \[(Qm+a)^{2}+(Qn+b)^{2}\equiv 0\pmod{p}. \tag{5.7}\] Hence, for those \(n\in[N]\) we have \(2[N/p]+\epsilon\) solutions in the variable \(m\in[N]\) to (5.7). Since there are \(N-[N/p]+\epsilon\) integers \(n\in[N]\) with \(p\nmid Qn+b\) (we used that \((p,Q)=1\) here), we get a total of \[2[N/p]\,(N-[N/p])+O(N)=2N^{2}/p-2N^{2}/p^{2}+O(N)\] solutions of \(m,n\in[N]\) to the congruence (5.7). Similarly, we get that if \(p\nmid Qn+b\), then the number of solutions \(m,n\in[N]\) to the congruence \((Qm+a)^{2}+(Qn+b)^{2}\equiv 0\pmod{p^{2}}\) is \[2[N/p^{2}]\,(N-[N/p])+O(N)=2N^{2}/p^{2}-2N^{2}/p^{3}+O(N).\] (We used that \(-1\) is also a quadratic residue \(\bmod{p^{2}}\).) These solutions should be subtracted from the previous solutions of (5.7) in order to count the number of solutions of \(m,n\in[N]\) for which \(p\mid(Qm+a)^{2}+(Qn+b)^{2}\). We deduce that \[\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m,n\in[N],\\ p\,||\,(Qm+a)^{2}+(Qn+b)^{2}\end{subarray}}1=\frac{2}{p}-\frac{4}{p^{2}}+ \frac{2}{p^{3}}+O\Big{(}\frac{1}{N}\Big{)}=\frac{2}{p}\Big{(}1-\frac{1}{p} \Big{)}^{2}+O\Big{(}\frac{1}{N}\Big{)}, \tag{5.8}\] which proves (5.5). Next, we establish (5.6). Let \(p,q\) satisfy the assumptions. As explained in the previous case, those \(n\in[N]\) for which \(p\mid Qn+b\) or \(q\mid Qn+b\) do not contribute to the sum (5.4) defining \(w_{N,Q}(p,q)\), hence we can assume that \((pq,Qn+b)=1\). Let \[A_{r,s}:=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m,n\in[N],\\ r,s[(Qm+a)^{2}+(Qn+b)^{2},\,(rs,Qn+b)=1\end{subarray}}1\] and note that \[w_{N,Q}(p,q)=A_{p,q}-A_{p^{2},q}-A_{p,q^{2}}+A_{p^{2},q^{2}}. \tag{5.9}\] We first compute \(A_{p,q}\). Since \(p\equiv q\equiv 1\pmod{4}\), the number \(-1\) is a quadratic residue mod \(p\) and mod \(q\), and we get by the Chinese remainder theorem, that for each \(n\in[N]\) with \(p\nmid Qn+b\) we have \(4\) solutions \(m\pmod{pq}\) to the congruence \[(Qm+a)^{2}+(Qn+b)^{2}\equiv 0\pmod{pq}.\lx@note{footnote}{If $pq>N$ these may translate to no solutions in $m\in[N]$, but this is also going to be reflected in our computation below since in this case $4[N/(pq)]+\epsilon=\epsilon$ could very well be $0$.} \tag{5.10}\] We deduce that for each \(n\in[N]\) with \((pq,Qn+b)=1\) we have \(4[N/(pq)]+\epsilon\) solutions in the variable \(m\in[N]\) to the congruence (5.10). Since the number of \(n\in[N]\) for which \((pq,Qn+b)=1\) is \(N-[N/p]-[N/q]+[N/pq]\), we get that the total number of solutions to the congruence (5.10) with \(m,n\in[N]\) and \((pq,n)=1\) is \[4[N/(pq)]\left(N-[N/p]-[N/q]+[N/(pq)]\right)+O(N)=\\ N^{2}\cdot(4/(pq))\cdot(1-1/p-1/q+1/(pq))+O(N). 
\tag{5.11}\] Hence, \[A_{p,q}:=\frac{4}{pq}\Big{(}1-\frac{1}{p}\Big{)}\Big{(}1-\frac{1}{q}\Big{)}+O \Big{(}\frac{1}{N}\Big{)}.\] Similarly, using that \(-1\) is also a quadratic residue mod \(p^{k}\) and mod \(q^{k}\) for \(k=1,2\), we find that \[A_{p^{2},q}=\frac{4}{p^{2}q}\Big{(}1-\frac{1}{p}\Big{)}\Big{(}1-\frac{1}{q} \Big{)}+O\Big{(}\frac{1}{N}\Big{)},\] and \[A_{p,q^{2}}=\frac{4}{pq^{2}}\Big{(}1-\frac{1}{p}\Big{)}\Big{(}1-\frac{1}{q} \Big{)}+O\Big{(}\frac{1}{N}\Big{)}.\] Also, \[A_{p^{2},q^{2}}=\frac{4}{p^{2}q^{2}}\Big{(}1-\frac{1}{p}\Big{)}\Big{(}1-\frac {1}{q}\Big{)}+O\Big{(}\frac{1}{N}\Big{)}.\] Using the last four identities and (5.9), we deduce that (5.6) holds. This completes the proof. We will also need to give upper bounds for \(w_{N,Q}(p,q)\) when \(p,q\) are not necessarily primes, and also give upper bounds that do not involve the error terms \(O(1/N)\) that cause us problems in some cases (this is only relevant when \(pq\geq N\)). The next lemma is crucial for us and gives an upper bound that is good enough for our purposes. **Lemma 5.3**.: _For \(l,Q,N\in\mathbb{N}\) and \(a,b\in\mathbb{Z}\) with \(-Q\leq a,b\leq Q\), let_ \[w_{N,Q}(l):=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m,n\in[N],\\ l|(Qm+a)^{2}+(Qn+b)^{2}\end{subarray}}1.\] _If \(l\) is a sum of two squares, then_ \[w_{N,Q}(l)\ll\frac{Q^{2}}{l}, \tag{5.12}\] _where the implicit constant is absolute. In particular, if \(w_{N,Q}(p,q)\) is as in (5.4), taking \(l=p\) and \(l=pq\) where \(p,q\) are distinct primes of the form \(1\pmod{4}\), we get_ \[w_{N,Q}(p,p)\ll\frac{Q^{2}}{p},\quad w_{N,Q}(p,q)\ll\frac{Q^{2}}{pq}. \tag{5.13}\] **Remark**.: These estimates will allow us to show later that the contribution of the \(m,n\in[N]\) for which \((Qm+a)^{2}+(Qn+b)^{2}\) have large prime divisors (say \(\geq\sqrt{N}\)) is negligible for our purposes. In contrast, we could not have done the same for the \(n\in[N]\) for which \(n^{2}+1\) have large prime divisors. Proof.: Recall that an integer is a sum of two squares if and only if on its factorization as a product of primes, all prime factors congruent to \(3\pmod{4}\) occur with even multiplicity. It follows that if \(l\) is a sum of two squares and \(l\mid(Qm+a)^{2}+(Qn+b)^{2}\), then \((Qm+a)^{2}+(Qn+b)^{2}/l\) is also a sum of two squares. We deduce from this and our assumption \(|a|,|b|\leq Q\) that if \(l\) is a sum of two squares, then \[w_{N,Q}(l)\leq\frac{1}{N^{2}}\sum_{k\leq 3Q^{2}N^{2}/l}r_{2}(k)\ll\frac{Q^{2}}{l},\] where \(r_{2}(k)\) denotes the number of representations of \(k\) as a sum of two squares, and to get the second estimate we used the well-known fact \(\sum_{k\leq n}r_{2}(k)\ll n\). This completes the proof. ### Concentration estimate for additive functions We start with a concentration estimate for additive functions that will eventually get lifted to a concentration estimate for multiplicative functions. **Definition 5.1**.: We say that \(h\colon\mathbb{N}\to\mathbb{U}\) is _additive_, if it satisfies \(h(mn)=h(m)+h(n)\) whenever \((m,n)=1\). **Lemma 5.4** (Turan-Kubilius inequality for sums of squares).: _Let \(K_{0},N\in\mathbb{N}\), \(a,b\in\mathbb{Z}\) with \(-Q\leq a,b,\leq Q\), and \(h\colon\mathbb{N}\to\mathbb{U}\) be an additive function such that_ * \(h(p)=0\) _for all primes_ \(p\leq K_{0}\) _and_ \(p>N\)_;_ * \(h(p)=0\) _for all primes_ \(p\equiv 3\pmod{4}\)_;_ * \(h(p^{k})=0\) _for all primes_ \(p\) _and_ \(k\geq 2\)_._ _Let also \(Q=\prod_{p\leq K_{0}}p^{a_{p}}\) for some \(a_{p}\in\mathbb{N}\). 
Then for all large enough \(N\), depending only on \(K_{0}\), we have_ \[\mathbb{E}_{m,n\in[N]}\left|h\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}-H_{N}(h,K_{0} )\right|^{2}\ll\mathbb{D}^{2}(h;K_{0},\sqrt{N})+Q^{2}\cdot\mathbb{D}^{2}(h; \sqrt{N},N)+K_{0}^{-1}, \tag{5.14}\] _where the implicit constant is absolute,_ \[H_{N}(h,K_{0}):=2\sum_{K_{0}<p\leq N}\frac{h(p)}{p} \tag{5.15}\] _and_ \[\mathbb{D}^{2}(h;K_{0},N):=\sum_{K_{0}<p\leq N}\frac{|h(p)|^{2}}{p}.\] Proof.: We consider the additive functions \(h_{1},h_{2}\), which are the restrictions of \(h\) to the primes \(K_{0}<p\leq\sqrt{N}\) and \(\sqrt{N}<p\leq N\).8 More precisely, Footnote 8: If we worked with \(h\) only, we would run into trouble establishing (5.25) below, since a non-acceptable term of the form \(O(\sum_{p,q\leq N}N^{-1})\) would appear in our estimates. For \(h_{1}\) this term becomes \(O(\sum_{p,q\leq\sqrt{N}}N^{-1})=O((\log N)^{-2})\), which is acceptable. We could have also worked with the restriction to the interval \([K_{0},N^{a}]\) for any \(a\leq 1/2\). In the case of linear concentration estimates this splitting is not needed since the error that appears in this case is \(O(\sum_{pq\leq N}N^{-1})=O(\log\log N/\log N)\). \[h_{1}(p^{k}):=\begin{cases}h(p),&\quad\text{if}\ \ k=1\ \text{and}\ K_{0}<p\leq \sqrt{N}\\ 0,&\quad\text{otherwise}\end{cases}\] and \[h_{2}(p^{k}):=\begin{cases}h(p),&\text{if }\ k=1\text{ and }\sqrt{N}<p\leq N\\ 0,&\text{otherwise}\end{cases}.\] We also define \[H_{i,N}(h_{i},K_{0}):=2\sum_{K_{0}<p\leq N}\frac{h_{i}(p)}{p},\quad i=1,2, \tag{5.16}\] and the technical variant \[H^{\prime}_{1,N}(h_{i},Q,K_{0}):=\sum_{K_{0}<p\leq N}w_{N,Q}(p)\cdot h_{1}(p), \tag{5.17}\] where \[w_{N,Q}(p):=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m,n\in[N],\\ p\,||\,(Qm+1)^{2}+(Qn)^{2}\end{subarray}}1. \tag{5.18}\] (Note that \(w_{N,Q}(p)=w_{N,Q}(p,p)\) where \(w_{N,Q}(p,q)\) is as in (5.4).) The reason for introducing this variant is because it gives the mean value of \(h_{1}\) along sums of squares. Indeed, using properties (i)-(iii), we have \[\mathbb{E}_{m,n\in[N]}\,h_{1}((Qm+a)^{2}+(Qn+b)^{2})=\mathbb{E}_{ m,n\in[N]}\sum_{p\,||\,(Qm+a)^{2}+(Qn+b)^{2}}h_{1}(p)\\ =\frac{1}{N^{2}}\sum_{K_{0}<p\leq N}h_{1}(p)\sum_{\begin{subarray} {c}m,n\in[N],\\ p\,||\,(Qm+a)^{2}+(Qn+b)^{2}\end{subarray}}1=H^{\prime}_{1,N}(h_{1},Q,K_{0}). \tag{5.19}\] Using (5.5) of Lemma 5.2 and that \(h_{1}(p)=0\) for \(p>\sqrt{N}\), we get \[|H_{1,N}(h_{1},K_{0})-H^{\prime}_{1,N}(h_{1},Q,K_{0})|\ll\sum_{K_{0}<p\leq \sqrt{N}}\frac{1}{p^{2}}+\frac{1}{\sqrt{N}}\leq\frac{1}{K_{0}}+\frac{1}{\sqrt{ N}}. \tag{5.20}\] Hence, in order to prove (5.14), it suffices to estimate \[\mathbb{E}_{m,n\in[N]}|h_{1}((Qm+a)^{2}+(Qn+b)^{2})-H^{\prime}_{1,N}(h_{1},Q,K _{0})|^{2} \tag{5.21}\] and \[\mathbb{E}_{m,n\in[N]}|h_{2}((Qm+a)^{2}+(Qn+b)^{2})|^{2}+|H_{2,N}(h_{1},K_{0}) |^{2}. \tag{5.22}\] We first deal with the expression (5.21). Using (5.19) and expanding the square below we get \[\mathbb{E}_{m,n\in[N]}\,\big{|}h_{1}((Qm+a)^{2}+(Qn+b)^{2})-H^{ \prime}_{1,N}(h_{1},Q,K_{0})\big{|}^{2}=\\ \mathbb{E}_{m,n\in[N]}\,\big{|}h_{1}((Qm+a)^{2}+(Qn+b)^{2})\big{|}^ {2}-|H^{\prime}_{1,N}(h_{1},Q,K_{0})|^{2}. \tag{5.23}\] To estimate this expression, first note that since \(h_{1}\) is additive and \(h_{1}(p^{k})=0\) for \(k\geq 2\), we have \[\mathbb{E}_{m,n\in[N]}\,\big{|}h_{1}((Qm+a)^{2}+(Qn+b)^{2})\big{|}^{2}= \mathbb{E}_{m,n\in[N]}\,\Big{|}\sum_{p\,||\,(Qm+a)^{2}+(Qn+b)^{2}}h_{1}(p) \Big{|}^{2}. 
\tag{5.24}\] Expanding the square, using the fact that \(h_{1}(p)=0\) unless \(K_{0}<p\leq\sqrt{N}\), and the definition of \(w_{N,Q}(p,q)\) given in (5.4), we get that the right hand side is equal to \[\sum_{K_{0}<p\leq\sqrt{N}}|h_{1}(p)|^{2}\cdot w_{N,Q}(p,p)+\sum_{K_{0}<p,q\leq \sqrt{N},\,p\neq q}h_{1}(p)\cdot\overline{h_{1}(q)}\cdot w_{N,Q}(p,q).\] Using equation (5.5) of Lemma 5.2 we get that the first term is at most \[2\cdot\sum_{K_{0}<p\leq\sqrt{N}}\frac{|h_{1}(p)|^{2}}{p}+O(N^{-1/2}).\] Using equations (5.5) and (5.6) of Lemma 5.2 we get that the second term is equal to (we crucially use the bound \(p,q\leq\sqrt{N}\) here and the prime number theorem) \[\sum_{K_{0}<p,q\leq\sqrt{N},\,p\neq q}h_{1}(p)\cdot\overline{h_{1} (q)}\cdot w_{N,Q}(p,p)\cdot w_{N,Q}(q,q)+O((\log N)^{-2})\leq\\ (H^{\prime}_{1,N}(h_{1},Q,K_{0}))^{2}+O((\log N)^{-2}),\] where to get the last estimate we added to the sum the contribution of the diagonal terms \(p=q\) (which is non-negative), used (5.17), and the fact that \(h_{1}(p)=0\) for \(p>\sqrt{N}\). Combining (5.23) with the previous estimates, we are led to the bound \[\mathbb{E}_{m,n\in[N]}\left|h_{1}((Qm+a)^{2}+(Qn+b)^{2})-H^{ \prime}_{1,N}(h_{1},Q,K_{0})\right|^{2}\ll\\ \mathbb{D}^{2}(h_{1};K_{0},\sqrt{N})+O((\log N)^{-2}). \tag{5.25}\] Next we estimate the expression (5.22). Since \(h_{2}\) is additive and satisfies properties (i)-(iii), we get using (5.24) (with \(h_{2}\) in place of \(h_{1}\)) and expanding the square \[\mathbb{E}_{m,n\in[N]}\left|h_{2}((Qm+a)^{2}+(Qn+b)^{2})\right|^{2}=\sum_{ \sqrt{N}<p,q\leq N}h_{2}(p)\,\overline{h_{2}(q)}\,w_{N,Q}(p,q).\] Since \(h_{2}(p)\neq 0\) only when \(p\equiv 1\,(\text{mod }4)\), using (5.13) of Lemma 5.3, we get that the right hand side is bounded by \[\ll Q^{2}\cdot\Big{(}\sum_{\sqrt{N}<p,q\leq N}\frac{|h_{2}(p)|\,| h_{2}(q)|}{pq}+\sum_{\sqrt{N}<p\leq N}\frac{|h_{2}(p)|^{2}}{p}\Big{)}=\\ Q^{2}\cdot\Big{(}\Big{(}\sum_{\sqrt{N}<p\leq N}\frac{|h_{2}(p)|}{p} \Big{)}^{2}+\mathbb{D}^{2}(h_{2};\sqrt{N},N)\Big{)}\leq\\ Q^{2}\cdot\Big{(}\sum_{\sqrt{N}<p\leq N}\frac{|h_{2}(p)|^{2}}{p} \cdot\sum_{\sqrt{N}<p\leq N}\frac{1}{p}+\mathbb{D}^{2}(h_{2};\sqrt{N},N)\Big{)} \ll Q^{2}\cdot\mathbb{D}^{2}(h_{2};\sqrt{N},N),\] where we crucially used the estimate \[\sum_{\sqrt{N}<p\leq N}\frac{1}{p}\ll 1.\] Similarly we find \[(H_{2,N}(h_{2},K_{0}))^{2}=4\,\Big{(}\sum_{\sqrt{N}<p\leq N}\frac{|h_{2}(p)|}{ p}\Big{)}^{2}\ll\mathbb{D}^{2}(h_{2};\sqrt{N},N).\] Combining the previous estimates we get the following bound for the expression in (5.22) \[\mathbb{E}_{m,n\in[N]}\big{(}h_{2}((Qm+a)^{2}+(Qn+b)^{2})\big{)}^{2}+(H_{2,N}( h_{1},K_{0}))^{2}\ll Q^{2}\cdot\mathbb{D}^{2}(h_{2};\sqrt{N},N). \tag{5.26}\] Combining the bounds (5.20), (5.25), (5.26), we get the asserted bound (5.14), completing the proof. ### Concentration estimates for multiplicative functions Next we use Lemma 5.4 to get a variant that deals with multiplicative functions. **Lemma 5.5**.: _Let \(K_{0},N\in\mathbb{N}\), \(a,b\in\mathbb{Z}\) with \(-Q\leq a,b\leq Q\), and \(f\colon\mathbb{N}\to\mathbb{U}\) be a multiplicative function such that_ 1. \(f(p)=1\) _for all primes_ \(p\leq K_{0}\) _and_ \(p>N\)_;_ 2. \(f(p)=1\) _for all primes_ \(p\equiv 3\pmod{4}\)_;_ 3. \(f(p^{k})=1\) _for all primes_ \(p\) _and_ \(k\geq 2\)_._ _Let also \(Q=\prod_{p\leq K_{0}}p^{a_{p}}\) with \(a_{p}\in\mathbb{N}\). 
If \(N\) is large enough, depending only on \(K_{0}\), then_ \[\mathbb{E}_{m,n\in[N]}\left|f\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}- \exp\big{(}G_{N}(f,K_{0})\big{)}\right|\ll\\ (\mathbb{D}+\mathbb{D}^{2})(f,1;K_{0},\sqrt{N})+Q\cdot\mathbb{D}( f,1;\sqrt{N},N)+K_{0}^{-\frac{1}{2}}, \tag{5.27}\] _where the implicit constant is absolute and_ \[G_{N}(f,K_{0}):=2\sum_{K_{0}<p\leq N}\frac{1}{p}\,(f(p)-1). \tag{5.28}\] Proof.: Let \(h:\mathbb{N}\to\mathbb{C}\) be the additive function given on prime powers by \[h(p^{k}):=f(p^{k})-1.\] We note that due to our assumptions on \(f\), properties (i)-(iii) of Lemma 5.4 are satisfied. Using that \(z=e^{z-1}+O(|z-1|^{2})\) for \(|z|\leq 1\) and property (iii), we have \[f(m^{2}+n^{2})=\prod_{p^{k}\,\|\,m^{2}+n^{2}}f(p^{k})=\prod_{p\,\|\,m^{2}+n^{2 }}\exp\big{(}h(p)+O(|h(p)|^{2})\big{)}.\] Applying the estimate \(|\prod_{i\leq k}z_{i}-\prod_{i\leq k}w_{i}|\leq\sum_{i\leq k}|z_{i}-w_{i}|\), we deduce that for all \(m,n\in\mathbb{N}\) we have \[f(m^{2}+n^{2})=\exp(h(m^{2}+n^{2}))+O\Big{(}\sum_{p\,\|\,m^{2}+n^{2}}|h(p)|^{ 2}\Big{)}.\] Using this and since \(G_{N}(f,K_{0})=H_{N}(h,K_{0})\), where \(H_{N}(h,K_{0})\) is given by (5.15), we get \[\mathbb{E}_{m,n\in[N]}\left|f\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)} -\exp\big{(}G_{N}(f,K_{0})\big{)}\right|\ll\\ \mathbb{E}_{m,n\in[N]}|\exp\big{(}h((Qm+a)^{2}+(Qn+b)^{2})\big{)} -\exp(H_{N}(h,K_{0}))|+\\ \mathbb{E}_{m,n\in[N]}\sum_{p\,\|\,(Qm+a)^{2}+(Qn+b)^{2}}|h(p)|^{ 2}. \tag{5.29}\] Next we use the inequality \(|e^{z_{1}}-e^{z_{2}}|\leq|z_{1}-z_{2}|\), which is valid for \(\Re z_{1},\Re z_{2}\leq 0\), to bound the last expression by \[\mathbb{E}_{m,n\in[N]}|h((Qm+a)^{2}+(Qn+b)^{2})-H_{N}(h,K_{0})|+\mathbb{E}_{m,n\in[N]}\sum_{p\,\|\,(Qm+a)^{2}+(Qn+b)^{2}}|h(p)|^{2}. \tag{5.30}\] To bound the first term we use Lemma 5.4. It gives that for all large enough \(N\), depending on \(K_{0}\) only, we have \[\mathbb{E}_{m,n\in[N]}\left|h\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}-H _{N}(h,K_{0})\right|\ll\\ \mathbb{D}(h;K_{0},\sqrt{N})+Q\cdot\mathbb{D}(h;\sqrt{N},N)+K_{0} ^{-\frac{1}{2}}\ll\mathbb{D}(f,1;K_{0},\sqrt{N})+Q\cdot\mathbb{D}(f,1;\sqrt{N}, N)+K_{0}^{-\frac{1}{2}}, \tag{5.31}\] where to get the last bound we used that \(|h(p)|^{2}\leq 2-2\,\Re(f(p))\), which holds since \(|f(p)|\leq 1\). To bound the second term in (5.30), we note that using properties (i)-(iii) of Lemma 5.4, we have \[\mathbb{E}_{m,n\in[N]}\sum_{p\,\parallel\,(Qm+a)^{2}+(Qn+b)^{2}}|h(p)|^{2}=\sum _{K_{0}<p\leq N}|h(p)|^{2}\,w_{N,Q}(p)\ll \tag{5.32}\] \[\sum_{K_{0}<p\leq N}\frac{|h(p)|^{2}}{p}+O((\log N)^{-1})\ll\mathbb{D}^{2}(f,1 ;K_{0},N)+O((\log N)^{-1}),\] where \(w_{N,P}(p)\) is as in (5.18) and we used equation (5.5) of Lemma 5.2 and the prime number theorem to get the first bound. Combining (5.29)-(5.32) we get the asserted bound. We use the previous result to deduce the following improved version. **Lemma 5.6**.: _Let \(K_{0},N\in\mathbb{N}\) and \(f\colon\mathbb{N}\to\mathbb{U}\) be a multiplicative function such that \(f(p)=1\) for all primes \(p>N\) with \(p\equiv 1\pmod{4}\). Let also \(Q=\prod_{p\leq K_{0}}p^{a_{p}}\) with \(a_{p}\in\mathbb{N}\). 
If \(N\) is large enough, depending only on \(K_{0}\), then for all \(a,b\in\mathbb{Z}\) with \(-Q\leq a,b\leq Q\) and \((a^{2}+b^{2},Q)=1\) we have_ \[\mathbb{E}_{m,n\in[N]}\left|f\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}- \exp\big{(}G_{N}(f,K_{0})\big{)}\right|\ll\\ (\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(f,1;K_{0},\sqrt{N})+Q\cdot \mathbb{D}_{1}(f,1;\sqrt{N},N)+K_{0}^{-\frac{1}{2}}, \tag{5.33}\] _where the implicit constant is absolute and_ \[G_{N}(f,K_{0}):=2\,\sum_{\begin{subarray}{c}K_{0}<p\leq N;\\ p\equiv 1\pmod{4}\end{subarray}}\,\frac{1}{p}\,(f(p)-1), \tag{5.34}\] \[\mathbb{D}_{1}(f,1;\sqrt{N},N)^{2}:=\sum_{\begin{subarray}{c}\sqrt{N}<p\leq N,\\ p\equiv 1\pmod{4}\end{subarray}}\frac{1}{p}\,(1-\Re(f(p))).\] Proof.: We first define the multiplicative function \(\tilde{f}\colon\mathbb{N}\to\mathbb{U}\) on prime powers as follows \[\tilde{f}(p^{k}):=\begin{cases}f(p^{k}),&\quad\text{if}\,\,\,\,p>K_{0}\\ 1,&\quad\text{otherwise}\end{cases}.\] Since \(p\leq K_{0}\) implies \(p\mid Q\) and \((a^{2}+b^{2},Q)=1\), we get that \(p\nmid(Qm+a)^{2}+(Qn+b)^{2}\) for every \(p\leq K_{0}\), hence \[f((Qm+a)^{2}+(Qn+b)^{2})=\tilde{f}((Qm+a)^{2}+(Qn+b)^{2})\quad\text{for every }m,n\in\mathbb{N}.\] Note also that \(G_{N}(\tilde{f},Q_{0})=G_{N}(f,Q_{0})\) and \(\mathbb{D}_{1}(\tilde{f},1;K_{0},N)=\mathbb{D}_{1}(f,1;K_{0},N)\). It follows that in order to establish (5.33), it is enough to show that for all large enough \(N\), depending only on \(K_{0}\), we have \[\mathbb{E}_{m,n\in[N]}|\tilde{f}((Qm+a)^{2}+(Qn+b)^{2})-\exp(G_{N }(\tilde{f},Q_{0}))|\ll\\ (\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(\tilde{f},1;K_{0},\sqrt{N})+Q \cdot\mathbb{D}_{1}(\tilde{f},1;\sqrt{N},N)+K_{0}^{-\frac{1}{2}}. \tag{5.35}\] In order to establish (5.35) we make a series of further reductions that will eventually allow us to apply Lemma 5.5. For every \(p\equiv 3\pmod{4}\), we have that \(p\mid m^{2}+n^{2}\) implies that \(p\mid m\) and \(p\mid n\). Consequently, the contribution to the average of those \(m,n\in[N]\) for which \((Qm+a)^{2}+(Qn+b)^{2}\) is divisible by some prime \(p\equiv 3\pmod{4}\) is (note that \((Qm+a)^{2}+(Qn+b)^{2}\) is only divisible by primes \(p>K_{0}\)) \[\ll\frac{1}{N^{2}}\sum_{K_{0}<p\leq N}\left[\frac{N}{p}\right]^{2}\ll\frac{1}{K _{0}},\] which is acceptable. Next we show that the contribution to the average in (5.35) of those \(m,n\in[N]\) for which \((Qm+a)^{2}+(Qn+b)^{2}\) is divisible by \(p^{2}\) for some prime \(p\equiv 1\pmod{4}\) is also acceptable. Indeed, for fixed \(n\in[N]\) such that \(p\nmid Qn\) there exist at most \(2[N/p^{2}]+2\) values of \(m\in[N]\) such that \(p^{2}\mid(Qm+a)^{2}+(Qn+b)^{2}.\) On the other hand, if \(p\mid Qn+b\) and \(p\mid(Qm+a)^{2}+(Qn+b)^{2},\) then also \(p\mid Qm+a\). 
Hence, the contribution to the average in (5.35) of those \(m,n\in[N]\) for which \((Qm+a)^{2}+(Qn+b)^{2}\) is divisible by \(p^{2}\) for some prime \(p\equiv 1\pmod{4}\) is bounded by (note again that \((Qm+a)^{2}+(Qn+b)^{2}\) is only divisible by primes \(p>K_{0}\)) \[\ll\frac{1}{N^{2}}\Big{(}\sum_{K_{0}<p\leq N}\Big{(}\left[\frac{N}{p^{2}} \right]+1\Big{)}N+\sum_{K_{0}<p\leq N}\left[\frac{N}{p}\right]^{2}\Big{)}\ll \frac{1}{K_{0}}+\frac{1}{\log N},\] where we used the prime number theorem to bound \(\frac{1}{N}\sum_{K_{0}<p\leq N}1.\) Combining the above reductions, we deduce that in order to establish the estimate (5.35) we may further assume that \[\tilde{f}(p^{k})=1\ \ \text{for all}\ \ p\in\mathbb{P},\,k\geq 2,\ \ \text{and}\ \ \tilde{f}(p^{k})=1\ \ \text{for all}\ \ p\equiv 3\pmod{4},\,k\in\mathbb{N}. \tag{5.36}\] We are now in a situation where Lemma 5.5 is applicable and gives that for all large enough \(N\), depending only on \(K_{0}\), if \(\mathbb{D}_{1}(f,1;K_{0},N)\leq 1\), we have (note that (5.36) implies that \(\mathbb{D}_{1}(\tilde{f},1;K_{0},N)=\mathbb{D}(\tilde{f},1;K_{0},N)\)) \[\mathbb{E}_{m,n\in[N]}|\tilde{f}((Qm+a)^{2}+(Qn+b)^{2})-\exp(G_{ N}(\tilde{f},K_{0}))|\ll\\ (\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(\tilde{f},1;K_{0},\sqrt{N})+ Q\cdot\mathbb{D}_{1}(\tilde{f},1;\sqrt{N},N)+K_{0}^{-\frac{1}{2}}.\] Combining this bound with the bounds we got in order to arrive to this reduction, we get that (5.35) is satisfied. This completes the proof. ### Proof of Proposition 5.1 We start with some reductions. Suppose that the statement holds when \(\chi=1\) and \(t=0\), we will show that it holds for arbitrary \(\chi\) and \(t\). So let \(f\sim\chi\cdot n^{it}\). Then \(\tilde{f}:=f\cdot\overline{\chi}\cdot n^{-it}\sim 1\), hence we get the following bound for \(\tilde{f}\) \[\mathbb{E}_{m,n\in[N]}\left|\tilde{f}\big{(}(Qm+a)^{2}+(Qn+b)^{2 }\big{)}-\exp\big{(}G_{N}(\tilde{f},K_{0})\big{)}\right|\ll\\ (\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(f,1;K_{0},\sqrt{N})+Q^{2} \cdot\mathbb{D}(f,1;N,3Q^{2}N^{2})+Q\cdot\mathbb{D}_{1}(f,1;\sqrt{N},N)+K_{0} ^{-1/2}. \tag{5.37}\] Note that since \(\chi\) is periodic with period \(q\) and \(q\mid Q\), we have \(\chi((Qm+a)^{2}+(Qn+b)^{2})=\chi(a^{2}+b^{2})\) for every \(m,n\in\mathbb{N}\). Furthermore, since by assumption \((a^{2}+b^{2},Q)=1\) and \(q\mid Q\), we have \((a^{2}+b^{2},q)=1\), hence \(|\chi(a^{2}+b^{2})|=1\). Also, \(\lim_{m,n\to\infty}((Qm+a)^{2}+(Qn+b)^{2})^{it}-Q^{2it}\cdot(m^{2}+n^{2})^{it}=0\) and \(\mathbb{D}_{1}(\tilde{f},1;x,y)=\mathbb{D}_{1}(f,\chi\cdot n^{it};x,y)\). Lastly, note that \[G_{N}(\tilde{f},K_{0})=2\sum_{\begin{subarray}{c}K_{0}<p\leq N,\\ p\equiv 1\pmod{4}\end{subarray}}\frac{1}{p}\,(\tilde{f}(p)-1)=2\sum_{ \begin{subarray}{c}K_{0}<p\leq N,\\ p\equiv 1\pmod{4}\end{subarray}}\frac{1}{p}\,(f(p)\cdot\overline{\chi(p)}\cdot n ^{-it}-1)=G_{N}(f,K_{0}).\] After inserting this information in (5.37), we get that (5.3) is satisfied. So it suffices to show that if \(f\sim 1\) and \(Q=\prod_{p\leq K_{0}}p^{a_{p}}\) for some \(a_{p}\in\mathbb{N}\), then if \(N\) is large enough, depending only on \(Q\), and \(\mathbb{D}(f,1;K_{0},N)\leq 1\), we have \[\mathbb{E}_{m,n\in[N]}\left|f\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}- \exp\big{(}G_{N}(f,K_{0})\big{)}\right|\ll\\ (\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(f,1;K_{0},\sqrt{N})+Q^{2} \cdot\mathbb{D}(f,1;N,3Q^{2}N^{2})+Q\cdot\mathbb{D}_{1}(f,1;\sqrt{N},N)+K_{0}^ {-1/2}. 
\tag{5.38}\] For every \(N\in\mathbb{N}\) we decompose \(f\) as \(f=f_{N,1}\cdot f_{N,2}\), where the multiplicative functions \(f_{N,1},f_{N,2}\colon\mathbb{N}\to\mathbb{U}\) are defined on prime powers as follows \[f_{N,1}(p^{k}):= \begin{cases}f(p),&\text{ if }\ k=1\text{ and }p>N,\,p\equiv 1\pmod{4}\,\\ 1,&\text{ otherwise}\end{cases}\] \[f_{N,2}(p^{k}):= \begin{cases}1,&\text{ if }\ k=1\text{ and }p>N,\,p\equiv 1\pmod{4}\.\\ f(p^{k}),&\text{ otherwise}\end{cases}\] We first study the function \(f_{N,1}\). Following the notation of Lemma 5.3 for \(l,Q,N\in\mathbb{N}\) we let \[w_{N,Q}(l):=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m,n\in[N],\\ l(Qm+a)^{2}+(Qn+b)^{2}\end{subarray}}1.\] Lemma 5.3 implies that if \(l\) is a sum of two squares, then \[w_{N,Q}(l)\ll\frac{Q^{2}}{l}. \tag{5.39}\] Since for \(N\gg Q\) we have \(f_{N,1}((Qm+a)^{2}+(Qn+b)^{2})-1\neq 0\) only if \((Qm+a)^{2}+(Qn+b)^{2}\) is divisible by one or two primes \(p>N\),9 we get Footnote 9: For \(m,n\in[N]\) and \(-Q\leq a,b\leq Q\) we have \((Qm+a)^{2}+(Qn+b)^{2}\ll Q^{2}N^{2}\), so if \((Qm+a)^{2}+(Qn+b)^{2}\) was divisible by three or more primes greater than \(N\), we would have \(N^{3}\ll Q^{2}N^{2}\), which fails if \(Q\ll N\). \[\mathbb{E}_{m,n\in[N]}|f_{N,1}\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}-1| \leq\\ \sum_{\begin{subarray}{c}N<p\leq 3Q^{2}N^{2},\\ p\equiv 1\pmod{4}\end{subarray}}|f(p)-1|\,w_{N,Q}(p)+\sum_{\begin{subarray}{ c}N<p,q\leq 3Q^{2}N^{2},\\ p,q\equiv 1\pmod{4}\end{subarray}}|f(pq)-1|\,w_{N,Q}(pq), \tag{5.40}\] where we used that \(f_{N,1}(p)=f(p)\) for all \(p>N\) and in the second sum we have ignored the contribution of the diagonal terms \(p=q\) since, by construction, \(f_{N,1}(p^{2})=1\) for all primes \(p\). Using (5.39) for \(l:=p\), which is a sum of two squares since \(p\equiv 1\pmod{4}\), we estimate the first term as follows10 Footnote 10: Bounding \(w_{N,Q}(p)\) using (5.5) would lead to non-acceptable errors here, because the range of summation is much larger than \(N\). \[\sum_{\begin{subarray}{c}N<p\leq 3Q^{2}N^{2},\\ p\equiv 1\pmod{4}\end{subarray}}|f(p)-1|\,w_{N,Q}(p)\ll Q^{2}\sum_{ \begin{subarray}{c}N<p\leq 3Q^{2}N^{2},\\ p\equiv 1\pmod{4}\end{subarray}}\frac{|f(p)-1|}{p}\leq\\ Q^{2}\cdot\Big{(}\sum_{\begin{subarray}{c}N<p\leq 3Q^{2}N^{2},\\ p\equiv 1\pmod{4}\end{subarray}}\frac{|f(p)-1|^{2}}{p}\Big{)}^{\frac{1}{2}} \cdot\Big{(}\sum_{\begin{subarray}{c}N<p\leq 3Q^{2}N^{2},\\ p\equiv 1\pmod{4}\end{subarray}}\frac{1}{p}\Big{)}^{\frac{1}{2}}\ll Q^{2} \cdot\mathbb{D}_{1}(f,1;N,3Q^{2}N^{2}),\] where we used that \(\sum_{N\leq p\leq 3Q^{2}N^{2}}\frac{1}{p}\ll 1\) for \(N\geq Q\). Similarly, using (5.39) for \(l:=pq\), which is a sum of two squares since \(pq\equiv 1\pmod{4}\), we estimate the second term in (5.40) as follows (note that since \(p\neq q\), we have \(f(pq)=f(p)f(q)\)) \[\sum_{\begin{subarray}{c}N<p,q\leq 3Q^{2}N^{2},\,p\neq q,\\ p,q\equiv 1\ (\text{mod }4)\end{subarray}}|f(pq)-1|\,w_{N,Q}(pq)\ll Q^{2}\sum_{ \begin{subarray}{c}N<p,q\leq 3Q^{2}N^{2},\\ p,q\equiv 1\ (\text{mod }4)\end{subarray}}\frac{|f(p)-1|+|f(q)-1|}{pq}\leq\] \[2\,Q^{2}\cdot\Big{(}\sum_{\begin{subarray}{c}N<p,q\leq 3Q^{2}N^{2},\\ p,q\equiv 1\ (\text{mod }4)\end{subarray}}\frac{|f(p)-1|^{2}}{pq}\Big{)}^{\frac{1}{2}} \cdot\Big{(}\sum_{\begin{subarray}{c}N<p,q\leq 3Q^{2}N^{2},\\ p,q\equiv 1\ (\text{mod }4)\end{subarray}}\frac{1}{pq}\Big{)}^{\frac{1}{2}}\ll Q^{2} \cdot\mathbb{D}_{1}(f,1;N,3Q^{2}N^{2}),\] where we used that \(\sum_{N\leq p\leq 3Q^{2}N^{2}}\frac{1}{p}\ll 1\) for \(N\geq Q\). 
Combining the above estimates and (5.40), we deduce that for \(N\gg Q\) we have \[\mathbb{E}_{m,n\in[N]}\big{|}f_{N,1}\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}-1|\ll Q ^{2}\cdot\mathbb{D}_{1}(f,1;N,3Q^{2}N^{2}). \tag{5.41}\] Next, we move to the function \(f_{2}\). Since \(f_{2}(p)=1\) for all primes \(p\geq N\), Lemma 5.6 is applicable. We get that if \(N\) is large enough, depending on \(K_{0}\), we have \[\mathbb{E}_{m,n\in[N]}|f_{N,2}\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}- \exp(G_{N}(f,K_{0}))|\ll\\ (\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(f,1;K_{0},\sqrt{N})+Q\cdot \mathbb{D}_{1}(f,1;\sqrt{N},N)+K_{0}^{-\frac{1}{2}}, \tag{5.42}\] where we used that \(f_{N,2}(p)=f(p)\) for all primes \(p\equiv 1\ (\text{mod }4)\) with \(p\leq N\), hence \(G_{N}(f_{N,2},K_{0})=G_{N}(f,K_{0})\) and \(\mathbb{D}_{1}(f_{N,2},1;K_{0},N)=\mathbb{D}_{1}(f,1;K_{0},N)\). Finally, we use the triangle inequality and combine (5.41) and (5.42), to obtain that the left hand side in (5.38) is bounded by \[\mathbb{E}_{m,n\in[N]}\big{(}|f_{N,1}\big{(}(Qm+a)^{2}+(Qn+b)^{2} \big{)}-1|+|f_{N,2}\big{(}(Qm+a)^{2}+(Qn+b)^{2}\big{)}-\exp(G_{N}(f,K_{0}))| \big{)}\] \[\ll(\mathbb{D}_{1}+\mathbb{D}_{1}^{2})(f,1;K_{0},\sqrt{N})+Q^{2} \cdot\mathbb{D}_{1}(f,1;N,3Q^{2}N^{2})+Q\cdot\mathbb{D}_{1}(f,1;\sqrt{N},N)+ K_{0}^{-1/2}.\] Thus (5.38) holds, completing the proof. ## 6. Type II Pythagorean pairs and more ### Proof of Theorem 2.9 As explained in Section 2.3, in order to complete the proof of Theorem 2.9 (and hence of part (ii) of Theorem 2.2) it remains to prove Proposition 2.10, Proposition 2.12, and Lemma 2.13. We do this in this section. We repeat the statement of Proposition 2.10 and explain how it can be derived from results in [21]. **Proposition 6.1**.: _Let \(f\colon\mathbb{N}\to\mathbb{U}\) be an aperiodic multiplicative function. Then for every \(\delta>0\) and \(\ell,\ell^{\prime},Q\in\mathbb{N}\) we have_ \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,\tilde{w}_{\delta}(m,n)\cdot f(\ell ((Qm+1)^{2}+(Qn)^{2}))\cdot\overline{f(\ell^{\prime}(Qm+1)(Qn))}=0. \tag{6.1}\] _where \(\tilde{w}_{\delta}(m,n)\) is as in (3.3)._ Proof.: Recall that \[\tilde{w}_{\delta}(m,n):=F_{\delta}\big{(}(\ell(m^{2}+n^{2}))^{i}\cdot(\ell^{ \prime}mn)^{-i}\big{)},\quad m,n\in\mathbb{N},\] where \(F_{\delta}\colon\mathbb{S}^{1}\to[0,1]\) is the continuous function defined in Lemma 3.3. Using uniform approximation of \(F_{\delta}\) by trigonometric polynomials and linearity, we get that it suffices to verify (6.1) when \(\tilde{w}_{\delta}(m,n)\) is replaced by \((m^{2}+n^{2})^{ki}\cdot(mn)^{-ki}\) for arbitrary \(k\in\mathbb{Z}\). Furthermore, the limit remains unchanged if we replace \((m^{2}+n^{2})^{ki}\cdot(mn)^{-ki}\) with \(((Qm+1)^{2}+(Qn)^{2})^{ki}\cdot((Qm+1)(Qn))^{-ki}\) (the \(+1\) makes no difference asymptotically so we can omit it and then \(Q^{2}\) is going to cancel because of the conjugate). Hence, it suffices to establish that for every \(k\in\mathbb{Z}\) we have \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}\,f_{k}((Qm+1)^{2}+(Qn)^{2})\cdot \overline{f_{k}((Qm+1)(Qn))}=0 \tag{6.2}\] where \(f_{k}(n):=f(n)\cdot n^{ki}\), \(n\in\mathbb{N}\). Since \(f\) is aperiodic, so is \(f_{k}\). By [21, Lemma 9.7], we deduce that (6.2) holds, completing the proof. 
Recall that in (2.12) we defined the multiplicative Følner sequence \((\Phi_{K})\) by \[\Phi_{K}:=\Big{\{}\prod_{p\leq K}p^{a_{p}}\colon K<a_{p}\leq 2K\Big{\}},\quad K\in\mathbb{N}.\] Note that every \(q\in\mathbb{N}\) divides all elements of \(\Phi_{K}\) when \(K\in\mathbb{N}\) is large enough depending on \(q\). The next result is a key ingredient in the proof of Proposition 2.12 below.

**Lemma 6.2**.: _Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a completely multiplicative function such that \(f\sim n^{it}\cdot\chi\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\). Let also \(\ell,\ell^{\prime}\in\mathbb{N}\) and \(\delta>0\) be fixed, \(\tilde{w}_{\delta}\) be as in (3.3), and \((\Phi_{K})\) be as in (2.12). For \(Q,N\in\mathbb{N}\) we let_ \[L_{\delta,N}(f,Q):=\mathbb{E}_{m,n\in[N]}\,\tilde{w}_{\delta}(m,n)\cdot f(\ell((Qm+1)^{2}+(Qn)^{2}))\cdot\overline{f(\ell^{\prime}(Qm+1)\,n)} \tag{6.3}\] _and_ \[\tilde{L}_{\delta,N}(f,Q):=Q^{-it}\cdot L_{\delta,N}(f,Q). \tag{6.4}\] _Then_ \[\lim_{K\to\infty}\limsup_{N\to\infty}\max_{Q,Q^{\prime}\in\Phi_{K}}|\tilde{L}_{\delta,N}(f,Q)-\tilde{L}_{\delta,N}(f,Q^{\prime})|=0. \tag{6.5}\]

Proof.: For \(K\in\mathbb{N}\), let \(F_{N}(f,K)\) and \(G_{N}(f,K)\) be defined as in (2.11) and (2.18), respectively. We apply the concentration inequalities of Proposition 2.5 and Proposition 2.11. Since \(f\sim n^{it}\cdot\chi\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\), we get that \[\lim_{K\to\infty}\limsup_{N\to\infty}\max_{Q\in\Phi_{K}}\mathbb{E}_{m\in[N]}|f(Qm+1)-(Qm)^{it}\exp\big{(}F_{N}(f,K)\big{)}|=0\] and \[\lim_{K\to\infty}\limsup_{N\to\infty}\max_{Q\in\Phi_{K}}\mathbb{E}_{m,n\in[N]}\big{|}f\big{(}(Qm+1)^{2}+(Qn)^{2}\big{)}-Q^{2it}\cdot(m^{2}+n^{2})^{it}\cdot\exp\big{(}G_{N}(f,K)\big{)}\big{|}=0.\] We deduce that if \[M_{\delta,N}(f):=f(\ell)\cdot\overline{f(\ell^{\prime})}\cdot\mathbb{E}_{m,n\in[N]}\,\tilde{w}_{\delta}(m,n)\cdot(m^{2}+n^{2})^{it}\cdot m^{-it}\cdot\overline{f(n)},\] then \[\lim_{K\to\infty}\limsup_{N\to\infty}\max_{Q\in\Phi_{K}}|\tilde{L}_{\delta,N}(f,Q)-M_{\delta,N}(f)\cdot\exp(G_{N}(f,K))\cdot\overline{\exp(F_{N}(f,K))}|=0.\] Using this and the triangle inequality, we deduce that (6.5) holds.

Recall that \(\mathcal{M}_{p}\) and \(\mathcal{A}\) were defined in (2.9) and (2.13), respectively. The next result follows easily from Lemma 3.2 and the continuity of finite Borel measures.

**Lemma 6.3**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\). Then for every \(\varepsilon>0\) there exist a Borel subset \(\mathcal{M}_{\varepsilon}\) of \(\mathcal{M}_{p}\setminus\mathcal{A}\) and \(K_{0}\in\mathbb{N}\), such that_ \[\sigma((\mathcal{M}_{p}\setminus\mathcal{A})\setminus\mathcal{M}_{\varepsilon})\leq\varepsilon \tag{6.6}\] _and_ \[\sup_{f\in\mathcal{M}_{\varepsilon}}|\mathbb{E}_{Q\in\Phi_{K}}\,f(Q)\cdot Q^{-it_{f}}|\leq\varepsilon\,\text{ for all }\,K\geq K_{0}, \tag{6.7}\] _where \(t_{f}\) is the unique real for which \(f\sim n^{it_{f}}\cdot\chi\) for some Dirichlet character \(\chi\)._

**Remark**.: The important point is that \(K_{0}\) does not depend on \(f\) as long as \(f\in\mathcal{M}_{\varepsilon}\).

Proof.: Let \(\varepsilon>0\).
For \(m\in\mathbb{N}\), we let \[\mathcal{M}_{\varepsilon,m}:=\{f\in\mathcal{M}_{p}\setminus\mathcal{A}\colon|\mathbb{E}_{Q\in\Phi_{K}}\,f(Q)\cdot Q^{-it_{f}}|\leq\varepsilon\,\text{ for all }\,K\geq m\}.\] Note that by Lemma 3.6 the map \(f\mapsto t_{f}\) from \(\mathcal{M}_{p}\) to \(\mathbb{R}\) is Borel, hence for every \(\varepsilon>0\) the sets \(\mathcal{M}_{\varepsilon,m}\) form an increasing family of Borel sets. Since for \(f\notin\mathcal{A}\) we have \(f\cdot n^{-it_{f}}\neq 1\), we get by Lemma 3.2 that for every \(f\in\mathcal{M}_{p}\setminus\mathcal{A}\) we have \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\,f(Q)\cdot Q^{-it_{f}}=0.\] Hence, \[\mathcal{M}_{p}\setminus\mathcal{A}=\bigcup_{m\in\mathbb{N}}\mathcal{M}_{\varepsilon,m}.\] It follows that there exists \(m_{0}\in\mathbb{N}\) such that \[\sigma((\mathcal{M}_{p}\setminus\mathcal{A})\setminus\mathcal{M}_{\varepsilon,m_{0}})\leq\varepsilon.\] Renaming \(\mathcal{M}_{\varepsilon,m_{0}}\) as \(\mathcal{M}_{\varepsilon}\) and letting \(K_{0}:=m_{0}\) gives the asserted statement.

Using the previous two results we are going to prove Proposition 2.12, which we formulate again for convenience.

**Proposition 2.12**.: _Let \((\Phi_{K})\), \(\mathcal{A}\), \(B_{\delta}(f,Q;m,n)\) be defined by (2.12), (2.13), (2.15), respectively, and let \(\delta>0\). Let also \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\). Then_ \[\lim_{K\to\infty}\limsup_{N\to\infty}\Big{|}\mathbb{E}_{Q\in\Phi_{K}}\,\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{p}\setminus\mathcal{A}}\,B_{\delta}(f,Q;m,n)\,d\sigma(f)\Big{|}=0.\]

Proof.: Let \(\delta,\varepsilon>0\). By Lemma 6.3 there exist \(K_{0}=K_{0}(\sigma)\in\mathbb{N}\) and a Borel subset \(\mathcal{M}_{\varepsilon}\) of \(\mathcal{M}_{p}\setminus\mathcal{A}\), such that \[\sigma((\mathcal{M}_{p}\setminus\mathcal{A})\setminus\mathcal{M}_{\varepsilon})\leq\varepsilon/4 \tag{6.8}\] and \[\sup_{f\in\mathcal{M}_{\varepsilon}}|\mathbb{E}_{Q\in\Phi_{K}}\,f(Q)\cdot Q^{-it_{f}}|\leq\varepsilon/2\,\text{ for all }\,K\geq K_{0}. \tag{6.9}\] Because of (6.8), and since \(|B_{\delta}(f,Q;m,n)|\leq 1\), it suffices to show that \[\lim_{K\to\infty}\limsup_{N\to\infty}\Big{|}\mathbb{E}_{Q\in\Phi_{K}}\,\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{\varepsilon}}\,B_{\delta}(f,Q;m,n)\,d\sigma(f)\Big{|}\leq\varepsilon. \tag{6.10}\] As in Lemma 6.2, for \(Q,N\in\mathbb{N}\) we let \[\tilde{L}_{\delta,N}(f,Q):=f(Q)\cdot Q^{-it_{f}}\cdot\mathbb{E}_{m,n\in[N]}\,B_{\delta}(f,Q;m,n).\] We also let for \(Q,N\in\mathbb{N}\) \[I(Q,N):=\mathbb{E}_{m,n\in[N]}\int_{\mathcal{M}_{\varepsilon}}\,B_{\delta}(f,Q;m,n)\,d\sigma(f)=\int_{\mathcal{M}_{\varepsilon}}\overline{f(Q)\cdot Q^{-it_{f}}}\cdot\tilde{L}_{\delta,N}(f,Q)\,d\sigma(f). \tag{6.11}\] Finally, for \(K\in\mathbb{N}\), we let \(Q_{K}\) be an arbitrary element of \(\Phi_{K}\), and define \[I_{1}(Q,N):=\int_{\mathcal{M}_{\varepsilon}}\overline{f(Q)\cdot Q^{-it_{f}}}\cdot\tilde{L}_{\delta,N}(f,Q_{K})\,d\sigma(f),\quad Q\in\Phi_{K},N\in\mathbb{N}. \tag{6.12}\] Recall that by part (ii) of Lemma 3.6 the map \(f\mapsto t_{f}\) from \(\mathcal{M}_{p}\) to \(\mathbb{R}\) is Borel, so the integral defining \(I_{1}(Q,N)\) is well defined.
Using (6.11) and (6.12) we get that \[\max_{Q\in\Phi_{K}}|I(Q,N)-I_{1}(Q,N)|\leq\max_{Q\in\Phi_{K}}\sup_{f\in\mathcal{M}_{\varepsilon}}|\tilde{L}_{\delta,N}(f,Q)-\tilde{L}_{\delta,N}(f,Q_{K})|,\quad K\in\mathbb{N}.\] We deduce from this and equation (6.5) of Lemma 6.2 that \[\lim_{K\to\infty}\limsup_{N\to\infty}\max_{Q\in\Phi_{K}}|I(Q,N)-I_{1}(Q,N)|=0.\] It follows from the above facts that in order to show that (6.10) holds, it suffices to show that \[\limsup_{K\to\infty}\limsup_{N\to\infty}|\mathbb{E}_{Q\in\Phi_{K}}I_{1}(Q,N)|\leq\varepsilon. \tag{6.13}\] Using the definition of \(I_{1}(Q,N)\) in (6.12) and the estimate (6.9), we get that for every \(K\geq K_{0}\) we have \[\sup_{N\in\mathbb{N}}|\mathbb{E}_{Q\in\Phi_{K}}I_{1}(Q,N)|\leq\sup_{f\in\mathcal{M}_{\varepsilon}}|\mathbb{E}_{Q\in\Phi_{K}}\,f(Q)\cdot Q^{-it_{f}}|\leq\varepsilon.\] Hence, \[\limsup_{K\to\infty}\limsup_{N\to\infty}|\mathbb{E}_{Q\in\Phi_{K}}\,I_{1}(Q,N)|\leq\varepsilon,\] establishing (6.13) and completing the proof.

Finally, we restate and prove Lemma 2.13.

**Lemma 2.13**.: _Let \(\sigma\) be a Borel probability measure on \(\mathcal{M}_{p}\) such that \(\sigma(\{1\})>0\), and let \(\mathcal{A}\) be as in (2.13). Then there exist \(\delta_{0},\rho_{0}>0\), depending only on \(\sigma\), such that_ \[\liminf_{N\to\infty}\inf_{Q\in\mathbb{N}}\Re\Big{(}\mathbb{E}_{m,n\in[N]}\int_{\mathcal{A}}B_{\delta_{0}}(f,Q;m,n)\,d\sigma(f)\Big{)}\geq\rho_{0}. \tag{6.14}\]

Proof.: Using the positivity property of the weight \(\tilde{w}_{\delta}(m,n)\) in Lemma 3.3, the proof is identical to the one used to establish Lemma 2.8, and so we omit it.

### Proof of Theorem 1.8

We sketch the proof of Theorem 1.8. Following the reduction in Section 2.1, we need to show that under the assumptions of Theorem 2.2 we have \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N]}^{\log}\int_{\mathcal{M}}f(n(n+1))\cdot\overline{f(m^{2})}\,d\sigma(f)>0. \tag{6.15}\] To prove this we follow the argument used in the proof of part (ii) of Theorem 2.2.11 We will restrict our average to the grid \(\{(Qn,m)\colon m,n\in\mathbb{N}\}\); this is why, for \(f\in\mathcal{M}\) and \(Q,m,n\in\mathbb{N}\), we let Footnote 11: We follow part (ii) of Theorem 2.2 and not part (i) because we do not know that the limit of the averages \(\mathbb{E}_{m,n\in[N]}^{\log}\,f(n(n+1))\cdot\overline{f(m^{2})}\) exists for every \(f\in\mathcal{M}\). \[B(f,Q;m,n):=f((Qn)(Qn+1))\cdot\overline{f(m^{2})}.\] (For reasons that will become clear shortly, in this case we do not have to introduce any kind of weight \(w_{\delta}\).) We first claim that if \(f\in\mathcal{M}\) is aperiodic, then for every \(Q\in\mathbb{N}\) we have \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N]}^{\log}\,B(f,Q;m,n)=0.\] (This corresponds to Proposition 2.10.) Since \(f\) is completely multiplicative, it suffices to show that \[\lim_{N\to\infty}\mathbb{E}_{n\in[N]}^{\log}\,f(n)\cdot f(Qn+1)=0\quad\text{or}\quad\lim_{N\to\infty}\mathbb{E}_{m\in[N]}^{\log}\,f^{2}(m)=0. \tag{6.16}\] Suppose that \(f^{2}\) does not have logarithmic mean value \(0\). Then, by a consequence of a result of Halász [30], we have \(f^{2}\sim 1\).12 Combining this with the following consequence of a result of Tao in [47], we deduce that (6.16) holds. Footnote 12: Halász's theorem gives that \(f^{2}\sim n^{it}\) for some \(t\in\mathbb{R}\), but for logarithmic averages we have that if \(g\sim n^{it}\) for some \(t\neq 0\), then \(g\) has mean \(0\).

**Lemma 6.4**.: _Suppose that \(f\in\mathcal{M}\) is aperiodic and satisfies \(f^{2}\sim 1\)._
Then for every \(Q\in\mathbb{N}\) we have_ \[\lim_{N\to\infty}\mathbb{E}_{n\in[N]}^{\log}\,f(n)\cdot f(Qn+1)=0. \tag{6.17}\]

Proof.: We say that \(f\in\mathcal{M}\) is strongly aperiodic if for every Dirichlet character \(\chi\) and \(A\geq 1\) we have \(\lim_{N\to\infty}\min_{|t|\leq AN}\mathbb{D}(f,\chi\cdot n^{it};1,N)=+\infty\). It was shown in [47, Corollary 1.5] that if \(f\) is strongly aperiodic, then (6.17) holds for every \(Q\in\mathbb{N}\). Thus, it remains to show that if \(f\) is aperiodic and \(f^{2}\sim 1\), then \(f\) is strongly aperiodic. This can be shown exactly as in the proof of [19, Proposition 6.1]; the assumption \(f^{2}\sim 1\) in our setting replaces the assumption \(f^{k}=1\) for some \(k\in\mathbb{N}\) that was used in [19].

Using the previous claim and the bounded convergence theorem, we get that it suffices to establish (6.15) when the range of integration \(\mathcal{M}\) is replaced by the subset \(\mathcal{M}_{p}\) of pretentious multiplicative functions. Next we claim that if \((\Phi_{K})\) is as in (2.12) and \(\sigma\) is a Borel probability measure on \(\mathcal{M}_{p}\), then \[\lim_{K\to\infty}\limsup_{N\to\infty}\left|\mathbb{E}_{Q\in\Phi_{K}}\mathbb{E}_{m,n\in[N]}^{\log}\int_{\mathcal{M}_{p}\setminus\{1\}}\,B(f,Q;m,n)\,d\sigma(f)\right|=0. \tag{6.18}\] (This corresponds to Proposition 2.12. Note that \(\mathcal{A}\) can be replaced by \(\{1\}\) in this case, which is the reason why the weight \(\tilde{w}_{\delta}\) is not needed for this argument.) To prove this, we argue as in the proof of Proposition 2.12. If \(f\sim n^{it_{f}}\cdot\chi\) for some \(t_{f}\in\mathbb{R}\) and Dirichlet character \(\chi\), for \(Q,N\in\mathbb{N}\) we let \[\tilde{L}_{N}(f,Q):=Q^{-it_{f}}\cdot\mathbb{E}_{m,n\in[N]}^{\log}f(n(Qn+1))\cdot\overline{f(m^{2})}\] and show that \[\lim_{K\to\infty}\limsup_{N\to\infty}\max_{Q,Q^{\prime}\in\Phi_{K}}|\tilde{L}_{N}(f,Q)-\tilde{L}_{N}(f,Q^{\prime})|=0.\] We do this exactly as in the proof of Lemma 6.2, using in this case the concentration estimate of Proposition 2.5 for logarithmic averages (see the third remark following Proposition 2.5). Then (6.18) follows exactly as in the proof of Proposition 2.12. The reason why we only have to exclude the multiplicative function \(\{1\}\) in the integral in (6.18) (versus the set \(\mathcal{A}\) of all Archimedean characters) is that in our current setting we have \[\mathbb{E}_{m,n\in[N]}^{\log}\,B(f,Q;m,n)=f(Q)\cdot Q^{it_{f}}\cdot\tilde{L}_{N}(f,Q),\] and \(Q\mapsto f(Q)\cdot Q^{it_{f}}\) is the trivial multiplicative function only when \(f=1\). Note also that the variant of Lemma 2.13 is trivial in our case, since \(\mathcal{A}\) is replaced by \(\{1\}\). With the above information we can complete the proof of (6.15) exactly as we did at the end of Section 2.3.

## 7. Pythagorean triples on level sets - Reduction to the pretentious case

First, let us recall a convention made in Section 2.4, which we will continue to follow in this and the next section. When we write \(\mathbb{E}_{k\in\mathbb{N}}^{*}\), we mean the limit \(\lim_{K\to\infty}\mathbb{E}_{k\in\Phi_{K}}\), where \((\Phi_{K})\) is a multiplicative Følner sequence chosen so that all the limits in the following statements exist. Since it will always be the case in our arguments that only a countable collection of limits needs to be considered, such a Følner sequence can be taken as a subsequence of any given multiplicative Følner sequence.
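To make the averaging convention \(\mathbb{E}_{k\in\mathbb{N}}^{*}\) concrete, here is a small illustrative sketch that enumerates the multiplicative Følner sets \(\Phi_{K}\) of (2.12) and averages a completely multiplicative function over them; \(f\) is evaluated through its values at primes, so the astronomically large elements of \(\Phi_{K}\) are never formed, and the example function is an arbitrary choice.

```python
# Sketch of the multiplicative Folner sets Phi_K = { prod_{p<=K} p^{a_p} :
# K < a_p <= 2K } and of the average E_{Q in Phi_K} f(Q) for a completely
# multiplicative f, computed via f's values at primes.
from itertools import product
from sympy import primerange

def folner_average(f_at_prime, K):
    primes = list(primerange(2, K + 1))
    exponents = range(K + 1, 2 * K + 1)  # K < a_p <= 2K
    total, count = 0.0, 0
    for a in product(exponents, repeat=len(primes)):
        value = 1.0
        for p, a_p in zip(primes, a):
            value *= f_at_prime(p) ** a_p  # complete multiplicativity
        total += value
        count += 1
    return total / count

# A nontrivial real-valued completely multiplicative function: f(2) = -1 and
# f(p) = 1 for every other prime. Its Phi_K-averages tend to 0 as K grows,
# in line with the mean-zero behaviour used above for functions f != 1.
for K in (2, 3, 4, 5):
    print(K, folner_average(lambda p: -1.0 if p == 2 else 1.0, K))
```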
As explained in Section 2.4, the proof of Theorem 2.14 splits into two parts, Propositions 2.16 and 2.18. Our goal in this section is to establish the first part, which we now state in a more general form (we do not assume that \(f\) takes finitely many values).

**Proposition 7.1**.: _Suppose that for every completely multiplicative function \(h\colon\mathbb{N}\to\mathbb{S}^{1}\), with \(h\sim n^{it}\) for some \(t\in\mathbb{R}\), modified Dirichlet character \(\tilde{\chi}\colon\mathbb{N}\to\mathbb{S}^{1}\), and open arc \(I\) on \(\mathbb{S}^{1}\) around \(1\), we have_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,\,A(k\,(m^{2}-n^{2}))\cdot A(k\,2mn)\cdot A(k\,(m^{2}+n^{2}))>0,\] _where_ \[A(n):=F(h(n))\cdot F(\tilde{\chi}(n)),\quad n\in\mathbb{N},\quad F:=\mathbf{1}_{I}.\] _Then for every completely multiplicative function \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) and open arc \(I\) around \(1\), we have_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\mathbb{E}_{k\in\mathbb{N}}^{*}\,F(f(k\,(m^{2}-n^{2})))\cdot F(f(k\,2mn))\cdot F(f(k\,(m^{2}+n^{2})))>0,\] _where \(F\) is as before. Furthermore, if our assumptions hold for all finite-valued completely multiplicative functions \(h\), then the conclusion holds for all finite-valued completely multiplicative functions \(f\)._

### Preparation

Recall that we write \(f\sim g\) if \(\mathbb{D}(f,g)<+\infty\), where \(\mathbb{D}(f,g)\) is as in (3.5).

**Lemma 7.2**.: _Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a completely multiplicative function such that \(f\sim n^{it}\) for some \(t\in\mathbb{R}\). Then for every \(d\in\mathbb{N}\), there exists a completely multiplicative function \(g\colon\mathbb{N}\to\mathbb{S}^{1}\), such that \(g\sim n^{it/d}\) and \(g^{d}=f\). Furthermore, if \(f\) takes finitely many values, then so does \(g\)._

Proof.: Suppose first that \(f\sim 1\). Then \(f(p)=e(\theta_{p})\), \(p\in\mathbb{P}\), for some \(\theta_{p}\in[-1/2,1/2)\) with \(\sum_{p\in\mathbb{P}}\frac{1-\cos(2\pi\theta_{p})}{p}<+\infty\). Hence, \(\sum_{p\in\mathbb{P}}\frac{\theta_{p}^{2}}{p}<+\infty\). We define the completely multiplicative function \(g\colon\mathbb{N}\to\mathbb{S}^{1}\) by \[g(p):=e(\tilde{\theta}_{p}),\ \ \text{where}\ \ \tilde{\theta}_{p}:=\theta_{p}/d,\ p\in\mathbb{P}.\] We have \(g^{d}=f\). Also \(\sum_{p\in\mathbb{P}}\frac{\tilde{\theta}_{p}^{2}}{p}<+\infty\), hence \(g\sim 1\). Now suppose that \(f\sim n^{it}\), and let \(d\in\mathbb{N}\). Then \(f\cdot n^{-it}\sim 1\), and the previous argument gives that there exists \(h\colon\mathbb{N}\to\mathbb{S}^{1}\) with \(h\sim 1\) such that \(h^{d}=f\cdot n^{-it}\). Let \(g:=h\cdot n^{it/d}\). Then \(g^{d}=f\) and \(g\sim n^{it/d}\).

A similar statement is not always true when \(f\sim\chi\) where \(\chi\) is a Dirichlet character (not even when \(f=\chi\)). We remind the reader that modified Dirichlet characters \(\tilde{\chi}\) were defined in Section 3.3. If a completely multiplicative function \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) is such that \(f^{l}\) is aperiodic for every \(l\in\mathbb{N}\), then things are easier for us. If this is not the case (for example, it is never the case when \(f\) is finite-valued), then the next lemma gives a useful decomposition to work with.

**Lemma 7.3**.: _Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be an aperiodic completely multiplicative function such that \(f^{d}\) is pretentious for some \(d\in\mathbb{N}\), and suppose that \(d\geq 2\) is the smallest such \(d\)._
Then there exist completely multiplicative functions \(g,h\colon\mathbb{N}\to\mathbb{S}^{1}\) and a Dirichlet character \(\chi\), such that_

* \(f=g\cdot h\);
* \(g,\ldots,g^{d-1}\) are aperiodic and \(g^{d}=\tilde{\chi}\);
* \(h\sim n^{it}\) for some \(t\in\mathbb{R}\).

_Furthermore, if \(f\) takes finitely many values, then so does \(h\), and \(h\sim 1\)._

Proof.: By our assumption, we have that \(f,\ldots,f^{d-1}\) are aperiodic and \(f^{d}\sim n^{it}\cdot\chi\) for some \(t\in\mathbb{R}\) and Dirichlet character \(\chi\). Then \(f^{d}\cdot\overline{\tilde{\chi}}\sim n^{it}\), and Lemma 7.2 gives that there exists a completely multiplicative function \(h\colon\mathbb{N}\to\mathbb{S}^{1}\) such that \[h\sim n^{it/d}\quad\text{and}\quad h^{d}=f^{d}\cdot\overline{\tilde{\chi}}.\] Let \(g:=f\cdot\overline{h}\). Then obviously \(f=g\cdot h\). Also, for \(j=1,\ldots,d-1\), the function \(g^{j}=f^{j}\cdot\overline{h}^{j}\) is aperiodic, since by assumption \(f^{j}\) is aperiodic and \(\overline{h}^{j}\) is pretentious. Moreover, \[g^{d}=f^{d}\cdot\overline{h}^{d}=\tilde{\chi}.\] Lastly, suppose that \(f\) takes finitely many values. Since \(g\) also takes finitely many values, and \(h:=f\cdot\overline{g}\), we have that \(h\) takes finitely many values. Also, since \(h\) takes finitely many values and \(h\sim n^{it}\) for some \(t\in\mathbb{R}\), we have that \(t=0\). This completes the proof.

Since \(\chi\) is a Dirichlet character, there exists \(r\in\mathbb{N}\) such that \(\tilde{\chi}^{r}=1\). We gather some facts about \(g\) that we shall use in the proof of Proposition 7.1:

* \(g^{rd}=\tilde{\chi}^{r}=1\), hence \(g\) takes values in \((rd)\)-th roots of unity and the sequence \((g^{j})_{j\in\mathbb{N}}\) is periodic with period \(rd\).
* \(g^{d}=\tilde{\chi}\), \(g^{2d}=\tilde{\chi}^{2}\), \(\ldots\), \(g^{(r-1)d}=\tilde{\chi}^{r-1}\), \(g^{rd}=1\).
* \(g^{j}\) is aperiodic if \(j\not\equiv 0\pmod{d}\).

### Proof of Proposition 7.1

In this subsection we prove Proposition 7.1. For convenience we use the following notation.

**Definition 7.1**.: If \(I\) is a circular arc around \(1\) and \(d\in\mathbb{N}\), we let \[I/d:=\{e(t/d)\colon e(t)\in I,t\in[-1/2,1/2)\}.\]

Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a completely multiplicative function and \(I\) be an open arc around \(1\). Let also \(F\colon\mathbb{S}^{1}\to[0,1]\) be a continuous function such that \[\mathbf{1}_{I/4}\leq F\leq\mathbf{1}_{I/2}.\] It suffices to show that under the assumption of Proposition 7.1 we have \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,F(f(k(m^{2}-n^{2})))\cdot F(f(k\,2mn))\cdot F(f(k(m^{2}+n^{2})))>0. \tag{7.1}\] We consider three cases.

**Case 1.** If \(f\) is pretentious, then \(f=h\cdot\tilde{\chi}\), where \(h\sim n^{it}\) for some \(t\in\mathbb{R}\) and \(\tilde{\chi}\) is a modified Dirichlet character, and the conclusion follows from our assumption.

**Case 2.** Suppose that \(f\) is aperiodic and \(f^{d}\) is pretentious for some \(d\geq 2\). We use Lemma 7.3 to get a decomposition \(f=gh\), where \(g\) takes values in \((rd)\)-th roots of unity for some \(r\in\mathbb{N}\), \(g,\ldots,g^{d-1}\) are aperiodic and \(g^{d}=\tilde{\chi}\) for some modified Dirichlet character \(\tilde{\chi}\), and \(h\sim n^{it}\) for some \(t\in\mathbb{R}\).
Note first that in order to establish (7.1) it suffices to show that \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,c_{k,m,n}\cdot F(g(k(m^{2}-n^{2})))\cdot F(g(k\,2mn))\cdot F(g(k(m^{2}+n^{2})))>0, \tag{7.2}\] where \[c_{k,m,n}:=F(h(k(m^{2}-n^{2})))\cdot F(h(k\,2mn))\cdot F(h(k(m^{2}+n^{2}))),\quad k,m,n\in\mathbb{N}. \tag{7.3}\] This is so, since if \(g(n),h(n)\in I/2\), then \(f(n)=g(n)\cdot h(n)\in I\).

**Main Claim.**_If for \(G:=\mathbf{1}_{\{1\}}\) and \(c_{k,m,n}\) as in (7.3) we have_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,c_{k,m,n}\cdot G(\tilde{\chi}(k(m^{2}-n^{2})))\cdot G(\tilde{\chi}(k\,2mn))\cdot G(\tilde{\chi}(k(m^{2}+n^{2})))>0, \tag{7.4}\] _then (7.2) holds._

Note that (7.4) is satisfied from the hypothesis of Proposition 7.1. So to finish the proof of Proposition 7.1 in Case 2, it remains to verify the above claim. We start with a simple identity. Since \(g\) takes values in \((rd)\)-th roots of unity, we have \[\mathbf{1}_{g=1}=\mathbb{E}_{0\leq j<rd}\,g^{j}.\] Since \(F\geq\mathbf{1}_{\{1\}}\), it suffices to verify (7.2) with \(\sum_{j=0}^{rd-1}g^{j}\) in place of \(F\circ g\). Let \[J:=\{0\leq j<rd\colon j\not\equiv 0\pmod{d}\}.\] Recall that \(g^{j}\) is aperiodic for \(j\in J\). Also \(g^{d}=\tilde{\chi}\) and \(\tilde{\chi}\) takes values in \(r\)-th roots of unity, hence \[\sum_{j=0}^{rd-1}g^{j}=\sum_{j=0}^{r-1}\tilde{\chi}^{j}+\sum_{j\in J}g^{j}=r\cdot\mathbf{1}_{\tilde{\chi}=1}+\sum_{j\in J}g^{j}.\] Hence, in order to verify (7.2), it suffices to show that \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\ c_{k,m,n}\cdot H(k(m^{2}-n^{2}))\cdot H(k\,2mn)\cdot H(k(m^{2}+n^{2}))>0, \tag{7.5}\] where \[H:=r\cdot\mathbf{1}_{\tilde{\chi}=1}+\sum_{j\in J}g^{j}.\] After expanding the product we get a finite sum of expressions of the form \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\ c_{k,m,n}\cdot H_{1}(k(m^{2}-n^{2}))\cdot H_{2}(k\,2mn)\cdot H_{3}(k(m^{2}+n^{2})), \tag{7.6}\] where each \(H_{1},H_{2},H_{3}\) is either of the form \(r\cdot\mathbf{1}_{\tilde{\chi}=1}\), or of the form \(g^{j}\) for some \(j\in J\). With this in mind, we see that the positivity property (7.5) would follow once we establish the following three claims:

(i) If \(H_{1}=H_{2}=H_{3}=r\cdot\mathbf{1}_{\tilde{\chi}=1}\), then the limit in (7.6) is positive.
(ii) If \(H_{1}=H_{2}=r\cdot\mathbf{1}_{\tilde{\chi}=1}\) and \(H_{3}=g^{j}\) for some \(j\in J\), then the limit in (7.6) is \(0\).
(iii) If \(H_{1}=g^{j}\) or \(H_{2}=g^{j}\) for some \(j\in J\), then the limit in (7.6) is \(0\).

(We do not combine the last two cases because the argument we use is different.)

We prove (i). This follows immediately from the assumption (7.4) of the Main Claim.

We prove (ii).
We will show that for every \(m,n\in\mathbb{N}\) with \(m>n\) we have \[\mathbb{E}_{k\in\mathbb{N}}^{*}\ c_{k,m,n}\cdot H_{1}(k(m^{2}-n^{2}))\cdot H_{2}(k\,2mn)\cdot H_{3}(k(m^{2}+n^{2}))=0.\] Using the definition of \(c_{k,m,n}\) in (7.3) and uniform approximation of \(F\), it suffices to show that for every \(m,n\in\mathbb{N}\) with \(m>n\) we have \[\mathbb{E}_{k\in\mathbb{N}}^{*}\ H_{1}^{\prime}(k(m^{2}-n^{2}))\cdot H_{2}^{\prime}(k\,2mn)\cdot H_{3}^{\prime}(k(m^{2}+n^{2}))=0,\] where \(H_{1}^{\prime}:=\tilde{\chi}^{j_{1}}\cdot h^{j_{2}}\), \(H_{2}^{\prime}:=\tilde{\chi}^{j_{3}}\cdot h^{j_{4}}\), and \(H_{3}^{\prime}:=g^{j_{5}}\cdot h^{j_{6}}\), for some \(j_{1},j_{2},j_{3},j_{4},j_{6}\in\mathbb{Z}\) and \(j_{5}:=j\in J\). Factoring out the multiplicative average \(\mathbb{E}_{k\in\mathbb{N}}^{*}\), we get that it suffices to show that \[\mathbb{E}_{k\in\mathbb{N}}^{*}\ H(k)=0\quad\text{where}\quad H:=\tilde{\chi}^{j_{1}+j_{3}}\cdot h^{j_{2}+j_{4}+j_{6}}\cdot g^{j_{5}}.\] Since \(g^{j_{5}}\) is aperiodic and \(\tilde{\chi}^{j_{1}+j_{3}}\cdot h^{j_{2}+j_{4}+j_{6}}\) is pretentious, we get that \(H\neq 1\), hence \(\mathbb{E}_{k\in\mathbb{N}}^{*}\ H(k)=0\).

We prove (iii). Suppose that \(H_{1}=g^{j_{1}}\) for some \(j_{1}\in J\); the argument is similar if \(H_{2}=g^{j_{2}}\) for some \(j_{2}\in J\). Using the definition of \(c_{k,m,n}\) from (7.3) and uniform approximation of \(F\), it suffices to show that \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\ H_{1}^{\prime}(k(m^{2}-n^{2}))\cdot H_{2}^{\prime}(k\,2mn)\cdot H_{3}^{\prime}(k(m^{2}+n^{2}))=0,\] where \(H_{1}^{\prime}:=g^{j_{1}}\cdot h^{j_{2}}\), \(H_{2}^{\prime}:=\tilde{\chi}^{j_{3}}\cdot h^{j_{4}}\) or \(g^{j_{5}}\cdot h^{j_{6}}\), and \(H_{3}^{\prime}:=\tilde{\chi}^{j_{7}}\cdot h^{j_{8}}\) or \(g^{j_{9}}\cdot h^{j_{10}}\), for some \(j_{2},\ldots,j_{10}\in\mathbb{Z}\). Factoring out the multiplicative average \(\mathbb{E}_{k\in\mathbb{N}}^{*}\ (H_{1}^{\prime}\cdot H_{2}^{\prime}\cdot H_{3}^{\prime})(k)\), we get that it suffices to show that \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,H_{1}^{\prime}(m^{2}-n^{2})\cdot H_{2}^{\prime}(2mn)\cdot H_{3}^{\prime}(m^{2}+n^{2})=0, \tag{7.7}\] where \(H_{1}^{\prime}\) is an aperiodic completely multiplicative function (since \(g^{j_{1}}\) is aperiodic and \(h^{j_{2}}\) is pretentious), and \(H_{2}^{\prime}\), \(H_{3}^{\prime}\) are completely multiplicative functions. The hypothesis of Proposition 2.15 is satisfied and we deduce that (7.7) holds. This finishes the proof of the Main Claim and the proof of Case 2.

**Case 3.** Suppose that \(f^{l}\) is aperiodic for every \(l\in\mathbb{N}\). In this case we claim that the following identity holds \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,F(f(k(m^{2}-n^{2})))\cdot F(f(k\,2mn))\cdot F(f(k(m^{2}+n^{2})))=\Big{(}\int F\,dm_{\mathbb{S}^{1}}\Big{)}^{3}.\] If we prove this, then (7.1) holds, since \(\int F\,dm_{\mathbb{S}^{1}}\geq m_{\mathbb{S}^{1}}(I/4)>0\). Using uniform approximation of \(F\), it suffices to show that \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,F_{1}(k(m^{2}-n^{2}))\cdot F_{2}(k\,2mn)\cdot F_{3}(k(m^{2}+n^{2}))=0\] when for \(i=1,2,3\) we have \(F_{i}=f^{j_{i}}\), \(j_{i}\in\mathbb{Z}\), and at least one of the \(j_{1},j_{2},j_{3}\) is non-zero. We consider two cases. Suppose first that \(j_{1}=j_{2}=0\). Then \(j_{3}\neq 0\).
After factoring out the multiplicative average \(\mathbb{E}_{k\in\mathbb{N}}^{*}\), it suffices to show that \[\mathbb{E}_{k\in\mathbb{N}}^{*}\,f^{j_{3}}(k)=0.\] This is the case since \(f^{j_{3}}\) is a non-trivial completely multiplicative function. Suppose now that \(j_{1}\neq 0\); the argument is similar if \(j_{2}\neq 0\). After factoring out the multiplicative average \(\mathbb{E}_{k\in\mathbb{N}}^{*}\), it suffices to show that \[\lim_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,F_{1}(m^{2}-n^{2})\cdot F_{2}(2mn)\cdot F_{3}(m^{2}+n^{2})=0. \tag{7.8}\] By our assumption, \(F_{1}=f^{j_{1}}\) is aperiodic. Note also that all of \(F_{1},F_{2},F_{3}\) are completely multiplicative functions. The asserted identity then follows again from Proposition 2.15.13 Footnote 13: It is crucial for this part of the argument that we avoided working with an aperiodicity assumption on \(F_{3}\), since such an assumption does not imply that (7.8) holds (but it does hold if \(F_{1}\) or \(F_{2}\) are aperiodic completely multiplicative functions).

## 8. Pythagorean triples on level sets - The pretentious case

Our goal in this section is to prove Proposition 2.18, which combined with Proposition 7.1 (Proposition 2.16 is a direct consequence) implies Theorem 1.5. We first restate Proposition 2.18 in a slightly more convenient form. Let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a pretentious completely multiplicative function taking finitely many values. Then for some \(d\in\mathbb{N}\) it takes values in \(d\)-th roots of unity. We can assume that \(d\) is minimal with this property, in which case we have \(f^{j}\neq 1\) for \(j=1,\dots,d-1\). In this case we will show the following.

**Proposition 8.1**.: _Let \(d\in\mathbb{N}\), let \(f\colon\mathbb{N}\to\mathbb{S}^{1}\) be a pretentious completely multiplicative function taking values in \(d\)-th roots of unity, and let \(\tilde{\chi}\colon\mathbb{N}\to\mathbb{S}^{1}\) be a modified Dirichlet character. Then_ \[\liminf_{N\to\infty}\mathbb{E}_{m,n\in[N],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\ A(k\,(m^{2}-n^{2}))\cdot A(k\,2mn)\cdot A(k\,(m^{2}+n^{2}))>0,\] _where_ \[A(n):=F(f(n))\cdot F(\tilde{\chi}(n)),\quad n\in\mathbb{N},\quad F:=\mathbf{1}_{\{1\}}. \tag{8.1}\]

**Remark**.: Note that in the argument that follows we only deal with countably many choices of multiplicative functions and other choices of parameters, so we can choose a subsequence of positive integers \((N_{l})\) along which all the limits (as \(l\to\infty\)) that appear below exist. We make this implicit assumption throughout.

Before giving the proof of Proposition 8.1, we show how the concentration estimates of Corollary 2.17 follow from Propositions 2.5 and 2.11.

### Proof of Corollary 2.17

We will deduce part (i) from Proposition 2.5. In a similar fashion we can deduce part (ii) from Proposition 2.11. Let \(0<\varepsilon<1\). Since \(f\) is a finite-valued pretentious multiplicative function, we have by Lemma 3.4 that \(f\sim\chi\) for some Dirichlet character \(\chi\) with period \(q\) and \[\sum_{p\in\mathbb{P}}\,\frac{1}{p}\,|1-f(p)\cdot\overline{\chi(p)}|<\infty.\] Hence, there exists \(K_{0}\in\mathbb{N}\) such that \[\sum_{p\geq K_{0}}\,\frac{1}{p}\,|1-f(p)\cdot\overline{\chi(p)}|+K_{0}^{-1/2}\leq\varepsilon.\] This implies that for every \(K\geq K_{0}\) we have \[\mathbb{D}(f,\chi;K,\infty)\leq\varepsilon\quad\text{and}\quad\big{|}\exp\big{(}F_{N}(f,K)\big{)}-1\big{|}\ll\varepsilon,\] where \(F_{N}(f,K)=\sum_{K<p\leq N}\frac{1}{p}\left(f(p)\cdot\overline{\chi(p)}-1\right)\).
We let \(Q_{0}:=q\cdot\prod_{p\leq K_{0}}p\). If \(Q\in\mathbb{N}\) is such that \(Q_{0}\mid Q\), then \(Q\in\Psi_{K}\) for some \(K\geq K_{0}\), where \(\Psi_{K}:=\{q\cdot\prod_{p\leq K}p^{a_{p}}\colon a_{p}\in\mathbb{N}\}\). Applying Proposition 2.5 (see the second remark following the statement) with \(t=0\), we get that if \(K\geq K_{0}\), then \[\limsup_{N\to\infty}\mathbb{E}_{n\in[N]}|f(Qn+1)-\exp\big{(}F_{N}(f,K)\big{)}|\ll\varepsilon.\] This completes the proof.

### Proof of Proposition 8.1

Recall that \(A(n)\) is given by (8.1). Since \(A(n)\geq 0\) for every \(n\in\mathbb{N}\), it suffices to show that there exist \(Q\in\mathbb{N}\) and \(N_{l}\to\infty\) (which can be taken to be a subsequence of any given \(M_{l}\to\infty\)) such that all limits appearing below as \(l\to\infty\) exist and \[\lim_{l\to\infty}\mathbb{E}_{m,n\in[N_{l}],m>n}\,\mathbb{E}_{k\in\mathbb{N}}^{*}\,A(k((Qm+1)^{2}-(Qn)^{2}))\cdot A(k\,2(Qm+1)(Qn))\cdot\\ A(k((Qm+1)^{2}+(Qn)^{2}))>0.\] Since \(f\) takes values in \(d\)-th roots of unity and \(\tilde{\chi}\) takes values in \(d^{\prime}\)-th roots of unity for some \(d,d^{\prime}\in\mathbb{N}\), we have \[F(f)=\mathbf{1}_{f=1}=\mathbb{E}_{0\leq j<d}\,f^{j},\quad F(\tilde{\chi})=\mathbf{1}_{\tilde{\chi}=1}=\mathbb{E}_{0\leq j<d^{\prime}}\,\tilde{\chi}^{j}. \tag{8.2}\] Let \(m,n\in\mathbb{N}\) with \(m>n\) be fixed. In order to compute \[\mathbb{E}_{k\in\mathbb{N}}^{*}\,A(k(m^{2}-n^{2}))\cdot A(k\,2mn)\cdot A(k(m^{2}+n^{2})),\] we use (8.2), expand, and use that by Lemma 3.2 we have \(\mathbb{E}_{k\in\mathbb{N}}^{*}\,g(k)=0\) for all completely multiplicative functions \(g\colon\mathbb{N}\to\mathbb{U}\) with \(g\neq 1\) (in particular this holds if \(g:=f^{k}\cdot\tilde{\chi}^{k^{\prime}}\neq 1\)). We see that the previous expression is equal to \(1/(dd^{\prime})^{3}\) times \[\sum_{k_{i},k_{i}^{\prime}\in\mathcal{K}}(f^{k_{1}}\cdot\tilde{\chi}^{k_{1}^{\prime}})(m^{2}-n^{2})\cdot(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}})(2mn)\cdot(f^{k_{3}}\cdot\tilde{\chi}^{k_{3}^{\prime}})(m^{2}+n^{2}),\] where \[\mathcal{K}:=\{0\leq k_{1},k_{2},k_{3}<d,\,0\leq k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime}<d^{\prime}\colon f^{k_{1}+k_{2}+k_{3}}\cdot\tilde{\chi}^{k_{1}^{\prime}+k_{2}^{\prime}+k_{3}^{\prime}}=1\}.\] In what follows, we implicitly assume that all \(k_{i},k_{i}^{\prime}\) belong to \(\mathcal{K}\). Let \(q\) be the period of \(\chi\); then \(\tilde{\chi}(qn+1)=1\) for every \(n\in\mathbb{N}\). With the previous facts in mind, we see that in order to establish the needed positivity it suffices to show that there exists \(Q\in\mathbb{N}\) such that \(q\mid Q\) and \[L(Q):=\sum_{k_{i},k_{i}^{\prime}\in\mathcal{K}}\,\Re(L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q))>0, \tag{8.3}\] where \[L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q):=\lim_{l\to\infty}\mathbb{E}_{m,n\in[N_{l}],m>n}\,f^{k_{1}}((Qm+1)^{2}-(Qn)^{2})\cdot f^{k_{2}}(2(Qm+1)(Qn))\cdot\\ f^{k_{3}}((Qm+1)^{2}+(Qn)^{2})\cdot\tilde{\chi}^{k_{2}^{\prime}}(2(Qn)). \tag{8.4}\] (We used that \(\tilde{\chi}(j)=1\) for \(j\in Q\mathbb{Z}+1\).)

**Claim 1** (\(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1\)).: _For every \(\varepsilon>0\) there exists \(Q_{0}=Q_{0}(f,\tilde{\chi},\varepsilon)\in\mathbb{N}\) with \(q\mid Q_{0}\), such that the following holds: If \(Q\in\mathbb{N}\) satisfies \(Q_{0}\mid Q\), then for all \(k_{1},k_{2},k_{3},k_{2}^{\prime}\in\mathcal{K}\) with \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1\) we have_ \[\Re(L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q))\geq 1-\varepsilon.
\tag{8.5}\] _As a consequence, there exists \(Q_{0}:=Q_{0}(f,\tilde{\chi})\), such that if \(Q\in\mathbb{N}\) satisfies \(Q_{0}\mid Q\), then_ \[\sum_{k_{1},k_{2},k_{3},k_{2}^{\prime}\,\colon\,f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1}\Re(L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q))\geq 1. \tag{8.6}\]

We prove the claim. Let \(\varepsilon>0\). Note first that since \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1\), we get using (8.4) that \[L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)=\lim_{l\to\infty}\mathbb{E}_{m,n\in[N_{l}],m>n}\,f^{k_{1}}(Q(m-n)+1)\cdot f^{k_{1}}(Q(m+n)+1)\cdot\\ f^{k_{2}}(Qm+1)\cdot f^{k_{3}}((Qm+1)^{2}+(Qn)^{2}). \tag{8.7}\] Using this identity, Corollary 2.17, and Lemma 3.1, we deduce that there exists \(Q_{0}=Q_{0}(f,\varepsilon)\), with \(q\mid Q_{0}\), such that if \(Q\in\mathbb{N}\) satisfies \(Q_{0}\mid Q\), then for all \(k_{1},k_{2},k_{3},k_{2}^{\prime}\in\mathcal{K}\) such that \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1\) we have \[|L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)-1|\ll_{d}\varepsilon.\] This proves (8.5). Since \(L_{0,0,0,0}(Q)=1\), using (8.5) for \(\varepsilon=1/2\), we deduce (8.6). This completes the proof of Claim 1.

**Claim 2** (\(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}\neq 1\)).: _Let \(Q_{0}\in\mathbb{N}\) be such that (8.6) holds for every \(Q\in\mathbb{N}\) with \(Q_{0}\mid Q\). Then for every \(\varepsilon>0\) there exists \(Q_{1}=Q_{1}(f,\tilde{\chi},\varepsilon)\in\mathbb{N}\) such that \(Q_{0}\mid Q_{1}\) (hence (8.6) holds for \(Q=Q_{1}\)) and_ \[\sum_{k_{1},k_{2},k_{3},k_{2}^{\prime}\,\colon\,f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}\neq 1}\Re(L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q_{1}))\geq-\varepsilon. \tag{8.8}\]

We prove the claim. Let \(\varepsilon>0\). It suffices to show that \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\,L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)=0\quad\text{as long as }f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}\neq 1. \tag{8.9}\] Indeed, if (8.9) holds, then for all large enough \(K\) every element of \(\Phi_{K}\) is divisible by \(Q_{0}\) and the average over \(Q\in\Phi_{K}\) of the left-hand side of (8.8) is at least \(-\varepsilon\), so a suitable \(Q_{1}\in\Phi_{K}\) exists. Note that \[L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)=(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}})(2Q)\cdot L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}^{\prime}(Q),\] where \[L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}^{\prime}(Q):=\lim_{l\to\infty}\mathbb{E}_{m,n\in[N_{l}],m>n}\,f^{k_{1}}(Q(m-n)+1)\cdot f^{k_{1}}(Q(m+n)+1)\cdot\\ f^{k_{2}}(Qm+1)\cdot f^{k_{3}}((Qm+1)^{2}+(Qn)^{2})\cdot f^{k_{2}}(n)\cdot\tilde{\chi}^{k_{2}^{\prime}}(n). \tag{8.10}\] We prove (8.9). Let \(\varepsilon^{\prime}>0\). Using (8.10), Corollary 2.17, and Lemma 3.1, we get that there exists \(Q_{2}=Q_{2}(f,\varepsilon^{\prime})\) such that the following holds: If \(Q\in\mathbb{N}\) satisfies \(Q_{2}\mid Q\), then for all \(k_{1},k_{2},k_{3},k_{2}^{\prime}\in\mathcal{K}\) we have \[|L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}^{\prime}(Q)-c_{k_{2},k_{2}^{\prime}}|\ll\varepsilon^{\prime}, \tag{8.11}\] where \[c_{k_{2},k_{2}^{\prime}}:=\lim_{l\to\infty}\mathbb{E}_{n\in[N_{l}]}f^{k_{2}}(n)\cdot\tilde{\chi}^{k_{2}^{\prime}}(n).\] (The limit exists since \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}\) is finite-valued, but we do not have to use this.) Hence, by (8.4), (8.10), and (8.11), we have \[|L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)-c_{k_{2},k_{2}^{\prime}}\cdot(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}})(2Q)|\ll\varepsilon^{\prime}\quad\text{for all }Q\text{ with }Q_{2}\mid Q. \tag{8.12}\] Since by assumption \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}\neq 1\), we have \[\lim_{K\to\infty}\mathbb{E}_{Q\in\Phi_{K}}\,(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}})(Q)=0.\] Combining this with (8.12), we get that (8.9) holds. This proves Claim 2.
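As a brief aside, the identities in (8.2) rest on the orthogonality of roots of unity: for \(z\) a \(d\)-th root of unity, \(\frac{1}{d}\sum_{j=0}^{d-1}z^{j}\) equals \(1\) if \(z=1\) and \(0\) otherwise. A quick check (the choice \(d=6\) is arbitrary):

```python
# Check of the orthogonality identity behind (8.2): for z a d-th root of
# unity, (1/d) * sum_{j<d} z^j equals 1 when z = 1 and 0 otherwise.
import cmath

d = 6
for k in range(d):
    z = cmath.exp(2j * cmath.pi * k / d)  # z = e(k/d), a d-th root of unity
    avg = sum(z**j for j in range(d)) / d
    print(f"k={k}: average = {avg.real:+.6f} {avg.imag:+.6f}i")
```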
Putting together the two claims, in particular the estimates (8.6) and (8.8), we deduce that for every \(\varepsilon>0\) there exists \(Q_{1}=Q_{1}(f,\tilde{\chi},\varepsilon)\in\mathbb{N}\) with \(q\mid Q_{1}\), such that \(L(Q_{1})\geq 1-\varepsilon\), hence (8.3) holds for \(Q=Q_{1}\). This completes the proof.

### More general equations

Our methods allow us to extend Theorem 1.5 and cover more general equations of the form \[ax^{2}+by^{2}=cz^{2} \tag{8.13}\] where \(a,b,c\in\mathbb{N}\) are squares satisfying Rado's condition, i.e., we have either \(a=c\), or \(b=c\), or \(a+b=c\). We summarize the key differences in the argument. Suppose first that \(a=c\) (the case \(b=c\) is similar). Then, as in Section 1.4, we get parametrizations of (8.13) of the form \[x=\ell_{1}\,(m^{2}-n^{2}),\quad y=\ell_{2}\,mn,\quad z=\ell_{3}\,(m^{2}+n^{2}),\] for some \(\ell_{1},\ell_{2},\ell_{3}\in\mathbb{N}\), and our hypothesis \(a=c\) implies that we can take \(\ell_{1}=\ell_{3}\). This fact is then used to handle Claim 1 in the proof of Proposition 8.1, and the rest of the argument remains unchanged. To see how Claim 1 is handled, note that in our setting the expressions \(L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)\) in (8.4) take the form \[L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q):=c_{k_{1},k_{2},k_{3}}\cdot\lim_{l\to\infty}\mathbb{E}_{m,n\in[N_{l}],m>n}\,f^{k_{1}}((Qm+1)^{2}-(Qn)^{2})\cdot f^{k_{2}}(2(Qm+1)(Qn))\cdot\\ f^{k_{3}}((Qm+1)^{2}+(Qn)^{2})\cdot\tilde{\chi}^{k_{2}^{\prime}}(2(Qn)), \tag{8.14}\] where \[c_{k_{1},k_{2},k_{3}}:=(f^{k_{1}}\cdot\tilde{\chi}^{k_{1}^{\prime}})(\ell_{1})\cdot(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}})(\ell_{2})\cdot(f^{k_{3}}\cdot\tilde{\chi}^{k_{3}^{\prime}})(\ell_{3}).\] Using additionally that \(\ell_{1}=\ell_{3}\) and that \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1\), \(f^{k_{1}+k_{2}+k_{3}}\cdot\tilde{\chi}^{k_{1}^{\prime}+k_{2}^{\prime}+k_{3}^{\prime}}=1\), which are standing assumptions in Claim 1, we deduce that \(c_{k_{1},k_{2},k_{3}}=1\). With this information at hand, the proof of Claim 1 in our setting is exactly the same as in the case of Pythagorean triples.

Now suppose that \(a+b=c\), in which case the argument is a bit different and somewhat simpler. As shown in Step 2 of [21, Appendix C], we can obtain parametrizations of (8.13) of the form \[x=k\,(m+\ell_{1}n)\cdot(m+\ell_{2}n),\quad y=k\,(m+\ell_{3}n)\cdot(m+\ell_{4}n),\quad z=k\,(m^{2}+(\ell_{5}n)^{2}),\] for suitable \(\ell_{1},\ell_{2},\ell_{3},\ell_{4},\ell_{5}\in\mathbb{N}\) that satisfy \(\ell_{1}\neq\ell_{2}\), \(\ell_{3}\neq\ell_{4}\), and \(\{\ell_{1},\ell_{2}\}\neq\{\ell_{3},\ell_{4}\}\). Note that our assumption \(a+b=c\) was used to ensure that the coefficient of \(m\) is \(1\) in all linear forms. We average on the grid \(\{(Qm+1,Qn)\colon m,n\in\mathbb{N}\}\). We will demonstrate how Claims 1 and 2 in the proof of Proposition 8.1 can be established within our framework. The remainder of the argument remains unaltered. In our context, the expressions \(L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q)\) in (8.4) take the form \[L_{k_{1},k_{2},k_{3},k_{2}^{\prime}}(Q):=\lim_{l\to\infty}\mathbb{E}_{m,n\in[N_{l}],m>n}\,f^{k_{1}}\big{(}(Q(m+\ell_{1}n)+1)(Q(m+\ell_{2}n)+1)\big{)}\cdot\\ f^{k_{2}}\big{(}(Q(m+\ell_{3}n)+1)(Q(m+\ell_{4}n)+1)\big{)}\cdot f^{k_{3}}((Qm+1)^{2}+(Q\ell_{5}n)^{2}).
\tag{8.15}\] Using the concentration estimates of Corollary 2.17, we can see that Claim 1 holds without assuming that \(f^{k_{2}}\cdot\tilde{\chi}^{k_{2}^{\prime}}=1\). Therefore, in our setting, Claim 2 in the proof of Proposition 8.1 is already addressed by this case and requires no further explanation.
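The parametrizations used in this and the previous section can also be verified mechanically. The following sketch checks the basic Pythagorean identity behind Theorem 1.5, namely that \((k(m^{2}-n^{2}),\,2kmn,\,k(m^{2}+n^{2}))\) is a Pythagorean triple for all \(m>n\) and \(k\); the ranges below are arbitrary.

```python
# Mechanical check of the parametrization underlying sections 7 and 8:
# for all m > n and k, (k(m^2 - n^2))^2 + (2kmn)^2 = (k(m^2 + n^2))^2.
for m in range(2, 7):
    for n in range(1, m):
        for k in (1, 2, 3):
            x, y, z = k * (m**2 - n**2), 2 * k * m * n, k * (m**2 + n**2)
            assert x**2 + y**2 == z**2
print("all parametrized triples verified")
```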
2309.09401
ChatGPT Hallucinates when Attributing Answers
Can ChatGPT provide evidence to support its answers? Does the evidence it suggests actually exist and does it really support its answer? We investigate these questions using a collection of domain-specific knowledge-based questions, specifically prompting ChatGPT to provide both an answer and supporting evidence in the form of references to external sources. We also investigate how different prompts impact answers and evidence. We find that ChatGPT provides correct or partially correct answers in about half of the cases (50.6% of the times), but its suggested references only exist 14% of the times. We further provide insights on the generated references that reveal common traits among the references that ChatGPT generates, and show how even if a reference provided by the model does exist, this reference often does not support the claims ChatGPT attributes to it. Our findings are important because (1) they are the first systematic analysis of the references created by ChatGPT in its answers; (2) they suggest that the model may leverage good quality information in producing correct answers, but is unable to attribute real evidence to support its answers. Prompts, raw result files and manual analysis are made publicly available.
Guido Zuccon, Bevan Koopman, Razia Shaik
2023-09-17T23:49:12Z
http://arxiv.org/abs/2309.09401v1
# ChatGPT Hallucinates when Attributing Answers

###### Abstract.

Can ChatGPT provide evidence to support its answers? Does the evidence it suggests actually exist and does it really support its answer? We investigate these questions using a collection of domain-specific knowledge-based questions, specifically prompting ChatGPT to provide both an answer and supporting evidence in the form of references to external sources. We also investigate how different prompts impact answers and evidence. We find that ChatGPT provides correct or partially correct answers in about half of the cases (\(\approx 50.6\%\) of the times), but its suggested references only exist \(14\%\) of the times. We further provide insights on the generated references that reveal common traits among the references that ChatGPT generates, and show how even if a reference provided by the model does exist, this reference often does not support the claims ChatGPT attributes to it. Our findings are important because (1) they are the first systematic analysis of the references created by ChatGPT in its answers; (2) they suggest that the model may leverage good quality information in producing correct answers, but is unable to attribute real evidence to support its answers. Prompts, raw result files and manual analysis are made publicly available at [https://www.github.com/anonymized](https://www.github.com/anonymized).

Large Language Models, ChatGPT, Attribution, Hallucinations
## 1. Introduction

Anecdotal evidence reports that ChatGPT hallucinates attributions; i.e., attributions that appear realistic, but that do not exist in reality. An example of this is shown in Figure 1, where none of the four references provided, including a Wikipedia page1, exist. Footnote 1: This is interesting to observe because ChatGPT's creators, the OpenAI company, have stated that the model was trained on Wikipedia pages.

In this paper, we aim to provide a thorough analysis of attribution generated by ChatGPT. To this aim, we perform a qualitative analysis of ChatGPT responses and generated references for a large dataset of domain-specific knowledge-based questions, and we answer the following research questions:

1. Can ChatGPT provide supporting evidence (in the form of references) to its answers?
2. Does the supporting evidence/references actually exist (i.e., not hallucinated)?
3. Do the statements provided by ChatGPT about the evidence actually align with what the reference says?

## 2. Related Work

The recent progress in instruction-based large language models (LLMs), for instance ChatGPT, has demonstrated their proficiency in adhering to user guidelines to successfully accomplish tasks (Bahdan et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2020). These models, typically possessing tens of billions of parameters, are pre-trained on a wide range of substantial text data. This allows them to create pertinent and coherent responses on a diverse selection of topics (Bahdan et al., 2017). Various studies have evaluated ChatGPT's performance on a number of different downstream tasks, consistently observing improved effectiveness in task resolution, such as question answering (Han et al., 2018; Chen et al., 2019) and ranking (Chen et al., 2019; Chen et al., 2020). LLMs are increasingly being used for information seeking tasks, including straightforward question-answering scenarios (Bahdan et al., 2017). However, LLMs are confronted with a number of open challenges (Krishnan et al., 2019), including hallucinations and the correct attribution of answers, which currently limit how much people should rely on these tools for information seeking. Hallucinations in LLMs like ChatGPT refer to the generation of factually incorrect or entirely made-up information (Chen et al., 2019).
These models are trained to generate text based on patterns they have learned from the input data, often through the next-token prediction task, and while they are adept at creating coherent and plausible-sounding responses, they sometimes produce outputs that are not grounded in reality. Hallucinations have been reported to occur more often when common sense and logical reasoning are required (Han et al., 2018), or when dealing with common misconceptions (Han et al., 2018; Chen et al., 2020), while LLMs appear to perform more robustly on tasks requiring factual responses (Han et al., 2018) - though our results will show this is not necessarily the case within the specific domain of the question set we consider. One of the key reasons for this is that these models do not have a true understanding of the world or access to real-time information. While the presence of frequent, incorrect statements in the training data may let the model learn to reproduce similar incorrect statements, even with a diverse and high-quality training dataset, LLMs can still hallucinate information due to the way they handle uncertainty. Given a prompt that is ambiguous or open-ended, the model may generate text that seems reasonable but is completely fabricated. The reliance on retrieve-then-generate pipelines, where generation occurs from evidence retrieved through an initial round of search, and the use of attribution have been posited as possible mitigation strategies (Krishnan et al., 2019). Attribution refers to the ability of an LLM to provide evidence (in the form of a snippet, citation or reference) that supports the answer, or part of it, that it generates (Bahdan et al., 2017; Chen et al., 2019; Chen et al., 2020). The availability of such evidence would enable the verifiability of the answer (Chen et al., 2019) - i.e., a user could follow the reference made by the LLM to verify whether the cited source supports the provided answer. It has also been suggested that attribution could improve the factual correctness of the answers (Chen et al., 2019); however, in our experiments we show that factual correctness is relatively low despite answers having attributions, at least for the commonly used ChatGPT LLM. We notice that while the quality and correctness of citations and references produced by LLMs have been somewhat investigated in the context of specific techniques aimed at offering attribution capabilities (Bahdan et al., 2017; Chen et al., 2019; Chen et al., 2020), the evaluation of attribution references produced by the popular ChatGPT has been limited (Chen et al., 2019; Chen et al., 2020). In this paper we aim to address this gap by providing a qualitative analysis of ChatGPT's attribution capabilities and the references it generates using a large dataset of domain-specific knowledge-based questions.

## 3. Methodology

To answer our research questions regarding the quality of the attributions made by ChatGPT in its answers, we prompted the model with domain-specific knowledge-based questions. As questions to submit to ChatGPT, we used the topics from the Ag-valuate collection (Chen et al., 2019), a test collection for both passage and document retrieval in the Agriculture domain. The topics consist of natural language questions created by agricultural scientists and crop growers, and the collection contains a total of 160 topics2. Along with questions, the collection also provides a topic-creator-authored answer, which forms the ground truth, and sparse relevance judgements over a dataset of 9M+ passages extracted from specialised agricultural websites and scientific publications. An example question and ground truth answer are shown in Figure 1. We selected this collection for the experiment because (i) it was public and readily available, (ii) it provided ground truth answers, along with evidence sources for the answers, thus ensuring that such sources do exist, and (iii) we had access to a domain expert who could interpret the answers and the evidence provided in support. Footnote 2: We only used the training portion of the topics for our experiments.

Figure 2. ChatGPT prompt format.
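The study issued the questions through the ChatGPT web interface, and the exact wording of the Figure 2 prompt is not reproduced in this text. For readers who wish to script a comparable pipeline, the sketch below is therefore only a hypothetical stand-in: the prompt template is our own placeholder, not the paper's, and the model name and client usage are assumptions based on the OpenAI Python API.

```python
# Hypothetical sketch of scripting the question/answer collection step.
# The study itself used the ChatGPT web UI (GPT-3.5); the prompt template
# below is a placeholder, NOT the exact wording of Figure 2.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = (
    "Answer the following question and provide references to sources "
    "that support your answer.\n\nQuestion: {question}"
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the web UI version may differ
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(question=question)}],
    )
    return response.choices[0].message.content

print(ask("What type of herbicides are effective against sowthistle?"))
```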
Along with the questions, the collection also provides a topic-creator-authored answer, which forms the ground truth, and sparse relevance judgements over a dataset of 9M+ passages extracted from specialised agricultural websites and scientific publications. An example question and ground truth answer is shown in Figure 1. We selected this collection for the experiment because (i) it was public and readily available, (ii) it provided ground truth answers, along with evidence of sources for the answers, thus ensuring that such sources do exist, and (iii) we had access to a domain expert that could interpret the answers and the evidence provided in support.

Footnote 2: We only used the training portion of the topics for our experiments.

Figure 2. ChatGPT prompt format.

Questions were issued to the online version of ChatGPT (in the GPT-3.5 version), and the answers were collected. In issuing questions, we embedded them into the simple prompt of Figure 2, which instructs the model to answer the question and provide evidence for the answer. ChatGPT's answers were examined by the first two authors of the paper along with a research assistant, all computer scientists. This group of assessors was responsible for identifying whether the answer contained references and, if it did, for locating the evidence, if it existed. Subsequently, a fourth annotator, the third author of this paper, analysed the answers once again. This annotator is an agricultural scientist in a leading university in Ag-Tech, and is an expert in crops - the topic area the questions in the Ag-valuate collection focus on. This annotator validated the previous annotations, correcting for errors. In addition, she validated the answers of ChatGPT for correctness and provided further comments regarding the model's answers. Finally, she also examined the references that were included in ChatGPT's answers and that we successfully located. For each of these, she assessed whether the reference contained the claims ChatGPT made with regard to it. Annotations were provided with respect to the following annotation schema:

1. Was the answer provided correct? Specifically, does the answer provided by ChatGPT align with the ground truth answer? Options: "Yes, fully", "Only partially", "No (or no to a large extent)".
2. Are references provided in the answer? This was a binary question (yes/no); if yes, the annotators had to select whether the references were (i) "Academic publication (journal, conference)", (ii) "Professional magazine/online publication", (iii) "Wikipedia page", or (iv) "Other specialised website".
3. For each reference, does it actually exist? For a journal reference, does the paper with that title exist? Does the suggested Wikipedia article exist? Options: "Yes", "No"; in addition, annotators were to add a comment regarding the source, reflecting cases where, e.g., the journal existed (including the issue etc.) but not the article, or the website existed but not the specific page.
4. Is a URL provided with the reference that actually points to the mentioned source? Options: "Yes", "No".
5. Does the statement about the reference in ChatGPT's answer align with the actual reference content? Options: "Yes, fully", "Only partially", "No (or no to a large extent)".

In addition, the annotators could add comments regarding the correctness of the answers and of the references, or any other observation worth noting.
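For concreteness, the loop below sketches the question-issuing step in Python. This is a hypothetical illustration, not the study's actual setup: the study used the online ChatGPT web interface (GPT-3.5), so the OpenAI API client, the `topics` variable, and the prompt string (which only paraphrases the Figure 2 template, whose exact wording is not reproduced here) are all stand-ins.

```python
# Hypothetical sketch of the question-issuing step. The study used the
# online ChatGPT interface (GPT-3.5); the legacy OpenAI API below is a
# stand-in, and PROMPT only paraphrases the Figure 2 template.
import openai

PROMPT = ("Answer the following question and provide references "
          "that support your answer.\n\nQuestion: {question}")

def ask_chatgpt(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(question=question)}],
    )
    return response["choices"][0]["message"]["content"]

# Example topic drawn from the collection (discussed in Section 4):
topics = [("t1", "What type of herbicides are effective against sowthistle?")]
answers = {topic_id: ask_chatgpt(question) for topic_id, question in topics}
```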
## 4. Results

### Answer Correctness

Before diving into the analysis of the results for our research questions, we investigate the correctness of the answers provided by ChatGPT. Recall that the answers were assessed for correctness by the third author of this paper, an Ag-Tech expert. We found that a large portion of the provided answers were incorrect (49.4%), with only 13.1% being fully correct and 37.5% partially correct. Some of the incorrect answers were not just wrong: they were misleading and potentially disastrous too. For example, for the question "What type of herbicides are effective against sowthistle?", ChatGPT lists "2,4-D", "Dicamba" and "glyphosate". However, sowthistle is resistant to "2,4-D" and "glyphosate", and in parts of the world, e.g. in parts of Australia, sowthistle is also resistant to "Dicamba". This means that growers that followed ChatGPT's suggestion would spend a considerable amount of money to purchase and apply these herbicides, without obtaining the intended outcome. These herbicides also have disadvantages. For example, glyphosate products are harmful to animals if they touch or eat plants treated with it; some studies have also suggested glyphosate may be linked to cancer (and it is classified as a probable human carcinogen), though others have suggested there is no link (Han et al., 2018). Other answers were wrong in the context of the question, but their content was not necessarily incorrect. For example, for the question "What is the best treatment for soybean to improve its digestibility?", the model provides an open-ended answer, with no clear identification of a best treatment, nor a comparison between the treatment options it identifies.

### RQ1: Is there Supporting Evidence?

Our first research question aimed to assess whether ChatGPT produced evidence, in the form of references to publications or websites, to support the answers it produced. Despite the model being explicitly told that references should be contained in the answer, we found that 14 out of the 160 answers (9%) did not contain references. For the answers that included references (the remaining 91%), there were on average 3.08 references per answer. Figure 3 shows the distribution of reference types for the 450 references that were provided. Most references generated by ChatGPT are academic publications, followed by Wikipedia pages.

Figure 3. Distribution of references across the four reference categories. Most references are academic publications.

When examining the answers that did not contain references, we identified some specific patterns. Often, this occurred when the model stated it did not know how to answer the question. Other times, when the model did provide an actionable answer but no evidence in the form of a reference, it did suggest how to find references. In these cases, the model would suggest how the user could attempt to identify potential evidence themselves. At times, it did so by giving high-level suggestions; e.g., for the question "When does awnless barnyard grass germinate?", it suggested "you may want to search for more general information on the germination of warm-season annual grass species, which can provide insight into the factors that influence the timing of germination for awnless barnyard grass". Other times, it provided more specific information, including data repositories to search.
For example, for the question "Are deep phosphorus applications economic in Central Queensland?", the answer of the model explicitly suggested searching agricultural journals or online databases such as Google Scholar, JSTOR, or the Agriculture and Applied Economics Association. Other times the model mentioned studies or publications, but in a way that was too generic to identify a single source that we could check. For example, for the question "How much water can I lose if I let my cover crop grow for too long?", ChatGPT mentioned "According to a publication by the USDA Natural Resources Conservation Service", "A study by the University of California Division of Agriculture and Natural Resources", and "a publication by the National Center for Appropriate Technology". However, it did not provide specific details such as authors, venue or URL that could have helped identify these references. There were cases, however, in which the model, while providing an answer, simply did not provide any reference or help to locate one. In the few cases in which this happened, ChatGPT would apologise for not being able to find specific articles or Wikipedia pages -- though it then resorted to claiming there was evidence (not better specified) to answer the question.

### RQ2: Does the Evidence Exist?

Next, we investigate whether the evidence provided by ChatGPT as part of a reference actually exists. The model generated in total 450 references for the 160 questions we submitted. Of these references, 385 (86%) did not exist, demonstrating the large extent of the hallucinations ChatGPT produces with respect to the attribution of its answers. When we analysed the references generated by ChatGPT that we could not locate, we were often able to locate the journals mentioned by the model. These journals were most often high-quality journals in the Agricultural Science and Tech space. In these journals, we could find the volume and issue provided in the answer, which corresponded to the publication year the model mentioned. Page ranges also matched as being in these volumes/issues; however, they often did not match the exact start/end pages of articles, and instead referred to incorrect article boundaries. Nonetheless, articles with the titles provided by ChatGPT did not exist in these journals, nor did they exist at all when we searched for them via Google or on Google Scholar. We also noted that often the authors ChatGPT provided in the generated references were actual researchers -- but not necessarily in the Ag-Tech space; e.g., some were prominent medical scientists. There were cases in which an article that was provided as a reference was actually found by title, but with different authors, volume, issue and year data.

We then analysed the references that did exist: 14% of the total references provided by ChatGPT. Of these, the large majority (85%) were references to Wikipedia pages, with the rest approximately evenly distributed among the remaining three types of references. (Note that we checked whether a page with that title existed, and did not check the URL; we discuss the existence of URLs separately below.) It is not surprising that generated references to Wikipedia pages were found to exist: (1) ChatGPT was trained on a corpus containing Wikipedia pages; (2) the format of the titles of Wikipedia pages has been standardised with explicit style guides3, and thus it is potentially fairly easy to guess the title of a possible Wikipedia page for a topic.
This becomes evident when comparing the titles of Wikipedia pages provided by ChatGPT that do exist with those that do not exist.

Footnote 3: e.g., [https://en.wikipedia.org/wiki/Wikipedia:Article_titles](https://en.wikipedia.org/wiki/Wikipedia:Article_titles).

Sometimes references contained a URL. This was either because the reference was a web page (92 were Wikipedia pages and 12 were other types of web pages), or because the URL was the DOI associated with a publication. Note that in the analysis above, which considered whether a piece of evidence existed, we did not consider whether the URL itself existed4. We analyse the URLs provided by ChatGPT next. First, we note that when giving a Wikipedia page as a reference, ChatGPT would seldom also provide a URL to the page. For the URLs that were produced, we did observe that they appeared to be realistic. In particular, Wikipedia pages had URLs that matched the stylistic guidelines used by Wikipedia. URLs that referred to DOIs also appeared to follow the typical structure of DOIs, and URLs that corresponded to relevant agencies in Ag-Tech, like the United States Department of Agriculture (USDA) or the Queensland Department of Agriculture and Fisheries (DAF), also mostly followed the format of the URLs these entities commonly use -- with especially the website domain and the first-level URL path being correct. However, the large majority of these URLs did not exist: only 34 existed. Of the URLs that existed, most were Wikipedia pages. We remind the reader that Wikipedia was used in the training of ChatGPT, and that Wikipedia pages have a rather straightforward URL structure: if a Wikipedia page about a topic/key-term X exists, then it is easy to guess its URL to be [https://en.wikipedia.org/wiki/X](https://en.wikipedia.org/wiki/X).

Footnote 4: In the case of the reference being a Wikipedia page or webpage, we used the title of the page to check on the mentioned website whether that page existed.

The fact that the references provided by ChatGPT look realistic, both in the case of the bibliographic data of scientific articles and in the URL format of web pages, while these references more often than not do not exist, adds to the challenges posed by hallucinations. A user that examines ChatGPT's answers may believe the answers to be correct because references are provided that look like those one would expect: from titles being topical and likely similar to those one would use in a scientific article, to the cited journals being among the top in the field, and websites and Wikipedia pages looking topical in terms of the provided title and credible in terms of the provided source. It is only if the user were to perform some due diligence, by attempting to locate the cited references, that they would fail to locate them, and thus possibly question whether ChatGPT's answers are correct.
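This due-diligence step can be partly automated for the Wikipedia case. The sketch below (assuming the `requests` library and the guessable URL scheme discussed above; the function name and example page are ours) builds the expected URL for a key-term and tests whether it resolves:

```python
# Sketch of the due-diligence check described above: build the guessable
# Wikipedia URL for a key-term and test whether the page exists.
# A non-200 status suggests the cited page may be hallucinated.
import requests

def wikipedia_page_exists(title: str) -> bool:
    url = "https://en.wikipedia.org/wiki/" + title.replace(" ", "_")
    # A HEAD request suffices: only the status code is needed, not the body.
    return requests.head(url, allow_redirects=True, timeout=10).status_code == 200

# A page we would expect to exist, given the topics of the collection:
print(wikipedia_page_exists("Echinochloa colona"))
```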
2310.00220
Optimization of Tritium Breeding Ratio in a DT and DD Submersion Tokamak Fusion Reactor
The mass of stars is enough to confine a plasma to fuse light atoms, but this is not possible to engineer on Earth. Fortunately, nuclear engineering can rely on the magnetic confinement of a plasma using superconducting coils so long as the Tritium Breeding Ratio (TBR) is optimized. This paper will investigate some of the materials which can increase the rate at which Tritium is produced within the breeding blanket layer of Submersion Tokamak reactors, a design that uses magnetic confinement of a plasma in the shape of a torus to execute nuclear fusion. Using the Paramak Python module to model several geometries and OpenMC to run a simulation, it can be observed how neutron multipliers, enrichment, and the neutron energy spectrum affect TBR. This experiment will mainly observe different material choices that have been considered and their TBR based on their cross sections, dose rate, thermal properties and safety. By altering the neutron energy spectrum to account for DD and DT plasma, the difference in these compounds' Tritium breeding efficacy is noted. Neutron energy spectra are an important factor in optimising the TBR levels as the neutrons generated by the fusion reactions in the plasma interact with the breeder material in the blanket and produce tritium through the reaction with Lithium. Since Tritium is a rare isotope of hydrogen that is used as fuel in fusion reactions and has a short half-life, it is essential to produce tritium within the fusion reactor itself. Without the tritium breeding capability, it would not be feasible to generate energy via fusion. A TBR greater than unity indicates that the reactor can generate more tritium than it consumes, ensuring self-sufficiency in the tritium inventory. Since Tritium is the most reliable and efficient fuel for these reactors, optimising the TBR is of paramount importance in the long road to commercialization of nuclear fusion.
Vikram Goel, Soha Aslam, Sejal Dua
2023-09-30T01:54:29Z
http://arxiv.org/abs/2310.00220v1
# Optimization of Tritium Breeding Ratio in a DT and DD Submersion Tokamak Fusion Reactor

###### Abstract

The mass of stars is enough to confine a plasma to fuse light atoms, but this is not possible to engineer on Earth. Fortunately, nuclear engineering can rely on the magnetic confinement of a plasma using superconducting coils so long as the Tritium Breeding Ratio (TBR) is optimized. This paper will investigate some of the materials which can increase the rate at which Tritium is produced within the breeding blanket layer of Submersion Tokamak reactors, a design that uses magnetic confinement of a plasma in the shape of a torus to execute nuclear fusion. Using the Paramak Python module to model several geometries and OpenMC to run a simulation, it can be observed how neutron multipliers, enrichment, and the neutron energy spectrum affect TBR. This experiment will mainly observe different material choices that have been considered and their TBR based on their cross sections, dose rate, thermal properties and safety. By altering the neutron energy spectrum to account for DD and DT plasma, the difference in these compounds' Tritium breeding efficacy is noted. Neutron energy spectra are an important factor in optimising the TBR levels as the neutrons generated by the fusion reactions in the plasma interact with the breeder material in the blanket and produce tritium through the reaction with Lithium. Since Tritium is a rare isotope of hydrogen that is used as fuel in fusion reactions and has a short half-life, it is essential to produce tritium within the fusion reactor itself. Without the tritium breeding capability, it would not be feasible to generate energy via fusion. A TBR greater than unity indicates that the reactor can generate more tritium than it consumes, ensuring self-sufficiency in the tritium inventory. Since Tritium is the most reliable and efficient fuel for these reactors, optimising the TBR is of paramount importance in the long road to commercialization of nuclear fusion.

Nuclear Engineering · Fusion Reactor · Tokamak · Tritium Breeding Ratio · Monte Carlo

## 1 Introduction

In the realm of energy production, nuclear fusion has emerged as a field of immense importance and potential as a promising source of clean and sustainable power. Fusion energy, simply put, is the energy released when light atomic nuclei combine to form heavier nuclei. This process occurs at incredibly high temperatures and pressures, replicating the conditions found in the core of stars. Unlike current energy sources, such as fossil fuels, it produces virtually no greenhouse gas emissions or long-lived radioactive isotopes. Furthermore, fusion energy has the potential to provide an almost limitless source of power, promising a sustainable and clean solution to our ever-growing energy needs.

To comprehend the various forms of nuclear fusion reactions, it is essential to explore the fundamental principles governing these processes. Fusion reactions can occur between different isotopes, with the most promising reactions involving deuterium (\(^{2}\)H) and tritium (\(^{3}\)H) nuclei. The fusion of these hydrogen isotopes releases a significant amount of energy and produces alpha particles as a byproduct. Additionally, alternative fusion reactions, such as the fusion of deuterium with helium-3 or boron-11, offer unique advantages and challenges, making them worthy of investigation, though this paper will not delve into them.
To study and analyse the complex behaviour of nuclear fusion reactions, scientists often employ Monte Carlo simulations. These simulations utilise probabilistic methods to model the behaviour of particles within a system, providing insights into the dynamics of nuclear reactions. Monte Carlo simulations involve the generation of random numbers to determine the characteristics of each particle, allowing researchers to predict the behaviour and outcomes of fusion processes accurately. By simulating a large number of particles, scientists can obtain statistically significant results and optimise experimental designs, ultimately advancing our understanding of fusion energy and aiding in the development of practical fusion reactors.

\[\text{Tritium Breeding Ratio (TBR)}=\frac{\text{number of tritium atoms produced}}{\text{number of tritium atoms fused}} \tag{1}\]

One of the most important statistics concerning fusion reactors is the Tritium Breeding Ratio, hereafter TBR. This is the ratio of the rate at which tritium is produced in a fusion reactor to the rate at which it is consumed, as in Equation (1). Since tritium is the most important fuel source in most reactors, it is essential to produce tritium within the reactor itself: tritium has a very short half-life (12.3 years), and of all the hydrogen in the universe, tritium only makes up about \(10^{-18}\)% while deuterium sits at 0.2%. This natural quantity is negligible, and tritium is therefore made artificially using CANDU fission reactors; since it is not viable to continuously add externally produced tritium to reactors, maintaining a TBR greater than unity is essential for commercialising DT nuclear fusion. The main component of nuclear fusion reactors, specifically tokamaks, that is associated with the TBR is the breeding blanket. The breeding blanket is usually made of either solids, liquids (PbLi), molten salts (FLiBe), or pure lithium. For this paper, a survey of several breeder materials was undertaken, as well as an observation of the impacts of Li-6 enrichment and neutron multiplier concentrations.

## 2 Methodology

### OpenMC, DAGMC, Paramak and Defining Computational Intensity

Neutronics modelling and simulation is conducted using a Monte Carlo simulation tool known as OpenMC. The OpenMC module simulates neutral particles (presently neutrons and photons) moving stochastically through an arbitrarily defined model that represents a real-world experimental setup [1]. It was developed by the Computational Reactor Physics Group (CRPG) based at the Massachusetts Institute of Technology (MIT) and has been extensively utilised in the area of nuclear engineering. For this research, a geometry for a Submersion Tokamak reactor was constructed using the Paramak Python module developed by Jonathan Shimwell [2]. This library makes use of parametric shapes, components and reactors, and is essential for constructing complex geometries that can be assigned different materials within the OpenMC model. Though OpenMC can combine Boolean operators to construct simple geometries, it is difficult to construct a complex tokamak with a high degree of similarity to real designs through OpenMC alone. Upon constructing the geometry, Paramak's computer-aided design (CAD) model was converted into a 'dagmc.h5m' file, the format which works with OpenMC, using the cad_to_dagmc Python module. Then, a neutronic simulation was conducted using a Python script.
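A minimal sketch of the OpenMC side of this pipeline is given below, assuming the CAD model has already been converted to 'dagmc.h5m' as described. Material definitions are omitted for brevity, and the `(n,Xt)` score string and `bounded_universe` helper follow common public OpenMC fusion examples rather than anything reported in this paper.

```python
# Minimal sketch, assuming the Paramak CAD model was already exported to
# 'dagmc.h5m' via cad_to_dagmc. Materials are omitted for brevity.
import openmc

dag_universe = openmc.DAGMCUniverse("dagmc.h5m")
geometry = openmc.Geometry(root=dag_universe.bounded_universe())

# Isotropic 14.1 MeV point source approximating a DT plasma;
# swap in 2.5e6 eV to model the DD neutron spectrum instead.
source = openmc.Source()
source.space = openmc.stats.Point((0, 0, 0))
source.angle = openmc.stats.Isotropic()
source.energy = openmc.stats.Discrete([14.1e6], [1.0])

settings = openmc.Settings()
settings.run_mode = "fixed source"
settings.batches = 10          # 10 rounds...
settings.particles = 10_000    # ...of 10,000 particles, as in the text
settings.source = source

# Tally tritium production per source neutron, i.e. the TBR.
tbr_tally = openmc.Tally(name="TBR")
tbr_tally.scores = ["(n,Xt)"]

model = openmc.Model(geometry=geometry, settings=settings,
                     tallies=openmc.Tallies([tbr_tally]))
# model.run()  # requires nuclear data libraries and the geometry file
```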
Within the OpenMC model, the different parametric shapes could be assigned various elements and densities; then, after introducing a neutron source, a simulation using 10 rounds of 10,000 particles was conducted. Within the model, interchanging between lithium compounds was simple, and separate tallies for these different blanket materials were generated using the 2.5 and 14 MeV neutron energy spectra.

### Monte Carlo Simulation

OpenMC uses a method that resembles the MC21 Monte Carlo code (a neutron and photon transport code). To provide workability in a macro environment, OpenMC uses an analogue estimator to obtain its tallies for reaction rates: using massive amounts of nuclear data, a survey is taken of the number of actual reactions that occur, and this is then used to estimate what the reaction rates, and therefore the scores, should be. This is represented by the OpenMC analogue estimator formula:

\[R_{x}=\frac{1}{W}\sum_{i\in A}w_{i} \tag{2}\]

where \(R_{x}\) is the reaction rate for reaction \(x\), \(i\) denotes an index for each event, \(A\) is the set of all events resulting in reaction \(x\), \(W\) is the total starting weight of the particles, and \(w_{i}\) is the pre-collision weight of the particle as it enters the event [1]. This analogue estimator is the simplest way to estimate and is easy to implement; using it, OpenMC can apply nuclear cross-section data for neutron multipliers and Li-6 to generate its scores for the TBR. However, this estimator has high variance, since low-probability reactions require an immense amount of information and particles to calculate to a high degree of accuracy. Therefore, OpenMC also uses a collision estimator and a track-length estimator to get around this issue. The collision estimator does not simply add to a tally once a reaction has been carried out; it uses every collision to add to the score. For example, in the context of tritium breeding, it will not only add to the tally every time tritium is produced, but every time there is a collision between the reactants that may or may not produce tritium.
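As a toy illustration of Equation (2) — with invented numbers, not OpenMC output — the analogue estimate is simply the summed pre-collision weight of the events that produced the reaction of interest, divided by the total starting weight:

```python
# Toy illustration of the analogue estimator of Equation (2): sum the
# pre-collision weights w_i of events in A (here, "(n,Xt)" events) and
# divide by the total starting weight W. Event data is invented.
def analog_estimate(events, total_start_weight):
    """events: iterable of (reaction_name, pre_collision_weight) pairs."""
    return sum(w for name, w in events if name == "(n,Xt)") / total_start_weight

events = [("(n,Xt)", 1.0), ("elastic", 1.0), ("(n,Xt)", 1.0), ("(n,2n)", 1.0)]
print(analog_estimate(events, total_start_weight=4.0))  # -> 0.5
```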
## 3 Materials

The design used for this paper is the tokamak; it is the leading style of fusion reactor, and its geometry is shown in Figure 1. The magnetic coils created for this simulation were Niobium-Titanium (NbTi) coils, chosen for their superconductivity, high critical current density, fabrication availability and mechanical strength. These attributes mean that the material has zero electrical resistance, allows a high current to pass through without reverting to its normally conducting state, can easily be made into wires and coils and, importantly, can withstand the immense electromagnetic forces within a tokamak. We also used a similar metal, Nb3Sn, for the outer toroidal field coils for the same reasons, tungsten for the first wall and the components directly adjacent to the plasma due to its thermal properties, and titanium supports. A beryllium blanket wall was used to maximise neutron multiplication, and Stainless Steel 316 for the poloidal field coil casings, as it tolerates elevated temperatures.

For this simulation, we observe how these different materials compare in producing tritium when used as breeder blankets. This is done with both Deuterium-Tritium (D-T) and Deuterium-Deuterium (D-D) fuel. While D-T is a lot more powerful and yields more energy and tritium, the deuterium isotope of hydrogen is far more abundant and longer-lived, making it less expensive and more feasible for implementation. The two hydrogen plasmas will react to form 14.1 MeV and 2.5 MeV neutrons as shown:

\[D+T\rightarrow[\alpha+3.5\,MeV]+[n+14.1\,MeV] \tag{3}\]

\[D+D\rightarrow{}^{3}He+[n+2.5\,MeV] \tag{4}\]

Figure 1: Diagram of the Geometry of the Submersion Tokamak

A Beryllium (Be) or Lead (Pb) neutron multiplier will increase the number of neutrons by initiating the (n,2n) reaction. For the breeder blanket materials, a variety of lithium-based alloys and compounds were used. Lithium is especially important for the breeder blanket as it reacts with neutrons to produce tritium and, when paired with a Beryllium (Be) neutron multiplier, it will efficiently produce tritium at a rate higher than unity (TBR > 1). It is the most reliable element to use in breeder blankets, as its only two stable naturally occurring isotopes react with the neutrons as shown:

\[{}^{6}Li+n\rightarrow\alpha+T+4.78\,MeV \tag{5}\]

\[{}^{7}Li+n\rightarrow\alpha+T+n-2.47\,MeV \tag{6}\]

Energy production and extraction within these tokamak reactors is a tedious engineering task: the kinetic energy of the incident neutrons must be converted into thermal energy for it to be extracted by the coolant. The breeder blanket can augment the incident energy via exothermic reactions, including the \({}^{6}Li(n,t)^{4}He\) reaction, which has a Q value of 4.78 MeV. Assuming the use of Pb multipliers, DT fusion devices will perform better multiplication due to the high neutron multiplication cross-section at those energies, as shown in Figure 2.

Figure 2: Cross Section vs Energy [3]

## 4 Results

Table 1 shows the Tritium Breeding Ratio for several lithium-based materials used as a breeder blanket in a Submersion Tokamak, with both 14.1 MeV and 2.5 MeV neutrons; these values were obtained upon completion of the Monte Carlo simulation.

| Lithium-based material | TBR tally (14 MeV) | TBR tally (2.5 MeV) |
| --- | --- | --- |
| Li2ZrO3 (Lithium Zirconate) | 0.974073 | 0.490369 |
| Li2TiO3 (Lithium Titanate) | 0.980266 | 0.501079 |
| Li4SiO4 (Lithium Orthosilicate) | 1.00325 | 0.525374 |
| LiCl (Lithium Chloride) | 1.04656 | 0.539386 |
| FLiBe (LiF and BeF2) | 1.08196 | 0.569398 |
| Li2O (Lithium Oxide) | 1.09383 | 0.542774 |
| Li17Pb83 (90% Li6) | 1.14519 | 0.570423 |
| Li (Pure Lithium) | 1.10905 | 0.543946 |

Table 1: TBR Results

### Lithium-6 Concentration

It appears as though the concentration of Lithium-6 is the factor that most affects the TBR, as it has a high cross-section for neutron absorption and is the isotope of lithium that produces the tritium via \({}^{6}Li(n,\alpha)^{3}H\); the TBR of the breeder material Li17Pb83 was therefore measured over different levels of Lithium-6 enrichment. Li17Pb83 is usually used as a breeder with a Li-6 concentration of 90% or higher; this is feasible as it has a low overall lithium concentration and a high reliance on the lead as a multiplier.

Figure 3: Enrichment vs TBR

Figure 3, depicting the relationship between Li-6 enrichment level and TBR, can be described by this equation with under a 5% margin of error for the largest outlier:

\[Y=0.34\log\left[1.4X+1.11\right],\qquad 0<X<1 \tag{7}\]

where \(Y\) is the TBR and \(X\) is the Li-6 enrichment as a decimal. Since lead, as shown in Figure 2, has a high cross-section for neutron multiplication via the (n,2n) reaction at 6.5 MeV or higher, D-D neutrons (<6.5 MeV) perform worse, with a tritium breeding rate of less than half of that for D-T neutrons (>6.5 MeV). The answer concerning the TBR is clear: the higher the concentration of lithium, and thereby of Lithium-6, the greater the raw tritium breeding. But this is not to say that pure Lithium-6 is the best material to use as a breeder; there are many other factors to consider. Pure lithium has inferior thermal properties when compared to Lithium Titanate or Lithium Orthosilicate and is unable to withstand immense heat; it is also more unstable than these compounds and therefore poses a safety risk when used in breeder blankets. It is also not as strong mechanically and is far more pliable in its pure state, a reason why alloys or compounds prove beneficial.
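For convenience, the fitted curve (7) can be evaluated directly. This is a sketch only: the text does not state the base of the logarithm (the natural logarithm is assumed here), and the function name is ours.

```python
import math

def tbr_from_enrichment(x: float) -> float:
    """Evaluate the fitted curve (7) for 0 < x < 1, where x is the Li-6
    enrichment as a decimal. The logarithm base is not stated in the
    text; the natural logarithm is assumed."""
    return 0.34 * math.log(1.4 * x + 1.11)

# Diminishing returns of enrichment (cf. the discussion that follows):
for x in (0.2, 0.4, 0.9):
    print(f"enrichment {x:.0%}: fitted value {tbr_from_enrichment(x):.3f}")
```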
The material must withstand extreme temperatures and electromagnetic forces, so lithium is simply not as reliable. Furthermore, as the curve of Li-6 concentration vs TBR is logarithmic, there is in most cases little need to enrich further than 40%: increasing the Li-6 concentration to this level changes the TBR by about 0.7, but a further increase to 90% will only change the TBR by about 0.1.

### Ceramic, Liquid and Molten Salt Blankets

The breeder materials observed using the OpenMC simulation are ceramic, liquid and molten salt breeders. Li4SiO4 (Lithium Orthosilicate), Li2O (Lithium Oxide), Li2ZrO3 (Lithium Zirconate), and Li2TiO3 (Lithium Titanate) make up the ceramic breeders. They all have TBRs close to 1, ranging from 0.97 to 1.0 (nearest hundredth), and their energy multiplication is also similar, with Lithium Titanate coming out on top at 1.15 and the lowest at 1.05 [4]. They have similar thermal properties, including their melting point and thermal expansion. Their thermal properties and resistance to the strong electromagnetic forces within the tokamak make them especially useful over pure lithium or weaker alternatives. Lithium vacancies in Li4SiO4 may interact with tritium, resulting in some confinement, an essential outcome for extracting energy and recycling tritium efficiently. The formation energies of the vacancy-tritium complexes are in the range of 0.41 - 1.28 eV under oxygen-rich conditions [5].

Liquid breeder materials have several qualities that make them superior to their ceramic counterparts, including thermal conductivity, fabrication availability and the ease with which they can be removed for tritium extraction [6]. PbLi100, a liquid breeder that we simulated in addition to those in Table 1, has a TBR of 1.24918 with D-T and 0.639227 with D-D, topping both categories (Table 2). Though lead-lithium, with its high TBRs, may seem ideal, it suffers from high inefficiency, as the next section describes.

| Material | DT TBR | DD TBR |
| --- | --- | --- |
| PbLi100 | 1.24918 | 0.639227 |

Table 2: PbLi100 TBR

### Dose Rates

Li17Pb83 and other PbLi-based breeders have the highest TBRs in our simulation, at around 1.2. Still, these liquid breeders, with their high cross-sections for neutron multiplication and tritium production, may not be as efficient as FLiBe, the leading molten salt blanket. FLiBe is usually a mixture of LiF and BeF2 and is one of the molten salt blankets. Though it may not have such a high TBR, it is incredibly efficient in absorbing radiation. Figure 4 illustrates how FLiBe and PbLi compare with lithium and other blankets in terms of their dose rates over time: the green areas in Figure 4 represent the recycling limits, and only FLiBe and a few other salts can meet these limits in a short amount of time, whereas PbLi is far more inefficient. FLiBe has a high heat capacity and a high melting point, attributes that may contribute to its efficiency [7].

Figure 4: Neutronic comparison of liquid breeders for ARC-like reactor blankets [7]
### Neutron Multiplier Concentration

Another question regarding the materials to use in a breeder blanket to maximise tritium breeding and efficiency is the ratio of the neutron multiplier to the lithium material, the two materials needed for the two reactions used to produce the hydrogen isotope. Figure 5 illustrates the TBR of a lithium-based blanket paired with a simple lead neutron multiplier over different ratios of Pb to Li. (This used a fixed mass of Li-6: the Li-6 was always 0.2 x the total breeder blanket mass. This was adjusted in the simulation by calculating the required fraction out of the total lithium to make it 20%; for example, 90% lithium x 22.2% = 20% Lithium-6.)

Figure 5: TBR against Lead concentration in Breeder Blanket

The parabolic shape of these results is due to there being a constant mass of Lithium-6: at lower lead concentrations, there is no extra Li-6 that could significantly increase the TBR, while at higher lead concentrations the better concentration of multipliers helps the TBR to increase. In Li17Pb83, there is a much higher concentration of lead, but this is compensated for with high Li-6 enrichment; it therefore makes sense to assume that a relatively constant amount of Lithium-6 should be present.
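The bookkeeping behind this sweep is simple to restate; the sketch below (our restatement of the arithmetic above, not code from the study) computes the Li-6 enrichment needed to hold Li-6 at 20% of the blanket mass for a given lead fraction:

```python
# Restatement of the sweep's bookkeeping: Li-6 is held at 20% of the
# total blanket mass, so the enrichment required of the lithium fraction
# grows as lead displaces lithium. Function name and structure are ours.
def li6_enrichment(pb_fraction: float, li6_target: float = 0.20) -> float:
    """Fraction of the lithium that must be Li-6, for a given Pb fraction."""
    li_fraction = 1.0 - pb_fraction
    if li6_target > li_fraction:
        raise ValueError("not enough lithium to keep the Li-6 mass constant")
    return li6_target / li_fraction

print(f"{li6_enrichment(0.10):.1%}")  # 90% lithium -> 22.2% Li-6, as in the text
```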
## 5 Conclusion

This paper has observed the effects, advantages and results of using different types of blanket materials, different levels of Li-6 enrichment, and different concentrations of neutron multipliers. We have also addressed other concerns such as thermal properties, energy multiplication and dose rates/efficiency. In the matter of the TBR, the best blanket material appears to be Li17Pb83: it is the breeder that, unlike FLiBe, has no toxicity [8] and, unlike PbLi100, is not so inefficient in terms of dose rate. It also has a relatively low concentration of lithium, a scarce material that may prove difficult to obtain in the coming decades. This liquid breeder also comes with the advantages that liquid breeders have over their solid counterparts, as previously discussed, and it has a very high TBR when compared to most other materials. FLiBe was a very successful breeder, especially when one takes into account the dose rates; apart from its hazard risks, it may be the most successful breeder that was used.

To further advance our understanding of breeder blankets and tritium breeding, further research into other attributes must be undertaken, such as different breeder thicknesses and alternative fuel sources such as boron-11 or helium-3. It is also essential to address the issues of tritium extraction and confinement, aside from uranium beds that may pose proliferation concerns. The construction of several designs of tokamaks, and even stellarator reactors, using Paramak or the new Parametric Stellarator Python libraries paired with an OpenMC simulation, could also be researched to further understand the TBRs of different blanket materials in more than one type of reactor. The field of computational application for nuclear engineering should be used to predict and theorise on how these reactors and particles interact, and will, in the future, be of great use as fusion becomes the energy source for our planet's fossil fuel-free future.

## Acknowledgments

This was supported by my mentor, Ms. Soha Aslam, who instructed and guided my research process, teaching me how to use the Python packages I needed and about this field as a whole. This was also made possible by the Paramak library, developed by Dr. Jonathan Shimwell.
2309.12753
Bordifications of the moduli spaces of tropical curves and abelian varieties, and unstable cohomology of $\mathrm{GL}_g(\mathbb{Z})$ and $\mathrm{SL}_g(\mathbb{Z})$
We construct bordifications of the moduli spaces of tropical curves and of tropical abelian varieties, and show that the tropical Torelli map extends to their bordifications. We prove that the classical bi-invariant differential forms studied by Cartan extend to these bordifications by studying their behaviour at infinity, and consequently deduce infinitely many new non-zero unstable cohomology classes in the cohomology of the general and special linear groups $\mathrm{GL}_g(\mathbb{Z})$ and $\mathrm{SL}_g(\mathbb{Z})$. In addition, we completely determine the cohomology of the link of the moduli space of tropical abelian varieties within a certain range, and show that it contains the stable cohomology of the general linear group. In the process, we define new transcendental invariants associated to the minimal vectors of quadratic forms, and show that part of the cohomology of the general linear group $\mathrm{GL}_g(\mathbb{Z})$ admits the structure of a motive.
Francis Brown
2023-09-22T09:53:49Z
http://arxiv.org/abs/2309.12753v2
# Bordifications of the moduli spaces of tropical curves and abelian varieties, and unstable cohomology of \(\mathrm{GL}_{g}(\mathbb{Z})\) and \(\mathrm{SL}_{g}(\mathbb{Z})\)

###### Abstract.

We construct bordifications of the moduli spaces of tropical curves and of tropical abelian varieties, and show that the tropical Torelli map extends to their bordifications. We prove that the classical bi-invariant differential forms studied by Cartan extend to these bordifications by studying their behaviour at infinity, and consequently deduce infinitely many new non-zero unstable cohomology classes in the cohomology of the general and special linear groups \(\mathrm{GL}_{g}(\mathbb{Z})\) and \(\mathrm{SL}_{g}(\mathbb{Z})\). In addition, we completely determine the cohomology of the link of the moduli space of tropical abelian varieties within a certain range, and show that it contains the stable cohomology of the general linear group. In the process, we define new transcendental invariants associated to the minimal vectors of quadratic forms. We also show that a certain part of the cohomology of the general linear group \(\mathrm{GL}_{g}(\mathbb{Z})\) admits the structure of a motive.

## 1. Introduction

The main goal of this paper is an algebro-geometric construction of bordifications of the moduli spaces of tropical curves and abelian varieties. Before describing these in detail, we first present some applications to the cohomology of the special and general linear groups.

### Unstable cohomology of linear groups

Let \(\mathcal{P}_{g}\) denote the space of symmetric positive definite \(g\times g\) matrices \(X\) with real entries. It is a connected contractible space equipped with a right action of \(h\in\mathrm{GL}_{g}(\mathbb{R})\) via the map \(h(X)=h^{T}Xh\). Since the orbifold \(\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) is a \(K(\pi,1)\), one has \(H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})=H^{n}(\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\). Block direct sum of matrices \(X\mapsto X\oplus 1\) defines a map \(\mathcal{P}_{g}\to\mathcal{P}_{g+1}\). The stable cohomology is defined to be the limit with respect to these maps:

\[H^{n}(\mathrm{GL}_{\infty}(\mathbb{Z});\mathbb{R})=\varprojlim_{g}H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\.\]

It was famously computed by Borel [1], from which he deduced the ranks of the rational algebraic \(K\)-theory of the integers (and, more generally, of all number fields), which is of fundamental importance in the modern theory of motives. Very little is known about the unstable cohomology of the groups \(\mathrm{SL}_{g}(\mathbb{Z})\) and \(\mathrm{GL}_{g}(\mathbb{Z})\). See figures 1 and 3 for the range in which their cohomology groups are completely known. Even less is known about the cohomology with compact supports \(H^{n}_{c}(\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\), which we shall denote by \(H^{n}_{c}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\). The notation is justified by duality: indeed, when \(g\) is odd, the orbifold \(\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) is orientable and its compactly supported cohomology is Poincaré dual to ordinary cohomology. However, in the case when \(g\) is even, the cohomology with and without compact supports are _a priori_ unrelated. There is no stability property for cohomology with compact supports, as one may see from figure 2.
The ordinary cohomology of the special linear group \(\mathrm{SL}_{g}(\mathbb{Z})\) coincides with that of \(\mathrm{GL}_{g}(\mathbb{Z})\) when \(g\) is odd, but is built out of the cohomology of \(\mathrm{GL}_{g}(\mathbb{Z})\) with and without compact supports in the case when \(g\) is even. See figure 3 for a table of known results.
Define \(\Omega^{\bullet}_{nc}(g)\) to be the graded subalgebra generated by \(1,\omega^{5},\dots,\omega^{2g-5}\). The subscript \(nc\) stands for 'non-compact' type. One has \(\Omega(g)=\Omega_{c}(g)\oplus\Omega_{nc}(g)\).

**Theorem 1.1**.: _Let \(g>1\) be odd. There are injective maps_

\[(i). \Omega^{\bullet}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{\bullet}(\operatorname{GL}_{h}(\mathbb{Z});\mathbb{R})\quad\text{ for all }h\geq g\,\]
\[(ii). \Omega^{\bullet}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{\bullet+1}_{c}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\,\]
\[(iii). \Omega^{\bullet}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{\bullet+2}_{c}(\operatorname{GL}_{g+1}(\mathbb{Z});\mathbb{R})\.\]

_These facts imply the following results about the special linear group:_

\[(iv). \Omega^{\bullet}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{\bullet}(\operatorname{SL}_{h}(\mathbb{Z});\mathbb{R})\quad\text{ for all }h\geq g\,\]
\[(v). (\Omega^{\bullet}_{nc}(g)\oplus\Omega^{d_{g+1}-\bullet-2}_{c}(g))\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{\bullet}(\operatorname{SL}_{g+1}(\mathbb{Z});\mathbb{R})\.\]

Statement \((i)\) (and implicitly \((ii)\)) is discussed in a research announcement of Ronnie Lee [10], but no proof seems to have appeared in print. The statement also appears in Franke [11], and is proven in a slightly weaker form in [12], both using automorphic methods. Our proof bears some similarity to the strategy suggested by Lee and implies that the map \((i)\) for \(h=g\) and \((ii)\) are canonically split: the key point is the construction of representatives for elements in \(\Omega_{c}(g)\) which have compact support. Statement \((i)\) implies, but is much stronger than, Borel's result on the injectivity of (1.2) in small degrees. The statement \((iii)\) about the unstable cohomology of linear groups of even rank is new.
It follows from a much stronger theorem (theorem 1.3 below) on the existence of cohomology classes in the moduli space of tropical abelian varieties and uses recent results on the acyclicity of the 'inflation complex' in [15]. To illustrate the content of \((iv)\) and \((v)\), consider the following table of the known cohomology of \(\operatorname{SL}_{g}(\mathbb{Z})\), taken from [10] and based on computer calculations of [1]. Only the boxed class in \(H^{8}(\operatorname{SL}_{6}(\mathbb{Z});\mathbb{R})\), which comes from a class in \(H^{8}(\operatorname{GL}_{6}(\mathbb{Z});\mathbb{R})\), is unexplained. Its dual homology class lies in \(H_{8}(\operatorname{GL}_{6}(\mathbb{Z});\mathbb{R})\), and is possibly proportional to the image of the second Morita class in \(H_{8}(\operatorname{Out}(F_{6}))\) under the Jacobian map. It is not known if this image is zero or not.

It was proven by Bismut and Lott [1] that for \(g>1\) odd, the class of \(\omega^{2g-1}\) in \(H^{2g-1}(\operatorname{GL}_{g}(\mathbb{Z}))\) vanishes. Conjecture 5.3 in [13] states that the class of \(\omega^{2g-1}\) also vanishes in \(H^{2g-1}(\operatorname{GL}_{g+1}(\mathbb{Z}))\). Cuspidal classes in the cohomology of \(\operatorname{GL}_{n}(\mathbb{Z})\) were constructed for the first time in [1].

Figure 3. An entry in row \(g\) and column \(n\) equals \(\dim H^{n}(\operatorname{SL}_{g}(\mathbb{Z});\mathbb{R})\); blank entries are zero. All non-zero entries in this table are explained by the previous theorem except for the three boxed entries. Two of them would in fact follow from our results above, together with another theorem announced by Lee. Indeed, the class in \(H^{9}(\operatorname{SL}_{6}(\mathbb{Z});\mathbb{R})\), which corresponds to a class in \(H^{12}_{c}(\operatorname{GL}_{6}(\mathbb{Z});\mathbb{R})\), is denoted by \([\omega^{5}|\omega^{5}]\) in Lee's notation. The class in \(H^{15}(\operatorname{SL}_{7}(\mathbb{Z});\mathbb{R})\) corresponds to a class in \(H^{13}_{c}(\operatorname{GL}_{7}(\mathbb{Z});\mathbb{R})\) which should be related to it by the inflation map of [15].

### Moduli of tropical abelian varieties

Theorem 1.1 is a consequence of a stronger result concerning cohomology classes on the link \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) of the moduli space of tropical abelian varieties, which we presently explain. The reason for the vertical bar notation is that \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) is merely the topological incarnation of a richer kind of hybrid geometric object, denoted by \(L\mathcal{A}_{g}^{\mathrm{trop}}\), which we shall discuss later. In any case, as a set, \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) is the quotient of the rational closure \(\mathcal{P}_{g}^{\mathrm{rt}}\) of \(\mathcal{P}_{g}\), consisting of positive semi-definite matrices with rational kernel, modulo the action of \(\mathrm{GL}_{g}(\mathbb{Z})\), and by \(\mathbb{R}^{\times}\) acting by scalar multiplication. We consider its decomposition into perfect cones, due to Voronoi [22, 23], which is described as follows. Let \(Q\) be a positive definite quadratic form, and let

\[M_{Q}=\{\lambda\in\mathbb{Z}^{g}\backslash\{0\}:Q(\lambda)\leq Q(\mu)\quad\text{for all }\mu\in\mathbb{Z}^{g}\backslash\{0\}\}\]

denote the set of minimal vectors of \(Q\).
The set of \(\lambda\lambda^{T}\), for \(\lambda\in M_{Q}\), spans a convex polyhedral cone \(\sigma_{Q}\) in the link \(L\mathcal{P}_{g}^{\mathrm{rt}}\) of \(\mathcal{P}_{g}^{\mathrm{rt}}\), whose points are projective classes of symmetric matrices. The topological space \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) is obtained by gluing together \(\mathrm{GL}_{g}(\mathbb{Z})\)-equivalence classes of quotients \(\sigma_{Q}/\mathrm{Aut}(\sigma_{Q})\) of polyhedral cones by their finite groups of automorphisms.

**Examples 1.2**.: (See figure 4). The quadratic form \(Q(x_{1},x_{2})=x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}\) has minimal vectors \(M_{Q}=\{(\pm 1,0),(0,\pm 1),\pm(1,-1)\}\), and hence

\[\sigma_{Q}=\left\{\begin{pmatrix}\alpha_{1}+\alpha_{3}&-\alpha_{3}\\ -\alpha_{3}&\alpha_{2}+\alpha_{3}\end{pmatrix}:\alpha_{1},\alpha_{2},\alpha_{3}\geq 0\right\}\]

is the convex hull of the rank \(1\) matrices \(\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)\), \(\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right)\), \(\left(\begin{smallmatrix}1&-1\\ -1&1\end{smallmatrix}\right)\). There is a single \(\mathrm{GL}_{2}(\mathbb{Z})\)-orbit of cells of maximal dimension in \(\left|L\mathcal{A}_{2}^{\mathrm{trop}}\right|\), generated by \(\sigma_{Q}\). The stabiliser of \(\sigma_{Q}\) is the symmetric group on three elements, which permutes the vertices of \(L\sigma_{Q}\).

Denote the closed subspace of \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) corresponding to symmetric matrices with vanishing determinant by \(\left|\partial L\mathcal{A}_{g}^{\mathrm{trop}}\right|\). Its open complement:

\[\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|=\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\ \backslash\ \left|\partial L\mathcal{A}_{g}^{\mathrm{trop}}\right|\]

is nothing other than the space \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\), where \(L\mathcal{P}_{g}=\mathcal{P}_{g}/\mathbb{R}^{\times}\) is the link of \(\mathcal{P}_{g}\).

**Theorem 1.3**.: _Let \(g>1\) be odd. Every form of compact type \(\omega\in\Omega_{c}(g)\) extends to a smooth differential form on \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\). This defines an injective map of graded algebras:_

\[\Omega_{c}^{\bullet}(g)\otimes_{\mathbb{Q}}\mathbb{R}\longrightarrow H_{dR}^{\bullet}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|). \tag{1.3}\]

Theorem 1.1 follows from theorem 1.3 using a de Rham theorem for certain kinds of topological spaces, such as \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\), which are defined by gluing quotients of polyhedra together by finite group actions (see below), as well as results from [14]. In fact, theorem 1.3 enables us to completely determine the cohomology of \(\left|L\mathcal{A}_{\bullet}^{\mathrm{trop}}\right|\) in a certain range. If \(g>1\) is odd and \(\kappa(g)\) denotes the stable range for the cohomology of the general linear group \(\mathrm{GL}_{g}(\mathbb{Z})\), we show in corollary 14.13 that

\[\begin{split}& H_{dR}^{n}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|)\cong\Omega_{c}^{n}(g)_{\mathbb{Q}}\otimes\mathbb{R}\\ & H_{dR}^{n-1}(\left|L\mathcal{A}_{g-1}^{\mathrm{trop}}\right|)=0\end{split} \tag{1.4}\]

for \(n\geq d_{g}-\kappa(g)\). Using the recent results of [22], we may take \(\kappa(g)=g\).
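The minimal-vector computation in Examples 1.2 is easy to check by brute force. The following sketch (the search box is an assumption, sufficient for a reduced binary form) recovers exactly the six minimal vectors listed above:

```python
# Brute-force check of the minimal vectors in Examples 1.2. The search
# box is an assumption; it suffices for a reduced binary form such as
# Q(x1, x2) = x1^2 + x1*x2 + x2^2.
from itertools import product

def minimal_vectors(Q, g, box=3):
    values = {v: Q(v) for v in product(range(-box, box + 1), repeat=g) if any(v)}
    m = min(values.values())
    return sorted(v for v, q in values.items() if q == m)

Q = lambda v: v[0] ** 2 + v[0] * v[1] + v[1] ** 2
print(minimal_vectors(Q, 2))
# -> [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)], i.e. M_Q above
```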
### Bordification of the moduli space of tropical abelian varieties

In order to prove theorem 1.3, we construct an explicit bordification \[\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\longrightarrow\left| L\mathcal{A}_{g}^{\mathrm{trop}}\right|\.\] It has a boundary \(\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\subset \left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) with the property that \[\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\ \backslash\ \left|\partial L \mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|=\left|L\mathcal{A}_{g}^{ \circ,\mathrm{trop}}\right|\.\] The topological space \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) is technically the space obtained by gluing together 'wonderful' compactifications [10] of perfect cones \(\sigma_{Q}\), obtained by blowing up specific boundary strata which lie at infinity. It may be constructed informally as follows. Consider the projective space \(\mathbb{P}(\mathcal{Q}(V))\) whose points are projective classes of quadratic forms \(\mathcal{Q}(V)\) on a vector space \(V\) of dimension \(g\) over \(\mathbb{Q}\). The vanishing of the determinant defines a hypersurface \(\mathrm{Det}\subset\mathbb{P}(\mathcal{Q}(V))\) whose complement satisfies: \[L\mathcal{P}_{g}\ \subset\ (\mathbb{P}(\mathcal{Q}(V))\setminus\mathrm{Det} )\,(\mathbb{R})\.\] Now consider the space obtained by blowing up the subspaces \(\mathbb{P}(\mathcal{Q}(V/K))\), whose points are quadratic forms with kernel \(K\), for all rational subspaces \(0\neq K\subset V\), in increasing order of dimension. Since \(\mathbb{P}(\mathcal{Q}(V/K))\) is contained in the determinant locus \(\mathrm{Det}\), every blow-up adds a new boundary component at infinity. Such a boundary component is indexed by a nested sequence \(0<K_{1}<K_{2}<\ldots<K_{n}<V\), and is isomorphic to a product of projective spaces. The closure of \(L\mathcal{P}_{g}\) in the iterated blow-up admits an action of \(\mathrm{GL}_{g}(\mathbb{Z})\), whose quotient is the bordification \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\). We show in an appendix that it is homeomorphic to the Borel-Serre compactification [11]. From this construction it is not obvious that \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) in fact has an algebraic structure of finite type (see §1.7), which is crucial for us. Theorem 1.3 is proven by studying the behaviour of the differential forms (1.1) on the boundary of \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\). The existence of this bordification, and the techniques used to construct it, have a variety of other applications, which we discuss below.

### Bordification of the tropical Torelli map

The main thrust of this paper is to provide a general technique for constructing bordifications of spaces built out of quotients of polyhedral cells. For example, we construct a bordification \[\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\longrightarrow \left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\] of the link of the moduli space of tropical curves, whose existence was alluded to in [1]. It is obtained by gluing together the 'Feynman polytopes' associated to stable graphs and provides a single geometric object whose cells are the spaces underlying the Feynman motives of [1, 1]. This bordification is presumably equivalent to the bordification of Culler and Vogtmann's Outer space which was constructed in [1, 1].
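To illustrate the construction in the smallest case, here is an informal picture (ours, and hedged accordingly, rather than a statement taken from the body of the paper). For \(g=2\), the cell \(\sigma_{Q}\) of Examples 1.2 is a triangle in \(\mathbb{P}(\mathcal{Q}(V))\cong\mathbb{P}^{2}\) whose three vertices are the projective classes of the rank-one forms \(\lambda\lambda^{T}\); each such vertex is the point \(\mathbb{P}(\mathcal{Q}(V/K))\) for \(K=\lambda^{\perp}\) the kernel line, and lies on the conic \(\mathrm{Det}\). Blowing up these three points replaces each corner of the triangle by the interval of directions at that corner, so that the cell is truncated to a hexagon whose three new edges lie at infinity, over the boundary strata indexed by the one-step flags \(0<K<V\).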
The bordifications of \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) and \(\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\) are related as follows. The tropical Torelli map \(\lambda:\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\to\left|L\mathcal{A}_{g} ^{\mathrm{trop}}\right|\) was studied in [22, 23, 24, 25, 26]. It is non-degenerate on the subspace \(\left|L\mathcal{M}_{g}^{\mathrm{red}}\right|\subset\left|L\mathcal{M}_{g}^{ \mathrm{trop}}\right|\) indexed by \(3\)-edge-connected graphs.

**Theorem 1.4**.: _The tropical Torelli map extends to a map \(\lambda^{\mathcal{B}}\) of bordifications, giving a commutative diagram where the vertical maps are blow-downs:_ \[\begin{array}{ccc}\left|L\mathcal{M}_{g}^{\mathrm{red},\mathcal{B}}\right|& \stackrel{{\lambda^{\mathcal{B}}}}{{\longrightarrow}}&\left|L \mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\\ \downarrow&&\downarrow\\ \left|L\mathcal{M}_{g}^{\mathrm{red}}\right|&\stackrel{{\lambda}} {{\longrightarrow}}&\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\end{array}\]

This diagram gives relations between the cohomology of the four spaces. Note that the cohomology of \(\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\) is related to the cohomology of Kontsevich's commutative graph complex \(\mathcal{GC}_{0}\), and to the top-weight cohomology of the moduli stack \(\mathcal{M}_{g}\) [3]. The cohomology of the bordification \(\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) is described by a new graph complex (§8.2) which involves nested sequences of graphs related to the Hopf algebra structure on graph homology. It would be very interesting to study its homology in relation to that of \(\mathcal{GC}_{0}\).

### Canonical integrals of perfect cones

A further consequence of the properties of canonical forms (1.1) on the bordification of \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) is that we may assign transcendental invariants to the polyhedral cones in the Voronoi decomposition by integration.

**Theorem 1.5**.: _Let \(Q\) be a positive definite quadratic form of rank \(g\), and let \(\sigma_{Q}\) be the associated cone. Let \(\omega\in\Omega_{\mathrm{can}}^{d}\) be any canonical form (1.1) of degree \(d=\dim\sigma_{Q}\). If \(\sigma_{Q}\) has rank \(g\), then the following integral is finite:_ \[I_{Q}(\omega)=\int_{\sigma_{Q}}\omega<\infty. \tag{1.5}\] _The integrals \(I_{Q}(\omega)\) satisfy quadratic relations arising from Stokes' formula (theorem 13.13)._

The condition that \(\sigma_{Q}\) has rank \(g\) means that \(\sigma_{Q}\) meets the interior \(L\mathcal{P}_{g}\) of the link of the space of positive definite matrices (i.e., it does not lie entirely at infinity). In particular, every 'perfect' quadratic form of maximal dimension may be assigned a volume, which is non-zero. Theorem 1.5 provides interesting transcendental invariants of quadratic forms which may provide a new perspective on the extensive literature on perfect forms in relation to sphere packing problems, which was one of Voronoi's original goals. In the case when the cone \(\sigma_{Q}\) is cographical, i.e., the image under the tropical Torelli map of the cell associated to a graph in \(\left|L\mathcal{M}_{g}^{\mathrm{red}}\right|\), the integrals (1.5) reduce to the canonical integrals \(I_{G}(\omega)\) studied in [1]. It was proved in _loc. cit._ that these are generalised Feynman integrals of the sort arising in perturbative quantum field theory.
An important slogan, therefore, from the present paper is that the _volumes of cographical cells in the perfect cone decomposition of the space of symmetric matrices are Feynman integrals_. The Borel regulator, in particular, is a linear combination of integrals (1.5). The arithmetic nature of the period integrals (1.5) is not known. However, a computation by Borinsky and Schnetz [1] implies that the volume of the principal cone for \(\mathrm{GL}_{5}(\mathbb{Z})\) is a linear combination of non-trivial multiple zeta values of weight \(8\) which involves \(\zeta(3,5)\).

### Polyhedral cell complexes

We finally turn to the main geometric constructions. Theorem 1.3 hinges upon a de Rham theorem for topological spaces obtained by gluing together quotients of polyhedra by finite group actions. In order to make sense of algebraic differential forms such as (1.1) on these spaces, we embed each polyhedron into an ambient algebraic variety, and these varieties are glued together according to the same pattern. The most basic construction along these lines is a category, which we call \(\mathrm{PLC}_{k}\), of polyhedral linear configurations over a field \(k\subset\mathbb{R}\). Its objects are triples: \((\mathbb{P}(V),L_{\sigma},\sigma)\), where \(V\) is a finite-dimensional vector space over \(k\), \(\sigma\subset\mathbb{P}(V)(\mathbb{R})\) is a closed convex polyhedron, and \(L_{\sigma}\subset\mathbb{P}(V)\) is the union of linear subspaces defined by the Zariski closure of the boundary \(\partial\sigma\). A morphism \(\phi:(\mathbb{P}(V),L_{\sigma},\sigma)\to(\mathbb{P}(V^{\prime}),L_{\sigma^{ \prime}},\sigma^{\prime})\) in this category is a linear map \(\phi:\mathbb{P}(V)\to\mathbb{P}(V^{\prime})\) such that \(\phi(L_{\sigma})\subset L_{\sigma^{\prime}}\) and \(\phi(\sigma)\subset\sigma^{\prime}\). We demand, in addition, that \(\phi:\sigma\to\sigma^{\prime}\) be either the inclusion of a face, or an isomorphism. We then define a _linear polyhedral complex_ to be a space assembled out of such objects (and their quotients by finite linear group actions). More precisely, it is given by a functor \[F:\mathcal{D}\longrightarrow\mathrm{PLC}_{k}\] where \(\mathcal{D}\) is equivalent to a finite diagram category. It has a topological realisation: \[|F|=\varinjlim_{x\in\mathcal{D}}\sigma(F(x))\,\] which is the topological space obtained by gluing the polyhedra \(\sigma(F(x))\) along linear maps. Examples of linear polyhedral complexes include the moduli space of tropical curves \(L\mathcal{M}_{g}^{\mathrm{trop}}\) of genus \(g\), where \(\mathcal{D}\) is a category of stable graphs, and the moduli space of tropical abelian varieties \(L\mathcal{A}_{g}^{\mathrm{trop}}\), in which case \(\mathcal{D}\) is a category of cones associated to quadratic forms. The topological realisation of a polyhedral linear complex is closely related to similar notions in the literature, including stacky fans [1] and generalised CW-complexes [1]. However, a polyhedral linear complex has additional algebraic structure. In particular, there is a functor \(\mathbb{S}F:\mathcal{D}\to\mathrm{Sch}_{k}\) to the category of schemes which picks out the first component in each triple \(x\mapsto\mathbb{P}(V_{F(x)})\). We can speak of a subscheme of \(F\): it is simply a subfunctor \(\mathcal{X}:\mathcal{D}\to\mathrm{Sch}_{k}\) of \(\mathbb{S}F\). Concretely, it is given by the data of a compatible family of subschemes \(\mathcal{X}_{x}\subset\mathbb{P}(V_{F(x)})\) for every object \(x\) in \(\mathcal{D}\).
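As a toy illustration of the functorial point of view (a bookkeeping sketch of ours, not the formalism of the paper), one can record a finite diagram of cells and face maps and read off coarse invariants of the realisation \(|F|\), such as its Euler characteristic; here the cells of \(|F|\) correspond to the objects of the diagram, while the face maps record the gluing:

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str
    dim: int

@dataclass
class PolyhedralComplex:
    cells: list = field(default_factory=list)
    face_maps: list = field(default_factory=list)  # pairs (face, cell)

    def euler_characteristic(self) -> int:
        # Every object of the diagram contributes exactly one cell to |F|;
        # the face maps only record how the cells are glued together.
        return sum((-1) ** c.dim for c in self.cells)

# Two triangles glued along a common edge (cf. figure 5, right):
F = PolyhedralComplex()
F.cells += [Cell(f"v{i}", 0) for i in range(4)]
F.cells += [Cell(e, 1) for e in ("e01", "e02", "e12", "e13", "e23")]
F.cells += [Cell("s012", 2), Cell("s123", 2)]
F.face_maps += [("e01", "s012"), ("e02", "s012"), ("e12", "s012"),
                ("e12", "s123"), ("e13", "s123"), ("e23", "s123")]

dims = {c.name: c.dim for c in F.cells}
assert all(dims[a] == dims[b] - 1 for a, b in F.face_maps)  # facets only
print(F.euler_characteristic())  # 1: the realisation is a disc
```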
A global algebraic differential form on a polyhedral linear complex \(F\) with poles along \(\mathcal{X}\) is simply an element of \[\Omega^{\bullet}(F\backslash\mathcal{X})=\varprojlim_{x\in\mathcal{D}}\Omega^ {\bullet}\left(\mathbb{P}(V_{F(x)})\backslash\mathcal{X}_{x}\right). \tag{1.6}\] It is a compatible system of differential forms on \(\mathbb{P}(V_{F(x)})\) with poles along \(\mathcal{X}_{x}\) for each \(x\). Our main examples are: the subscheme of \(L\mathcal{M}_{g}^{\mathrm{trop}}\) defined by the graph hypersurface locus, and the subscheme of \(L\mathcal{A}_{g}^{\mathrm{trop}}\) defined by the determinant locus \(\mathrm{Det}\).

Figure 5. Left: a polyhedral linear configuration in \(\mathbb{P}^{2}\). Right: two polyhedra \(\sigma_{1}\), \(\sigma_{2}\) are glued together along the common face \(\sigma_{12}=\sigma_{1}\cap\sigma_{2}\). An algebraic differential form \(\omega\) defines two forms \(\omega_{i}=\omega\big{|}_{\sigma_{i}}\), for \(i=1,2\), which coincide on \(\sigma_{12}\). The form \(\omega\) has poles along a subscheme \(\mathcal{X}\), which, as depicted, may not necessarily meet the topological realisation \(\sigma_{1}\cup_{\sigma_{12}}\sigma_{2}\).

The above definitions can be generalised. The most important construction, for our purposes, is a category \(\mathrm{BLC}_{k}\) whose objects consist of iterated blow-ups of polyhedral linear configurations along linear subspaces. They are 'wonderful' compactifications of linear polyhedra in the sense of [1]. Using this concept, we can efficiently define the bordifications mentioned earlier. For instance, we construct a functor \[L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}:\mathcal{D}_{g}^{\mathrm{perf}, \mathcal{B}}\longrightarrow\mathrm{BLC}_{\mathbb{Q}}\] from a suitable diagram category to \(\mathrm{BLC}_{\mathbb{Q}}\). It is defined by blowing up the subspaces of quadratic forms with a non-trivial rational kernel which meet each cone in the Voronoi decomposition. A key point is that this space is a diagram of finitely many algebraic varieties with extra structure; the informal definition of \(|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\) given earlier in this introduction _a priori_ involved infinitely many blow-ups. The next theorem is the key geometric input in the proof of theorems 1.1 and 1.3.

**Theorem 1.6**.: _There is a commutative diagram_ \[\begin{array}{ccc}\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}& \hookrightarrow&L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\\ \downarrow_{\pi^{\mathcal{B}}}&&\downarrow_{\pi^{\mathcal{B}}}\\ \partial L\mathcal{A}_{g}^{\mathrm{trop}}&\hookrightarrow&L\mathcal{A}_{g}^{ \mathrm{trop}}\end{array}\] _where the vertical maps \(\pi^{\mathcal{B}}\) are canonical blow-downs. The strict transform of the determinant subscheme \(\mathrm{Det}\subset L\mathcal{A}_{g}^{\mathrm{trop}}\) defines a subscheme_ \[\widetilde{\mathrm{Det}}\subset L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\] _which satisfies \(\pi^{\mathcal{B}}:\widetilde{\mathrm{Det}}\to\mathrm{Det}\), and has the property_ \[\widetilde{\mathrm{Det}}\cap\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B} }\right|=\emptyset.
\tag{1.7}\]

_On topological realisations, one has a commutative diagram of topological spaces_ \[\begin{array}{ccc}&&\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right| \setminus\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\\ &\nearrow&\downarrow\\ \left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|&\longrightarrow&\left|L \mathcal{A}_{g}^{\mathrm{trop}}\right|\setminus\left|\partial L\mathcal{A}_{g} ^{\mathrm{trop}}\right|\end{array}\] _where all arrows are isomorphisms. The open locus \(\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|\) is homeomorphic to the locally symmetric space \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})=\mathbb{R}_{>0}^{\times}\backslash \mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\)._

A similar theorem holds for the moduli space \(L\mathcal{M}_{g}^{\mathrm{trop}}\) of tropical curves (§8).

### 'Motives' of quadratic forms

Hereafter, a motive refers to an object in a suitable Tannakian category \(\mathcal{H}_{\mathbb{Q}}\) of realisations, following Deligne [10]. An object \(M\) in \(\mathcal{H}_{\mathbb{Q}}\) has Betti and de Rham realisations \(M_{B}\), \(M_{dR}\) and a period pairing \(M_{B}^{\vee}\otimes_{\mathbb{Q}}M_{dR}\to\mathbb{C}\). The property (1.7) of the determinant locus enables us to define a canonical object \(\mathrm{mot}_{Q}\) associated to a positive definite quadratic form \(Q\) such that the integral (1.5) is a period (definition 15.1). In the special case when \(\sigma_{Q_{G}}=\sigma_{G}\) is a cographical cone in the image of the tropical Torelli map, \(\mathrm{mot}_{Q_{G}}\) is equivalent to the graph motive defined in [1]. This construction provides, in particular, a motivic interpretation of the Borel regulator.

### 'Motives' associated to \(\mathrm{GL}_{g}(\mathbb{Z})\)

Similarly, we may find a motivic interpretation of the canonical cohomology of \(\mathrm{GL}_{g}(\mathbb{Z})\). For every \(g>1\) and \(d\geq 0\) we define a cohomology group \(H_{c}^{d}(g)\) consisting of closed compatible families of algebraic differential forms with poles along the determinant locus (1.6), modulo exact forms. There is a natural map: \[H_{c}^{d}(g)\longrightarrow H_{c}^{d}(L\mathcal{P}_{g}/\mathrm{GL}_{g}( \mathbb{Z});\mathbb{R})\] which is injective on \(\Omega_{c}^{d}(g)\subset H_{c}^{d}(g)\): in particular, \(H_{c}^{d}(g)\) contains all the compactly-supported cohomology classes for \(\mathrm{GL}_{g}(\mathbb{Z})\) which are described in theorem 1.1. Integration of differential forms defines a pairing \[H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q})\otimes_{\mathbb{Q} }H_{c}^{d}(g)\longrightarrow\mathbb{C}\] where \(H^{\mathrm{lf}}\) denotes locally finite (Borel-Moore) homology.

**Theorem 1.7**.: _There is an object \(\mathsf{M}_{g}^{d}\) of \(\mathcal{H}_{\mathbb{Q}}\) equipped with a pair of canonical linear maps_ \[H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q})\longrightarrow( \mathsf{M}_{g}^{d})_{B}^{\vee}\] \[H_{c}^{d}(g)\longrightarrow(\mathsf{M}_{g}^{d})_{dR}\] _such that the integration pairing factors through the period pairing: \((\mathsf{M}_{g}^{d})^{\,\vee}_{B}\otimes_{\mathbb{Q}}(\mathsf{M}_{g}^{d})_{dR} \to\mathbb{C}\)._

This theorem implies that the part of the locally finite homology \(H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q})\) which pairs non-trivially with \(H_{c}^{d}(g)\) is motivic, since \(\mathsf{M}_{g}^{d}\) lies in the Tannakian subcategory of \(\mathcal{H}_{\mathbb{Q}}\) generated by the cohomology of algebraic varieties over \(\mathbb{Q}\).
In particular, the theorem provides a motivic interpretation of the integrals of canonical differential forms over homology cycles, including the volume integrals over fundamental domains considered by Minkowski. The objects \(\mathsf{M}\) are interesting: for example, \(\mathsf{M}_{3}^{5}\) is a non-trivial extension of \(\mathbb{Q}(-3)\) by \(\mathbb{Q}(0)\) with period \(6\,\zeta(3)\), which is the volume of a fundamental domain for \(\mathrm{GL}_{3}(\mathbb{Z})\) with respect to the form \(\omega^{5}\) (1.1). One can define other motives associated to spaces such as \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) by allowing differential forms with poles along other loci. For example, using a theorem of Sullivan one may define a weight zero Tate motive which captures the entire cohomology of \(\mathrm{GL}_{g}(\mathbb{Z})\). Its periods, however, are only rational numbers.

### Plan and further comments

Section 2 introduces a general notion of polyhedral cell complexes in algebraic varieties, for which we prove a cellular homology and de Rham cohomology theorem in §3. In §4 we study the particular subcategory of _linear_ polyhedral cell complexes \(\mathrm{PLC}_{k}\), and their iterated blow-ups in §5. The first application of this theory is to the moduli space of tropical curves (§6), and its bordification (§§7-8). The discussion of the moduli space of tropical abelian varieties begins in §9 with a study of polyhedra in spaces of quadratic forms and their blow-ups. In section 10 we study \(L\mathcal{A}_{g}^{\mathrm{trop}}\) and define its perfect cone bordification \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\). Section 11 studies the properties of the determinant locus and its strict transform. In section 12, we construct the bordification of the tropical Torelli map. From section 13 onwards, we study the properties of canonical forms and their integrals, and in §14 we prove the main results on the cohomology of \(\mathrm{SL}_{g}(\mathbb{Z})\) and \(\mathrm{GL}_{g}(\mathbb{Z})\) mentioned in this introduction. Finally, §15 discusses the periods and motives associated to canonical integrals, and in an appendix we discuss the relation between the Borel-Serre compactification and the space \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) defined algebraically using blow-ups. We expect that the methods of this paper may be used to define and study the geometric spaces, differential forms, and cohomology classes associated to other types of graph complexes (see, e.g., [1]), as well as the quotients of symmetric spaces by general linear groups over number fields.

### Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 724638). The author thanks Trinity College, Dublin for a Simons visiting Professorship during which much of this work was carried out, and the University of Geneva, where it was completed. Very many thanks are owed to Chan, Galatius, Grushevsky and Payne for extensive discussions on \(\mathcal{A}_{g}\) which motivated this work. The author benefited from discussions with Berghoff, Dupont, Grobner and Vogtmann, whose online notes on the Borel-Serre construction were most useful. The author is especially grateful to Melody Chan for feedback and corrections.

## 2. Algebraic polyhedral cell complexes
We describe a formalism to construct algebraic models of certain topological spaces defined by gluing together polyhedral cells according to a diagram category. In the first instance, the cells will be objects of a very general category \(\mathcal{PC}_{k}\), but for the applications we shall work with more restrictive subcategories: \(\mathrm{PLC}_{k}\), of convex linear polyhedra in projective space, and \(\mathrm{BLC}_{k}\), of their blow-ups along linear spaces.

### A category of polyhedral cells

Let \(k\subset\mathbb{C}\) be a field. There are many situations in which one has a polyhedron embedded in the real or complex points of an algebraic variety. Since convexity does not make sense in this generality, one must define a polyhedron in an algebraic variety using different concepts. To this end, let us denote by \(\mathcal{PC}_{k}\) the category whose objects are triples \((P,L,\sigma)\) defined recursively as follows:

* \(P\) is a smooth scheme over \(k\).
* \(\sigma\subset P(\mathbb{C})\) is homeomorphic to a closed ball \(B_{n}\) of dimension \(n\geq 0\), i.e., there is a continuous map \(g:B_{n}\hookrightarrow P(\mathbb{C})\) such that \(g:B_{n}\cong\sigma\). We call \(n\) the dimension of \((P,L,\sigma)\). It may be strictly smaller than the dimension of \(P\).
* The boundary of \(\sigma\) satisfies \(\partial\sigma=\sigma\cap L(\mathbb{C})\), where \(L\subset P\) is a subscheme with finitely many distinct smooth irreducible components \(L_{i}\), \(i\in I\).
* Let \(n\geq 1\). For every \(i\in I\) such that \(L_{i}(\mathbb{C})\cap\sigma\) is non-empty, the triple (2.1) \[\left(L_{i},\bigcup_{L_{j}\neq L_{i}}L_{i}\cap L_{j},\sigma\cap L_{i}(\mathbb{ C})\right)\] is required to be an object in \(\mathcal{PC}_{k}\) of dimension \(<n\).

The objects in \(\mathcal{PC}_{k}\) of dimension \(0\) are triples \((P,L,\sigma)\) where \(\sigma\subset L(\mathbb{C})\) is a point. For any object \((P,L,\sigma)\), one has \(\partial\sigma=\bigcup_{i\in I}\sigma_{i}\), where \(\sigma_{i}=\sigma\cap L_{i}(\mathbb{C})\) is of smaller dimension. Therefore, by repeatedly taking boundaries, one obtains a stratification on \(\sigma\) giving the structure of a regular CW-complex on the closed ball \(B_{n}\). Note that the boundary condition \(\partial\sigma=\sigma\cap L(\mathbb{C})\) captures a notion of convexity (it fails for non-convex Euclidean polyhedra). A morphism: \[\phi:(P,L,\sigma)\rightarrow(P^{\prime},L^{\prime},\sigma^{\prime})\] is given by a morphism \(\phi:P\to P^{\prime}\) such that \(\phi(L)\subset L^{\prime}\) and \(\phi(\sigma)\subset\sigma^{\prime}\). For any subset \(J\subset I\), let \(L_{J}=\bigcap_{j\in J}L_{j}\). A _face map_ is an inclusion of a _face_ \[\iota_{J}:\left(L_{J},\bigcup_{j\in I\setminus J}L_{J\cup\{j\}},\sigma\cap L _{J}(\mathbb{C})\right)\longrightarrow(P,L,\sigma) \tag{2.2}\] where \(\sigma\cap L_{J}(\mathbb{C})\neq\emptyset\). A _facet_ of \((P,L,\sigma)\) is a face (2.1) of dimension \(\dim\sigma-1\).

**Example 2.1**.: Let \(P=\mathbb{P}_{\mathbb{Q}}^{n}\) denote projective space of dimension \(n\) with homogeneous coordinates \((x_{0}:\ldots:x_{n})\). The _algebraic simplex_ is the triple \((\mathbb{P}_{\mathbb{Q}}^{n},L,\sigma)\), where \(L=V(x_{0}\ldots x_{n})\) is the union of coordinate hyperplanes, and \(\sigma\subset\mathbb{P}_{\mathbb{Q}}^{n}(\mathbb{R})\) is defined by the standard coordinate simplex region \(x_{i}\geq 0\). Its faces are algebraic simplices of smaller dimension.
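The faces of the algebraic simplex admit a completely explicit description, which the following sketch (an illustration of ours, not code from any reference) enumerates: a face corresponds to a subset \(J\) of the coordinate hyperplanes with \(|J|\leq n\), and has dimension \(n-|J|\).

```python
from itertools import combinations

def simplex_faces(n):
    """Enumerate the faces of the algebraic simplex (P^n, L, sigma) of
    Example 2.1.  A face is indexed by a subset J of the coordinate
    hyperplanes {x_i = 0} such that sigma meets L_J, i.e. |J| <= n, and
    the face (L_J, ..., sigma cap L_J) has dimension n - |J|."""
    return [(J, n - len(J))
            for r in range(n + 1)            # |J| = n+1 would make L_J empty
            for J in combinations(range(n + 1), r)]

for J, d in simplex_faces(2):
    print(J, "has dimension", d)
# (): the simplex itself (dim 2); three facets (dim 1); three vertices (dim 0)
```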
**Notation 2.2**.: Define the product \(\prod_{i=1}^{n}(P_{i},L_{i},\sigma_{i})\) of objects \((P_{i},L_{i},\sigma_{i})\) in \(\mathcal{PC}_{k}\) by \[\left(P_{1}\times\ldots\times P_{n}\,\ \bigcup_{i=1}^{n}P_{1}\times\ldots\times P _{i-1}\times L_{i}\times P_{i+1}\times\ldots\times P_{n}\,\ \sigma_{1}\times \ldots\times\sigma_{n}\right)\.\] Given morphisms \(f_{1},\ldots,f_{n}\) in \(\mathcal{PC}_{k}\), denote by \(f_{1}\times\ldots\times f_{n}\) the induced morphism on products. There are canonical projection morphisms for any ordered finite sets \(I\supset J\): \[p_{I/J}\ :\ \prod_{i\in I}(P_{i},L_{i},\sigma_{i})\rightarrow\prod_{j\in J}(P_{j},L _{j},\sigma_{j}) \tag{2.3}\] There are two natural functors, both of which preserve products: \[\begin{array}{ccccccc}\sigma:\mathcal{PC}_{k}&\longrightarrow&\mathcal{T}op&&\mathsf{S}:\mathcal{PC}_{k}&\longrightarrow&\mathrm{Sch}_{k}\\ \sigma(P,L,\sigma)&=&\sigma&\text{and}&\mathsf{S}(P,L,\sigma)&=&P\end{array} \tag{2.4}\]

### Algebraic polyhedral cell complexes

Let \(\mathcal{C}_{k}\) denote a subcategory of \(\mathcal{PC}_{k}\). The main cases of interest are \(\mathcal{C}_{k}=\mathrm{PLC}_{k}\) (§4) and \(\mathcal{C}_{k}=\mathrm{BLC}_{k}\) (§5).

**Definition 2.3**.: A \(\mathcal{C}_{k}\)-_complex_ is a functor \[F:\mathcal{D}\longrightarrow\mathcal{C}_{k} \tag{2.5}\] where \(\mathcal{D}\) is equivalent to a finite category. A _morphism_ \((\Phi,\phi)\) between two functors \(F:\mathcal{D}\rightarrow\mathcal{C}_{k}\) and \(F^{\prime}:\mathcal{D}^{\prime}\rightarrow\mathcal{C}_{k}\) is the data of: _(i)_ a functor \(\phi:\mathcal{D}\rightarrow\mathcal{D}^{\prime}\) and _(ii)_ a natural transformation \(\Phi:F\to F^{\prime}\circ\phi\). To spell this out, \(\Phi\) is the data, for every object \(x\) of \(\mathcal{D}\), of a morphism \[\Phi_{x}:F(x)\longrightarrow(F^{\prime}\circ\phi)(x)\qquad\text{ in }\mathcal{PC}_{k}\] such that, for every morphism \(f:x\to y\) in \(\mathcal{D}\), there is a commutative diagram \[\begin{array}{ccc}F(x)&\xrightarrow{\Phi_{x}}&(F^{\prime}\circ\phi)(x)\\ \downarrow_{F(f)}&&\downarrow_{(F^{\prime}\circ\phi)(f)}\\ F(y)&\xrightarrow{\Phi_{y}}&(F^{\prime}\circ\phi)(y)\qquad.\end{array}\] In particular, any functor \(\phi:\mathcal{D}\rightarrow\mathcal{D}^{\prime}\) induces a morphism \((\mathrm{id},\phi)\) between \(F=F^{\prime}\circ\phi\) and \(F^{\prime}\). If \(\phi\) is an equivalence, then \(F\) and \(F^{\prime}\) are isomorphic. In particular, by replacing \(\mathcal{D}\) with an equivalent category, we may assume that \(\mathcal{D}\) is itself finite.

**Definition 2.4**.: The _topological realisation_ of (2.5) is the topological space \[|F|=\varinjlim_{x\in\mathrm{Ob}(\mathcal{D})}\sigma(F(x)). \tag{2.6}\] By passing to the colimit, a morphism \((\Phi,\phi)\) from \(F\) to \(F^{\prime}\) induces a continuous map between their topological realisations. We denote it by \(|\Phi|:|F|\longrightarrow|F^{\prime}|\).

### Subschemes

Let \(F:\mathcal{D}\rightarrow\mathcal{PC}_{k}\) be a functor (2.5) as above.

**Definition 2.5**.: Define a closed (resp. open) _subscheme_ of \(F\) to be a functor \[\mathcal{X}:\mathcal{D}\longrightarrow\mathrm{Sch}_{k}\] such that \(\mathcal{X}_{x}\) is a closed (resp. open) subscheme of \(\mathsf{S}F_{x}\), for all objects \(x\) of \(\mathcal{D}\), and such that the canonical embedding \(i_{x}:\mathcal{X}_{x}\subset\mathsf{S}F_{x}\) is a natural transformation \(i:\mathcal{X}\rightarrow\mathsf{S}F\).
If \(K=\mathbb{R}\) or \(\mathbb{C}\) contains \(k\), then the set of \(K\)-points defines a topological space \[|\mathcal{X}(K)|=\varinjlim_{x\in\mathcal{D}}\mathcal{X}_{x}(K)\] with the analytic topology. Definition 2.5 is analogous to that of a subfunctor. It means that for all morphisms \(f:x\to y\) in \(\mathcal{D}\) there is a commutative diagram \[\begin{array}{ccc}\mathsf{S}F_{x}&\xrightarrow{F(f)}&\mathsf{S}F_{y}\\ \cup&&\cup\\ \mathcal{X}_{x}&\xrightarrow{\mathcal{X}(f)}&\mathcal{X}_{y}\end{array}\] and hence the morphisms between the \(\mathcal{X}_{x}\) are obtained by restricting those from \(\mathsf{S}F\).

**Definition 2.6**.: We say that a closed subscheme \(\mathcal{Z}\) of \(\mathsf{S}F\) is _at infinity_, which we denote by \(|F|\cap\mathcal{Z}=\emptyset\), if its image does not meet any of the polyhedra \(\sigma(F_{x})\): \[\sigma(F_{x})\cap i_{x}(\mathcal{Z}_{x})(\mathbb{C})=\emptyset\ \text{ for all }\ x\in\mathrm{Ob}(\mathcal{D}).\] Similarly, for an open subscheme \(U\) of \(F\) we write \(|F|\subset U\) if \[\sigma(F_{x})\ \subset\ U_{x}(\mathbb{C})\ \text{ for all }\ x\in\mathrm{Ob}( \mathcal{D}).\]

**Definition 2.7**.: Given an open subscheme \(\mathcal{U}\) of \(\mathsf{S}F\) such that \(|F|\subset\mathcal{U}\), define \[\begin{split}F\cap\mathcal{U}:\mathcal{D}&\longrightarrow\mathcal{PC}_{k}\\ x&\mapsto(F\cap\mathcal{U})_{x}=(\mathcal{U}_{x},L_{x}\cap\mathcal{U}_{x},\sigma_{x})\,\end{split} \tag{2.7}\] which we may view as an (open) subfunctor of \(F:\mathcal{D}\to\mathcal{PC}_{k}\).

### Algebraic and meromorphic differential forms

**Definition 2.8**.: Consider a \(\mathcal{C}_{k}\)-complex \(F:\mathcal{D}\to\mathcal{C}_{k}\) and a subscheme \(\mathcal{X}\subset\mathsf{S}F\). A global differential form of degree \(d\) on \(F\) with poles along \(\mathcal{X}\) is an element of the limit \[\Omega^{d}(\mathsf{S}F\backslash\mathcal{X})=\varprojlim_{x\in\mathrm{Ob}( \mathcal{D})}\Omega^{d}(\mathsf{S}F(x)\backslash\mathcal{X}(x))\.\] Equivalently, it is a collection, for every \(x\in\mathrm{Ob}(\mathcal{D})\), of regular forms \(\omega_{x}\in\Omega^{d}(\mathsf{S}F(x)\backslash\mathcal{X}(x))\) which are compatible in the sense that \[F(f)^{*}(\omega_{y})=\omega_{x}\quad\text{ for every }\quad f\in\mathrm{Hom}_{ \mathcal{D}}(x,y). \tag{2.8}\] The graded vector space \(\Omega^{\bullet}(\mathsf{S}F\backslash\mathcal{X})=\bigoplus_{d}\Omega^{d}( \mathsf{S}F\backslash\mathcal{X})\) is a differential graded algebra. We may also write it \(\Omega^{\bullet}(F\backslash\mathcal{X})\), bearing in mind that it depends only on the functor \(\mathsf{S}F\). Consider an object \((P,L,\sigma)\) of \(\mathcal{PC}_{k}\). Let us denote by \[\Omega^{d}_{\mathrm{mer}}\left((P,L,\sigma)\right)=\varinjlim_{U:\sigma \subset U(\mathbb{C})}\Omega^{d}(U;k)\] the space of meromorphic differential forms which are regular on some open affine subset \(U\) of \(P\) whose complex points contain \(\sigma\). Such a form may be restricted to the faces of \(\sigma\), and so \(\Omega^{\bullet}_{\mathrm{mer}}=\bigoplus_{d\geq 0}\Omega^{d}_{\mathrm{mer}}\) is a contravariant functor from \(\mathcal{PC}_{k}\) to the category of DGA's.

**Definition 2.9**.: Consider a \(\mathcal{C}_{k}\)-complex \(F:\mathcal{D}\to\mathcal{C}_{k}\).
A meromorphic differential form of degree \(d\) on \(F\) is an element of the projective limit \[\Omega^{d}_{\mathrm{mer}}(F)=\varprojlim_{x\in\mathrm{Ob}(\mathcal{D})}\Omega ^{d}_{\mathrm{mer}}(F(x))\.\] It is a compatible collection of meromorphic forms \(\omega_{x}\in\Omega^{d}_{\mathrm{mer}}(F(x))\) for \(x\in\mathrm{Ob}(\mathcal{D})\). The total space \(\Omega^{\bullet}_{\mathrm{mer}}(F)=\bigoplus_{d\geq 0}\Omega^{d}_{\mathrm{mer}}(F)\) is a differential graded algebra.

**Examples 2.10**.: The DGA of meromorphic forms \(\Omega^{\bullet}_{\mathrm{mer}}\left(\mathbb{P}^{n}_{\mathbb{Q}},L,\sigma\right)\) on an algebraic simplex (Example 2.1) contains the polynomial forms on \(\sigma\) in the sense of Sullivan [10].

**Definition 2.11**.: The \(\mathbb{C}\)-differential graded algebra of smooth forms on \(|F|\) is defined to be \[\mathcal{A}^{\bullet}(|F|)=\varprojlim_{x\in\mathrm{Ob}(\mathcal{D})}\mathcal{ A}^{\bullet}(\sigma(F_{x}))\, \tag{2.9}\] where \(\mathcal{A}^{\bullet}(\sigma(F_{x}))\) denotes the algebra of smooth differential forms over \(\mathbb{C}\) which are defined in an open neighbourhood of \(\sigma(F_{x})\) inside \(\mathsf{S}F_{x}(\mathbb{C})\). When \(k\subset\mathbb{R}\), and all polyhedra \(\sigma(F_{x})\) are in fact contained in \(\mathsf{S}F_{x}(\mathbb{R})\) (which will always be the case in our applications), then \(\mathcal{A}^{\bullet}(|F|)\) has a real structure \(\mathcal{A}^{\bullet}(|F|;\mathbb{R})\) consisting of the \(\mathbb{R}\)-subalgebra of real forms. If \(\mathcal{X}\) is at infinity, i.e., \(\mathcal{X}\cap|F|=\emptyset\), then there are natural maps of DGA's: \[\Omega^{\bullet}(\mathsf{S}F\backslash\mathcal{X})\ \subset\ \Omega^{\bullet}_{ \mathrm{mer}}(F)\longrightarrow\mathcal{A}^{\bullet}(|F|). \tag{2.10}\]

**Definition 2.12**.: Define smooth (resp. global algebraic) cohomology groups: \[H^{n}_{dR}(|F|;\mathbb{C})=H^{n}(\mathcal{A}^{\bullet}(|F|))\quad\text{(resp. }\quad H^{n}_{dR}(\mathsf{S}F\backslash\mathcal{X})=H^{n}(\Omega^{\bullet}(F \backslash\mathcal{X}))). \tag{2.11}\] The former is a vector space over \(\mathbb{C}\); the latter a vector space over \(k\). If \(\mathcal{X}\cap|F|=\emptyset\), there is a natural map \(H^{n}_{dR}(\mathsf{S}F\backslash\mathcal{X})\to H^{n}_{dR}(|F|)\).

## 3. Homology and cohomology of polyhedral complexes

### Assumptions

Let \(\mathcal{C}_{k}\) be a subcategory of \(\mathcal{PC}_{k}\) such that:

1. Every face of every object \((P,L,\sigma)\) in \(\mathcal{C}_{k}\) is also an object of \(\mathcal{C}_{k}\), and the corresponding face map is a morphism in \(\mathcal{C}_{k}\).
2. All morphisms \(f:(P,L,\sigma)\to(P^{\prime},L^{\prime},\sigma^{\prime})\) in \(\mathcal{C}_{k}\) are either face maps, or induce homeomorphisms on topological realisations \(f:\sigma\cong\sigma^{\prime}\).

In particular, the topological realisation \(f:\sigma\to\sigma^{\prime}\) of any morphism in \(\mathcal{C}_{k}\) is necessarily injective. For every \((P,L,\sigma)\) of dimension \(n\geq 1\) one has \[H_{r}(\sigma,\partial\sigma;\mathbb{Z})=\begin{cases}\mathbb{Z}&\text{ if }r=n\\ 0&\text{otherwise}\end{cases}. \tag{3.1}\] An _orientation_ on \(\sigma\) is a generator \(\omega\in H_{n}(\sigma,\partial\sigma;\mathbb{Z})\) if \(n\geq 1\), or \(\omega\in H_{0}(\sigma)\) if \(n=0\). The two main categories of interest, \(\mathrm{PLC}_{k}\) and \(\mathrm{BLC}_{k}\), satisfy (1) and (2). An important remark is that an isomorphism between objects of \(\mathcal{C}_{k}\) induces isomorphisms of their faces.
### Cellular homology of a polyhedral \(\mathcal{C}_{k}\)-complex

Let \(\mathcal{C}_{k}\) be a category satisfying the assumptions above and let \(F:\mathcal{D}\to\mathcal{C}_{k}\) be a \(\mathcal{C}_{k}\)-complex.

**Definition 3.1**.: Define the _face complex_ \(\mathfrak{C}_{F}\) of \(F\) to be the graded \(\mathbb{Q}\)-vector space whose degree \(n\) part \(\left(\mathfrak{C}_{F}\right)_{n}\) has generators \([\sigma,\omega]\), where \(\sigma\) is any face of dimension \(n\) of the topological realisation \(\sigma F_{x}\) of an object \(x\) of \(\mathcal{D}\), and \(\omega\) is an orientation on \(\sigma\). These symbols are subject to relations: \[(i)\quad[\sigma,\lambda\omega]=\lambda[\sigma,\omega]\qquad\text{ for any }\lambda\in\mathbb{Z}\,\] \[(ii)\quad[\sigma,\omega]=[\sigma^{\prime},\omega^{\prime}]\] whenever \(\sigma\), \(\sigma^{\prime}\) are \(n\)-dimensional faces of \(\sigma(F_{x})\), \(\sigma(F_{y})\) respectively, and \(g:x\to y\) is a morphism in \(\mathcal{D}\) which restricts to an isomorphism \(\sigma F(g):\sigma\xrightarrow{\sim}\sigma^{\prime}\) which sends \(\omega\) to \(\omega^{\prime}\). Define a differential \(d:\left(\mathfrak{C}_{F}\right)_{n}\to\left(\mathfrak{C}_{F}\right)_{n-1}\) by \[d[\sigma,\omega]=\sum_{\tau}\big{[}\tau,\omega\big{|}_{\tau}\big{]} \tag{3.2}\] where the sum is over all facets \(\tau\) of \(\sigma\) (so \(\dim\tau=\dim\sigma-1\)), and \(\omega\big{|}_{\tau}\) is the image of \(\omega\) under the boundary map \(H_{n}(\sigma,\partial\sigma)\to H_{n-1}(\tau,\partial\tau)\). One checks that \(d^{2}=0\).

**Theorem 3.2**.: _There is a natural isomorphism \(H_{n}(\mathfrak{C}_{F})\xrightarrow{\sim}H_{n}(|F|;\mathbb{Q})\)._

### Differential forms and de Rham complex

The following results are valid for any complex of differential forms \(\mathcal{A}^{\bullet}\) which computes the cohomology of \(\sigma\) and its boundary: \[H^{n}(\mathcal{A}^{\bullet}(\sigma))\cong H^{n}(\sigma;\mathbb{C})\quad\text{ and }\quad H^{n}(\mathcal{A}^{\bullet}(\partial\sigma))\cong H^{n}(\partial\sigma; \mathbb{C}) \tag{3.3}\] and which furthermore is _extendable_ in the sense of [10, §7]: \[i^{*}:\mathcal{A}^{\bullet}(\sigma)\longrightarrow\mathcal{A}^{\bullet} (\partial\sigma)\quad\text{ is surjective}, \tag{3.4}\] where \(i:\partial\sigma\subset\sigma\) denotes the inclusion map. As suggested by the notation, these two properties hold for the complex of smooth differential forms.

**Theorem 3.3**.: _Let \(\mathcal{A}^{\bullet}\) satisfy (3.3) and (3.4) as above. Let \(F:\mathcal{D}\to\mathcal{C}_{k}\) be a \(\mathcal{C}_{k}\)-complex, and define the de Rham complex of \(F\) via (2.9). There is an isomorphism_ \[H^{n}\left(\mathcal{A}^{\bullet}(|F|)\right)\xrightarrow{\sim}H^{n}\left(|F|; \mathbb{C}\right)\,\] _where \(H^{n}\left(|F|;\mathbb{C}\right)=H_{n}(|F|;\mathbb{C})^{\vee}\), which is induced by a bilinear pairing_ \[\begin{split}H^{n}\left(\mathcal{A}^{\bullet}(|F|)\right)\otimes H_{n}(|F|)& \longrightarrow\mathbb{C}\\ \omega\otimes\gamma&\mapsto\int_{\gamma}\omega.\end{split} \tag{3.5}\] _By theorem 3.2, it may be interpreted as a pairing \(H^{n}\left(\mathcal{A}^{\bullet}(|F|)\right)\otimes H_{n}(\mathfrak{C}(F))\to \mathbb{C}\)._

Before proceeding with the proof, note that the integral (3.5) makes sense: by the compatibility condition (2.8), a differential form \(\omega\in\mathcal{A}^{\bullet}(|F|)\) gives rise to a well-defined smooth differential form on each cell of \(|F|\). The integral converges because the cells of \(|F|\) are compact and fall into finitely many isomorphism classes.
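To make definition 3.1 concrete, the following sketch (a toy computation of ours, using integer boundary matrices) assembles the face complex of the boundary of a triangle, i.e. three vertices and three oriented edges, and recovers the homology of a circle, as theorem 3.2 predicts:

```python
import numpy as np

# Face complex of the boundary of a triangle: generators [v0], [v1], [v2]
# in degree 0 and [e01], [e02], [e12] in degree 1, with the differential
# (3.2) given on an oriented edge by d[e_ij] = [v_j] - [v_i].
d1 = np.array([  # rows: v0, v1, v2; columns: e01, e02, e12
    [-1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1,  1],
])

r = np.linalg.matrix_rank(d1)
h0 = 3 - r          # dim (C_0 / im d1)
h1 = 3 - r          # dim ker d1 (there are no 2-cells)
print(h0, h1)       # 1 1: H_0 = H_1 = Q, the homology of a circle
```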
_Remark 3.4_.: Sullivan defined a differential graded algebra of polynomial differential forms for simplicial complexes [12, §7, (i), p. 297], and proved that it satisfies the extendability condition (3.4). Since they are special cases of meromorphic forms, his argument implies that the induced map \(H^{n}(\Omega^{\bullet}_{\text{mer}}(F))\to H^{n}(\mathcal{A}^{\bullet}(|F|))\) is surjective. Even though the complex \(\Omega^{\bullet}_{\text{mer}}(F)\), and hence its cohomology, has a \(k\)-structure, the map \(H^{n}(\Omega^{\bullet}_{\text{mer}}(F))\to H^{n}(\mathcal{A}^{\bullet}(|F|))\) is not an isomorphism and so cannot be used to define a rational structure on the de Rham cohomology \(H^{n}\left(\mathcal{A}^{\bullet}(|F|)\right)\). Put another way, the periods one obtains by integration (3.5) depend on the location of the poles of \(\omega\).

### Proof of theorems 3.2 and 3.3

The proof of theorem 3.2 is standard (compare with [13, Theorem 2.35], [10]). Consider the filtration of \(X=|F|\) by subspaces \[X_{p}=\operatorname{Im}\left(\coprod_{\dim\sigma_{x}=p}\sigma_{x}\longrightarrow X\right)\] where the disjoint union is over all faces of polyhedral cells of dimension \(p\). It induces a filtration \(F_{p}C_{\bullet}(X)=C_{\bullet}(X_{p})\) on the singular chain complex, giving rise to a spectral sequence \[E^{1}_{p,q}=H_{p+q}(\operatorname{gr}_{p}^{F}C_{\bullet}(X))\cong H_{p+q}(X_{p },X_{p-1})\] which converges to \(\operatorname{gr}_{\bullet}^{F}H_{p+q}(X)\). The complex \((E^{1}_{p,q},d^{1})\) takes the form: \[\cdots\longrightarrow H_{p+q}(X_{p},X_{p-1})\longrightarrow H_{p+q-1}(X_{p-1 },X_{p-2})\longrightarrow\cdots \tag{3.6}\] where the maps are induced on the level of chains by the boundary map. By definition of \(X_{p}\) there is a natural morphism \[\bigoplus_{\dim\sigma_{x}=p}\left(H_{p}(\sigma_{x},\partial\sigma_{x})\right) /\mathrm{Aut}_{\mathcal{D}}(\sigma_{x})\longrightarrow H_{p}(X_{p},X_{p-1}) \tag{3.7}\] where the sum is over all equivalence classes of faces, where two faces \(\sigma_{x},\sigma_{y}\) are equivalent if they have the same image in \(X\), and \(\mathrm{Aut}_{\mathcal{D}}(\sigma_{x})\) is the subgroup of automorphisms on the face \(\sigma_{x}\) induced by morphisms in the category \(\mathcal{D}\). The map (3.7) is surjective since the intersections \(\sigma_{x}\cap\sigma_{y}\) between all non-isomorphic faces \(\sigma_{x},\sigma_{y}\) are unions of faces of dimension \(\leq p-1\), and contained in \(X_{p-1}\). Since \(H_{p+q}(\sigma_{x},\partial\sigma_{x})\) is concentrated in degree \(q=0\) (3.1), this proves that the spectral sequence degenerates at \(E^{1}\), and the homology of \(X\) is isomorphic to the homology of the complex (3.6) on setting \(q=0\). In order to identify this complex with the face complex, one may observe that \(H_{p+q}(\sigma_{x},\partial\sigma_{x})\cong H_{p+q}^{\mathrm{lf}}(\overset{ \circ}{\sigma}_{x})\) is isomorphic to the locally finite (Borel-Moore) homology of the interior of \(\sigma_{x}\). Since by assumption on \(\mathcal{C}_{k}\) the interiors of faces are either disjoint or isomorphic, we conclude that (3.7) is also injective, and therefore defines an isomorphism \[\mathfrak{C}(F)_{p}\cong H_{p}(X_{p},X_{p-1})\] by definition 3.1. Furthermore, since the morphisms in (3.6) are induced by the boundary map on relative homology, we may identify \((E^{1}_{p,0},d^{1})\) with the face complex (3.2). This completes the proof of Theorem 3.2, and shows, in passing, that \(d^{2}=0\).
Theorem 3.3 is a de Rham version of theorem 3.2, and has a very similar proof on replacing the singular chain complex with a complex of differential forms.

**Lemma 3.5**.: _Assume (3.3) and (3.4). Let \(K^{\bullet}(\sigma)=\ker(i^{*}:\mathcal{A}^{\bullet}(\sigma)\to\mathcal{A}^{ \bullet}(\partial\sigma))\). Then_ \[H^{n}(K^{\bullet}(\sigma))\overset{\sim}{\longrightarrow}H^{n}(\sigma, \partial\sigma). \tag{3.8}\]

Proof.: By assumption, the relative cohomology group \(H^{n}(\sigma,\partial\sigma)\) is the cohomology of the mapping cone of \(i^{*}\), which is the complex \(\mathcal{A}^{n}(\sigma)\oplus\mathcal{A}^{n-1}(\partial\sigma)\) with differential \(d(\alpha,\beta)=(d\alpha,i^{*}\alpha-d\beta)\). The map (3.8) is induced by the morphism of complexes: \[K^{\bullet}(\sigma)\stackrel{{\kappa}}{{\longrightarrow}}\mathcal{A}^{ \bullet}(\sigma)\oplus\mathcal{A}^{\bullet-1}(\partial\sigma)\] where \(\kappa(\omega)=(\omega,0)\). To see that (3.8) is surjective, let \((\omega,\eta)\in\mathcal{A}^{n}(\sigma)\oplus\mathcal{A}^{n-1}(\partial\sigma)\) be closed, which implies that \(d\omega=0\) and \(i^{*}\omega=d\eta\). By (3.4), there exists \(\alpha\in\mathcal{A}^{n-1}(\sigma)\) such that \(i^{*}\alpha=\eta\). The cohomology class of \((\omega,\eta)\) is also represented by \((\omega,\eta)-d(\alpha,0)=(\omega-d\alpha,0)\), which equals \(\kappa(\omega-d\alpha)\). Note that \(\omega-d\alpha\in K^{\bullet}(\sigma)\) since \(i^{*}(\omega-d\alpha)=i^{*}(\omega)-d\eta=0\). To establish the injectivity of (3.8), suppose that \(\kappa(\omega)=d(\alpha,\beta)\) is exact. This implies that \(d\alpha=\omega\), and \(i^{*}\alpha=d\beta\). By (3.4), there exists \(\gamma\in\mathcal{A}^{\bullet}(\sigma)\) such that \(i^{*}\gamma=\beta\), and hence \(\omega=d(\alpha-d\gamma)\) is exact in \(K^{\bullet}(\sigma)\), since \(i^{*}(\alpha-d\gamma)=i^{*}\alpha-d\beta=0\).

The proof of theorem 3.3 proceeds as for theorem 3.2. The filtration \(X_{p}\) on \(X\) induces a cofiltration on the differential graded algebra \(\mathcal{A}^{\bullet}(|F|)\). It produces a spectral sequence converging to the cohomology of \(\mathcal{A}^{\bullet}(|F|)\). It is enough to show that this spectral sequence is isomorphic, via integration, to the dual of the homology spectral sequence considered above. The integration pairing is well-defined on the level of chain complexes because of Stokes' theorem. The associated graded of the cofiltration on \(\mathcal{A}^{\bullet}(|F|)\) is \[K^{\bullet}_{p}(|F|)=\ker\left(\varprojlim_{\dim\sigma_{x}\leq p}\mathcal{A}^{ \bullet}(\sigma_{x})\longrightarrow\varprojlim_{\dim\sigma_{x}\leq p-1} \mathcal{A}^{\bullet}(\sigma_{x})\right)\] consisting of compatible systems of differential forms on faces of dimension \(p\) which vanish on faces of dimension \(p-1\). As previously, one has an isomorphism \[H^{p+q}(K^{\bullet}_{p}(|F|))\cong\bigoplus_{\sigma_{x}}H^{p+q}\left(K^{ \bullet}(\sigma_{x})\right)/\mathrm{Aut}_{\mathcal{D}}(\sigma_{x})=\bigoplus_{ \sigma_{x}}H^{p+q}\left(\sigma_{x},\partial\sigma_{x}\right)/\mathrm{Aut}_{ \mathcal{D}}(\sigma_{x})\] where the direct sum is over equivalence classes of \(\sigma_{x}\), and the second equality follows from (3.8). The relative cohomology \(H^{p+q}\left(\sigma_{x},\partial\sigma_{x}\right)\) is isomorphic to the compactly supported cohomology \(H^{p+q}_{c}(\overset{\circ}{\sigma}_{x})\) of the interior and is canonically dual to \(H_{p+q}(\sigma_{x},\partial\sigma_{x})\otimes_{\mathbb{Z}}\mathbb{C}\).
This implies that \(H^{p+q}(K^{\bullet}_{p}(|F|))\) vanishes except in degree \(q=0\) and, in this degree, is dual to \(\mathfrak{C}(F)_{p}\otimes\mathbb{C}\) via the integration pairing, which proves theorem 3.3.

### Relative homology and compact supports

Let \(F,G\) be two \(\mathcal{C}_{k}\)-complexes and \(i:G\to F\) an injective morphism. Via \(i\) we may view \(|G|\) as a subspace of \(|F|\). Define the DGA of compactly supported forms on the complement by \[\mathcal{A}^{\bullet}_{c}\left(|F|\setminus|G|\right)=\varprojlim_{x\in \mathcal{D}}\mathcal{A}^{\bullet}_{c}\left(\sigma(F_{x})\backslash(\sigma(F_{x })\cap|G|)\right)\] and write \(H^{n}_{dR,c}\left(|F|\setminus|G|\right)=H^{n}(\mathcal{A}^{\bullet}_{c}\left( |F|\setminus|G|\right))\). Define the relative de Rham cohomology \(H^{n}_{dR}\left(|F|,|G|\right)\) to be the cohomology of the mapping cone \(\mathcal{A}^{\bullet}\left(|F|\right)\oplus\mathcal{A}^{\bullet-1}\left(|G| \right)\) with respect to the differential \(d(\omega,\eta)=(d\omega,i^{*}\omega-d\eta)\). Relative homology and cohomology satisfy the usual long exact sequences.

**Theorem 3.6**.: _There is an isomorphism \(H_{n}(\mathfrak{C}_{F}/\mathfrak{C}_{G})\cong H_{n}(|F|,|G|;\mathbb{Q})\). Integration defines a canonical isomorphism of \(\mathbb{C}\)-vector spaces:_ \[H^{n}_{dR}\left(|F|,|G|\right)\longrightarrow\left(H_{n}\left(|F|,|G|;\mathbb{Q }\right)\right)^{\vee}\otimes\mathbb{C}\.\] _The map defined on complexes by \(\omega\mapsto(\omega,0)\) passes to a canonical isomorphism_ \[H^{n}_{dR,c}\left(|F|\setminus|G|\right)\stackrel{{\sim}}{{ \longrightarrow}}H^{n}_{dR}\left(|F|,|G|\right). \tag{3.9}\]

Proof.: The first part follows formally from theorems 3.2 and 3.3. To prove (3.9), one may follow the same strategy as theorem 3.3: the filtration \(X_{p}\) on \(|F|\) gives rise to a cofiltration on \(\mathcal{A}_{c}^{\bullet}\left(|F|\setminus|G|\right)\) and a spectral sequence whose \(E^{1}_{pq}\) terms are \[\bigoplus_{\sigma_{x}}H^{p+q}_{dR,c}\left(\sigma_{x}\backslash(\sigma_{x} \cap|G|)\right)/\mathrm{Aut}_{\mathcal{D}}(\sigma_{x})\overset{\sim}{\to} \bigoplus_{\sigma_{x}}H^{p+q}_{dR}\left(\sigma_{x},\partial\sigma_{x} \cup(\sigma_{x}\cap|G|)\right)/\mathrm{Aut}_{\mathcal{D}}(\sigma_{x}) \tag{3.10}\] where the direct sum is over equivalence classes \(\sigma_{x}\) of cells of dimension \(p\).\({}^{1}\) Since every morphism in \(\mathcal{C}_{k}\) is either a face map or an isomorphism, it follows that either \(\sigma_{x}\cap|G|\) is contained in the boundary of \(\sigma_{x}\), in which case \(H^{p+q}_{dR}\left(\sigma_{x},\partial\sigma_{x}\cup(\sigma_{x}\cap|G|) \right)=H^{p+q}_{dR}(\sigma_{x},\partial\sigma_{x})\), or \(\sigma_{x}\subset|G|\), in which case this group vanishes. It follows that the complex on the right-hand side of (3.10) is dual, via integration, to \(\mathfrak{C}_{F}/\mathfrak{C}_{G}\otimes_{\mathbb{Q}}\mathbb{C}\).

Footnote 1: One can directly compare this with the associated spectral sequence for the cohomology of the mapping cone \(\mathcal{A}^{\bullet}\left(|F|\right)\oplus\mathcal{A}^{\bullet-1}\left(|G| \right)\). One replaces \(K_{p}^{\bullet}(|F|)\) with a complex consisting of forms which vanish on \(X_{p-1}\cup|G|\). Its cohomology is \(\bigoplus_{\sigma_{x}}H^{p+q}_{dR}(\sigma_{x},\partial\sigma_{x}\cup( \sigma_{x}\cap|G|))/\mathrm{Aut}_{\mathcal{D}}(\sigma_{x})\).

## 4. Linear polyhedral complexes
Let \(V\) be a vector space of dimension \(n+1\) over a field \(k\subset\mathbb{R}\), and let \(\mathbb{P}(V)\) denote the associated projective space of dimension \(n\). When \(V\) has a preferred basis, we write \(\mathbb{P}^{n}_{k}\) instead of \(\mathbb{P}(k^{n+1})\). We call a _projective linear configuration_ \(L\subset\mathbb{P}(V)\) any finite union \(L=\bigcup_{i\in I}L_{i}\) of linear spaces \(L_{i}\subset\mathbb{P}(V)\), all of which have equal dimension. Correspondingly, there are linear subspaces \(W_{i}\subset V\) such that \(L_{i}=\mathbb{P}(W_{i})\) for all \(i\in I\). For any subset \(J\subset I\), we shall write \(L_{J}=\bigcap_{j\in J}L_{j}\) and \(W_{J}=\bigcap_{j\in J}W_{j}\).

### Polyhedral linear configurations

Let us write \(V_{\mathbb{R}}=V\otimes_{k}\mathbb{R}\).

**Definition 4.1**.: A _real polyhedral cone_ defined over \(V\) is the cone of non-negative real combinations of a finite set of vectors \(v_{1},\dots,v_{m}\in V\), where \(m\geq 1\): \[\widehat{\sigma}=\mathbb{R}_{\geq 0}\langle v_{1},\dots,v_{m}\rangle\quad \subset\quad V_{\mathbb{R}}. \tag{4.1}\] Its cone point is the origin. A polyhedral cone is called _strongly convex_ if it does not contain any real line \(\mathbb{R}w\), for a non-zero vector \(w\in V_{\mathbb{R}}\). A _(projective) polyhedron in \(\mathbb{P}(V)\)_ is a pair \((\sigma,V)\), where \(\sigma\subset\mathbb{P}(V_{\mathbb{R}})\) is the link of the cone point of a strongly convex polyhedral cone (4.1) defined over \(V\): \[\sigma=\left(\,\widehat{\sigma}\setminus\{0\}\,\right)/\mathbb{R}_{>0}^{\times }\.\] Given a polyhedron \((\sigma,V)\), we write \(V_{\sigma}\subset V\) for the \(k\)-linear span of its defining vectors \(v_{1},\dots,v_{m}\) (4.1). The space \(V_{\sigma}\) only depends on \(\sigma\), and indeed, the associated projective space \(\mathbb{P}(V_{\sigma})\) is the Zariski-closure of the \(k\)-rational points \(\sigma\cap\mathbb{P}(V)(k)\). In particular, \((\sigma,V_{\sigma})\) is a polyhedron in \(\mathbb{P}(V_{\sigma})\) and has maximal dimension. In general, the vector space \(V_{\sigma}\) may be strictly contained in \(V\). We may allow the case when all the \(v_{i}\) are zero and \(\sigma\) is empty. By a well-known theorem of Minkowski and Weyl, a polyhedron may equivalently be described by its facets. There is a unique, finite, minimal set of hyperplanes \((H_{j})_{j\in J}\subset V_{\sigma}\subset V\), where \(H_{j}\) is defined by the vanishing of a non-zero linear form \(f_{j}\in V_{\sigma}^{\vee}\) defined over \(k\), such that \[\widehat{\sigma}\cap(V_{\sigma}\otimes_{k}\mathbb{R})=\{x\in V_{\sigma}\otimes_{k} \mathbb{R}:f_{j}(x)\geq 0\text{ for all }j\in J\}\.\] Note that \(\widehat{\sigma}\cap(V_{\sigma}\otimes_{k}\mathbb{R})\) is canonically identified with \(\widehat{\sigma}\) via the inclusion \(V_{\sigma}\hookrightarrow V\). A _facet of \(\sigma\)_ is a non-empty projective polyhedron of the form \((\sigma\cap H_{j},V)\), and has dimension one less than \(\sigma\). A _face of \(\sigma\)_ is any non-empty intersection of facets \((\sigma\cap H_{j_{1}}\cap\dots\cap H_{j_{r}},V)\), for \(r\geq 0\), and a _vertex of \(\sigma\)_ is a face of dimension zero. Every vertex of \(\sigma\) thus defines a \(k\)-rational point in \(\mathbb{P}(V)\) and, one may show, is the image \([v_{i}]\in\mathbb{P}(V)(k)\) of some vector \(v_{i}\), for \(1\leq i\leq m\), where the \(v_{i}\) are as in (4.1). Not all of the vectors \(v_{i}\) are necessarily vertices; some may be redundant in the definition of \(\widehat{\sigma}\).
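Strong convexity is straightforward to test in examples: a cone with non-zero generators contains a real line if and only if \(0\) is a non-trivial non-negative combination of the generators, which is a linear programming feasibility problem. The sketch below (our own illustration; the helper name is hypothetical) performs the test with scipy:

```python
import numpy as np
from scipy.optimize import linprog

def is_strongly_convex(gens):
    """Test whether the cone R_{>=0}<v_1, ..., v_m> is strongly convex.
    Assuming all generators are non-zero, the cone contains a real line
    iff 0 is a non-trivial non-negative combination of the v_i, i.e. iff
    the LP  {lambda >= 0, sum(lambda) = 1, sum(lambda_i v_i) = 0}  is
    feasible."""
    V = np.asarray(gens, dtype=float).T            # columns are generators
    m = V.shape[1]
    A_eq = np.vstack([V, np.ones((1, m))])
    b_eq = np.append(np.zeros(V.shape[0]), 1.0)
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return not res.success                         # infeasible <=> strongly convex

print(is_strongly_convex([(1, 0), (0, 1), (1, 1)]))   # True
print(is_strongly_convex([(1, 0), (-1, 0), (0, 1)]))  # False: contains the x-axis
```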
**Definition 4.2**.: A _polyhedral linear configuration_ over \(k\) is a triple \((\mathbb{P}(V),L_{\sigma},\sigma)\) where \((\sigma,V)\) is a polyhedron, and \(L_{\sigma}\) is the linear configuration in \(\mathbb{P}(V)\) whose components \(L_{i}=\mathbb{P}(V_{\sigma_{i}})\) are the linear spans of the facets \(\sigma_{i}\) of \(\sigma\). In particular, each component \(L_{i}\) of \(L_{\sigma}\) satisfies \(\dim L_{i}=\dim\sigma-1\), and the set of real points \(L_{\sigma}(\mathbb{R})\) is nothing other than the Zariski-closure of the set of points of the boundary \(\partial\sigma\) in \(\mathbb{P}(V)(\mathbb{R})\). In this manner, a polyhedron \((\sigma,V)\) uniquely determines a polyhedral linear configuration \((\mathbb{P}(V),L_{\sigma},\sigma)\), and vice-versa.

**Definition 4.3**.: A _map of polyhedral linear configurations_, which we denote by \[\phi:(\mathbb{P}(V),L_{\sigma},\sigma)\longrightarrow(\mathbb{P}(V^{\prime}),L_{\sigma^{\prime}},\sigma^{\prime})\, \tag{4.2}\] is given by an injective linear map \(\phi:V\hookrightarrow V^{\prime}\) such that the induced map of projective spaces, also denoted by \(\phi:\mathbb{P}(V)\rightarrow\mathbb{P}(V^{\prime})\), satisfies both \[\phi(L)\subset L^{\prime}\quad\text{ and }\quad\phi(\sigma)\subset\sigma^{ \prime}.\] In particular, every face of \(\sigma\) maps to a face of \(\sigma^{\prime}\).

_Remark 4.4_.: The above definitions are insufficient to express the subdivision of polyhedra into smaller polyhedra. For this one must consider a more general notion where \(L\) contains further linear subspaces in addition to the Zariski closures of the facets of \(\sigma\).

**Example 4.5**.: The standard simplex in projective space \(\mathbb{P}_{\mathbb{Q}}^{n}\) with homogeneous coordinates \((x_{0}:\ldots:x_{n})\) is the polyhedral linear configuration \((\mathbb{P}_{\mathbb{Q}}^{n},L,\sigma)\), where \(L=V(x_{0}\ldots x_{n})\), and \(\sigma\subset\mathbb{P}_{\mathbb{Q}}^{n}(\mathbb{R})\) is defined by the region \(x_{i}\geq 0\).

### Faces and their normals

Let \((\mathbb{P}(V),L_{\sigma},\sigma)\) be a polyhedral linear configuration, and consider any face \(\sigma_{F}\) of \(\sigma\). The polyhedron \((\sigma_{F},V_{\sigma_{F}})\) defines a polyhedral linear configuration \((\mathbb{P}(V_{\sigma_{F}}),L_{\sigma_{F}},\sigma_{F})\) and a 'face' map \[(\mathbb{P}(V_{\sigma_{F}}),L_{\sigma_{F}},\sigma_{F})\longrightarrow( \mathbb{P}(V),L,\sigma)\, \tag{4.3}\] which is the map (4.2) induced by the inclusion of \(V_{\sigma_{F}}\) in \(V\), and corresponds to the inclusion of \(\sigma_{F}\) in \(\sigma\). Note that \((\sigma_{F},V_{\sigma_{F}})\) has maximal dimension, i.e., \(\dim(\sigma_{F})=\dim(V_{\sigma_{F}})-1\), but the same is not necessarily true of \((\sigma,V)\). For the trivial face, when \(\sigma_{F}\) is equal to \(\sigma\) itself, (4.3) gives a linear map \((\mathbb{P}(V_{\sigma}),L_{\sigma},\sigma)\rightarrow(\mathbb{P}(V),L,\sigma)\).

**Definition 4.6**.: Let \(W\subset V\) be a vector subspace such that \(\sigma_{W}=\sigma\cap\mathbb{P}(W)\) is a face of \(\sigma\). Consider the polyhedral cone: \[\widehat{\sigma_{/W}}\subset(V/W)_{\mathbb{R}}\] which is the image of \(\widehat{\sigma}\) (4.1) under the natural map \(V_{\mathbb{R}}\rightarrow(V/W)_{\mathbb{R}}\). It is defined over \(V/W\) since it is spanned by the images of a set of defining vectors \(v_{i}\) (4.1) under the natural map \(V\to V/W\). Denote by \(\sigma_{/W}\) the link of its cone point.
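Concretely, the quotient cone can be computed by choosing a splitting of \(V\to V/W\); the sketch below (our own numerical illustration, identifying \(V/W\) with the orthogonal complement \(W^{\perp}\)) projects the generators and discards those mapping to zero:

```python
import numpy as np

def quotient_cone_generators(gens, W_basis):
    """Images in V/W of the generators of a cone (definition 4.6).  We
    identify V/W with the orthogonal complement of W (a choice of
    splitting) and apply the orthogonal projection; generators mapping
    to zero are discarded."""
    W = np.asarray(W_basis, dtype=float).T
    proj = np.eye(W.shape[0]) - W @ np.linalg.pinv(W)   # projector onto W-perp
    images = [proj @ np.asarray(v, dtype=float) for v in gens]
    return [v for v in images if np.linalg.norm(v) > 1e-12]

# sigma-hat = the first octant in R^3 and W = <e1>, so that sigma_W is
# the vertex [e1]; the quotient cone is spanned by the images of e2, e3.
print(quotient_cone_generators([(1, 0, 0), (0, 1, 0), (0, 0, 1)], [(1, 0, 0)]))
```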
It is important to note that \(\sigma_{W}\) is not necessarily assumed to be Zariski-dense in \(\mathbb{P}(W)\), i.e., \(W\) may strictly contain the space \(V_{\sigma_{W}}\).

**Lemma 4.7**.: _The pair \((\sigma_{/W},V/W)\) is a polyhedron._

Proof.: It suffices to show that the cone \(\widehat{\sigma_{/W}}\) is strongly convex. We do this by showing that \(\sigma_{/W}\) is contained in a standard simplex (example 4.5). Consider any choice of irreducible components \(L_{i}\) of \(L_{\sigma}\) such that the face \(\sigma_{W}\) is given by the intersection \(\sigma_{W}=\sigma\cap L_{1}\cap\ldots\cap L_{m}\). We may assume that \(L_{1},\ldots,L_{m}\) are normal crossing. Since \(\sigma_{W}\) is Zariski-dense in \(L_{1}\cap\ldots\cap L_{m}\), it follows that \(L_{1}\cap\ldots\cap L_{m}\subset\mathbb{P}(W)\), and hence \(\dim W\geq\dim V-m\). Since the \(L_{i}\) cross normally, it follows that any subset of \(p=\dim(V/W)\leq m\) spaces \(L_{i}\), and in particular \(L_{1},\ldots,L_{p}\), defines a set of coordinate hyperplanes on \(\mathbb{P}(V/W)\). Let \(x_{i}\) be a system of coordinates on \(\mathbb{P}(V/W)\) whose zero loci are the \(L_{i}\) for \(i=1,\ldots,p\). By replacing \(x_{i}\) with \(-x_{i}\) we may assume that the \(x_{i}\) are non-negative on \(\sigma\). By construction, the link \(\sigma_{/W}\) is contained in the strictly convex region \(\{(x_{1}:\ldots:x_{p}):x_{i}\geq 0\}\).

**Definition 4.8**.: Define the _normal_ of \(\sigma\) relative to \(W\) to be \((\mathbb{P}(V/W),L_{\sigma_{/W}},\sigma_{/W})\).

A map of polyhedral linear configurations induces maps simultaneously on faces and their normals. More precisely, let \(\phi\) be as in (4.2), and suppose that \(W\subset V\) meets \(\sigma\) in a face \(\sigma_{W}=\sigma\cap W\) of \(\sigma\). Then \(W^{\prime}=\phi(W)\) also meets \(\sigma^{\prime}\) in the face \(\sigma_{W^{\prime}}=\sigma^{\prime}\cap W^{\prime}\), and we deduce a pair of maps of polyhedral linear configurations: \[\begin{split}\phi\big{|}_{W}:(\mathbb{P}(W),L_{\sigma_{W}},\sigma_{W})&\to(\mathbb{P}(W^{\prime}),L_{\sigma^{\prime}_{W^{\prime}}},\sigma^{\prime}_{W^{\prime}})\\ \phi_{/W}:(\mathbb{P}(V/W),L_{\sigma_{/W}},\sigma_{/W})&\to(\mathbb{P}(V^{\prime}/W^{\prime}),L_{\sigma^{\prime}_{/W^{\prime}}},\sigma^{\prime}_{/W^{\prime}}).\end{split} \tag{4.4}\]

_Remark 4.9_.: The projectivised normal bundle of the linear subspace \(\mathbb{P}(W)\subset\mathbb{P}(V)\) is trivial, and is canonically isomorphic to a product of projective spaces: \[\mathbb{P}(N_{\mathbb{P}(W)|\mathbb{P}(V)})=\mathbb{P}(W)\times\mathbb{P}(V/W )\.\] For any subspace \(W\subset V\) meeting \(\sigma\) in a face \(\sigma_{W}\), the product of polyhedra \[\sigma_{W}\times\sigma_{/W}\quad\subset\quad\mathbb{P}(W)(\mathbb{R})\times \mathbb{P}(V/W)(\mathbb{R})\] is contained within it. A map of polyhedral linear configurations \(\phi\) as above induces a map on the products \(\sigma_{W}\times\sigma_{/W}\to\sigma^{\prime}_{W^{\prime}}\times\sigma^{ \prime}_{/W^{\prime}}\). These products encode the infinitesimal structure of \(\sigma\) in the neighbourhood of \(\sigma_{W}\). This will be discussed in §5.4.

**Lemma 4.10**.: _Let \((\sigma,V)\) be a polyhedron, and let \(W_{1}\subsetneq W_{2}\subset V\) be such that \(\sigma\cap\mathbb{P}(W_{1})(\mathbb{R})=\sigma\cap\mathbb{P}(W_{2})(\mathbb{R})\) is a face of \(\sigma\)._
Then \(\mathbb{P}(W_{2}/W_{1})(\mathbb{R})\) does not meet \((\sigma_{/W_{1}},V/W_{1})\)._

Proof.: If \(\mathbb{P}(W_{2}/W_{1})(\mathbb{R})\) were to meet \(\sigma_{/W_{1}}\), then there would exist a \(0\neq v\in W_{2}/W_{1}\) whose image in \(\mathbb{P}(W_{2}/W_{1})(\mathbb{Q})\) is a vertex of \(\sigma_{/W_{1}}\). This would imply that \(\sigma\cap\mathbb{P}(W_{2})(\mathbb{R})\) strictly contains \(\sigma\cap\mathbb{P}(W_{1})(\mathbb{R})\), a contradiction.

### Category of polyhedral linear configurations

**Definition 4.11**.: Define a category \(\mathrm{PLC}_{k}\) whose objects are polyhedral linear configurations over \(k\), and whose morphisms are generated by:

1. (Linear embeddings) Maps of the form \(f:(\mathbb{P}(V),L,\sigma)\to(\mathbb{P}(V^{\prime}),L^{\prime},\sigma^{\prime})\), where \(f\) is a linear embedding which satisfies \(f(L)=L^{\prime}\) and \(f(\sigma)=\sigma^{\prime}\),
2. (Inclusions of faces) For any face \(\sigma_{F}\) of \(\sigma\), the face maps (4.3): \[(\mathbb{P}(V_{\sigma_{F}}),L_{\sigma_{F}},\sigma_{F})\longrightarrow(\mathbb{P}(V),L,\sigma).\]

The category \(\mathrm{PLC}_{k}\) is a sub-category of \(\mathcal{PC}_{k}\) which satisfies the assumptions of §3.1.

_Remark 4.12_.: The above categories are adapted to studying the links of cones. If one is interested in the cones _per se_, one may consider a version in which one replaces projective space \(\mathbb{P}_{k}^{n}\) with affine space \(\mathbb{A}_{k}^{n+1}\), and \(\sigma\) with \(\widehat{\sigma}\), _etc._

### Linear polyhedral complexes

Definition 2.3 leads to the following:

**Definition 4.13**.: A _linear polyhedral complex_ is a \(\mathrm{PLC}_{k}\)-complex, i.e., a functor from a finite diagram category to the category of polyhedral linear configurations.

_Remark 4.14_.: The topological realisation of a linear polyhedral complex is obtained by gluing together finitely many quotients of strictly convex polyhedra by finite groups of automorphisms, and defines a symmetric CW-complex (see [1]).

We may define subschemes of linear polyhedral complexes, and differential forms upon them, in the manner of §2.3, 2.4.

## 5. Wonderful compactifications of linear polyhedral complexes

### Blowing-up linear subspaces

Let \(\mathcal{B}\) denote a finite set of linear subspaces \(\mathbb{P}(W)\subsetneq\mathbb{P}(V)\) with the property that \(\mathcal{B}\) is closed under intersections.

**Definition 5.1**.: The (wonderful) compactification [1] of \(\mathbb{P}(V)\) along \(\mathcal{B}\) is denoted

\[\pi_{\mathcal{B}}:P^{\mathcal{B}}(V)\longrightarrow\mathbb{P}(V)\]

and is defined to be the iterated blow-up of \(\mathbb{P}(V)\) along the strict transforms of the strata \(\mathbb{P}(W_{j})\in\mathcal{B}\), in increasing order of dimension. It is shown to be well-defined and independent of the order of the blow-ups. Let \(D^{\mathcal{B}}\subset P^{\mathcal{B}}(V)\) denote the exceptional divisor.

**Proposition 5.2**.: _The iterated blow-up \(P^{\mathcal{B}}\) has the following properties._

_(i) Let \(f:U\to V\) be an injective linear map, and \(\mathcal{B}\) a set of subspaces of \(\mathbb{P}(V)\) as above, such that \(\mathbb{P}(fU)\) is not contained in any element of \(\mathcal{B}\).
If \(f^{-1}\mathcal{B}\) denotes the set of preimages \(f^{-1}\mathbb{P}(W)\) of spaces \(\mathbb{P}(W)\) in \(\mathcal{B}\), there is a canonical map \(f^{\mathcal{B}}:P^{f^{-1}\mathcal{B}}(U)\to P^{\mathcal{B}}(V)\) such that_

\[\begin{array}{ccc}P^{f^{-1}\mathcal{B}}(U)&\stackrel{{f^{\mathcal{B}}}}{{\longrightarrow}}&P^{\mathcal{B}}(V)\\ \downarrow&&\downarrow\\ \mathbb{P}(U)&\stackrel{{f}}{{\longrightarrow}}&\mathbb{P}(V)\end{array}\]

_commutes, where the vertical maps are the blow-downs \(\pi_{f^{-1}\mathcal{B}}\), \(\pi_{\mathcal{B}}\) respectively._

_(ii) Suppose that \(\mathcal{B}\) is a disjoint union \(\mathcal{B}=\mathcal{B}^{\prime}\sqcup\mathcal{I}\) where \(\mathcal{B}^{\prime}\) is closed under intersections and for all \(x\in\mathcal{B}\) and \(y\in\mathcal{I}\) one has \(x\cap y\in\mathcal{I}\). Then there is a canonical map_

\[\pi_{\mathcal{B}/\mathcal{B}^{\prime}}:P^{\mathcal{B}}(V)\longrightarrow P^{\mathcal{B}^{\prime}}(V)\]

_which blows down all exceptional divisors corresponding to \(\mathcal{I}\)._

Proof.: Part (i) follows from repeated application of the universal property of strict transforms [1, Corollary 7.15]. For (ii), applying the universal property to the morphism \(\pi_{\mathcal{I}}:P^{\mathcal{I}}(V)\to\mathbb{P}(V)\) with respect to each blow-up in \(\mathcal{B}^{\prime}\) gives rise to a canonical map \(X\to P^{\mathcal{B}^{\prime}}(V)\), where \(X\) is the iterated blow-up of \(P^{\mathcal{I}}(V)\) along \(\pi_{\mathcal{I}}^{-1}(\mathcal{B}^{\prime})\). The space \(X\) is obtained from \(\mathbb{P}(V)\) by blowing up the subspaces in \(\mathcal{B}\), starting with those in \(\mathcal{I}\). To see that it is isomorphic to the space \(P^{\mathcal{B}}(V)\), use the fact that if \(L_{1},L_{2}\in\mathcal{B}\) are two linear subspaces, then after blowing up \(L_{1}\cap L_{2}\), the strict transforms of \(L_{1}\) and \(L_{2}\) are disjoint, and blowing them up does not depend on the order in which the blow-ups are performed.

If we write \(\mathcal{I}=\bigsqcup_{i\geq 0}\mathcal{I}_{i}\) and \(\mathcal{B}^{\prime}=\bigsqcup_{i\geq 0}\mathcal{B}^{\prime}_{i}\), where a subscript \(i\) denotes the set of components of dimension \(i\), then by assumption the intersection of any element of \(\mathcal{B}^{\prime}_{i}\) with any element of \(\mathcal{I}_{k}\) lies in \(\bigcup_{j\leq i}\mathcal{I}_{j}\) for all \(k\geq i\). It follows that if \(\mathcal{I}_{0},\ldots,\mathcal{I}_{i}\) have already been blown up (in increasing order of dimension), then blowing up (strict transforms of) elements in \(\mathcal{I}_{i+1}\) commutes with blowing up elements in \(\mathcal{B}^{\prime}_{k}\) for any \(k\). Thus we may write

\[S_{1}=(\mathcal{I}_{0},\mathcal{B}^{\prime}_{0},\mathcal{I}_{1},\mathcal{B}^{\prime}_{1},\mathcal{I}_{2},\mathcal{B}^{\prime}_{2},\ldots)\qquad,\qquad S_{2}=(\mathcal{I}_{0},\mathcal{I}_{1},\mathcal{B}^{\prime}_{0},\mathcal{B}^{\prime}_{1},\mathcal{I}_{2},\mathcal{B}^{\prime}_{2},\ldots)\]
\[S_{3}=(\mathcal{I}_{0},\mathcal{I}_{1},\mathcal{B}^{\prime}_{0},\mathcal{I}_{2},\mathcal{B}^{\prime}_{1},\mathcal{B}^{\prime}_{2},\ldots)\qquad,\qquad S_{4}=(\mathcal{I}_{0},\mathcal{I}_{1},\mathcal{I}_{2},\mathcal{B}^{\prime}_{0},\mathcal{B}^{\prime}_{1},\mathcal{B}^{\prime}_{2},\ldots)\quad,\quad\ldots\]

_etc._, where each sequence \(S_{i}\), read from left to right, is a sequence of spaces to be blown up in \(\mathbb{P}(V)\), in order. By the previous argument, the space obtained by blowing up \(S_{i}\) in order is the same as that for \(S_{i-1}\).
Since \(S_{1}\) corresponds to \(P^{\mathcal{B}}(V)\), and \(S_{n}\), for sufficiently large \(n\), to \(X\), this proves that \(X\) and \(P^{\mathcal{B}}(V)\) are canonically isomorphic.

**Proposition 5.3**.: _([1]). The irreducible components \(\mathcal{E}^{\mathcal{B}}_{W}\) of \(D^{\mathcal{B}}\) are in one-to-one correspondence with subspaces \(\mathbb{P}(W)\in\mathcal{B}\) of codimension \(\geq 1\), where \(\pi_{\mathcal{B}}:\mathcal{E}^{\mathcal{B}}_{W}\to\mathbb{P}(W)\), and \(\mathcal{E}^{\mathcal{B}}_{W}\) is the Zariski closure of the inverse image with respect to \(\pi_{\mathcal{B}}\) of the generic point of \(\mathbb{P}(W)\)._

_If we define the following sets (which are closed under intersections):_

\[\mathcal{B}_{W} = \{\mathbb{P}(T)\subsetneq\mathbb{P}(W)\ \ \text{for}\ \ \mathbb{P}(T)\in\mathcal{B}\ \text{such that}\ T\subseteq W\}\]
\[\mathcal{B}_{/W} = \{\mathbb{P}(T/W)\subset\mathbb{P}(V/W)\ \ \text{for}\ \ \mathbb{P}(T)\in\mathcal{B}\ \text{such that}\ T\supseteq W\}\]

_(recall that \(\mathbb{P}(0)=\emptyset\)) then there is a commutative diagram_

\[\begin{array}{ccc}P^{\mathcal{B}_{W}}\times P^{\mathcal{B}_{/W}}&\stackrel{{\sim}}{{\longrightarrow}}&\mathcal{E}^{\mathcal{B}}_{W}\\ \downarrow_{\pi_{\mathcal{B}_{W}}\times\pi_{\mathcal{B}_{/W}}}&&\downarrow_{\pi_{\mathcal{B}}}\\ \mathbb{P}(W)\times\mathbb{P}(V/W)&\longrightarrow&\mathbb{P}(W)\end{array} \tag{5.2}\]

_where the horizontal map along the top is a canonical isomorphism, and the one along the bottom is projection onto the first factor \(\mathbb{P}(W)\). The divisor \(D^{\mathcal{B}}\) is simple normal crossing, and two components \(\mathcal{E}^{\mathcal{B}}_{W}\) and \(\mathcal{E}^{\mathcal{B}}_{W^{\prime}}\) have non-empty intersection if and only if one of the two spaces \(\mathbb{P}(W),\mathbb{P}(W^{\prime})\) is contained in the other._

_Remark 5.4_.: The following observation will be very useful. By (5.2), \(\mathcal{E}^{\mathcal{B}}_{W}\) is canonically isomorphic to the iterated blow-up of \(\mathbb{P}(W)\times\mathbb{P}(V/W)\) relative to \(\pi_{\mathcal{B}_{W}}\) and \(\pi_{\mathcal{B}_{/W}}\) on each factor. The product \(\mathbb{P}(W)\times\mathbb{P}(V/W)\) is isomorphic to the exceptional divisor of a single blow-up of \(\mathbb{P}(V)\) along \(\mathbb{P}(W)\). Thus the exceptional divisor \(\mathcal{E}^{\mathcal{B}}_{W}\) may be computed by first blowing up the single linear space \(\mathbb{P}(W)\) inside \(\mathbb{P}(V)\), and then computing the iterated blow-ups relative to \(\mathcal{B}_{W}\) and \(\mathcal{B}_{/W}\) on the factors \(\mathbb{P}(W)\) and \(\mathbb{P}(V/W)\) of its exceptional divisor.

By repeated application of proposition 5.3, one shows that intersections of irreducible components of \(D^{\mathcal{B}}\) are in one-to-one correspondence with sequences of subspaces

\[\mathbb{P}(W_{k})\subset\mathbb{P}(W_{k-1})\subset\ldots\subset\mathbb{P}(W_{1})\subset\mathbb{P}(V) \tag{5.3}\]

where \(\mathbb{P}(W_{i})\in\mathcal{B}\) for \(i=1,\ldots,k\) and all inclusions are strict. The corresponding subscheme of \(D^{\mathcal{B}}\) is isomorphic to \(\prod_{i=1}^{k+1}P^{\mathcal{B}_{i}}(W_{i-1}/W_{i})\), where \(W_{k+1}=0\), \(W_{0}=V\), and \(\mathcal{B}_{i}=\{\mathbb{P}(T/W_{i}):\mathbb{P}(T)\in\mathcal{B}\ \text{and}\ W_{i}\subseteq T\subsetneq W_{i-1}\}\). The previous proposition can be proved using explicit local coordinates, which we describe presently.
### Local coordinates for linear blow-ups

Given a sequence (5.3), we may choose projective coordinates \(\alpha_{1},\ldots,\alpha_{n}\) on \(\mathbb{P}(V)\) such that

\[\mathbb{P}(W_{m})=V(\alpha_{1},\alpha_{2},\ldots,\alpha_{i_{m}})\quad\text{for}\ \ 1\leq m\leq k\,\]

for some increasing sequence \(0<i_{1}<\ldots<i_{k}<n\). A choice of local affine coordinates on \(P^{\mathcal{B}}\) lying over the open chart \(\alpha_{n}=1\) is given by \(\beta_{1},\ldots,\beta_{n-1}\), where:

\[\beta_{1}=\frac{\alpha_{1}}{\alpha_{i_{1}}},\ \ldots,\ \beta_{i_{1}-1}=\frac{\alpha_{i_{1}-1}}{\alpha_{i_{1}}},\quad\beta_{i_{1}}=\frac{\alpha_{i_{1}}}{\alpha_{i_{2}}},\ \ldots,\ \beta_{i_{2}-1}=\frac{\alpha_{i_{2}-1}}{\alpha_{i_{2}}},\quad\ldots\]
\[\beta_{i_{k-1}}=\frac{\alpha_{i_{k-1}}}{\alpha_{i_{k}}},\ \ldots,\ \beta_{i_{k}-1}=\frac{\alpha_{i_{k}-1}}{\alpha_{i_{k}}},\quad\beta_{i_{k}}=\alpha_{i_{k}},\ \ldots,\ \beta_{n-1}=\alpha_{n-1}\]

The equation of the exceptional divisor \(\mathcal{E}_{W_{m}}\) which lies over \(\mathbb{P}(W_{m})\) is given by \(\beta_{i_{m}}=0\). The isomorphism \(\mathcal{E}_{W}\cong P^{\mathcal{B}_{W}}\times P^{\mathcal{B}_{/W}}\), in the case \(W=W_{m}\), is represented by the partition of the coordinates \(\beta_{1},\ldots,\beta_{n-1}\) (omitting \(\beta_{i_{m}}\)) into the two sets \(\beta_{i_{m}+1},\ldots,\beta_{n-1}\) and \(\beta_{1},\ldots,\beta_{i_{m}-1}\), corresponding respectively to the iterated blow-ups of the two nested sequences:

\[\mathbb{P}(W_{k})\subset\ldots\subset\mathbb{P}(W_{m+2})\subset\mathbb{P}(W_{m+1})\subset\mathbb{P}(W_{m})\]
\[\mathbb{P}(W_{m-1}/W_{m})\subset\ldots\subset\mathbb{P}(W_{1}/W_{m})\subset\mathbb{P}(V/W_{m})\]

lying over the affine charts with coordinates \((\alpha_{i_{m}+1}:\ldots:\alpha_{n-1}:1)\) for \(\mathbb{P}(W_{m})\), and \((\alpha_{1}:\ldots:\alpha_{i_{m}-1}:1)\) for \(\mathbb{P}(V/W_{m})\). Proposition 5.3 may be proven using these coordinates.

**Lemma 5.5**.: _Let \(H\subset\mathbb{P}(V)\) be a hyperplane and let \(\mathbb{P}(W)\in\mathcal{B}\). Denote by \(\widetilde{H}\subset P^{\mathcal{B}}\) the strict transform of \(H\) under \(\pi_{\mathcal{B}}\). Then_

\[\widetilde{H}\cap\mathcal{E}_{W}=\begin{cases}P^{\mathcal{B}_{W}}\times\widetilde{H}_{/W}&\text{if}\quad\mathbb{P}(W)\subseteq H\\ \widetilde{H}_{W}\times P^{\mathcal{B}_{/W}}&\text{if}\quad\mathbb{P}(W)\nsubseteq H\,\end{cases}\]

_where \(H_{W}=H\cap\mathbb{P}(W)\), and, when \(H\) corresponds to a subspace \(W\subset H_{0}\subset V\), we write \(H_{/W}=\mathbb{P}(H_{0}/W)\). Their versions with tildes denote their strict transforms under the iterated blow-ups \(\pi_{\mathcal{B}_{W}}\) and \(\pi_{\mathcal{B}_{/W}}\), respectively. Note that if \(H\subseteq\mathbb{P}(W)\) then \(\widetilde{H}=\emptyset\)._

Proof.: It follows from remark 5.4 that it is enough to compute the case when \(\mathcal{B}=\{\mathbb{P}(W)\}\) reduces to a single blow-up. The strict transform of \(H\) is either

\[H_{W}\times\mathbb{P}(V/W)\quad\text{ or }\quad\mathbb{P}(W)\times H_{/W}\quad\text{ inside }\quad\mathbb{P}(W)\times\mathbb{P}(V/W)\]

depending on which of the two cases is satisfied, since \(H\) meets the normal bundle of \(\mathbb{P}(W)\) in the product of a hyperplane and a projective space. Alternatively, one can also verify the lemma by direct computation. Suppose that \(H\) has the equation

\[\lambda_{1}\alpha_{1}+\dots+\lambda_{n}\alpha_{n}=0\, \tag{5.4}\]

and suppose that \(W=W_{m}\). One has \(\mathbb{P}(W_{m})\subseteq H\) if and only if \(\lambda_{i_{m}+1}=\dots=\lambda_{n}=0\).
After performing the change of variables on the affine chart described above, it becomes

\[\beta_{i_{m}}\dots\beta_{i_{k}}\left(\lambda_{1}\beta_{1}\beta_{i_{1}}\dots\beta_{i_{m-1}}+\lambda_{2}\beta_{2}\beta_{i_{1}}\dots\beta_{i_{m-1}}+\dots+\lambda_{i_{m}-1}\beta_{i_{m}-1}+\lambda_{i_{m}}\right)\\ +\left(\lambda_{i_{m}+1}\,\beta_{i_{m}+1}\beta_{i_{m+1}}\dots\beta_{i_{k}}+\dots+\lambda_{n-1}\,\beta_{n-1}+\lambda_{n}\right).\]

In the case when \(\mathbb{P}(W_{m})\nsubseteq H\), setting \(\beta_{i_{m}}=0\) annihilates the first line of the previous expression, leaving only the second term in parentheses. It is precisely the equation of the strict transform of \(H_{W}\) in \(P^{\mathcal{B}_{W}}\). In the case when \(\mathbb{P}(W_{m})\subset H\), the second term in parentheses is identically zero, and therefore the strict transform of \(H\) has the equation:

\[\lambda_{1}\beta_{1}\beta_{i_{1}}\dots\beta_{i_{m-1}}+\lambda_{2}\beta_{2}\beta_{i_{1}}\dots\beta_{i_{m-1}}+\dots+\lambda_{i_{m}-1}\beta_{i_{m}-1}+\lambda_{i_{m}}\]

which is the equation of the strict transform of \(H_{/W}\) in \(P^{\mathcal{B}_{/W}}\).

### Blow-ups of polyhedral linear configurations

Let \((\mathbb{P}(V),L,\sigma)\) be a polyhedral linear configuration, and let \(\mathcal{B}\) be a finite set of linear subspaces \(\mathbb{P}(W)\subsetneq\mathbb{P}(V)\) such that:

1. (B1) \(\mathcal{B}\) is stable under intersections, and
2. (B2) for every \(\mathbb{P}(W)\in\mathcal{B}\), the set \(\sigma_{W}=\sigma\cap\mathbb{P}(W)(\mathbb{R})\) is a face of \(\sigma\), or is empty.

_Remark 5.6_.: We call a subspace \(\mathbb{P}(W)\) as in \((B2)\) _extraneous_ if \(\sigma_{W}=\emptyset\). Let \(\mathcal{B}^{\prime}\subset\mathcal{B}\) denote the subset of spaces generated by intersections of \(\mathbb{P}(W)\in\mathcal{B}\) such that \(\sigma_{W}\neq\emptyset\). Let \(\mathcal{I}^{e}\subset\mathcal{B}\) be its complement. Then \(\mathcal{B}=\mathcal{B}^{\prime}\sqcup\mathcal{I}^{e}\) satisfies the hypotheses of proposition 5.2 (ii).

Consider the iterated blow-up \(\pi_{\mathcal{B}}:P^{\mathcal{B}}\to\mathbb{P}(V)\) defined above and let

\[\sigma^{\mathcal{B}}=\overline{\pi_{\mathcal{B}}^{-1}(\overset{\circ}{\sigma})}\ \subset\ P^{\mathcal{B}}(\mathbb{R})\]

denote the closure, in the analytic topology, of the inverse image of the interior of \(\sigma\). If \(\sigma\) is contained in a \(\mathbb{P}(W)\in\mathcal{B}\), then \(\sigma^{\mathcal{B}}\) is the empty set. In the case when \(\sigma^{\mathcal{B}}\) is non-empty, we define a _face_ of \(\sigma^{\mathcal{B}}\) to be a non-empty intersection

\[\sigma^{\mathcal{B}}\cap D(\mathbb{R})\]

where \(D\) is any intersection of irreducible components of the total transform \(\pi_{\mathcal{B}}^{-1}(L)\). A _facet_ is a face of dimension \(\dim\sigma-1\). Finally, define \(L^{\mathcal{B}}\subset P^{\mathcal{B}}\) to be the union of the Zariski closures of the facets of \(\sigma^{\mathcal{B}}\). It depends on \(\sigma\), but we usually write \(L^{\mathcal{B}}\) instead of \(L^{\mathcal{B}}_{\sigma}\).

**Definition 5.7**.: The blow-up of \((\mathbb{P}(V),L,\sigma)\) along \(\mathcal{B}\) is the triple \((P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\).

By definition, the facets of \(\sigma^{\mathcal{B}}\) are Zariski-dense in the irreducible components \(L\) of \(L^{\mathcal{B}}\).
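The local computation in the proof of lemma 5.5 is simple enough to check by machine. The following sketch (ours, purely illustrative and not part of the original text) runs the substitution in the smallest case: a single blow-up of the point \(\mathbb{P}(W)=V(\alpha_{1},\alpha_{2})\) in \(\mathbb{P}^{2}\), on the chart \(\alpha_{3}=1\), with blow-up coordinates \(\alpha_{1}=\beta_{1}\beta_{2}\), \(\alpha_{2}=\beta_{2}\).

```python
import sympy as sp

# Illustrative check of Lemma 5.5 (not from the paper): one blow-up of
# the point P(W) = V(a1, a2) in P^2, on the affine chart a3 = 1.
a1, a2 = sp.symbols('a1 a2')
l1, l2, l3 = sp.symbols('l1 l2 l3')
b1, b2 = sp.symbols('b1 b2')        # blow-up chart: a1 = b1*b2, a2 = b2

H = l1*a1 + l2*a2 + l3              # affine form of a hyperplane (5.4)
total = sp.expand(H.subs({a1: b1*b2, a2: b2}))
print(total)                        # b1*b2*l1 + b2*l2 + l3

# If P(W) is not contained in H (l3 != 0), the total transform is already
# the strict transform, and setting b2 = 0 (the divisor E_W) leaves l3.
# If P(W) is contained in H (l3 = 0), the exceptional factor b2 splits
# off, and the strict transform is l1*b1 + l2: the equation of H_{/W}.
print(sp.factor(total.subs(l3, 0)))  # b2*(b1*l1 + l2)
```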
From the description of the local coordinates on \(P^{\mathcal{B}}\) one sees that \(\sigma^{\mathcal{B}}\) is a topological polyhedron inside \(P^{\mathcal{B}}(\mathbb{R})\), whose boundary satisfies \(\partial\sigma^{\mathcal{B}}=\sigma^{\mathcal{B}}\cap L^{\mathcal{B}}(\mathbb{R})\). Note that \(L^{\mathcal{B}}\) contains the strict transform of \(L\), and is contained within its total transform. It will follow from the description of faces in §5.4 that the triple \((P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\) defines an object of \(\mathcal{PC}_{k}\).

As in §2, a map of triples \(\phi:(P^{\mathcal{B}}_{1},L^{\mathcal{B}}_{1},\sigma^{\mathcal{B}}_{1})\to(P^{\mathcal{B}}_{2},L^{\mathcal{B}}_{2},\sigma^{\mathcal{B}}_{2})\) is a morphism of schemes \(\phi:P^{\mathcal{B}}_{1}\to P^{\mathcal{B}}_{2}\) such that \(\phi(L^{\mathcal{B}}_{1})\subset L^{\mathcal{B}}_{2}\), and whose restriction to real points induces \(\phi(\sigma^{\mathcal{B}}_{1})\subset\sigma^{\mathcal{B}}_{2}\). In practice, we shall only consider maps of very specific types.

**Example 5.8**.: The blow-down map

\[\pi_{\mathcal{B}}:(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\to(\mathbb{P}(V),L,\sigma) \tag{5.5}\]

is a morphism in \(\mathcal{PC}_{k}\). The map \(\pi_{\mathcal{B}}:\sigma^{\mathcal{B}}\to\sigma\) is not in general a homeomorphism.

**Example 5.9**.: (Extraneous blow-downs). Suppose that \(\mathcal{B}=\mathcal{B}^{\prime}\sqcup\mathcal{I}^{e}\) is as in remark 5.6. Proposition 5.2 (ii) defines a morphism

\[\pi_{\mathcal{B}/\mathcal{B}^{\prime}}:(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\to(P^{\mathcal{B}^{\prime}},L^{\mathcal{B}^{\prime}},\sigma^{\mathcal{B}^{\prime}}) \tag{5.6}\]

such that \(\pi_{\mathcal{B}/\mathcal{B}^{\prime}}:\sigma^{\mathcal{B}}\cong\sigma^{\mathcal{B}^{\prime}}\) is a homeomorphism, since \(\pi_{\mathcal{B}/\mathcal{B}^{\prime}}\) collapses exceptional divisors which do not meet \(\sigma^{\mathcal{B}^{\prime}}\). More generally, define an _extraneous blow-up relative to_ \((\sigma,V)\) to be any composition of blow-ups \(\pi_{\mathcal{B}/\mathcal{B}^{\prime}}:P^{\mathcal{B}}(V)\to P^{\mathcal{B}^{\prime}}(V)\) of the form of proposition 5.2 (ii), where \(\mathcal{I}\subset\mathcal{B}\) is a subset of elements whose strict transforms in \(P^{\mathcal{B}^{\prime}}\) do not meet \(\sigma^{\mathcal{B}^{\prime}}\).

**Example 5.10**.: (Linear embeddings). Let \(h:V_{1}\to V_{2}\) be an injective linear map and consider the corresponding morphism of polyhedral linear configurations:

\[h:(\mathbb{P}(V_{1}),L_{1},\sigma_{1})\to(\mathbb{P}(V_{2}),L_{2},\sigma_{2})\]

as in definition 4.11 (1), where \(h(L_{1})=L_{2}\) and \(h(\sigma_{1})=\sigma_{2}\). If \(\mathcal{B}_{2}\) is a set of linear subspaces of \(\mathbb{P}(V_{2})\) satisfying \((B1)\) and \((B2)\) relative to \(\sigma_{2}\), then let

\[\mathcal{B}_{1}=h^{-1}\mathcal{B}_{2}=\left\{\mathbb{P}(h^{-1}W\cap V_{1})\subset\mathbb{P}(V_{1})\,\ \text{for all}\ \ \mathbb{P}(W)\in\mathcal{B}_{2}\right\}.\]

The set \(\mathcal{B}_{1}\) satisfies \((B1),(B2)\) relative to \(\sigma_{1}\), and proposition 5.2 (i) gives a morphism

\[(P^{\mathcal{B}_{1}}(V_{1}),L_{1}^{\mathcal{B}_{1}},\sigma_{1}^{\mathcal{B}_{1}})\longrightarrow(P^{\mathcal{B}_{2}}(V_{2}),L_{2}^{\mathcal{B}_{2}},\sigma_{2}^{\mathcal{B}_{2}}). \tag{5.7}\]
There is a commutative diagram in which the vertical maps are \(\pi_{\mathcal{B}_{1}}\) and \(\pi_{\mathcal{B}_{2}}\):

\[\begin{array}{ccc}(P^{\mathcal{B}_{1}}(V_{1}),L_{1}^{\mathcal{B}_{1}},\sigma_{1}^{\mathcal{B}_{1}})&\longrightarrow&(P^{\mathcal{B}_{2}}(V_{2}),L_{2}^{\mathcal{B}_{2}},\sigma_{2}^{\mathcal{B}_{2}})\\ \downarrow&&\downarrow\\ (\mathbb{P}(V_{1}),L_{1},\sigma_{1})&\longrightarrow&(\mathbb{P}(V_{2}),L_{2},\sigma_{2})\end{array} \tag{5.8}\]

### Faces and their product structure

**Proposition 5.11**.: _Consider a polyhedral linear configuration \((\mathbb{P}(V),L,\sigma)\) and \(\mathcal{B}\) as above. Let \(\mathbb{P}(W)\in\mathcal{B}\) which meets \(\sigma\), and let \(\mathcal{E}_{W}^{\mathcal{B}}\subset P^{\mathcal{B}}\) denote the exceptional divisor lying above \(\mathbb{P}(W)\). Then, via the isomorphism \(\mathcal{E}_{W}^{\mathcal{B}}\cong P^{\mathcal{B}_{W}}\times P^{\mathcal{B}_{/W}}\) (see (5.2)), one has_

\[\sigma^{\mathcal{B}}\cap\mathcal{E}_{W}^{\mathcal{B}}(\mathbb{R})=\sigma_{W}^{\mathcal{B}_{W}}\times\sigma_{/W}^{\mathcal{B}_{/W}} \tag{5.9}\]

_where \(\sigma_{W}=\sigma\cap\mathbb{P}(W)(\mathbb{R})\) is the face of \(\sigma\) cut out by \(\mathbb{P}(W)\), which exists by assumption \((B2)\), and \(\sigma_{/W}\subset\mathbb{P}(V/W)(\mathbb{R})\) is its normal. If \(\sigma_{W}\) is contained in some \(\mathbb{P}(T)\in\mathcal{B}\), where \(T\subsetneq W\) (i.e., \(\sigma_{W}=\sigma\cap\mathbb{P}(T)(\mathbb{R})\)), then \(\sigma_{W}^{\mathcal{B}_{W}}\) is the empty set. If not, we have:_

\[L^{\mathcal{B}}\cap\mathcal{E}_{W}^{\mathcal{B}}=\left(L_{\sigma_{W}}^{\mathcal{B}_{W}}\times P^{\mathcal{B}_{/W}}\right)\,\bigcup\,\left(P^{\mathcal{B}_{W}}\times L_{\sigma_{/W}}^{\mathcal{B}_{/W}}\right)\]

Proof.: By remark 5.4, it is enough to consider first the case of a single blow-up of \(\mathbb{P}(V)\) along \(\mathbb{P}(W)\). The interior of the polyhedron \(\sigma\) is the intersection of a finite number of regions \(f_{i}>0\), where \(f_{i}\) is a linear form such as (5.4) whose zero locus \(H_{i}=V(f_{i})\) is a bounding hyperplane of \(\sigma\). The inverse image of its interior, intersected with the exceptional divisor \(\mathcal{E}\), is therefore cut out by the strict transforms of the \(H_{i}\) in \(\mathcal{E}\cong\mathbb{P}(W)\times\mathbb{P}(V/W)\). By lemma 5.5, the strict transforms are of the form \(H\times\mathbb{P}(V/W)\) or \(\mathbb{P}(W)\times H\). The former define bounding hyperplanes for \(\sigma_{W}\subset\mathbb{P}(W)(\mathbb{R})\); the latter define bounding hyperplanes for \(\sigma_{/W}\), by definition of the normal polyhedron (definition 4.8). Thus, after a single blow-up of \(\mathbb{P}(W)\) in \(\mathbb{P}(V)\), the intersection of the inverse image of the interior of \(\sigma\) with \(\mathcal{E}_{W}(\mathbb{R})\) is:

\[\stackrel{{\circ}}{{\sigma}}_{W}\times\stackrel{{\circ}}{{\sigma}}_{/W}\ \subset\ \mathbb{P}(W)(\mathbb{R})\times\mathbb{P}(V/W)(\mathbb{R})\,\]

where a superscript \(\circ\) denotes the interior. The statement (5.9) for the iterated blow-up follows from the definition of \(\sigma^{\mathcal{B}}\) applied to \(\sigma_{W}\) relative to \(\pi_{\mathcal{B}_{W}}\), and to \(\sigma_{/W}\) relative to \(\pi_{\mathcal{B}_{/W}}\). The final statement concerning \(L^{\mathcal{B}}\cap\mathcal{E}^{\mathcal{B}}_{W}\) follows from the definition of \(L^{\mathcal{B}}\) as the union of the Zariski closures of the facets of \(\sigma^{\mathcal{B}}\).
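The case of a single blow-up in proposition 5.11 can also be seen numerically. In the hypothetical toy example below (ours, not part of the source text), \(\sigma\) is the standard triangle and \(\mathbb{P}(W)\) a vertex: rays entering the vertex with distinct slopes land at distinct points of the exceptional segment, which is exactly the product \(\sigma_{W}\times\sigma_{/W}\) (a point times a segment).

```python
# Illustrative numerical sketch of (5.9) for a single blow-up (not from
# the paper): sigma is the triangle {a1, a2 >= 0} in the chart a3 = 1 of
# P^2, and P(W) = V(a1, a2) is a vertex, so sigma_W is a point and
# sigma_{/W} a segment.
for s in [0.0, 0.5, 1.0, 2.0]:          # slope of a ray into the vertex
    for t in [1e-1, 1e-3, 1e-6]:        # approach the vertex: a2 = t -> 0
        a1, a2 = s * t, t
        b1, b2 = a1 / a2, a2            # blow-up chart of Section 5.2
        assert abs(b1 - s) < 1e-12      # b1 records the direction only
    print(f"ray of slope {s} limits to the point b1 = {s} on E_W")
# The closure sigma^B of the lifted interior therefore meets E_W(R) in
# the whole segment {b1 >= 0, b2 = 0}, i.e. in sigma_W x sigma_{/W}.
```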
The situation when \(\sigma_{W}^{\mathcal{B}_{W}}=\emptyset\) arises when there exists \(\mathbb{P}(T)\subsetneq\mathbb{P}(W)\) in \(\mathcal{B}\) such that \(\sigma_{W}=\sigma\cap\mathbb{P}(T)(\mathbb{R})=\sigma\cap\mathbb{P}(W)(\mathbb{R})\), i.e., \(W\) is not minimal amongst the set of spaces in \(\mathcal{B}\) which meet \(\sigma\) along \(\sigma_{W}\). This phenomenon is discussed in further detail in §5.5.

In the particular case when \(\mathcal{B}=\{\mathbb{P}(W)\}\) is a singleton, and hence \(P^{\mathcal{B}}\) is simply the blow-up of \(\mathbb{P}(V)\) along \(\mathbb{P}(W)\), formula (5.9) states that

\[\sigma^{\mathcal{B}}\cap\mathcal{E}_{W}(\mathbb{R})=\sigma_{W}\times\sigma_{/W}\,\]

which provides an infinitesimal interpretation of the normal polyhedron (remark 4.9).

**Notation 5.12**.: Let \((P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\) be as above, and let \(D\) be an intersection of irreducible components of \(L^{\mathcal{B}}=\bigcup L\). Denote by

\[D\cap(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})=(D,\ \bigcup_{L\not\supseteq D}(D\cap L),\ D(\mathbb{R})\cap\sigma^{\mathcal{B}}).\]

It is a face if \(D(\mathbb{R})\cap\sigma^{\mathcal{B}}\neq\emptyset\).

**Corollary 5.13**.: _A face of the blow-up of a polyhedral linear configuration is isomorphic to a product of blow-ups of polyhedral linear configurations._

_More precisely, for any intersection of irreducible components \(D\) of \(L^{\mathcal{B}}\) we have_

\[D\cap(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\cong(P^{\mathcal{B}_{1}},L_{1}^{\mathcal{B}_{1}},\sigma_{1}^{\mathcal{B}_{1}})\times\ldots\times(P^{\mathcal{B}_{n}},L_{n}^{\mathcal{B}_{n}},\sigma_{n}^{\mathcal{B}_{n}}) \tag{5.10}\]

_for suitable polyhedra \(\sigma_{1},\ldots,\sigma_{n}\) in \(\mathbb{P}(V_{1}),\ldots,\mathbb{P}(V_{n})\), and \(\mathcal{B}_{i}\) a set of linear subspaces in \(\mathbb{P}(V_{i})\) satisfying (B1) and (B2) relative to \(\sigma_{i}\), for all \(1\leqslant i\leqslant n\)._

Proof.: Let \((P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\) be the iterated blow-up of \((\mathbb{P}(V),L,\sigma)\). Assume for the time being that the face in question is given by intersection with a single irreducible component \(D\) of \(L^{\mathcal{B}}\). Suppose first of all that \(D\subset\mathcal{E}_{W}\) is contained in an exceptional divisor for some \(\mathbb{P}(W)\in\mathcal{B}\). Then by proposition 5.11,

\[\mathcal{E}_{W}\cap(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\cong(P^{\mathcal{B}_{W}},L_{\sigma_{W}}^{\mathcal{B}_{W}},\sigma_{W}^{\mathcal{B}_{W}})\times(P^{\mathcal{B}_{/W}},L_{\sigma_{/W}}^{\mathcal{B}_{/W}},\sigma_{/W}^{\mathcal{B}_{/W}})\]

and by the final part of the same proposition, \(D\subset\mathcal{E}_{W}\cong P^{\mathcal{B}_{W}}\times P^{\mathcal{B}_{/W}}\) is of the form \(D=D_{W}\times P^{\mathcal{B}_{/W}}\) or \(D=P^{\mathcal{B}_{W}}\times D_{/W}\). Now suppose that \(D\) is not contained in any exceptional divisor, and is therefore the strict transform of a linear subspace \(\mathbb{P}(U)=\pi_{\mathcal{B}}(D)\) for some \(U\subset V\), such that \(\mathbb{P}(U)\subset L\) is not contained in any \(\mathbb{P}(W)\in\mathcal{B}\). It follows from proposition 5.2 (i) that \(D\) is the iterated blow-up of \(\mathbb{P}(U)\) along the set of linear subspaces \(\mathcal{B}_{U}=\{\mathbb{P}(W\cap U):\mathbb{P}(W)\in\mathcal{B}\}\).
It follows that

\[D\cap(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\cong(P^{\mathcal{B}_{U}},L_{U}^{\mathcal{B}_{U}},\sigma_{U}^{\mathcal{B}_{U}})\]

is the iterated blow-up of the face \((\mathbb{P}(U),L_{U},\sigma_{U})\) of \((\mathbb{P}(V),L,\sigma)\). In general, when \(D\) has several irreducible components, we may proceed by induction by repeatedly taking the intersection with irreducible components of \(D\) as above.

### Minimal blow-up

It may happen that strict transforms of subspaces in \(\mathcal{B}\) become extraneous at an intermediate stage of blowing up the spaces of \(\mathcal{B}\).

**Definition 5.14**.: Let \((\mathbb{P}(V),L,\sigma)\) be as above, and let \(\mathcal{B}\) be a finite set of linear subspaces \(\mathbb{P}(W)\) of \(\mathbb{P}(V)\) such that (B1) and (B2) hold. Define \(\mathcal{B}^{\min,\sigma}\subset\mathcal{B}\) as follows. First consider the subset \(S(\sigma)\subset\mathcal{B}\) of spaces \(\mathbb{P}(W)\) such that

(i). \(\mathbb{P}(W)(\mathbb{R})\cap\sigma\neq\emptyset\),
(ii). if \(\mathbb{P}(W)(\mathbb{R})\cap\sigma=\mathbb{P}(W^{\prime})(\mathbb{R})\cap\sigma\) for some \(\mathbb{P}(W^{\prime})\in\mathcal{B}\), then \(W\subset W^{\prime}\).

The set of \(U\) such that \(\mathbb{P}(U)(\mathbb{R})\cap\sigma\) equals the face \(\mathbb{P}(W)(\mathbb{R})\cap\sigma\) is closed under intersections: (ii) asks that \(W\) be minimal for this property. Define \(\mathcal{B}^{\min,\sigma}\subset\mathcal{B}\) to be the subset generated by \(S(\sigma)\) by taking intersections. In other words, it is the set of intersections of minimal spaces which meet \(\sigma\) along a face. Note that taking intersections may violate \((i)\): in the situation where \(\sigma\) is not a simplex, there can be spaces \(\mathbb{P}(W)\) in \(\mathcal{B}^{\min,\sigma}\) which do not meet \(\sigma\). The definition of \(\mathcal{B}^{\min,\sigma}\) makes sense even if \(\mathcal{B}\) is infinite: the minimal set \(\mathcal{B}^{\min,\sigma}\) is necessarily finite.

**Proposition 5.15**.: _There is a blow-down morphism_

\[\pi:P^{\mathcal{B}}\longrightarrow P^{\mathcal{B}^{\min,\sigma}}\]

_such that \(P^{\mathcal{B}}\) is an iterated extraneous blow-up (example 5.9) of \(P^{\mathcal{B}^{\min,\sigma}}\). In particular, \(\pi:\sigma^{\mathcal{B}}\overset{\sim}{\to}\sigma^{\mathcal{B}^{\min,\sigma}}\) is a homeomorphism._

Proof.: Write \(\mathcal{B}\) as a disjoint union \(\mathcal{B}=\mathcal{B}^{\min,\sigma}\sqcup\mathcal{I}\) and let \(\mathcal{B}^{\min,\sigma}_{k}\) (resp. \(\mathcal{I}_{k}\)) denote the subset of spaces of \(\mathcal{B}^{\min,\sigma}\) (resp. \(\mathcal{I}\)) of dimension \(k\). The elements in \(\mathcal{I}_{0}\) are already extraneous: they do not meet \(\sigma\). For general \(k\geqslant 1\), let \(P^{\mathcal{B}^{\min,\sigma}_{k-1}}\) denote the iterated blow-up of \(\mathbb{P}(V)\) along \(\mathcal{B}^{\min,\sigma}_{0},\ldots,\mathcal{B}^{\min,\sigma}_{k-1}\) in increasing order of dimension. It follows from lemma 4.10 and proposition 5.11 that the strict transforms of elements of \(\mathcal{I}_{k}\) in \(P^{\mathcal{B}^{\min,\sigma}_{k-1}}\) are extraneous (they do not meet \(\sigma^{\mathcal{B}^{\min,\sigma}_{k-1}}\)). This property of being extraneous relative to \(\sigma^{\mathcal{B}^{\min,\sigma}_{k-1}}\) is preserved under further blow-ups.
Consequently, \(\mathcal{I}_{k}\) satisfies the conditions of proposition 5.2 (ii), and so the spaces in \(\mathcal{I}_{k}\) may be blown up after having first blown up all the spaces in \(\mathcal{B}^{\min,\sigma}\) (i.e., in the order of blowing up, the \(\mathcal{I}_{k}\) may be interchanged with any elements in \(\mathcal{B}^{\min,\sigma}_{j}\), for \(j\geqslant k\)). It follows by induction on \(k\) that \(P^{\mathcal{B}}\) may be formed by first blowing up all elements in \(\mathcal{B}^{\min,\sigma}\), in increasing order of dimension, followed by all elements in \(\mathcal{I}\) (also in increasing order of dimension). Thus \(P^{\mathcal{B}}\) is the iterated extraneous blow-up of \(P^{\mathcal{B}^{\min,\sigma}}\) along the strict transforms of the elements in \(\mathcal{I}\), none of which intersects \(\sigma^{\mathcal{B}^{\min,\sigma}}\).

We shall call \(P^{\mathcal{B}^{\min,\sigma}}\) the minimal blow-up of \(\mathbb{P}(V)\) relative to \(\mathcal{B}\), \(\sigma\).

### Category of blow-ups of polyhedral linear configurations

We may now define a category \(\mathrm{BLC}_{k}\) of blow-ups of linear configurations. Its objects are products (using notation 2.2):

\[(P^{\mathcal{B}_{1}},L^{\mathcal{B}_{1}},\sigma^{\mathcal{B}_{1}})\times\ldots\times(P^{\mathcal{B}_{k}},L^{\mathcal{B}_{k}},\sigma^{\mathcal{B}_{k}})\]

of blow-ups of polyhedral linear configurations. Morphisms between objects are generated by products of the following:

1. (Linear embeddings). The maps induced by linear embeddings (5.7): \[(P^{\mathcal{B}_{1}}(V_{1}),L^{\mathcal{B}_{1}}_{1},\sigma^{\mathcal{B}_{1}})\longrightarrow(P^{\mathcal{B}_{2}}(V_{2}),L^{\mathcal{B}_{2}}_{2},\sigma^{\mathcal{B}_{2}})\] They induce isomorphisms \(\sigma^{\mathcal{B}_{1}}\cong\sigma^{\mathcal{B}_{2}}\).
2. (Extraneous blow-downs). With notation as in example 5.9: \[\pi_{\mathcal{B}/\mathcal{B}^{\prime}}:(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\longrightarrow(P^{\mathcal{B}^{\prime}},L^{\mathcal{B}^{\prime}},\sigma^{\mathcal{B}^{\prime}})\,\] where \(\mathcal{B}^{\prime}=\mathcal{B}\setminus\mathcal{I}^{e}\) and \(\mathcal{I}^{e}\subset\mathcal{B}\) denotes a set of extraneous blow-ups.
3. (Face maps). A face map is defined to be a composition of inclusions of facets. There are two types of inclusions of facets, depending on whether the facet in question is contained in an exceptional divisor or not.

(i). Consider the inclusion of an exceptional divisor \(\mathcal{E}_{W}\) into \(P^{\mathcal{B}}\), where \(\mathbb{P}(W)\in\mathcal{B}\) is such that \(\sigma_{W}=\sigma\cap\mathbb{P}(W)(\mathbb{R})\) is non-empty and hence, by \((B2)\), is a face of \(\sigma\). By identifying \(\mathcal{E}^{\mathcal{B}}_{W}\cong P^{\mathcal{B}_{W}}\times P^{\mathcal{B}_{/W}}\) we obtain a map via proposition 5.11:

\[(P^{\mathcal{B}_{W}},L^{\mathcal{B}_{W}}_{\sigma_{W}},\sigma_{W}^{\mathcal{B}_{W}})\times(P^{\mathcal{B}_{/W}},L^{\mathcal{B}_{/W}}_{\sigma_{/W}},\sigma_{/W}^{\mathcal{B}_{/W}})\longrightarrow(P^{\mathcal{B}},L^{\mathcal{B}}_{\sigma},\sigma^{\mathcal{B}})\]

which defines a morphism in \(\mathrm{BLC}_{k}\).

(ii). Consider the case of an irreducible component \(D\) of \(L^{\mathcal{B}}\) which is the strict transform of \(\mathbb{P}(U)\subset L\) for some \(U\subset V\), where \(\mathbb{P}(U)\) is not contained in any element of \(\mathcal{B}\).
Then by identifying

\[D\cap(P^{\mathcal{B}},L^{\mathcal{B}},\sigma^{\mathcal{B}})\cong(P^{\mathcal{B}_{U}},L^{\mathcal{B}_{U}}_{U},\sigma_{U}^{\mathcal{B}_{U}})\]

we obtain a map: \((P^{\mathcal{B}_{U}},L^{\mathcal{B}_{U}}_{U},\sigma_{U}^{\mathcal{B}_{U}})\longrightarrow(P^{\mathcal{B}},L^{\mathcal{B}}_{\sigma},\sigma^{\mathcal{B}})\).

The category \(\mathrm{PLC}_{k}\) is a subcategory of \(\mathrm{BLC}_{k}\), which in turn is a subcategory of \(\mathcal{PC}_{k}\). Assumptions (1) and (2) of §3.1 follow from the description of iterated blow-ups in local coordinates and the description of faces of the \(\sigma^{\mathcal{B}}\) given in §5.4.

_Remark 5.16_.: Consider a polyhedral linear configuration \((\mathbb{P}(V),L,\sigma)\) and a set \(\mathcal{B}\) satisfying \((B1),(B2)\). Let \(\mathbb{P}(W)\in\mathcal{B}\). The inclusion of the exceptional divisor \(\mathcal{E}_{W}\) gives rise to the horizontal map along the top of the following commutative diagram in \(\mathcal{PC}_{k}\):

\[\begin{array}{ccc}(P^{\mathcal{B}_{W}},L^{\mathcal{B}_{W}}_{\sigma_{W}},\sigma_{W}^{\mathcal{B}_{W}})\times(P^{\mathcal{B}_{/W}},L^{\mathcal{B}_{/W}}_{\sigma_{/W}},\sigma_{/W}^{\mathcal{B}_{/W}})&\longrightarrow&(P^{\mathcal{B}},L^{\mathcal{B}}_{\sigma},\sigma^{\mathcal{B}})\\ \downarrow&&\downarrow\\ (\mathbb{P}(W),L_{\sigma_{W}},\sigma_{W})&\longrightarrow&(\mathbb{P}(V),L,\sigma)\end{array}\]

The top row is in the subcategory \(\mathrm{BLC}_{k}\), the bottom row is in \(\mathrm{PLC}_{k}\). The horizontal map along the bottom is the inclusion of the face \(\sigma_{W}\) of \(\sigma\). The right-most vertical map is the blow-down \(\pi_{\mathcal{B}}\), and the left-hand vertical map is the projection onto the first factor followed by the blow-down \(\pi_{\mathcal{B}_{W}}\) (topologically, the normal polytope \(\sigma_{/W}\) is collapsed to a point).

### Complexes of blow-ups of polyhedra

**Definition 5.17**.: A complex of blow-ups of polyhedra is a \(\mathrm{BLC}_{k}\)-complex, i.e., a functor \(F:\mathcal{D}\to\mathrm{BLC}_{k}\) where \(\mathcal{D}\) is equivalent to a finite diagram category. The definition of a morphism of \(\mathrm{BLC}_{k}\)-complexes proceeds in an identical way to definition 2.3. The definition of subschemes, and of differential forms, follows §2.3, 2.4.

Although blow-downs and projection morphisms exist in the category \(\mathcal{PC}_{k}\), we do not include them in the category \(\mathrm{BLC}_{k}\), in order that the assumptions of §3.1 hold.

## 6. The moduli space \(\mathcal{M}^{\mathrm{trop}}_{g}\) of tropical curves

We recast the theory of Feynman polytopes in the context of polyhedral linear complexes, before turning to the moduli space of tropical curves.

### Graphs and polyhedral linear complexes

Let \(G\) be a finite graph with vertices \(V_{G}\) and edges \(E_{G}\). Let \(\mathbb{P}^{E_{G}}=\mathbb{P}(\mathbb{Q}^{E_{G}})\) denote the projective space with projective coordinates \(\alpha_{e}\) for every edge \(e\in E_{G}\). For the time being the vertices of \(G\) are unweighted (equivalently, have weight zero). For any subgraph \(\gamma\) of \(G\), defined by a subset of edges of \(E_{G}\), we denote by \(G/\gamma\) the graph obtained by contracting all edges \(e\) of \(E_{\gamma}\). It does not depend on the ordering of the contractions. The edges of \(G\) are labelled in this section.
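As an aside (ours, not part of the source text), the contraction \(G/\gamma\) just described is a purely combinatorial operation; a minimal sketch of it, assuming graphs are stored as labelled edge lists, is:

```python
# Illustrative sketch (not from the paper): graphs as labelled edge lists
# (label, v, w), and the contraction G/gamma for a set gamma of labels.
def contract(vertices, edges, gamma):
    parent = {v: v for v in vertices}
    def find(v):                       # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for label, v, w in edges:
        if label in gamma:
            parent[find(v)] = find(w)  # identify the endpoints of e
    kept = [(l, find(v), find(w)) for l, v, w in edges if l not in gamma]
    return {find(v) for v in vertices}, kept

# The theta graph: two vertices joined by three parallel edges.
V, E = {1, 2}, [('a', 1, 2), ('b', 1, 2), ('c', 1, 2)]
print(contract(V, E, {'a'}))
# ({2}, [('b', 2, 2), ('c', 2, 2)]): contracting one edge leaves a single
# vertex carrying two tadpoles, independently of the order of contraction.
```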
**Definition 6.1**.: For any \(G\), consider the object in the category \(\mathrm{PLC}_{\mathbb{Q}}\) defined by

\[\mathcal{F}(G)=\left(\mathbb{P}^{E_{G}},L,\sigma_{G}\right)\quad\text{where}\quad L=\bigcup_{e\in E_{G}}L_{e},\]

where \(\sigma_{G}\) is the region in projective space where \(\alpha_{e}\geqslant 0\), and \(L_{e}\) is the coordinate hyperplane \(\alpha_{e}=0\). The object \(\mathcal{F}(G)\) is nothing other than a standard simplex (example 4.5).

_Remark 6.2_.: Since \(\sigma_{G}\) is a simplex, the associated polyhedral configuration has the special property that any non-empty intersection of components of \(L\) is the Zariski closure of a face of \(\sigma_{G}\). This is not true for general polyhedra.

For every edge \(e\in E_{G}\), there is a canonical face map (definition 4.11):

\[\mathcal{F}(G/e)\longrightarrow\mathcal{F}(G)\]

corresponding to the inclusion of the face \(\sigma_{G/e}\subset\sigma_{G}\), which identifies \(\mathbb{P}^{E_{G/e}}\) with the locus \(\alpha_{e}=0\) in the projective space \(\mathbb{P}^{E_{G}}\).

The simplex \(\mathcal{F}(G)\), together with the data of all its faces, admits a categorical description. For this, all graphs in the following have labelled edges and hence have no non-trivial automorphisms. Consider the category \(\mathcal{C}_{G}\) whose objects are all quotients \(G/\gamma\) of \(G\), for all strict subgraphs \(\gamma\subsetneq E_{G}\), including the case \(\gamma=\emptyset\). The morphisms in this category are generated by edge contractions \(\Gamma\to\Gamma/e\) for any edge \(e\in E_{\Gamma}\). The map \(\mathcal{F}\) is a functor

\[G\mapsto\mathcal{F}(G)\quad:\quad\mathcal{C}_{G}^{\mathrm{opp}}\longrightarrow\mathrm{PLC}_{\mathbb{Q}}\]

which sends morphisms to face maps. Indeed, the faces of the standard simplex \(\mathcal{F}(G)\) are in one-to-one correspondence with the objects of the category \(\mathcal{C}_{G}^{\mathrm{opp}}\).

### The moduli space of tropical curves \(\mathcal{M}_{g}^{\mathrm{trop}}\)

The moduli space of tropical curves is constructed in a completely analogous manner to the above, by gluing together simplices associated to isomorphism classes of stable graphs of a fixed genus.

#### 6.2.1. Weighted and stable graphs

Let \(G\) be a finite graph. A _weighting_ is a map \(w:V(G)\to\mathbb{N}_{\geq 0}\) which assigns a non-negative integer to every vertex. The _genus_ of \(G\) is defined to be:

\[g(G)=h_{G}+w(G)\quad\text{ where }\quad w(G)=\sum_{v\in V_{G}}w(v)\;. \tag{6.1}\]

A weighted graph \((G,w)\) is called _stable_ if every vertex of weight \(0\) has degree at least \(3\), and every vertex of weight \(1\) has degree \(\geq 1\). An isomorphism \((G,w)\xrightarrow{\sim}(G^{\prime},w^{\prime})\) of weighted graphs is an isomorphism \(G\xrightarrow{\sim}G^{\prime}\) which respects the weightings.

For every edge \(e\in E_{G}\), the contraction of \(e\) is defined as follows. If \(e\) is not a self-edge (tadpole), then \((G,w)/e\) is the pair \((G/e,w^{\prime})\), where \(G/e\) is the graph in which \(e\) is removed and the endpoints \(v_{1},v_{2}\) of \(e\) are identified to produce a new vertex \(v\) of weight \(w^{\prime}(v)=w(v_{1})+w(v_{2})\). The weights of all other vertices are unchanged. In the case when \(e\) is a tadpole, \(G/e\) is defined by simply removing the edge \(e\); in this case, the common endpoint \(v\) of \(e\) has weight \(w^{\prime}(v)=w(v)+1\) in the graph \(G/e\), and the weights of all other vertices are unchanged.
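For concreteness, here is a short sketch (ours, not part of the source text) of the two contraction rules just stated, together with a check that the genus (6.1) is unchanged by contraction; graphs are assumed connected, and stored as a vertex-weight dictionary plus an edge list.

```python
# Illustrative sketch (not from the paper): weighted graphs and the edge
# contractions of Section 6.2.1.  The genus (6.1) is preserved.
def h(weights, edges):                  # first Betti number (G connected)
    return len(edges) - len(weights) + 1

def genus(weights, edges):
    return h(weights, edges) + sum(weights.values())

def contract_edge(weights, edges, i):
    (v, w), rest = edges[i], edges[:i] + edges[i+1:]
    new = dict(weights)
    if v == w:                          # tadpole: delete e, bump weight
        new[v] += 1
        return new, rest
    new[v] += new.pop(w)                # merge endpoints, add weights
    return new, [(v if a == w else a, v if b == w else b) for a, b in rest]

# Theta graph plus one tadpole: h = 3, all weights 0, so genus 3.
W0, E0 = {1: 0, 2: 0}, [(1, 2), (1, 2), (1, 2), (1, 1)]
assert genus(W0, E0) == 3
assert genus(*contract_edge(W0, E0, 0)) == 3   # non-tadpole contraction
assert genus(*contract_edge(W0, E0, 3)) == 3   # tadpole contraction
```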
**Definition 6.3**.: Let \(g\geq 2\) and let \(I_{g}\) denote the category of connected, stable, weighted graphs \((G,w)\) of genus \(g\) with \(e_{G}\geq 1\), whose morphisms are generated by isomorphisms and edge contractions \((G,w)\mapsto(G,w)/e\). Let \(I_{g}^{\mathrm{opp}}\) denote the opposite category.

Figure 6. Illustration of the category \(I_{2}\). Unlabelled vertices have weight \(0\). Automorphisms of graphs are not depicted, only edge contractions.

An isomorphism of weighted graphs \((G,w)\cong(G^{\prime},w^{\prime})\) is in particular an isomorphism of graphs \(G\cong G^{\prime}\) and induces a bijection between the corresponding sets of edges. This gives rise to a linear isomorphism \(\mathbb{P}^{E_{G}}\cong\mathbb{P}^{E_{G^{\prime}}}\) and hence a canonical isomorphism \(\mathcal{F}(G)\cong\mathcal{F}(G^{\prime})\) in \(\mathrm{PLC}_{\mathbb{Q}}\). Consequently, the restriction of \(\mathcal{F}\) to stable, connected graphs defines a functor.

**Definition 6.4**.: The link of the moduli space of tropical curves is the polyhedral linear complex defined by the functor

\[L\mathcal{M}_{g}^{\mathrm{trop}}:I_{g}^{\mathrm{opp}}\longrightarrow\mathrm{PLC}_{\mathbb{Q}}\]

which to any weighted graph \((G,w)\) assigns the standard simplex (example 4.5):

\[L\mathcal{M}_{g}^{\mathrm{trop}}(G,w)=\mathcal{F}(G).\]

Note that \(\mathcal{F}(G)\) does not depend on the weighting of \(G\). Its topological realisation \(|L\mathcal{M}_{g}^{\mathrm{trop}}|\) (2.6) is the link of the moduli space of tropical curves.

_Remark 6.5_.: The definitions are easily adapted to the moduli space of tropical curves, rather than the link of its cone point. Let \(\widetilde{I}_{g}\) denote the category \(I_{g}\), with an additional, final, object given by a single vertex of weight \(g\). Consider the functor \(\mathcal{M}_{g}^{\mathrm{trop}}\) from \(\widetilde{I}_{g}^{\mathrm{opp}}\) to \(\mathcal{PC}_{\mathbb{Q}}\):

\[(G,w)\mapsto\left(\mathbb{A}^{E_{G}},V\left(\prod_{e\in E_{G}}\alpha_{e}\right),\widehat{\sigma}_{G}\right)\,\]

where \(\widehat{\sigma}_{G}\) is the cone \(\alpha_{e}\geq 0\). The final object in \(\widetilde{I}_{g}^{\mathrm{opp}}\) maps to \((\mathrm{Spec}(\mathbb{Q}),\mathrm{Spec}(\mathbb{Q}),\mathrm{Spec}(\mathbb{C}))\), whose image in \(|\mathcal{M}_{g}^{\mathrm{trop}}|\) is the cone point. The functor \(\mathcal{M}_{g}^{\mathrm{trop}}\) lands in a subcategory of \(\mathcal{PC}_{\mathbb{Q}}\) which is an analogue of \(\mathrm{PLC}_{\mathbb{Q}}\) defined using affine spaces. Since \(\left|\mathcal{M}_{g}^{\mathrm{trop}}\right|\) is topologically trivial, these considerations will not be pursued further in this paper.

### The open moduli space and Outer Space

Let \(\partial I_{g}\) denote the full subcategory of \(I_{g}\) consisting of graphs of weight \(>0\). Consider the functor

\[\partial L\mathcal{M}_{g}^{\mathrm{trop}}:\partial I_{g}^{\mathrm{opp}}\longrightarrow\mathrm{PLC}_{\mathbb{Q}}\]

obtained by restricting \(L\mathcal{M}_{g}^{\mathrm{trop}}\) to \(\partial I_{g}^{\mathrm{opp}}\). It sends \(G\) to \(\mathcal{F}(G)\).

**Definition 6.6**.: Let us denote its topological realisation by

\[\left|\partial L\mathcal{M}_{g}^{\mathrm{trop}}\right|=\varinjlim_{G\in\partial I_{g}^{\mathrm{opp}}}\sigma_{G}\]

which canonically embeds in \(\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\). Let \(\left|L\mathcal{M}_{g}^{\circ,\mathrm{trop}}\right|=\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\setminus\left|\partial L\mathcal{M}_{g}^{\mathrm{trop}}\right|\) denote its open complement.
Figure 7. A picture of \(|L\mathcal{M}_{2}^{\mathrm{trop}}|\). Faces with common labels are identified.

The open moduli space is obtained by gluing together the open interiors of simplices \(\sigma_{G}\) for graphs \(G\) which have total weight \(0\):

\[\big{|}L\mathcal{M}_{g}^{\circ,\mathrm{trop}}\big{|}=\mathrm{Im}\left(\bigcup_{G\in I_{g}^{\mathrm{opp}},w(G)=0}\stackrel{{\circ}}{{\sigma}}_{G}\longrightarrow\big{|}L\mathcal{M}_{g}^{\mathrm{trop}}\big{|}\right)\,,\]

and is equipped with the subspace topology. Note that the open moduli space \(\big{|}L\mathcal{M}_{g}^{\circ,\mathrm{trop}}\big{|}\) is not the topological realisation of a polyhedral complex since it is not closed. It is the quotient of Outer space \(\mathcal{O}_{g}\) [23] by the action of \(\mathrm{Out}(F_{g})\).

### The graph locus

Let \(G\) be a graph. The graph polynomial for a connected graph \(G\) (with all vertices of weight zero) is defined to be the polynomial

\[\Psi_{G}=\sum_{T\subset G}\prod_{e\notin T}\alpha_{e}\quad\in\quad\mathbb{Z}[\alpha_{e},e\in E_{G}]\]

where the sum is over all spanning trees \(T\) of \(G\). It is homogeneous of degree \(h_{G}\). For a graph \(G\) with connected components \(G_{1},\ldots,G_{n}\), we define

\[\Psi_{G}=\prod_{i=1}^{n}\Psi_{G_{i}}.\]

If a graph \(G\) has a vertex \(v\) with \(w(v)>0\), i.e., the total weight \(w(G)\) is positive, then we define \(\Psi_{G}=0\). The graph locus \(X_{G}\subset\mathbb{P}^{E_{G}}\) is defined to be the zero locus of \(\Psi_{G}\). It is a hypersurface if \(G\) has total vertex weight \(0\), but equals \(\mathbb{P}^{E_{G}}\) otherwise.

**Proposition 6.7**.: _The map \(G\mapsto X_{G}\) defines a functor_

\[\mathcal{X}:(I_{g})^{\mathrm{opp}}\longrightarrow\mathrm{Sch}_{\mathbb{Q}}\]

_It is a subscheme functor of \(L\mathcal{M}_{g}^{\mathrm{trop}}\), i.e., of \(\mathsf{S}\mathcal{F}:(I_{g})^{\mathrm{opp}}\rightarrow\mathrm{Sch}_{\mathbb{Q}}\) which sends \(G\mapsto\mathbb{P}^{E_{G}}\) (definition 6.1). Its topological complement is the open moduli space:_

\[\big{|}L\mathcal{M}_{g}^{\circ,\mathrm{trop}}\big{|}=\big{|}L\mathcal{M}_{g}^{\mathrm{trop}}\big{|}\setminus\big{(}\big{|}L\mathcal{M}_{g}^{\mathrm{trop}}\big{|}\cap|\mathcal{X}(\mathbb{R})|\big{)}=\varinjlim_{G\in I_{g}^{\mathrm{opp}}}\sigma_{G}\backslash(\sigma_{G}\cap X_{G}(\mathbb{R}))\]

_Equivalently, one has \(|\mathcal{X}(\mathbb{R})|\cap|L\mathcal{M}_{g}^{\mathrm{trop}}|=|\partial L\mathcal{M}_{g}^{\mathrm{trop}}|\)._

Proof.: An isomorphism \((G,w)\cong(G^{\prime},w^{\prime})\) of weighted graphs induces a bijection on edge sets, and gives a linear isomorphism \(\mathbb{P}^{E_{G}}\cong\mathbb{P}^{E_{G^{\prime}}}\) and an isomorphism \(X_{G}\cong X_{G^{\prime}}\). The image of an edge contraction \(G\to G/e\) under the functor \(\mathsf{S}\mathcal{F}\) is the inclusion \(\mathbb{P}^{E_{G/e}}\rightarrow\mathbb{P}^{E_{G}}\) whose image is the coordinate hyperplane \(\alpha_{e}=0\). By definition, \(\Psi_{G}|_{\alpha_{e}=0}=\Psi_{G/e}\), which implies that \(X_{G/e}=X_{G}\cap V(\alpha_{e})\). This formula holds also for \(e\) a tadpole, in which case \(G/e\) has positive weight, and both \(\Psi_{G}|_{\alpha_{e}=0}\) and \(\Psi_{G/e}\) vanish. This proves that \(\mathcal{X}\) indeed defines a subscheme of \(\mathsf{S}\mathcal{F}\).
For the last part, observe that \(\sigma_{G}\subset X_{G}(\mathbb{R})\) if \(w(G)>0\), and that

\[\stackrel{{\circ}}{{\sigma}}_{G}\cap X_{G}(\mathbb{R})=\emptyset\quad\text{if}\quad w(G)=0.\]

This follows from the fact that if \(w(G)=0\) then \(\Psi_{G}\) is a non-trivial sum of positive monomials, and so \(\Psi_{G}>0\) for \(\alpha_{e}>0\). The open faces of \(\sigma_{G}\) are in one-to-one correspondence with the \(\stackrel{{\circ}}{{\sigma}}_{G/\gamma}\) for all \(\gamma\subset G\). It follows that \(\sigma_{G}\cap X_{G}(\mathbb{R})\) is the union of the faces \(\sigma_{G/\gamma}\) where \(h_{\gamma}>0\), or equivalently, such that \(w(G/\gamma)\) is positive.

### The graph complex \(\mathcal{GC}_{0}^{\mathrm{stab}}\)

Consider the version of the graph complex [19, 20] defined to be the \(\mathbb{Q}\)-vector space generated by oriented, stable, unweighted graphs \([G,\omega]\). It is bigraded by the genus \(g\) and the number of edges \(e_{G}\), which is the homological degree. An orientation \(\omega\) on \(G\) is an element of \(\left(\bigwedge^{e_{G}}\mathbb{Z}^{E_{G}}\right)^{\times}\), i.e., an ordering of the edges up to even permutations. One has relations \([G,-\omega]=-[G,\omega]\) and \([G,\omega]=[G^{\prime},\omega^{\prime}]\) if \(G\cong G^{\prime}\) is an isomorphism which sends \(\omega\) to \(\omega^{\prime}\). The differential is:

\[d[G,e_{1}\wedge\ldots\wedge e_{n}]=\sum_{i=1}^{n}(-1)^{i}[G/\!\!/e_{i},e_{1}\wedge\ldots\wedge\widehat{e_{i}}\wedge\ldots\wedge e_{n}]\]

where \(G/\!\!/e\) denotes the graph in which the edge \(e\) is contracted, and is the empty graph if \(e\) is a tadpole. One shows that \(\mathcal{GC}_{0}^{\mathrm{stab}}\) is quasi-isomorphic to the complex usually denoted by \(\mathcal{GC}_{0}\). The following proposition is proven in [10] and also follows from theorem 3.2.

**Proposition 6.8**.: _The relative face complex \(\mathfrak{C}(L\mathcal{M}_{g}^{\mathrm{trop}})/\mathfrak{C}(\partial L\mathcal{M}_{g}^{\mathrm{trop}})\) is the graph complex \(\mathcal{GC}_{0}^{\mathrm{stab}}[-1]\), where the homological degree is given by the number of edges minus \(1\). Thus:_

\[H_{n}(\mathcal{GC}_{0})\cong\bigoplus_{g\geqslant 1}H_{n-1}(|L\mathcal{M}_{g}^{\mathrm{trop}}|,\left|\partial L\mathcal{M}_{g}^{\mathrm{trop}}\right|;\mathbb{Q}).\]

A more profound statement is the fact, due to [10], that \(\left|\partial L\mathcal{M}_{g}^{\mathrm{trop}}\right|\) is contractible, and hence the graph complex also computes the reduced homology of \(|L\mathcal{M}_{g}^{\mathrm{trop}}|\).

## 7. The bordification \(\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) of \(\mathcal{M}_{g}^{\mathrm{trop}}\)

We review the blow-ups of Feynman polytopes, and define \(\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) by gluing together the blow-ups of polyhedral configurations of stable, weighted graphs.

### Blow-ups and Feynman polytopes

**Definition 7.1**.: Let \(G\) be a graph. Consider the set of subspaces

\[\mathcal{B}^{G}=\mathcal{B}^{G,\mathrm{core}}=\{\mathbb{P}(\mathbb{Q}^{E_{\gamma}}):\gamma\subset G\quad\text{core subgraph}\}\]

which, one verifies (see e.g. [1, 16]), satisfies \((B1),(B2)\). (If we set \(\mathcal{B}^{G,\max}=\{\mathbb{P}(\mathbb{Q}^{E_{\gamma}}):\text{ all }\gamma\subset G\}\), then one may verify that \((\mathcal{B}^{G,\max})^{\min,\sigma_{G}}=\mathcal{B}^{G,\mathrm{core}}\). Therefore the core blow-up is minimal, and proposition 5.15 defines a morphism \(P^{\mathcal{B}^{G,\max}}\to P^{\mathcal{B}^{G,\mathrm{core}}}\).) Define
\[\mathcal{F}^{\mathcal{B}}(G)=\left(\mathbb{P}^{\mathcal{B}^{G}},L^{\mathcal{B}^{G}},\sigma^{\mathcal{B}^{G}}\right)\]

to be the object in \(\mathrm{BLC}_{\mathbb{Q}}\) obtained by blowing up the linear subspaces corresponding to core subgraphs. It is equipped with a canonical blow-down morphism in \(\mathcal{PC}_{\mathbb{Q}}\):

\[\pi^{\mathcal{B}}:\mathcal{F}^{\mathcal{B}}(G)\longrightarrow\mathcal{F}(G).\]

A key point is that the set \(\mathcal{B}^{G}\) is intrinsic to \(G\), which is why we shall drop the superscript \(G\) with impunity. By this we mean the following. Let \(\gamma\subset G\) be a core subgraph. Then, employing the notation (5.1), we have canonical identifications

\[\left(\mathcal{B}^{G}\right)_{\mathbb{Q}^{E_{\gamma}}}=\mathcal{B}^{\gamma}\qquad\text{ and }\qquad\left(\mathcal{B}^{G}\right)_{/\mathbb{Q}^{E_{\gamma}}}=\mathcal{B}^{G/\gamma}\]

since, for the first equation, there is a bijection between the set of core subgraphs \(\gamma^{\prime}\subset G\) which are contained in \(\gamma\), and the set of core subgraphs of \(\gamma\) (the property of being a core subgraph is intrinsic, and does not depend on the ambient graph). For the second equation, there is a bijection between core subgraphs \(\gamma^{\prime}\subset G\) which contain \(\gamma\), and core subgraphs of \(G/\gamma\).

For every edge \(e\in E_{G}\) which is not a tadpole, there is a face morphism

\[\mathcal{F}^{\mathcal{B}}(G/e)\longrightarrow\mathcal{F}^{\mathcal{B}}(G) \tag{7.1}\]

in the category \(\mathrm{BLC}_{\mathbb{Q}}\). It is compatible, via blow-down, with the corresponding map \(\mathcal{F}(G/e)\rightarrow\mathcal{F}(G)\). Exceptional divisors are indexed by core subgraphs \(\emptyset\neq\gamma\subsetneq G\). For each such subgraph (which is not necessarily connected), there is a canonical face morphism:

\[\mathcal{F}^{\mathcal{B}}(\gamma)\times\mathcal{F}^{\mathcal{B}}(G/\gamma)\longrightarrow\mathcal{F}^{\mathcal{B}}(G). \tag{7.2}\]

Compatibility with the face map \(\mathcal{F}(G/\gamma)\to\mathcal{F}(G)\) is expressed by the following diagram in \(\mathcal{PC}_{\mathbb{Q}}\):

\[\begin{array}{ccc}\mathcal{F}^{\mathcal{B}}(\gamma)\times\mathcal{F}^{\mathcal{B}}(G/\gamma)&\longrightarrow&\mathcal{F}^{\mathcal{B}}(G)\\ \downarrow&&\downarrow\\ \mathcal{F}(G/\gamma)&\longrightarrow&\mathcal{F}(G)\end{array}\]

which commutes (see remark 5.16). The facets of \(\mathcal{F}^{\mathcal{B}}(G)\) are in one-to-one correspondence with either non-tadpole edges (7.1), or core subgraphs (7.2).

### Sequences of graphs and categorical formulation of the blow-up

**Definition 7.2**.: Consider a sequence of strictly nested subgraphs

\[(\gamma_{1},\gamma_{2},\ldots,\gamma_{n-1},\gamma_{n})\ \ \text{such that}\ \ \gamma_{i}\subsetneq\gamma_{i+1}\ \text{for}\ 1\leqslant i\leqslant n-1\, \tag{7.3}\]

which are not necessarily connected (even if \(\gamma_{n}\) is). Such a sequence may equivalently be viewed as the data of a strict filtration \(F_{\bullet}\) on \(G=\gamma_{n}\), where \(F_{i}G=\gamma_{i}\). The _graded sequence_ associated to this filtration is the sequence of graphs:

\[(\gamma_{1},\gamma_{2}/\gamma_{1},\ldots,\gamma_{n}/\gamma_{n-1}).\]

Let \(e\in E_{\gamma_{n}}\) be an edge. There is a unique index \(k\) such that \(e\) is an edge of \(\gamma_{k}/\gamma_{k-1}\) (with the convention \(\gamma_{0}=\emptyset\)).
We call \(e\) _admissible_ if it is neither a self-edge nor the only edge in the quotient \(\gamma_{k}/\gamma_{k-1}\), in which case we define the _edge contraction_ with respect to \(e\) to be the sequence:

\[(\gamma_{1},\ldots,\gamma_{n})/e\ =\ ((\gamma_{1}\cup e)/e,\ (\gamma_{2}\cup e)/e,\ \ldots,\ (\gamma_{n-1}\cup e)/e,\ \gamma_{n}/e) \tag{7.4}\]

Define a _refinement_ of \((\gamma_{1},\gamma_{2},\ldots,\gamma_{n-1},\gamma_{n})\) to be: \((\gamma,\gamma_{1},\ldots,\gamma_{n})\), where \(\gamma\subsetneq\gamma_{1}\), or

\[(\gamma_{1},\ldots,\gamma_{i},\gamma,\gamma_{i+1},\ldots,\gamma_{n})\ \text{ where }\ \gamma_{i}\subsetneq\gamma\subsetneq\gamma_{i+1}\]

for any \(i\). Define the _edge degree_ to be \(e_{\gamma_{n}}-n+1\), and the genus to be \(h_{\gamma_{n}}\).

Given a graph \(G\) with labelled edges, define a category \(\mathcal{C}^{\mathcal{B}}_{G}\) whose objects are sequences (7.3) of edge degree \(>0\) such that \(\gamma_{n}=G/\gamma\) is a quotient of \(G\) satisfying \(h_{\gamma_{n}}=h_{G}\) (i.e., \(\gamma\) does not contain any loops), and such that each graph \(\gamma_{i}\), \(i<n\), is core. The morphisms are given by contraction of admissible edges, and refinements: indeed, \(\mathcal{C}^{\mathcal{B}}_{G}\) is simply the category of objects of edge degree \(\geqslant 1\) generated by the sequence \((G)\) under these two operations.

**Theorem 7.3**.: _There is a canonical functor \(\big{(}\mathcal{C}^{\mathcal{B}}_{G}\big{)}^{\mathrm{opp}}\to\mathrm{BLC}_{\mathbb{Q}}\) which sends_

\[(\gamma_{1},\gamma_{2},\ldots,\gamma_{n-1},\gamma_{n})\mapsto\mathcal{F}^{\mathcal{B}}(\gamma_{1})\times\mathcal{F}^{\mathcal{B}}(\gamma_{2}/\gamma_{1})\times\ldots\times\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1})\]

_and where all morphisms are face morphisms. The objects of \(\mathcal{C}^{\mathcal{B}}_{G}\) are in one-to-one correspondence with the faces of \(\mathcal{F}^{\mathcal{B}}(G)\); the morphisms of \(\mathcal{C}^{\mathcal{B}}_{G}\) are in one-to-one correspondence with inclusions of faces. The face corresponding to a sequence \((\gamma_{1},\ldots,\gamma_{n})\) has codimension \(e_{G}-e_{\gamma_{n}}+n-1\). There is a canonical blow-down in \(\mathcal{PC}_{\mathbb{Q}}\) from the functor_

\[\mathcal{F}^{\mathcal{B}}:\mathcal{C}^{\mathcal{B}}_{G}\to\mathrm{BLC}_{\mathbb{Q}}\subset\mathcal{PC}_{\mathbb{Q}}\qquad\text{ to }\qquad\mathcal{F}:\mathcal{C}_{G}\to\mathrm{PLC}_{\mathbb{Q}}\subset\mathcal{PC}_{\mathbb{Q}}\]

_which is induced by the pair \((\Phi,\phi)\), where the functor \(\phi:\mathcal{C}^{\mathcal{B}}_{G}\to\mathcal{C}_{G}\) is defined by_

\[\phi:(\gamma_{1},\gamma_{2},\ldots,\gamma_{n})\mapsto\gamma_{n}/\gamma_{n-1}\]

_and \(\Phi\) is the natural transformation defined by the family of morphisms in \(\mathcal{PC}_{\mathbb{Q}}\):_

\[\mathcal{F}^{\mathcal{B}}(\gamma_{1})\times\mathcal{F}^{\mathcal{B}}(\gamma_{2}/\gamma_{1})\times\ldots\times\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1})\longrightarrow\mathcal{F}(\gamma_{n}/\gamma_{n-1})\]

_given by projection onto the final component followed by the blow-down for \(\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1})\)._

Proof.: The theorem is a reformulation of theorem 7.1 in [1]. On topological realisations, \(|\Phi|\) induces the blow-down morphism \(\big{|}\mathcal{F}^{\mathcal{B}}(G)\big{|}\to|\mathcal{F}(G)|\) from the Feynman polytope \(\big{|}\mathcal{F}^{\mathcal{B}}(G)\big{|}\) to the closed simplex.
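As a combinatorial aside (ours, not part of the source text), the blow-up set \(\mathcal{B}^{G}\) of definition 7.1 can be enumerated mechanically. The sketch below assumes the usual convention from the Feynman-polytope literature that a subgraph is _core_ when deleting any of its edges lowers its first Betti number (i.e., every edge lies in a cycle); the proper core subgraphs found this way index the exceptional divisors of \(\mathcal{F}^{\mathcal{B}}(G)\).

```python
from itertools import combinations

# Illustrative sketch (assumption: "core" = every edge lies in a cycle,
# i.e. deleting any edge lowers the first Betti number h).
def betti(edges):                     # h = e - v + (number of components)
    verts = {x for _, v, w in edges for x in (v, w)}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = len(verts)
    for _, v, w in edges:
        rv, rw = find(v), find(w)
        if rv != rw:
            parent[rv] = rw
            comps -= 1
    return len(edges) - len(verts) + comps

def is_core(edges):
    return len(edges) > 0 and all(
        betti([f for f in edges if f != e]) < betti(edges) for e in edges)

theta = [('a', 1, 2), ('b', 1, 2), ('c', 1, 2)]    # theta graph, h = 2
cores = [''.join(e[0] for e in g)
         for r in range(1, len(theta) + 1)
         for g in combinations(theta, r) if is_core(list(g))]
print(cores)    # ['ab', 'ac', 'bc', 'abc']; the proper ones give B^G
```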
### Canonical blow-up of \(L\mathcal{M}_{g}^{\mathrm{trop}}\)

Consider the category whose objects are the faces of all \(\mathcal{F}^{\mathcal{B}}(G)\), where \(G\) ranges over stable connected graphs of total weight \(0\), and whose morphisms are generated by isomorphisms and face morphisms. By construction, it admits a canonical functor to \(\mathrm{BLC}_{\mathbb{Q}}\). The complex \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) is defined to be this functor. It may be described more explicitly using nested sequences of graphs.

**Definition 7.4**.: Define a category \(I_{g}^{\mathcal{B}}\) whose objects are nested sequences (7.3) of graphs of genus \(g\), with the property that each \(\gamma_{i}\), for \(i<n\), is a core graph (but not necessarily connected), and \(\gamma_{n}\) is a stable connected graph with all vertices of weight \(0\). The morphisms in this category are isomorphisms, admissible edge contractions, and refinements. There is a 'collapsing' functor: \[\begin{array}{rcl}I_{g}^{\mathcal{B}}&\longrightarrow&I_{g}\\ (\gamma_{1},\ldots,\gamma_{n-1},\gamma_{n})&\mapsto&\gamma_{n}/\gamma_{n-1}\end{array} \tag{7.5}\] which sends a filtered graph to its highest graded component, where the quotient \(\gamma_{n}/\gamma_{n-1}\) is to be viewed as a weighted graph as follows. First assign weight \(0\) to every vertex of \(\gamma_{n}\), and contract each edge of \(\gamma_{n-1}\), in any order, keeping track of the induced weights by the process described in §6.2.1. To verify that this is indeed a functor, one only needs to consider refinements where an additional graph \(\gamma\) satisfying \(\gamma_{n-1}\subset\gamma\subset\gamma_{n}\) is inserted between \(\gamma_{n-1}\) and \(\gamma_{n}\). It maps to the morphism in \(I_{g}\) given by contracting the edges in \(E_{\gamma}\backslash E_{\gamma_{n-1}}\). We emphasize that the collapsing functor sends unweighted nested sequences to weighted graphs. It induces a functor between the opposite categories in the usual manner. The functor (7.5) is essentially surjective. Furthermore, \(I_{g}^{\mathcal{B}}\) is generated by singletons \((G)\), for \(G\) an unweighted, connected, stable graph of genus \(g\), by edge contractions and refinements. Since there are finitely many isomorphism classes, \(I_{g}^{\mathcal{B}}\) is equivalent to a finite category (see figure 8 for the case \(g=2\)).

Figure 8. A picture of \(I_{2}^{\mathcal{B}}\). All vertices have weight \(0\). Edge contractions are depicted in red, refinements in green, and automorphisms are not shown.

**Definition 7.5**.: Consider the map \[L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}:\big{(}I_{g}^{\mathcal{B}}\big{)}^{\mathrm{opp}} \longrightarrow \mathrm{BLC}_{\mathbb{Q}}\] \[(\gamma_{1},\ldots,\gamma_{n}) \mapsto \mathcal{F}^{\mathcal{B}}(\gamma_{1})\times\mathcal{F}^{\mathcal{B}}(\gamma_{2}/\gamma_{1})\times\ldots\times\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1}) \tag{7.6}\] sending a nested sequence to the product of the blow-ups of its graded sequence.

**Theorem 7.6**.: _The map \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) is a functor, and sends morphisms in \((I_{g}^{\mathcal{B}})^{\mathrm{opp}}\) to isomorphisms and face maps. There is a canonical morphism of functors in \(\mathcal{PC}_{\mathbb{Q}}\):_ \[L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\longrightarrow L\mathcal{M}_{g}^{\mathrm{trop}}\]
_which is induced by the pair \((\Phi,\phi^{\mathrm{opp}})\) where \(\phi:I_{g}^{\mathcal{B}}\to I_{g}\) is the collapsing functor, and the natural transformation \(\Phi\) is defined on the image of \((\gamma_{1},\ldots,\gamma_{n})\) by_ \[\Phi:\mathcal{F}^{\mathcal{B}}(\gamma_{1})\times\mathcal{F}^{\mathcal{B}}(\gamma_{2}/\gamma_{1})\times\ldots\times\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1})\longrightarrow\mathcal{F}(\gamma_{n}/\gamma_{n-1})\] _namely projection onto the last component followed by blow-down \(\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1})\rightarrow\mathcal{F}(\gamma_{n}/\gamma_{n-1})\)._

Proof.: Any isomorphism of nested sequences \((\gamma_{1},\gamma_{2},\ldots,\gamma_{n})\cong(\gamma_{1}^{\prime},\gamma_{2}^{\prime},\ldots,\gamma_{n}^{\prime})\) induces an isomorphism between the associated graded sequences, and hence a canonical isomorphism in the category \(\mathrm{BLC}_{\mathbb{Q}}\) between \[\mathcal{F}^{\mathcal{B}}(\gamma_{1})\times\mathcal{F}^{\mathcal{B}}(\gamma_{2}/\gamma_{1})\times\ldots\times\mathcal{F}^{\mathcal{B}}(\gamma_{n}/\gamma_{n-1})\] and its version with each \(\gamma_{i}\) replaced with \(\gamma_{i}^{\prime}\). Furthermore, this isomorphism is compatible, via blow-downs, with the isomorphism \(\mathcal{F}(\gamma_{n}/\gamma_{n-1})\cong\mathcal{F}(\gamma_{n}^{\prime}/\gamma_{n-1}^{\prime})\). The fact that \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) is a functor then follows from the proof of theorem 7.3, as does the rest of the statement.

The topological realisation \(\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) is defined differently from, but we expect it to be equivalent to, the bordification discussed in [1]. There is a canonical continuous morphism: \[\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\longrightarrow\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right| \tag{7.7}\] which collapses all exceptional components.

## 8. The strict transform of the graph locus in \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\)

Let \(G\) be a graph with zero weights. Recall that \(\pi_{\mathcal{B}}:P_{G}^{\mathcal{B}}\rightarrow\mathbb{P}^{E_{G}}\) is the corresponding blow-up and \(X_{G}\subset\mathbb{P}^{E_{G}}\) is the graph hypersurface. Denote its strict transform by \[\widetilde{X}_{G}\subset P^{\mathcal{B}_{G}}\] and let \(U_{G}\subset P^{\mathcal{B}_{G}}\) denote its open complement \(P^{\mathcal{B}_{G}}\backslash\widetilde{X}_{G}\). Recall from theorem 7.3 that \(\left(\mathcal{C}_{G}^{\mathcal{B}}\right)^{\mathrm{opp}}\) is equivalent to the category of faces of the blown-up Feynman polytope \(\mathcal{F}_{G}^{\mathcal{B}}\).

**Definition 8.1**.: Define a functor \[\widetilde{\mathcal{X}}:\left(\mathcal{C}_{G}^{\mathcal{B}}\right)^{\mathrm{opp}}\longrightarrow\mathrm{Sch}_{\mathbb{Q}}\] which sends the singleton \((G)\) to the strict transform \(\widetilde{X}_{G}\subset P^{\mathcal{B}_{G}}\). It is uniquely defined on all other objects (7.3) by restriction to faces and defines a subscheme functor of \(\mathsf{SF}^{\mathcal{B}}\). An isomorphism of graphs (or nested sequences of graphs) induces an isomorphism of graph hypersurfaces and hence of their strict transforms by proposition 5.2 (i). Since \(I_{g}^{\mathcal{B}}\) is generated by stable unweighted graphs of genus \(g\), we deduce the existence of a functor \[\widetilde{\mathcal{X}}:\left(I_{g}^{\mathcal{B}}\right)^{\mathrm{opp}}\longrightarrow\mathrm{Sch}_{\mathbb{Q}}\] which is a subscheme functor of \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\).
**Proposition 8.2**.: _The functor \(\widetilde{\mathcal{X}}:\left(I_{g}^{\mathcal{B}}\right)^{\mathrm{opp}}\to\mathrm{Sch}_{\mathbb{Q}}\) is given on sequences (7.3) by_ \[(\gamma_{1},\ldots,\gamma_{n})\ \mapsto\ \bigcup_{i=1}^{n}P^{\gamma_{1}}\times\ldots\times P^{\gamma_{i-1}/\gamma_{i-2}}\times\widetilde{X}_{\gamma_{i}/\gamma_{i-1}}\times P^{\gamma_{i+1}/\gamma_{i}}\times\ldots\times P^{\gamma_{n}/\gamma_{n-1}} \tag{8.1}\] _where we write \(\gamma_{0}=\emptyset\). It is a closed subscheme of \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) at infinity, i.e., a subscheme of the functor \(\mathsf{S}L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\), whose real points do not meet the topological realisation:_ \[\widetilde{\mathcal{X}}(\mathbb{R})\cap|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}|=\emptyset. \tag{8.2}\] _The blow-down \((\phi,\Phi):L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\to L\mathcal{M}_{g}^{\mathrm{trop}}\) induces a natural transformation \(\widetilde{\mathcal{X}}\to\mathcal{X}\circ\phi\), where \(\phi\) is the collapsing functor._

Proof.: If \(e\in E_{G}\) is not a tadpole, there is a canonical inclusion \(\widetilde{X}_{G/e}\to\widetilde{X}_{G}\) induced by the inclusion \(P_{G/e}^{\mathcal{B}}\to P_{G}^{\mathcal{B}}\), since this is true for graph hypersurfaces and strict transforms are functorial (proposition 5.2 (i)). This proves the formula (8.1) for sequences of length \(n=1\). The general case follows by taking refinements. Let \(\gamma\subset G\) be a core subgraph of \(G\). One has the following identity of graph polynomials [1, 1], \[\Psi_{G}=\Psi_{\gamma}\Psi_{G/\gamma}+R_{G/\gamma}\] where \(R_{G/\gamma}\) is a polynomial of homogeneous degree strictly greater than \(\deg\Psi_{\gamma}=h_{\gamma}\) in the edge variables \(\alpha_{e}\) for \(e\in E_{\gamma}\). The identity implies that if \(D\subset P_{G}^{\mathcal{B}}\) is the exceptional divisor corresponding to the blow-up of \(\mathbb{P}^{E_{\gamma}}\subset\mathbb{P}^{E_{G}}\) one has a canonical isomorphism: \[\widetilde{X}_{G}\cap D\cong\left(\widetilde{X}_{\gamma}\times P_{G/\gamma}^{\mathcal{B}}\right)\ \cup\ \left(P_{\gamma}^{\mathcal{B}}\times\widetilde{X}_{G/\gamma}\right)\.\] The fact that \(\widetilde{\mathcal{X}}\) is at infinity follows from the fact that the strict transform locus \(\widetilde{X}_{G}(\mathbb{R})\) does not meet the region \(\sigma_{G}^{\mathcal{B}}\) [1, 1]. The final statement follows by definition from the fact that \(X_{G}\) is the blow-down of \(\widetilde{X}_{G}\).

The properties of graph hypersurfaces required in the proof will actually be rederived in §11 from a more conceptual viewpoint via properties of the determinant. The open complement of the subscheme \(\widetilde{\mathcal{X}}\) is the functor \[\mathcal{U}:\left(I_{g}^{\mathcal{B}}\right)^{\mathrm{opp}} \longrightarrow \mathrm{Sch}_{\mathbb{Q}}\] \[(\gamma_{1},\ldots,\gamma_{n}) \mapsto U(\gamma_{1})\times U(\gamma_{2}/\gamma_{1})\times\ldots\times U(\gamma_{n}/\gamma_{n-1})\] It is an open subscheme of \(\mathsf{S}L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) and satisfies \(|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}|\subset\mathcal{U}\) (definition 2.6).
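For instance, for the theta graph \(G\) with edges \(e_{1},e_{2},e_{3}\) considered above, the identity of graph polynomials used in the proof of proposition 8.2 may be verified by hand: one has \(\Psi_{G}=\alpha_{1}\alpha_{2}+\alpha_{1}\alpha_{3}+\alpha_{2}\alpha_{3}\), and for the core subgraph \(\gamma=\{e_{1},e_{2}\}\) one finds \(\Psi_{\gamma}=\alpha_{1}+\alpha_{2}\) and \(\Psi_{G/\gamma}=\alpha_{3}\), whence \[\Psi_{G}=\Psi_{\gamma}\Psi_{G/\gamma}+\alpha_{1}\alpha_{2}\,\] where the remainder \(R_{G/\gamma}=\alpha_{1}\alpha_{2}\) is indeed of degree \(2>h_{\gamma}=1\) in the variables \(\alpha_{1},\alpha_{2}\).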
**Corollary 8.3**.: _Via definition 2.7 we deduce a functor_ \[L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\backslash\widetilde{\mathcal{X}}\ :\ \left(I_{g}^{\mathcal{B}}\right)^{\mathrm{opp}} \longrightarrow \mathcal{PC}_{\mathbb{Q}}\] \[x \mapsto (\mathcal{U}_{x},L^{\mathcal{B}}\cap\mathcal{U}_{x},\sigma_{x})\]

_Remark 8.4_.: A functorial system of linear subspaces of \(\mathsf{S}\mathcal{F}(G)\) was defined in [1, §5.4] whose complement in \(P^{G}\) is affine. It defines a subscheme functor \(\mathcal{A}\subset L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) such that \(\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\subset P^{G}\backslash\mathcal{A}\). It would be very interesting to construct natural differential forms on \(P^{G}\backslash\mathcal{A}\). They would have linear poles (by contrast with the canonical forms studied here).

### The boundary and open locus

**Definition 8.5**.: Let \(\partial I_{g}^{\mathcal{B}}\) denote the full subcategory of \(I_{g}^{\mathcal{B}}\) whose objects are sequences of graphs \((\gamma_{1},\ldots,\gamma_{n})\) where \(n\geq 2\). Denote the restriction of \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) to this category by \[\partial L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}:\partial I_{g}^{\mathcal{B}}\longrightarrow\mathrm{BLC}_{\mathbb{Q}}\.\] We shall call it the boundary locus, or exceptional locus, of \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\). The open locus \(|L\mathcal{M}_{g}^{\circ,\mathrm{trop}}|\) embeds canonically into both \(|L\mathcal{M}_{g}^{\mathrm{trop}}|\) and \(|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}|\).

**Proposition 8.6**.: _There is a morphism of \(\mathcal{PC}_{\mathbb{Q}}\)-complexes_ \[\partial L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\longrightarrow\partial L\mathcal{M}_{g}^{\mathrm{trop}}\] _given by the pair \((\phi^{\mathrm{opp}},\pi^{\mathcal{B}})\), where \(\phi:\partial I_{g}^{\mathcal{B}}\rightarrow\partial I_{g}\) is the restriction of the collapsing functor, and \(\pi^{\mathcal{B}}\) is the canonical blow-down map. In addition, there is a canonical embedding_ \[\left|L\mathcal{M}_{g}^{\circ,\mathrm{trop}}\right|\stackrel{{\sim}}{{\longrightarrow}}\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\setminus\left|\partial L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right| \tag{8.3}\] _whose inverse is the blow-down \(\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\setminus\left|\partial L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\stackrel{{\sim}}{{\longrightarrow}}\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\setminus\left|\partial L\mathcal{M}_{g}^{\mathrm{trop}}\right|\)._

Proof.: The first part is immediate from the fact that the exceptional locus of \(\mathcal{F}^{\mathcal{B}}(G)\rightarrow\mathcal{F}(G)\) corresponds to sequences (7.3) of length \(\geqslant 2\). To prove (8.3), first define the full subcategory \(I_{g,w=0}\) of \(I_{g}\) consisting of graphs of weight \(0\), and consider the functor: \[\sigma\cap\mathcal{U}:I_{g,w=0}^{\mathrm{opp}} \longrightarrow \mathcal{T}op\] \[G \mapsto (\sigma\cap\mathcal{U})_{G}=\sigma_{G}\backslash\left(\sigma_{G}\cap(X_{G})(\mathbb{R})\right)\.\] Note that \((\sigma\cap\mathcal{U})_{G}\) is contained in \(\sigma_{G}\), but strictly contains \(\overset{\circ}{\sigma}_{G}\).
It follows from proposition 6.7 and the fact that \((\sigma\cap\mathcal{U})_{G}\) is the empty set if \(w(G)>0\) that: \[\left|\sigma\cap\mathcal{U}\right|=\lim_{G\in I_{g,w=0}^{\mathrm{opp}}}\sigma_{G}\backslash\left(\sigma_{G}\cap(X_{G})(\mathbb{R})\right)=\left|L\mathcal{M}_{g}^{\circ,\mathrm{trop}}\right|\.\] Now consider the functor \(\sigma\backslash\mathcal{E}:I_{g}^{\mathcal{B}}\rightarrow\mathcal{T}op\) which sends sequences (7.3) of length \(>1\) to the empty set, and maps singletons \((G)\) to \[(G)\ \mapsto\ (\sigma\backslash\mathcal{E})_{G}=\sigma_{G}^{\mathcal{B}}\setminus\ \left(\sigma_{G}^{\mathcal{B}}\cap\mathcal{E}_{G}(\mathbb{R})\right)\] where \(\mathcal{E}_{G}\subset P_{G}^{\mathcal{B}}\) is the exceptional divisor of \(\pi_{\mathcal{B}}:P_{G}^{\mathcal{B}}\rightarrow\mathbb{P}^{E_{G}}\). The blow-down morphism defines a functorial isomorphism of algebraic varieties, and of topological spaces: \[\pi_{G}^{\mathcal{B}}\ :\ P_{G}^{\mathcal{B}}\backslash\left(\mathcal{E}_{G}\cup\widetilde{\mathcal{X}}_{G}\right) \stackrel{{\sim}}{{\longrightarrow}} \mathbb{P}^{E_{G}}\backslash X_{G}\] \[\sigma_{G}^{\mathcal{B}}\setminus\ \left(\sigma_{G}^{\mathcal{B}}\cap\mathcal{E}_{G}(\mathbb{R})\right) \stackrel{{\sim}}{{\longrightarrow}} \sigma_{G}\backslash(\sigma_{G}\cap X_{G})=(\sigma\cap\mathcal{U})_{G}\] The second line follows from restriction of the first, since \(\widetilde{\mathcal{X}}_{G}(\mathbb{R})\) does not meet \(\sigma_{G}^{\mathcal{B}}\) by (8.2). Let \(j:\sigma_{G}\backslash(\sigma_{G}\cap X_{G})\stackrel{{\sim}}{{\rightarrow}}\sigma_{G}^{\mathcal{B}}\setminus\ \left(\sigma_{G}^{\mathcal{B}}\cap\mathcal{E}_{G}(\mathbb{R})\right)\) denote its inverse. To define the continuous map (8.3) we define a pair consisting of a functor and natural transformation, and take its limit. The natural transformation is given by \(j\); the functor is given by \[\iota:I_{g,w=0}\longrightarrow I_{g}^{\mathcal{B}}\qquad\text{where}\qquad\iota:G\mapsto(G)\,\] and \((G)\) denotes a sequence of graphs of length one. The isomorphism \(j\), which is functorial, thus defines a natural transformation from \(\sigma\cap\mathcal{U}\) to \[(\sigma\backslash\mathcal{E})\circ\iota:I_{g,w=0}\stackrel{{\iota}}{{\longrightarrow}}I_{g}^{\mathcal{B}}\stackrel{{\sigma\backslash\mathcal{E}}}{{\longrightarrow}}\mathcal{T}op\] It induces a continuous map \[\left|\sigma\cap\mathcal{U}\right|\longrightarrow\left|\sigma\backslash\mathcal{E}\right|=\lim_{G\in(I_{g}^{\mathcal{B}})^{\mathrm{opp}}}\sigma_{G}^{\mathcal{B}}\setminus\ \left(\sigma_{G}^{\mathcal{B}}\cap\mathcal{E}_{G}(\mathbb{R})\right)\] which is exactly (8.3). The last statement follows from the fact that the inverse of the continuous map \(j\) is, by definition, the restriction of the blow-down map \(\pi^{\mathcal{B}}\).

The key point in the previous proposition is that simplices which are glued together in the tropical moduli space, and which are not contained in the graph hypersurface locus, continue to be glued together after blowing-up (i.e., if \(\mathcal{F}(\gamma)\) is a common face of both \(\mathcal{F}(G_{1})\) and \(\mathcal{F}(G_{2})\) and \(w(\gamma)>0\), then \(\mathcal{F}^{\mathcal{B}}(\gamma)\) is a common face of both \(\mathcal{F}^{\mathcal{B}}(G_{1})\) and \(\mathcal{F}^{\mathcal{B}}(G_{2})\)). This is expressed by the fact that \(\iota\) is a functor, using the fact that core graphs are intrinsic.
### The face complex associated to \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\)

Using the general definition 3.1, we may write down the face complex associated to \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\).

**Definition 8.7**.: Let \(\mathcal{GC}_{0}^{\mathcal{B}}\) denote the \(\mathbb{Q}\)-vector space generated by symbols \(\llbracket\underline{\Gamma},\omega\rrbracket\) \[\text{where}\quad\ \underline{\Gamma}=(\gamma_{1},\ldots,\gamma_{n})\,\quad\text{ and }\quad\omega=\omega_{\gamma_{1}}\wedge\omega_{\gamma_{2}/\gamma_{1}}\wedge\ldots\wedge\omega_{\gamma_{n}/\gamma_{n-1}}\,\] where \(\gamma_{1}\subsetneq\ldots\subsetneq\gamma_{n}\) is a strict nested sequence of graphs in which \(\gamma_{i}\) is core for \(i<n\) and \(\gamma_{n}\) is stable (definition 7.2), and \(\omega_{\gamma_{i+1}/\gamma_{i}}\) is an orientation on the quotient \(\gamma_{i+1}/\gamma_{i}\), with relations: \[\llbracket\underline{\Gamma},-\omega\rrbracket=-\llbracket\underline{\Gamma},\omega\rrbracket\qquad\text{ and }\qquad\llbracket\underline{\Gamma},\omega\rrbracket=\llbracket\underline{\Gamma}^{\prime},\omega^{\prime}\rrbracket\] whenever \(\underline{\Gamma}\cong\underline{\Gamma}^{\prime}\) is an isomorphism of nested sequences \(f:\gamma_{n}\cong\gamma_{n}^{\prime}\) such that \(f(\gamma_{i})=\gamma_{i}^{\prime}\), and \(\omega^{\prime}=f(\omega)\). It is bigraded by genus and edge number. The differential \[d:\mathcal{GC}_{0}^{\mathcal{B}}\longrightarrow\mathcal{GC}_{0}^{\mathcal{B}}\] has two components: \(d=d^{i}+d^{e}\), which we call the internal and exceptional differentials, respectively. The internal differential is defined by edge contraction (7.4): \[d^{i}\big{[}(\gamma_{1},\ldots,\gamma_{n}),\omega\big{]}=\sum_{e}(-1)^{j}\big{[}(\gamma_{1},\ldots,\gamma_{n})/e,\omega^{\prime}\big{]}\] where the sum is over all admissible edges \(e\), \(j\) is the unique index such that \(e\in E_{\gamma_{j}/\gamma_{j-1}}\), and \[\omega^{\prime}=\varepsilon\,\omega_{\gamma_{1}}\wedge\ldots\wedge\omega_{\gamma_{j-1}/\gamma_{j-2}}\wedge\omega^{\prime}_{\gamma_{j}/\gamma_{j-1}}\wedge\ldots\wedge\omega_{\gamma_{n}/\gamma_{n-1}}\] where \(\varepsilon=\pm 1\) is defined by \(\omega^{\prime}_{\gamma_{j}/\gamma_{j-1}}=\varepsilon\,e\wedge\omega_{\gamma_{j}/(\gamma_{j-1}\cup e)}\). The exceptional differential is: \[d^{e}\big{[}(\gamma_{1},\ldots,\gamma_{n}),\omega\big{]}=\sum_{\gamma}(-1)^{i}\big{[}(\gamma_{1},\ldots,\gamma_{i-1},\gamma,\gamma_{i},\ldots,\gamma_{n}),\omega^{\prime}\big{]}\] where the sum is over all refinements for \(1\leqslant i\leqslant n\), where \[\omega^{\prime}=\varepsilon\,\omega_{\gamma_{1}}\wedge\ldots\wedge\omega_{\gamma_{i-1}/\gamma_{i-2}}\wedge\omega^{\prime}_{\gamma/\gamma_{i-1}}\wedge\omega^{\prime}_{\gamma_{i}/\gamma}\wedge\ldots\wedge\omega_{\gamma_{n}/\gamma_{n-1}}\] and where \(\varepsilon=\pm 1\) is defined by \(\omega_{\gamma_{i}/\gamma_{i-1}}=\varepsilon\omega^{\prime}_{\gamma/\gamma_{i-1}}\wedge\omega^{\prime}_{\gamma_{i}/\gamma}\). The complex \(\mathcal{GC}_{0}^{\mathcal{B}}\) is filtered by the length \(n\) of sequences. Let \(F_{k}\mathcal{GC}_{0}^{\mathcal{B}}\) denote the subcomplex whose generators \(\big{[}(\gamma_{1},\ldots,\gamma_{n}),\omega\big{]}\) have \(n>k\) components.
**Proposition 8.8**.: _If we write \(\mathcal{GC}_{0}^{\mathcal{CB}}=F_{1}\mathcal{GC}_{0}^{\mathcal{B}}\) then_ \[H_{n}(\mathcal{GC}_{0}^{\mathcal{B}})\cong H_{n-1}(\big{|}L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\big{|})\quad\text{ and }\quad H_{n}(\mathcal{GC}_{0}^{\mathcal{CB}})\cong H_{n-1}(\big{|}\partial L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\big{|})\.\] _Furthermore, the homology of the usual commutative even graph complex satisfies \(H_{n}(\mathcal{GC}_{0})=H_{n}(\mathcal{GC}_{0}^{\mathcal{B}},\mathcal{GC}_{0}^{\mathcal{CB}})\) and fits into a long exact sequence_ \[\ldots\longrightarrow H_{n}(\mathcal{GC}_{0}^{\mathcal{CB}})\longrightarrow H_{n}(\mathcal{GC}_{0}^{\mathcal{B}})\longrightarrow H_{n}(\mathcal{GC}_{0})\stackrel{{ d^{n}}}{{\longrightarrow}}H_{n-1}(\mathcal{GC}_{0}^{\mathcal{CB}})\longrightarrow\ldots\]

Proof.: This follows from theorem 3.2, the relative long exact homology sequence, and (8.3) to identify \(H_{n}(\big{|}L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\big{|},\big{|}\partial L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\big{|})=H_{n}(\big{|}L\mathcal{M}_{g}^{\mathrm{trop}}\big{|},\big{|}\partial L\mathcal{M}_{g}^{\mathrm{trop}}\big{|})\).

## 9. Polyhedra in spaces of quadratic forms

We now turn to the study of polyhedra whose vertices are positive semi-definite quadratic forms with rational null spaces.

### Positive semi-definite matrices

Let \(\mathcal{P}_{g}\) denote the space of positive definite \(g\times g\) real symmetric matrices. It may be identified, via the map \(X\mapsto X^{T}X\), with \(\mathrm{GL}_{g}(\mathbb{R})/O_{g}(\mathbb{R})\), and admits a right action \(M\mapsto h^{T}Mh\) for \(h\in\mathrm{GL}_{g}(\mathbb{R})\). We write \(L\mathcal{P}_{g}=\mathcal{P}_{g}/\mathbb{R}_{>0}^{\times}\). The space of real non-zero symmetric matrices up to scalar multiplication may be identified with the real points of the projective space \(\mathbb{P}^{d_{g}-1}\), where \(d_{g}=\binom{g+1}{2}\), and hence \(L\mathcal{P}_{g}\subset\mathbb{P}^{d_{g}-1}(\mathbb{R})\). A choice of homogeneous coordinates on \(\mathbb{P}^{d_{g}-1}\) is given by the matrix entries on or above the diagonal. Since the determinant is a homogeneous function of these coordinates, its vanishing locus defines a hypersurface \[\mathrm{Det}\ \subset\ \mathbb{P}^{d_{g}-1}\] which satisfies \(L\mathcal{P}_{g}\cap\mathrm{Det}(\mathbb{R})=\emptyset\). The linear action of \(\mathrm{GL}_{g}\) on \(\mathbb{P}^{d_{g}-1}\), which corresponds to the action \(M\mapsto h^{T}Mh\) on symmetric matrices, preserves \(\mathrm{Det}\) and \(L\mathcal{P}_{g}\subset\mathbb{P}^{d_{g}-1}(\mathbb{R})\).

### Positive polyhedra in the space of symmetric matrices

It is convenient to reformulate the above in a coordinate-free manner. Consider a vector space \(V\) of dimension \(g\) defined over a field \(k\subset\mathbb{R}\). Let us denote by \[\mathcal{Q}(V)=\left(\mathrm{Sym}^{2}\,V\right)^{\vee}\] the \(k\)-vector space of symmetric bilinear (i.e., quadratic) forms \[Q:V\otimes_{k}V\longrightarrow k\.\] Such a quadratic form may be viewed as a linear map \(Q:V\to V^{\vee}\) which is self-dual, i.e., \(Q=Q^{T}\). Consequently, \(\mathcal{Q}\) defines a contravariant functor from the category of vector spaces to itself: for any linear map \(V\to W\), there is a natural map \(\mathcal{Q}(W)\to\mathcal{Q}(V)\). It sends surjective maps to injective maps, and vice-versa.
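In the smallest interesting case \(g=2\), all of this can be made completely explicit: writing a symmetric matrix as \(\left(\begin{smallmatrix}a&b\\ b&c\end{smallmatrix}\right)\) gives homogeneous coordinates \([a:b:c]\) on \(\mathbb{P}^{d_{2}-1}=\mathbb{P}^{2}\), the hypersurface \(\mathrm{Det}\) is the smooth conic \(ac-b^{2}=0\), and \(L\mathcal{P}_{2}\) is the open region \(ac-b^{2}>0\) which it bounds (the sign of \(ac-b^{2}\) is well-defined on \(\mathbb{P}^{2}(\mathbb{R})\) since it is quadratic in the coordinates). The points of the conic itself are precisely the classes of the rank-one forms, which will reappear below as the vertices of the polyhedra under consideration.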
Given an isomorphism \(V\cong k^{g}\) one may identify elements of \(\mathcal{Q}(V)\) with symmetric matrices with entries in \(k\). Recall that the null space \(\ker(Q)\) of a quadratic form is defined to be the kernel of \(Q:V\to V^{\vee}\). If \(Q\) is a positive semi-definite quadratic form, one has \[Q(v,v)=0\quad\Longleftrightarrow\quad v\in\ker(Q). \tag{9.1}\] Consider the projective space \(\mathbb{P}(\mathcal{Q}(V))\). It has distinguished linear subspaces: \[\mathbb{P}(\mathcal{Q}(V/K))\hookrightarrow\mathbb{P}(\mathcal{Q}(V))\qquad\text{ for every }K\subset V\.\] Such a subspace is contained in the determinant locus \(\mathrm{Det}\subset\mathbb{P}(\mathcal{Q}(V))\) if and only if \(K\neq 0\). Viewing \(\mathcal{Q}(V/K)\) as the subspace of \(\mathcal{Q}(V)\) of quadratic forms \(Q\) satisfying \(Q(k,v)=0\) for all \(k\in K\), and \(v\in V\), one deduces the formula: \[\mathcal{Q}\left(V/K_{1}\right)\cap\mathcal{Q}\left(V/K_{2}\right)=\mathcal{Q}\left(V/(K_{1}+K_{2})\right) \tag{9.2}\] which implies that \(\mathbb{P}\left(\mathcal{Q}\left(V/K_{1}\right)\right)\cap\mathbb{P}\left(\mathcal{Q}\left(V/K_{2}\right)\right)=\mathbb{P}\left(\mathcal{Q}\left(V/(K_{1}+K_{2})\right)\right)\). Consider the convex subsets \[\mathcal{Q}^{>0}(V)\ \subset\ \mathcal{Q}^{\geqslant 0}(V)\ \subset\ \mathcal{Q}(V)\] consisting of positive definite, and positive semi-definite, quadratic forms on \(V\).

**Definition 9.1**.: We shall say that a polyhedron \((\sigma,\mathcal{Q}(V))\) in the space of quadratic forms is _positive_, which we shall denote by \(\sigma\geqslant 0\), if its vertices lie in \(\mathcal{Q}^{\geqslant 0}(V)\), and hence \(\sigma\subset\mathcal{Q}^{\geqslant 0}(V)(\mathbb{R})\) is contained in the set of positive semi-definite quadratic forms. We shall call a polyhedron _strictly positive_, denoted by \(\sigma>0\), if \(\sigma\) is positive, and if in addition it meets the interior: \(\sigma\cap\mathcal{Q}^{>0}(V)\neq\emptyset\). In other words, \(\sigma\) is strictly positive if it contains at least one positive definite quadratic form. Positive polyhedra meet the determinant locus in a specific manner.

**Lemma 9.2**.: _Let \(\sigma\) be a positive polyhedron in \(\mathcal{Q}(V)\) and let \(\sigma_{F}\) be a face of \(\sigma\) of dimension \(>0\). The following are equivalent:_

_(i) The face \(\sigma_{F}\) is contained in the determinant locus \(\operatorname{Det}(\mathbb{R})\)._

_(ii) There is a point in the interior of \(\sigma_{F}\) which lies in \(\operatorname{Det}(\mathbb{R})\), i.e.,_ \[\overset{\circ}{\sigma}_{F}\cap\operatorname{Det}(\mathbb{R})\neq\emptyset\]

_(iii) There is a non-zero linear subspace \(0\neq K\subset V\) such that_ \[\sigma_{F}\ \subset\ \mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})\.\]

Proof.: Clearly \((i)\) implies \((ii)\) and \((iii)\) implies \((i)\) since \(\mathbb{P}(\mathcal{Q}(V/K))\subset\operatorname{Det}\). It suffices to prove that \((ii)\) implies \((iii)\). Let us write \(\widehat{\sigma}_{F}=\{\sum_{i=1}^{n}\lambda_{i}Q_{i}:\lambda_{i}\in\mathbb{R}_{\geqslant 0}\}\) where the \(Q_{i}\) are non-zero positive semi-definite quadratic forms on \(V\). If we assume \((ii)\) then there exist \(\lambda_{i}>0\) such that \(\det\big{(}\sum_{i=1}^{n}\lambda_{i}Q_{i}\big{)}=0\), since every interior point of \(\sigma_{F}\) admits a (non-unique) representation as a linear combination of the \(Q_{i}\) with strictly positive coefficients. Therefore there is a non-zero vector \(x\in V\) such that \(x\in\ker\big{(}\sum_{i=1}^{n}\lambda_{i}Q_{i}\big{)}\).
This implies in particular that \(\sum_{i=1}^{n}\lambda_{i}Q_{i}(x,x)=0\). Since \(Q_{i}(x,x)\geqslant 0\) we deduce that \(Q_{i}(x,x)=0\) for all \(i=1,\dots,n\). By (9.1), \(x\in\ker Q_{i}\) for all \(i\). If we set \(K_{\sigma_{F}}=\bigcap_{i=1}^{n}\ker(Q_{i})\), then \(x\in K_{\sigma_{F}}\), and in particular \(K_{\sigma_{F}}\neq 0\). Property \((iii)\) holds on setting \(K=K_{\sigma_{F}}\).

It follows that if \(\sigma\geqslant 0\) is a positive polyhedron, then it is strictly positive if and only if its interior does not meet the determinant locus: \[\sigma>0\quad\Longleftrightarrow\quad\overset{\circ}{\sigma}\cap\operatorname{Det}(\mathbb{R})=\emptyset. \tag{9.3}\]

**Definition 9.3**.: For any positive polyhedron \(\sigma\geqslant 0\) in \(\mathbb{P}(\mathcal{Q}(V))\), we denote by \(K_{\sigma}\subset V\) the common null space of its vertices. It follows from (9.2) that \(K_{\sigma}\) is the unique largest subspace of \(V\) such that \[\sigma\ \subset\ \mathbb{P}(\mathcal{Q}(V/K_{\sigma}))(\mathbb{R})\.\] In particular, \(\sigma\) meets the strictly positive locus, i.e., \(\sigma\cap\mathcal{Q}^{>0}(V/K_{\sigma})\neq\emptyset\), and thus the polyhedron \((\sigma,\mathcal{Q}(V/K_{\sigma}))\) is strictly positive.

**Lemma 9.4**.: _Let \(\sigma\geqslant 0\) be a positive polyhedron. For any subspace \(K\subset V\),_ \[\sigma\cap\mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})\] _is either empty, or is a face of \(\sigma\)._

Proof.: For every \(v\in K\), consider the linear map \(f_{v}:\mathcal{Q}(V)\to k\) defined by \(Q\mapsto Q(v,v)\). Its kernel is \(\mathcal{Q}(V/kv)\), which contains \(\mathcal{Q}(V/K)\) by definition. The linear form \(f_{v}\) is non-negative on the subspace \(\mathcal{Q}^{\geqslant 0}(V)\subset\mathcal{Q}(V)\). It follows from (9.2) that, for any vectors \(v_{1},\dots,v_{d}\) which span \(K\), one has \(\mathcal{Q}(V/K)=\bigcap_{i=1}^{d}\mathcal{Q}(V/kv_{i})=\bigcap_{i=1}^{d}H_{i}\), where \(H_{i}=\ker(f_{v_{i}})\). Since \(f_{v_{i}}\) is non-negative on \(\sigma\), the set \(\sigma\cap\mathbb{P}(H_{i})\) is either empty, or a face of \(\sigma\), and \[\sigma\cap\mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})=\bigcap_{i=1}^{d}\sigma\cap\mathbb{P}(H_{i})(\mathbb{R})\] is either empty or a non-empty intersection of faces of \(\sigma\), and hence also a face.

The previous lemma has the important consequence that in defining \(L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}\) we shall only need to blow up linear subspaces of \(\mathbb{P}(\mathcal{Q}(V))\) of the special form \(\mathbb{P}(\mathcal{Q}(V/K))\).

**Definition 9.5**.: For any face \(\sigma_{F}\) of \(\sigma\), define its _essential envelope_ to be the face \[\sigma_{F}^{\operatorname{ess}}=\left(\sigma\cap\mathbb{P}(\mathcal{Q}(V/K_{\sigma_{F}}))(\mathbb{R})\right)\.\] It satisfies \(\sigma_{F}\subset\sigma_{F}^{\operatorname{ess}}\). We call a face \(\sigma_{F}\) _essential_ if \(\sigma_{F}=\sigma_{F}^{\operatorname{ess}}\). If \(\sigma_{F}>0\) is strictly positive, then \(K_{\sigma_{F}}=0\) and one has \(\sigma_{F}^{\operatorname{ess}}=\sigma\). In general, a face \(\sigma_{F}\) of \(\sigma_{Q}\) (whether essential or not) will not be Zariski dense in \(\mathbb{P}^{Q}\cap\mathbb{P}(\mathcal{Q}(V/K_{\sigma_{F}}))\). Proposition 12.5 implies that for Feynman polytopes, the essential envelope of a face \(\sigma_{G/\gamma}\) indexed by a subgraph \(\gamma\subset G\) is the face \(\sigma_{G/\gamma^{\operatorname{core}}}\) where \(\gamma^{\operatorname{core}}\subset\gamma\) is the maximal core subgraph of \(\gamma\).
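As a concrete illustration, take \(V=\mathbb{Q}^{2}\) and let \(\sigma\) be the triangle in \(\mathbb{P}(\mathcal{Q}(V))(\mathbb{R})\) whose vertices are the classes of the rank-one forms \[Q_{1}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\,\qquad Q_{2}=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\,\qquad Q_{3}=\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}\.\] It is strictly positive, since \(Q_{1}+Q_{2}\) is positive definite. Each vertex is an essential face: for instance \(K_{\{Q_{1}\}}=\ker Q_{1}=\langle(0,1)\rangle\), and one checks that \(\sigma\cap\mathbb{P}(\mathcal{Q}(V/K_{\{Q_{1}\}}))(\mathbb{R})=\{Q_{1}\}\). By contrast, each edge \(\sigma_{F}\) is strictly positive, since the null spaces of its two vertices intersect trivially (for the edge spanned by \(Q_{1},Q_{3}\), for example, \(\det(\lambda Q_{1}+\mu Q_{3})=\lambda\mu>0\) for \(\lambda,\mu>0\)); hence \(K_{\sigma_{F}}=0\) and \(\sigma_{F}^{\operatorname{ess}}=\sigma\), so no edge is essential.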
**Lemma 9.6**.: _The intersection of an essential face of \(\sigma\) with any other face \(\sigma_{Q^{\prime}}\) of \(\sigma\), is either empty or another essential face of \(\sigma_{Q^{\prime}}\)._

Proof.: The essential faces of \(\sigma\) are precisely the sets \(\sigma\cap\mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})\) where \(K\subset V\) is a linear subspace. The intersection of such a face with \(\sigma_{Q^{\prime}}\) is \(\sigma_{Q^{\prime}}\cap\mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})\), which is therefore also essential or empty.

### Collapsing the complement of the determinant in the normal bundle

Restriction of quadratic forms to any subspace \(K\subset V\) defines a linear map: \[\mathcal{Q}(V)/\mathcal{Q}(V/K)\longrightarrow\mathcal{Q}(K). \tag{9.4}\] It does not induce a map on the corresponding projective spaces since it is not injective in general. To see this, choose a complementary space \(V\cong K\oplus C\) where \(C\cong V/K\). The space \(\mathcal{Q}(V)/\mathcal{Q}(V/K)\) may be represented by symmetric matrices in block matrix form, \[Q=\left(\begin{array}{c|c}Q_{0}&Q_{1}\\ \hline Q_{1}^{T}&0\end{array}\right)\, \tag{9.5}\] which implies that \(\mathcal{Q}(V)/\mathcal{Q}(V/K)\) is isomorphic to the product of \(\mathcal{Q}(K)\) with the affine space underlying \(\operatorname{Hom}(V/K,K)\). The map \(\mathcal{Q}(V)/\mathcal{Q}(V/K)\rightarrow\mathcal{Q}(K)\) sends \(Q\mapsto Q_{0}\). The determinant function \(\det:\mathcal{Q}(K)\to k\) defines a homogeneous polynomial \[Q\mapsto\det(Q\big{|}_{K})\ :\ \mathcal{Q}(V)/\mathcal{Q}(V/K)\longrightarrow k\.\] In the coordinates (9.5), \(\det(Q\big{|}_{K})\) is simply \(\det(Q_{0})\). Denote its zero locus by \[\operatorname{Det}\big{|}_{K}\ \subset\ \mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K)\right). \tag{9.6}\]

**Lemma 9.7**.: _Restriction of quadratic forms to \(K\) gives a well-defined projection_ \[\pi_{K}\ :\ \mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K)\right)\ \backslash\ \operatorname{Det}\big{|}_{K}\longrightarrow\mathbb{P}\left(\mathcal{Q}(K)\right)\ \backslash\ \operatorname{Det}_{K} \tag{9.7}\] _which is a fibration in affine spaces \(\operatorname{Hom}(V/K,K)\cong\mathbb{A}^{\dim K\cdot\dim V/K}\). It is functorial, i.e., for any \(h\in\operatorname{GL}(V)\), there is a commutative diagram:_ \[\begin{array}{ccc}\mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K)\right)\ \backslash\ \operatorname{Det}\big{|}_{K}&\stackrel{{\pi_{K}}}{{\longrightarrow}}&\mathbb{P}\left(\mathcal{Q}(K)\right)\ \backslash\ \operatorname{Det}_{K}\\ \downarrow&&\downarrow\\ \mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K^{\prime})\right)\ \backslash\ \operatorname{Det}\big{|}_{K^{\prime}}&\stackrel{{\pi_{K^{\prime}}}}{{\longrightarrow}}&\mathbb{P}\left(\mathcal{Q}(K^{\prime})\right)\ \backslash\ \operatorname{Det}_{K^{\prime}}\end{array}\] _where the vertical maps are isomorphisms \(Q\mapsto h^{T}Qh\) and \(K^{\prime}=h^{T}Kh\)._

Proof.: A choice of decomposition (9.5) defines projective coordinates on \(\mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K)\right)\). The complement of \(\operatorname{Det}\big{|}_{K}\) consists of projective classes of matrices with \(\det(Q_{0})\neq 0\). In particular, \(Q_{0}\neq 0\) and so the map \(Q\mapsto Q_{0}\) is well-defined on projective spaces. Its fibre is isomorphic to the space of matrices \(Q_{1}\), which is nothing other than the vector space \(\operatorname{Hom}(V/K,K)\), viewed as the \(k\)-points of an affine space of the same dimension.
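For instance, if \(\dim V=2\) and \(\dim K=1\) then, in the coordinates (9.5), a class in \(\mathbb{P}(\mathcal{Q}(V)/\mathcal{Q}(V/K))\cong\mathbb{P}^{1}\) is represented by a matrix \(\left(\begin{smallmatrix}a&b\\ b&0\end{smallmatrix}\right)\), i.e., by homogeneous coordinates \([a:b]\), and \(\operatorname{Det}\big{|}_{K}\) is the single point \([0:1]\). Since \(\mathcal{Q}(K)\) is one-dimensional, \(\mathbb{P}(\mathcal{Q}(K))\backslash\operatorname{Det}_{K}\) is a single point, and \(\pi_{K}\) realises \(\mathbb{P}^{1}\backslash\{[0:1]\}\cong\mathbb{A}^{1}\), via \([a:b]\mapsto b/a\), as the affine line \(\operatorname{Hom}(V/K,K)\), in accordance with lemma 9.7.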
Note that the ambiguity in the choice of decomposition \(V\cong K\oplus V/K\) is the set of splittings of the exact sequence \(0\to K\to V\to V/K\to 0\), which is a torsor over \(\operatorname{Hom}(V/K,K)\).

### Strict positivity of normal faces

For simplicity of notation, let us write \[\sigma/\sigma_{F}=\sigma_{/K_{\sigma_{F}}} \tag{9.8}\] for the normal polyhedron (definition 4.8) of \(\sigma\) relative to the null space \(K_{\sigma_{F}}\) of a face \(\sigma_{F}\).

**Lemma 9.8**.: _Let \(\sigma\) be strictly positive and let \(\sigma_{F}\) be a face of \(\sigma\). In the product_ \[\sigma_{F}\times(\sigma/\sigma_{F})\ \subset\ \mathbb{P}\left(\mathcal{Q}(V/K_{\sigma_{F}})\right)\ \times\ \mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K_{\sigma_{F}})\right)\,\] _both \(\sigma_{F}\) and \(\sigma/\sigma_{F}\) are strictly positive (definition 9.1). In the latter case, this means that the interior of the polyhedron \(\sigma/\sigma_{F}\) does not meet the determinant locus:_ \[\overset{\circ}{(\sigma/\sigma_{F})}\subset\mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K_{\sigma_{F}})\right)\ \setminus\operatorname{Det}\big{|}_{K_{\sigma_{F}}}\] _or equivalently, that the image \(\pi_{K_{\sigma_{F}}}(\sigma/\sigma_{F})\) is a positive polyhedron in \(\mathbb{P}(\mathcal{Q}(K_{\sigma_{F}}))\)._

Proof.: The vertices of \(\sigma\) are projective classes of positive semi-definite quadratic forms \(Q_{1},\dots,Q_{n}\in\mathcal{Q}(V)\). The vertices of the face \(\sigma_{F}\) correspond to a subset of them: \(\{Q_{i},i\in I\}\), where \(I\subset\{1,\dots,n\}\). By definition, \[K_{\sigma_{F}}=\bigcap_{i\in I}\ker Q_{i}\, \tag{9.9}\] which implies that in any choice of decomposition \(V=K_{\sigma_{F}}\oplus C\), where \(C\cong V/K_{\sigma_{F}}\), the matrices \(Q_{i}\) have the block matrix form \[Q_{i}=\left(\begin{array}{c|c}0&0\\ \hline 0&\overline{Q_{i}}\end{array}\right)\quad\text{ if and only if }\quad i\in I\,\] where the \(\overline{Q}_{i}\) are the restrictions of the \(Q_{i}\) to \(C\cong V/K_{\sigma_{F}}\), and are positive semi-definite. Suppose that \(\sigma_{F}\) is not strictly positive. Then \(\sigma_{F}\) is contained in \(\operatorname{Det}_{V/K_{\sigma_{F}}}(\mathbb{R})\subset\mathbb{P}(\mathcal{Q}(V/K_{\sigma_{F}}))(\mathbb{R})\). By lemma 9.2, there is a subspace \(0\neq K^{\prime}\subset V/K_{\sigma_{F}}\) which is contained in \(\bigcap_{i}\ker\overline{Q}_{i}\). But this implies that \(\bigcap_{i}\ker Q_{i}\) is strictly larger than \(K_{\sigma_{F}}\), contradicting (9.9). The polyhedron \(\sigma/\sigma_{F}\) has vertices given by the projective classes of the \(Q_{j}\), \(j\notin I\), which are non-zero in the quotient \(\mathcal{Q}(V)/\mathcal{Q}(V/K_{\sigma_{F}})\) and have block matrix representatives \[Q_{j}=\left(\begin{array}{c|c}R_{j}&P_{j}\\ \hline P_{j}^{T}&0\end{array}\right)\quad\text{ for all }j\notin I\.\] If \(\sigma/\sigma_{F}\) were contained in \(\operatorname{Det}\big{|}_{K_{\sigma_{F}}}(\mathbb{R})\), then by the argument of lemma 9.2 applied to the matrices \(R_{j}\), for \(j\notin I\), there would exist \(0\neq x\in K_{\sigma_{F}}\) such that \(x\in\ker(R_{j})\subset K_{\sigma_{F}}\) for all \(j\notin I\). In this case, the image \(y\) of \(x\) in \(V\) satisfies \(Q_{j}(y,y)=R_{j}(x,x)=0\) and hence \(y\in\ker(Q_{j})\) for all \(j\notin I\). Since \(y\in\ker(Q_{i})\) for all \(i\in I\) by (9.9), we deduce that \(y\) lies in \(\ker Q_{i}\) for all \(i=1,\dots,n\), contradicting the strict positivity of \(\sigma\) by lemma 9.2.
_Remark 9.9_.: This lemma easily extends to the case when \(\sigma\) is a strictly positive polyhedron in the space \(\mathbb{P}(\mathcal{Q}(V)/\mathcal{Q}(V/K))\). The statement is the following: in the product of polyhedra \[\sigma_{F}\times(\sigma/\sigma_{F})\subset\mathbb{P}\left(\mathcal{Q}(V/K_{\sigma_{F}})/\mathcal{Q}(V/K)\right)\times\mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K_{\sigma_{F}})\right)\] both \(\sigma_{F}\) and \(\sigma/\sigma_{F}\) are strictly positive. The proof is identical: one only needs to consider symmetric matrices with a number of initial rows and columns which are identically zero.

### Blow-ups and the determinant locus

The determinant locus is well-behaved with respect to blowing up linear subspaces of the form \(\mathbb{P}(\mathcal{Q}(V/K))\).

**Proposition 9.10**.: _Let \(0\neq K\subset V\) be a subspace and let_ \[\pi:\mathbb{P}\longrightarrow\mathbb{P}(\mathcal{Q}(V))\] _denote the blow-up of \(\mathbb{P}(\mathcal{Q}(V))\) along \(\mathbb{P}(\mathcal{Q}(V/K))\). The exceptional divisor \(\mathcal{E}\) may be canonically identified with the projectivised normal bundle of \(\mathbb{P}(\mathcal{Q}(V/K))\):_ \[\mathcal{E}\cong\mathbb{P}(\mathcal{Q}(V/K))\times\mathbb{P}(\mathcal{Q}(V)/\mathcal{Q}(V/K))\.\] _If \(\widetilde{\mathrm{Det}}_{V}\) denotes the strict transform of the determinant locus, we have_ \[\widetilde{\mathrm{Det}}_{V}\cap\mathcal{E}\ \cong\ \left(\mathrm{Det}_{V/K}\times\mathbb{P}(\mathcal{Q}(V)/\mathcal{Q}(V/K))\right)\ \cup\ \left(\mathbb{P}(\mathcal{Q}(V/K))\times\mathrm{Det}\big{|}_{K}\right)\,\] _where \(\mathrm{Det}\big{|}_{K}\) was defined in (9.6). In particular one has the product formula:_ \[\mathcal{E}\ \backslash\ \left(\mathcal{E}\cap\widetilde{\mathrm{Det}}_{V}\right)\ \cong\ \left(\mathbb{P}(\mathcal{Q}(V/K))\ \backslash\ \mathrm{Det}_{V/K}\right)\ \times\ \left(\mathbb{P}(\mathcal{Q}(V)/\mathcal{Q}(V/K))\ \backslash\ \mathrm{Det}\big{|}_{K}\right)\.\]

Proof.: Choose a complementary space to \(K\), i.e., \(V\cong K\oplus C\), and choose bases of \(K,C\). We can compute the blow-up in local affine coordinates (§5.2) as follows. A quadratic form \(Q\in\mathcal{Q}(V)\) may be written in the form \[Q=\begin{pmatrix}Q_{K}&P\\ P^{T}&Q_{C}\end{pmatrix} \tag{9.10}\] Let \(x_{ij}\) denote the \((i,j)^{\mathrm{th}}\) matrix coefficient. A choice of affine coordinates on \(\mathbb{P}(\mathcal{Q}(V))\) is given by the \(x_{ij}\) which lie on or above the diagonal \(i\leqslant j\), where one of the \(x_{ij}\) is set to \(1\). The locus \(\mathbb{P}(\mathcal{Q}(V/K))\) is represented by matrices such that \(Q_{K}=P=0\), i.e., the vanishing of \(x_{ij}\) for \(i\leqslant j\) such that \(i\leqslant\dim(K)\). In any such local affine coordinates (see §5.2) on this locus (i.e., \(x_{i_{0}j_{0}}=1\), where \(i_{0}\leqslant j_{0}\), \(i_{0}\leqslant\dim(K)\)) the blow-up is represented by the map: \[\begin{array}{rcl}s_{K}^{\bullet}:k[x_{ij},i\leqslant j]&\longrightarrow&k[z,x_{ij},i\leqslant j]\\ s_{K}^{\bullet}(x_{ij})&=&zx_{ij}\quad\text{ if }i\leqslant\dim(K)\\ s_{K}^{\bullet}(x_{ij})&=&x_{ij}\quad\text{ otherwise}\end{array}\] The exceptional divisor is defined by the equation \(z=0\). Applying the map \(s_{K}^{\bullet}\) to the entries of the matrix of a quadratic form (9.10), we obtain \[s_{K}^{\bullet}Q=\begin{pmatrix}zQ_{K}&zP\\ zP^{T}&Q_{C}\end{pmatrix}. \tag{9.11}\] It follows from the theory of Schur complements that \[\begin{pmatrix}zQ_{K}&zP\\ zP^{T}&Q_{C}\end{pmatrix}=\begin{pmatrix}I&0\\ P^{T}Q_{K}^{-1}&I\end{pmatrix}\begin{pmatrix}zQ_{K}&0\\ 0&Q_{C}-zP^{T}Q_{K}^{-1}P\end{pmatrix}\begin{pmatrix}I&Q_{K}^{-1}P\\ 0&I\end{pmatrix}\] in the ring of matrices with coefficients in \(k[z,x_{ij},\det(Q_{K})^{-1}]\). The determinants satisfy \(\det(s_{K}^{\bullet}Q)=\det(zQ_{K})\det(Q_{C}-zP^{T}Q_{K}^{-1}P)\). This implies that \[\det(s_{K}^{\bullet}Q)=z^{\dim K}\det(Q_{K})\det(Q_{C})\mod(z^{\dim K+1})\.\] It follows that the intersection of the strict transform of the locus \(\det(Q)=0\) with the exceptional divisor \(z=0\) is given by the equation \(\det(Q_{K})\det(Q_{C})=0\). The zero loci of \(\det(Q_{K})\) and \(\det(Q_{C})\) are \(\mathrm{Det}\big{|}_{K}\) and \(\mathrm{Det}_{V/K}\) respectively.

This result generalises several asymptotic factorisation theorems for graph polynomials [1, §2] simply by applying the proposition to a suitable graph Laplacian matrix.

## 10. Perfect cone compactification

Now consider the case when \(k=\mathbb{Q}\) and the vector space \(V=V_{\mathbb{Z}}\otimes\mathbb{Q}\) has a lattice \(V_{\mathbb{Z}}\). It is convenient to identify \(V_{\mathbb{Z}}\cong\mathbb{Z}^{g}\) for the purposes of the ensuing discussion.

### Rational closure of \(\mathcal{P}_{g}\)

The rational closure \(\mathcal{P}_{g}^{\mathrm{rt}}\) of the space of positive definite matrices \(\mathcal{P}_{g}\) is defined to be the space of positive semi-definite matrices \(M\) whose null space \(\ker(M)\) is defined over \(\mathbb{Q}\). A positive semi-definite matrix \(M\) has rational kernel if and only if there exists an element \(h\in\mathrm{GL}_{g}(\mathbb{Z})\) such that \[h^{T}Mh=\begin{pmatrix}M_{0}&0\\ 0&0\end{pmatrix}\] where \(M_{0}\) is positive definite, i.e., \(M_{0}\in\mathcal{P}_{g^{\prime}}\) for some \(g^{\prime}\leqslant g\). Recall from §9.1 that we identify the space of symmetric \(g\times g\) matrices with \(\mathbb{R}^{d_{g}}\), where \(d_{g}=\binom{g+1}{2}\). We have inclusions \(\mathcal{P}_{g}\subset\mathcal{P}_{g}^{\mathrm{rt}}\subset\mathbb{R}^{d_{g}}\) and \[L\mathcal{P}_{g}\subset L\mathcal{P}_{g}^{\mathrm{rt}}\subset\mathbb{P}^{d_{g}-1}(\mathbb{R})\] where the link of \(\mathcal{P}_{g}^{\mathrm{rt}}\) is defined by \(L\mathcal{P}_{g}^{\mathrm{rt}}=\left(\mathcal{P}_{g}^{\mathrm{rt}}\backslash\{0\}\right)/\mathbb{R}_{>0}^{\times}\). The group \(\mathrm{GL}_{g}(\mathbb{Z})\) preserves all three spaces. One has \(L\mathcal{P}_{g}^{\mathrm{rt}}\ \backslash\ L\mathcal{P}_{g}\ \subset\ \mathrm{Det}(\mathbb{R})\).

_Remark 10.1_.: Lemma 9.2 and the discussion which follows it imply that a positive polyhedron \((\sigma,\mathcal{Q}(\mathbb{Q}^{g}))\) has the property that \(\sigma\) is contained in the space \(\mathcal{P}_{g}^{\mathrm{rt}}\), i.e., every point of \(\sigma\) defines a real symmetric matrix with rational kernel.

### Minimal vectors and polyhedral linear configurations

For any positive definite real quadratic form \(Q\) on \(\mathbb{R}^{g}\), denote its set of minimal vectors by \[M_{Q}=\{\lambda\in\mathbb{Z}^{g}\backslash\{0\}:Q(\lambda)\leq Q(\mu)\text{ for all }\mu\in\mathbb{Z}^{g}\backslash\{0\}\}\.\] The associated polyhedral cone is defined to be \[\widehat{\sigma}_{Q}=\mathbb{R}_{\geq 0}\langle\lambda\lambda^{T}\rangle_{\lambda\in M_{Q}}\subset\mathcal{P}_{g}^{\mathrm{rt}}\] where the \(\lambda\lambda^{T}\) are rank one quadratic forms in \(\mathcal{P}_{g}^{\mathrm{rt}}\).
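For example, for \(g=2\), the hexagonal form \(Q(x,y)=x^{2}+xy+y^{2}\) has minimal vectors \(M_{Q}=\{\pm(1,0),\pm(0,1),\pm(1,-1)\}\), so that \(\widehat{\sigma}_{Q}\) is spanned by the three rank-one forms \(\lambda\lambda^{T}\) which appeared in the example following definition 9.5, and its link \(\sigma_{Q}\) is that triangle. Since these three forms are linearly independent, \(\widehat{\sigma}_{Q}\) has dimension \(3=d_{2}\), so \(Q\) is perfect in the sense defined below; classically, every perfect binary form is \(\mathrm{GL}_{2}(\mathbb{Z})\)-equivalent to a scalar multiple of it.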
The cone \(\widehat{\sigma}_{Q}\) is strongly convex since this is already the case for \(\mathcal{P}_{g}^{\mathrm{rt}}\) (although \(\mathcal{P}_{g}^{\mathrm{rt}}\) is not itself a polyhedral cone). A positive definite quadratic form \(Q\) is called _perfect_ if \(\widehat{\sigma}_{Q}\) has maximal dimension \(d_{g}\). Equivalently, \(Q\) is uniquely determined up to a scalar multiple by its set of minimal vectors [11, 12].

**Definition 10.2**.: For any positive definite real quadratic form \(Q\) denote by \[\mathsf{c}_{Q}=(\mathbb{P}^{Q},L_{Q},\sigma_{Q}) \tag{10.1}\] the polyhedral linear configuration where \(\sigma_{Q}=L\widehat{\sigma}_{Q}=(\widehat{\sigma}_{Q}\backslash\{0\})/\mathbb{R}_{>0}^{\times}\) is the link of \(\widehat{\sigma}_{Q}\) in \(\mathbb{P}^{d_{g}-1}(\mathbb{R})\), \(L_{Q}=\bigcup_{i}L_{i}\) is the union of the Zariski closures of its facets, and \(\mathbb{P}^{Q}\subset\mathbb{P}^{d_{g}-1}\cong\mathbb{P}(\mathcal{Q}(V))\) is the linear subspace defined by the Zariski-closure of \(\sigma_{Q}\). The configuration \(\mathsf{c}_{Q}\) is thus an object of the category \(\mathrm{PLC}_{\mathbb{Q}}\). The form \(Q\) is perfect if and only if \(\mathbb{P}^{Q}=\mathbb{P}^{d_{g}-1}\), i.e., \(\sigma_{Q}\) is Zariski dense in the whole space of quadratic forms.

### Admissible decompositions

See [1, 10, 12, 13].

**Definition 10.3**.: A set \(\Sigma=\{\widehat{\sigma}_{i}\}\) of polyhedral cones \(\widehat{\sigma}_{i}\subset\mathcal{P}_{g}^{\mathrm{rt}}\) is called _admissible_ if it is stable under the action of \(\mathrm{GL}_{g}(\mathbb{Z})\), and such that:

* it covers \(\mathcal{P}_{g}^{\mathrm{rt}}\), i.e., \(\bigcup_{i}\widehat{\sigma}_{i}=\mathcal{P}_{g}^{\mathrm{rt}}\),
* if \(\widehat{\sigma}\in\Sigma\), then all the faces of \(\widehat{\sigma}\) are elements of \(\Sigma\),
* the intersection of two cones in \(\Sigma\) is a face of both cones,
* the set of \(\mathrm{GL}_{g}(\mathbb{Z})\)-orbits of \(\Sigma\) is finite.

**Theorem 10.4**.: _(Voronoi). The set of cones \(\{\widehat{\sigma}_{Q},Q\in\mathcal{P}_{g}\}\) is an admissible decomposition of \(\mathcal{P}_{g}^{\mathrm{rt}}\). In particular, every face of \(\widehat{\sigma}_{Q}\) is a cone \(\widehat{\sigma}_{Q^{\prime}}\) for some other quadratic form \(Q^{\prime}\). Furthermore, every cone \(\widehat{\sigma}_{Q}\) is a face of a perfect cone._

Let us say that two quadratic forms \(Q,Q^{\prime}\) are equivalent if their sets of minimal vectors (and hence their associated cones) coincide: \(M_{Q}=M_{Q^{\prime}}\). Denote by \([Q]\) the equivalence classes. The group \(\mathrm{GL}_{g}(\mathbb{Z})\) acts upon the classes \([Q]\). Write \[[Q^{\prime}]\leq[Q]\] if the cone \(\widehat{\sigma}_{Q^{\prime}}\) is a face of \(\widehat{\sigma}_{Q}\). Let us write \(\dim{[Q]}=\dim{\sigma_{Q}}\).

**Definition 10.5**.: Let \(\mathcal{D}_{g}^{\mathrm{perf}}\) denote the category whose objects are equivalence classes \([Q]\) for \(Q\in\mathcal{P}_{g}\), and whose morphisms are generated by the following two kinds of maps:

1. face maps \([Q^{\prime}]\to[Q]\) whenever \([Q^{\prime}]\leq[Q]\),
2. isomorphisms \(h:[Q]\to[h^{T}Qh]\) for any \(h\in\mathrm{GL}_{g}(\mathbb{Z})\).

The category \(\mathcal{D}^{\mathrm{perf}}_{g}\) is equivalent to a finite category [14], since there are only finitely many isomorphism classes of objects, and automorphism groups of cones are finite.
Every \(h\in\mathrm{GL}_{g}(\mathbb{Z})\) defines a linear isomorphism in \(\mathrm{PLC}_{\mathbb{Q}}\) which we denote by: \[\mathsf{c}_{Q}\stackrel{{ h}}{{\longrightarrow}}\mathsf{c}_{h^{T}Qh}\.\] For every \([Q^{\prime}]\leqslant[Q]\) one has a face map \(\mathsf{c}_{Q^{\prime}}\to\mathsf{c}_{Q}\) in the category \(\mathrm{PLC}_{\mathbb{Q}}\).

**Definition 10.6**.: Consider the functor \[L\mathcal{A}^{\mathrm{trop}}_{g}:\mathcal{D}^{\mathrm{perf}}_{g}\longrightarrow\mathrm{PLC}_{\mathbb{Q}} \tag{10.2}\] which sends \([Q]\) to \(\mathsf{c}_{Q}\). Its topological realisation \(|L\mathcal{A}^{\mathrm{trop}}_{g}|\) is the link of the moduli space of tropical abelian varieties. By construction, it is a generalised cone complex as in [1].

### Blow-ups of cones associated to positive definite quadratic forms

**Definition 10.7**.: Suppose that \(Q\) is a positive definite quadratic form on \(V\), such that \(\sigma_{Q}>0\) is strictly positive (definition 9.1). Its Zariski closure \(\mathbb{P}^{Q}\) admits a canonical embedding into \(\mathbb{P}(\mathcal{Q}(V))\). Consider the finite set of subspaces of \(\mathbb{P}^{Q}\) defined as follows: \[\mathcal{B}_{Q}=\bigcup_{n\geqslant 1}\left\{\bigcap_{i=1}^{n}\mathbb{P}^{Q}\cap\mathbb{P}(\mathcal{Q}(V/K_{\sigma_{i}}))\ :\ \sigma_{i}\ \text{faces of}\ \sigma_{Q}\right\}. \tag{10.3}\] Alternatively, consider for all \(0\neq K\subset V\) the (infinite) set \(\mathcal{B}^{\mathrm{all}}\) of all subspaces \[\mathbb{P}(\mathcal{Q}(V/K))\subset\mathbb{P}(\mathcal{Q}(V))\] Then \(\mathcal{B}_{Q}=(\mathcal{B}^{\mathrm{all}})^{\mathrm{min},\sigma_{Q}}\cap\mathbb{P}^{Q}\), by definition 5.14 and lemma 9.4. Denote the iterated blow-up of \(\mathsf{c}_{Q}\) along \(\mathcal{B}_{Q}\) by \[\mathsf{c}_{Q}^{\mathcal{B}}\quad\in\quad\mathrm{Ob}(\mathrm{BLC}_{\mathbb{Q}})\.\] Let us denote its topological realisation by \(\sigma_{Q}^{\mathcal{B}}=\sigma(\mathsf{c}_{Q}^{\mathcal{B}})\). There is a natural blow-down morphism \(\mathsf{c}_{Q}^{\mathcal{B}}\to\mathsf{c}_{Q}\) in the category \(\mathcal{PC}_{\mathbb{Q}}\), which induces a continuous map \(\sigma_{Q}^{\mathcal{B}}\to\sigma_{Q}\).

_Remark 10.8_.: Note that the set of loci \(\mathbb{P}^{Q}\cap\mathbb{P}(\mathcal{Q}(V/K))\) where \(\mathbb{P}(\mathcal{Q}(V/K))\) meets \(\sigma\) is not closed under intersections, because the intersection of two essential faces can be empty. Unlike the case of graphs, \(\mathcal{B}_{Q}\) does _not_ consist of the Zariski closures of the faces of \(\sigma_{Q}\). This is a fundamental difference between the constructions for \(L\mathcal{M}^{\mathrm{trop},\mathcal{B}}_{g}\) and \(L\mathcal{A}^{\mathrm{trop},\mathcal{B}}_{g}\). By contrast to the situation with polytopes associated to graphs, the blow-ups are not intrinsic, which means that we do not strictly have face maps. Nevertheless, the following lemma implies that they are intrinsic up to extraneous blow-ups (example 5.9).

**Lemma 10.9**.: _Let \(Q^{\prime}\leqslant Q\) be positive definite. Let us denote by \(F_{Q^{\prime}}\mathsf{c}_{Q}^{\mathcal{B}}\) the face of \(\mathsf{c}_{Q}^{\mathcal{B}}\) corresponding to the face \(\sigma_{Q^{\prime}}\). Then there is a canonical morphism_ \[F_{Q^{\prime}}\mathsf{c}_{Q}^{\mathcal{B}}\longrightarrow\mathsf{c}_{Q^{\prime}}^{\mathcal{B}} \tag{10.4}\] _in the category \(\mathrm{BLC}_{\mathbb{Q}}\) which is a composition of extraneous blow-ups._
_Consequently, it induces an isomorphism on topological realisations._

Proof.: By definition and (9.2), \(\mathcal{B}_{Q}\) consists of spaces of the form \(\mathbb{P}^{Q}\cap\mathbb{P}(\mathcal{Q}(V/K))\) for certain \(0\neq K\subset V\). By lemma 9.4, \(\sigma_{Q^{\prime}}\cap\mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})\) is either a face of \(\sigma_{Q^{\prime}}\) or the empty set. Since \(\mathbb{P}^{Q^{\prime}}\) is the Zariski-closure of \(\sigma_{Q^{\prime}}\), we deduce that \(\mathcal{B}_{Q^{\prime}}=(\mathcal{B}_{Q}\cap\mathbb{P}^{Q^{\prime}})^{\mathrm{min},\sigma_{Q^{\prime}}}\) and (10.4) is a consequence of proposition 5.15 and proposition 5.2 (i).

Face morphisms are thus replaced in this context by _face diagrams_ in \(\mathrm{BLC}_{\mathbb{Q}}\): \[\mathsf{c}_{Q^{\prime}}^{\mathcal{B}}\longleftarrow F_{Q^{\prime}}\mathsf{c}_{Q}^{\mathcal{B}}\longrightarrow\mathsf{c}_{Q}^{\mathcal{B}} \tag{10.5}\] where the map on the right is inclusion of a face, and the map on the left is an extraneous blow-down (10.4). The blow-down of (10.5) in \(\operatorname{PLC}_{\mathbb{Q}}\) is the diagram \[\mathsf{c}_{Q^{\prime}}\ \stackrel{{\sim}}{{\longleftarrow}}\ \mathsf{c}_{Q^{\prime}}\longrightarrow\mathsf{c}_{Q}\.\]

### Definition of \(L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}\)

Every \(h\in\operatorname{GL}_{g}(\mathbb{Z})\) gives rise to a bijection \(\mathcal{B}_{Q}\stackrel{{ h}}{{\to}}\mathcal{B}_{h^{T}Qh}\) and hence induces a linear isomorphism and commutative diagram in \(\mathcal{PC}_{\mathbb{Q}}\): \[\begin{array}{ccc}\mathsf{c}_{Q}^{\mathcal{B}}&\stackrel{{ h}}{{\longrightarrow}}&\mathsf{c}_{h^{T}Qh}^{\mathcal{B}}\\ \downarrow&&\downarrow\\ \mathsf{c}_{Q}&\stackrel{{ h}}{{\longrightarrow}}&\mathsf{c}_{h^{T}Qh}\end{array} \tag{10.6}\] where the horizontal maps are in \(\operatorname{BLC}_{\mathbb{Q}}\). In other words, blow-ups along linear strata are functorial for the action of \(\operatorname{GL}_{g}(\mathbb{Z})\). Consequently, the action of \(\operatorname{GL}_{g}(\mathbb{Z})\) is compatible with face morphisms: for every face morphism \(F\to\mathsf{c}_{Q}^{\mathcal{B}}\) and \(h\in\operatorname{GL}_{g}(\mathbb{Z})\) there is a unique face \(h(F)\) of \(\mathsf{c}_{h^{T}Qh}^{\mathcal{B}}\) such that the following diagram in \(\operatorname{BLC}_{\mathbb{Q}}\) commutes: \[\begin{array}{ccc}F&\stackrel{{ h}}{{\longrightarrow}}&h(F)\\ \downarrow&&\downarrow\\ \mathsf{c}_{Q}^{\mathcal{B}}&\stackrel{{ h}}{{\longrightarrow}}&\mathsf{c}_{h^{T}Qh}^{\mathcal{B}}\end{array}\,\] where the vertical maps are inclusions of faces. Furthermore, this diagram is compatible with face diagrams (10.5) and their blow-downs. The reason is that an element \(h\in\operatorname{GL}_{g}(\mathbb{Z})\) induces an isomorphism not only on faces but also on their normals (remark 4.9).

**Definition 10.10**.: Define a category \(\mathcal{D}_{g}^{\operatorname{perf},\mathcal{B}}\) whose objects are the sets of faces of \(\mathsf{c}_{Q}^{\mathcal{B}}\), for all equivalence classes of positive definite quadratic forms \([Q]\), and whose morphisms are generated by face maps, linear isomorphisms \(h\), for all \(h\in\operatorname{GL}_{g}(\mathbb{Z})\), and extraneous blow-down morphisms of the form (10.4). Since \(\mathsf{c}_{Q}^{\mathcal{B}}\) has finitely many faces, each of which blows down to a face of \(\mathsf{c}_{Q}\), it follows that \(\mathcal{D}_{g}^{\operatorname{perf},\mathcal{B}}\), like \(\mathcal{D}_{g}^{\operatorname{perf}}\), is equivalent to a finite category.
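To illustrate the equivariance (10.6), consider again the hexagonal form \(Q\) of genus \(2\): the element \(h=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\in\operatorname{GL}_{2}(\mathbb{Z})\) satisfies \(h^{T}Qh=Q\), and the induced automorphism of \(\mathsf{c}_{Q}\) exchanges the two vertices \(\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)\) and \(\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right)\) of the triangle \(\sigma_{Q}\) while fixing the third; since \(h\) preserves the set \(\mathcal{B}_{Q}\), it lifts to an automorphism of \(\mathsf{c}_{Q}^{\mathcal{B}}\) which permutes the corresponding exceptional divisors.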
Blow-down defines a canonical functor \[\phi:\mathcal{D}_{g}^{\operatorname{perf},\mathcal{B}}\longrightarrow\mathcal{D}_{g}^{\operatorname{perf}}\.\]

**Definition 10.11**.: Consider the \(\operatorname{BLC}_{\mathbb{Q}}\)-complex \[L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}:\mathcal{D}_{g}^{\operatorname{perf},\mathcal{B}}\longrightarrow\operatorname{BLC}_{\mathbb{Q}} \tag{10.7}\] which to any object of \(\mathcal{D}_{g}^{\operatorname{perf},\mathcal{B}}\) associates the corresponding face of \(\mathsf{c}_{Q}^{\mathcal{B}}\), for suitable \(Q\). There is a canonical blow-down morphism (in \(\mathcal{PC}_{\mathbb{Q}}\)) \[L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}\longrightarrow L\mathcal{A}_{g}^{\operatorname{trop}}\] given by the pair \((\phi,\Phi)\), where \(\phi:\mathcal{D}_{g}^{\operatorname{perf},\mathcal{B}}\to\mathcal{D}_{g}^{\operatorname{perf}}\) is the functor considered above, and \(\Phi\) is the natural transformation obtained by restricting the canonical blow-down map \(\mathsf{c}_{Q}^{\mathcal{B}}\to\mathsf{c}_{Q}\) to faces. It induces a continuous map on topological realisations: \[\left|L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}\right|\longrightarrow\left|L\mathcal{A}_{g}^{\operatorname{trop}}\right|\.\]

_Remark 10.12_.: It is instructive to explain how, in the absence of face maps _per se_, the blow-ups of cones are glued together along faces. Consider two positive definite quadratic forms \(Q_{1},Q_{2}\) which share a common face \([Q]\leqslant[Q_{1}],[Q_{2}]\). In the perfect cone complex, this is reflected by face maps \(\mathsf{c}_{Q}\to\mathsf{c}_{Q_{1}}\) and \(\mathsf{c}_{Q}\to\mathsf{c}_{Q_{2}}\), which we think of as gluing \(\mathsf{c}_{Q_{1}},\mathsf{c}_{Q_{2}}\) along the common face \(\mathsf{c}_{Q}\); indeed this is precisely what happens in the topological realisation: the images of the polyhedra \(\sigma_{Q_{1}}\) and \(\sigma_{Q_{2}}\) are identified along \(\sigma_{Q}\). In the complex \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\), we have instead a slightly more complicated picture obtained by joining two face diagrams (10.5) in \(\mathrm{BLC}_{\mathbb{Q}}\), pictured below on the left-hand side: \[\begin{array}{ccc}F_{Q}\mathsf{c}_{Q_{1}}^{\mathcal{B}}&\longrightarrow&\mathsf{c}_{Q_{1}}^{\mathcal{B}}\\ \downarrow&&\\ \mathsf{c}_{Q}^{\mathcal{B}}&&\\ \uparrow&&\\ F_{Q}\mathsf{c}_{Q_{2}}^{\mathcal{B}}&\longrightarrow&\mathsf{c}_{Q_{2}}^{\mathcal{B}}\end{array}\qquad\qquad\begin{array}{ccc}\sigma_{Q}^{\mathcal{B}}&\longrightarrow&\sigma_{Q_{1}}^{\mathcal{B}}\\ \downarrow&&\\ \sigma_{Q}^{\mathcal{B}}&&\\ \uparrow&&\\ \sigma_{Q}^{\mathcal{B}}&\longrightarrow&\sigma_{Q_{2}}^{\mathcal{B}}\end{array}\] where the horizontal maps are inclusions of faces, and the vertical maps are extraneous blow-downs. This diagram means that the blow-ups \(\mathsf{c}_{Q_{1}}^{\mathcal{B}}\) and \(\mathsf{c}_{Q_{2}}^{\mathcal{B}}\) are indeed glued along the common face \(\mathsf{c}_{Q}^{\mathcal{B}}\), but only after collapsing extraneous exceptional divisors. The diagram on the right is the realisation, in the category of topological spaces, of the diagram on the left. The vertical maps on the right are isomorphisms, since they are induced by extraneous blow-ups (example 5.9). Thus, in the topological realisation \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\), this subtlety falls away and one does in effect have canonical face maps, as in the case of the perfect cone complex.
## 11. The determinant locus in \(L\mathcal{A}_{g}^{\mathrm{trop}}\) and its blow-up

### The determinant locus in \(L\mathcal{A}_{g}^{\mathrm{trop}}\)

**Lemma 11.1**.: _The determinant locus defines a subscheme functor of \(L\mathcal{A}_{g}^{\mathrm{trop}}\):_

\[\mathrm{Det}:\mathcal{D}_{g}^{\mathrm{perf}}\longrightarrow\mathrm{Sch}_{\mathbb{Q}}\.\]

_The topological realisation of its complement is \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\setminus|\mathrm{Det}(\mathbb{R})|\cong L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\)._

Proof.: The functor \(\mathsf{S}\,L\mathcal{A}_{g}^{\mathrm{trop}}\) is given on \([Q]\) by the first component \(\mathbb{P}^{Q}\) of \(\mathsf{c}_{Q}=(\mathbb{P}^{Q},L_{Q},\sigma_{Q})\), which is embedded in \(\mathbb{P}(\mathcal{Q}(V))\). Define \(\mathrm{Det}([Q])=\mathrm{Det}\cap\mathbb{P}^{Q}\subseteq\mathbb{P}^{Q}\). It is compatible with inclusions of faces since they correspond to inclusions \(\mathbb{P}^{Q^{\prime}}\rightarrow\mathbb{P}^{Q}\) of linear subspaces of \(\mathbb{P}(\mathcal{Q}(V))\). The compatibility with the isomorphisms of definition 10.5 (ii) follows since the determinant locus is invariant under the action of \(\mathrm{GL}_{g}(\mathbb{Z})\) (i.e., \(\det(P)=0\Leftrightarrow\det(hPh^{T})=0\) for all \(h\in\mathrm{GL}_{g}(\mathbb{Z})\)). The second statement follows from lemma 9.2.

The subscheme \(\mathrm{Det}\) is not at infinity (definition 2.6) since, in general:

\[\sigma_{Q}\cap\mathrm{Det}(\mathbb{R})\neq\emptyset\.\]

Indeed, any face \(\sigma_{Q^{\prime}}\), where \([Q^{\prime}]\leq[Q]\), which lies at infinity is necessarily contained in \(\mathrm{Det}(\mathbb{R})\). Nevertheless, we show below how performing iterated blow-ups has the effect of separating the strict transform of the determinant locus away from the perfect cones.

### The strict transform of the determinant locus in \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\)

**Lemma 11.2**.: _The strict transform of the determinant defines a subfunctor of \(\mathsf{S}\,L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\):_

\[\widetilde{\mathrm{Det}}:\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\longrightarrow\mathrm{Sch}_{\mathbb{Q}}\.\]

Proof.: Recall that \(\mathsf{S}\,L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\) on \([Q]\) is the iterated blow-up \(P^{\mathcal{B}_{Q}}\) of \(\mathbb{P}^{Q}\) along \(\mathcal{B}_{Q}\). Let \(\widetilde{\mathrm{Det}}\) denote the strict transform of \(\mathrm{Det}\cap\mathbb{P}^{Q}\). The functoriality with respect to isomorphisms follows from the invariance of the determinant locus under \(\mathrm{GL}_{g}(\mathbb{Z})\). The functoriality with respect to face diagrams follows from proposition 5.2 (i) in the case of face morphisms, and for extraneous blow-downs by definition, since \(\widetilde{\mathrm{Det}}\) is a strict transform.

**Theorem 11.3**.: _The subscheme functor \(\widetilde{\mathrm{Det}}:\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\rightarrow\mathrm{Sch}_{\mathbb{Q}}\) of \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\) is at infinity, i.e._

\[\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\cap\widetilde{\mathrm{Det}}(\mathbb{R})=\emptyset\.\]

The proof follows from corollary 11.5. Consequently we define a functor (definition 2.7)

\[L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\setminus\widetilde{\mathrm{Det}}:\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\longrightarrow\mathrm{BLC}_{\mathbb{Q}}. \tag{11.1}\]
### Structure of faces and proof of theorem 11.3

Let \(Q\) be a positive definite quadratic form such that \(\sigma_{Q}>0\). Let \(\mathcal{B}\) denote the set of subspaces \(\mathbb{P}(\mathcal{Q}(V/K))\) of \(\mathbb{P}(\mathcal{Q}(V))\) occurring in definition (10.3), such that \(\mathcal{B}_{Q}=\mathcal{B}\cap\mathbb{P}^{Q}\). By proposition 5.2 (i), we may form the blow-up \(P^{\mathcal{B}}\) either by blowing up \(\mathbb{P}^{Q}\) along \(\mathcal{B}_{Q}\), or by first blowing up \(\mathbb{P}(\mathcal{Q}(V))\) along \(\mathcal{B}\), and then restricting to the strict transform of \(\mathbb{P}^{Q}\). Here we shall do the latter.

**Proposition 11.4**.: _An intersection \(\mathcal{E}\) of irreducible components of the exceptional divisor of the iterated blow-up \(P^{\mathcal{B}}\) of \(\mathbb{P}(\mathcal{Q}(V))\) along \(\mathcal{B}\) is indexed by nested sequences of spaces_

\[0=K_{0}\subset K_{1}\subset K_{2}\subset\ldots\subset K_{n+1}=V. \tag{11.2}\]

_There is a canonical isomorphism \(\mathcal{E}\cong P_{1}^{\mathcal{B}_{1}}\times\ldots\times P_{n}^{\mathcal{B}_{n}}\) where \(P_{i}^{\mathcal{B}_{i}}\) is the iterated blow-up of_

\[\mathbb{P}\left(\frac{\mathcal{Q}(V/K_{i})}{\mathcal{Q}(V/K_{i+1})}\right)\]

_along a certain set \(\mathcal{B}_{i}\) of spaces \(\mathbb{P}(\mathcal{Q}(V/K))\), where \(K_{i}\subset K\subset K_{i+1}\)._

_The intersection of \(\mathcal{E}\) with the strict transform \(\widetilde{\operatorname{Det}}\) is canonically isomorphic to_

\[\mathcal{E}\cap\widetilde{\operatorname{Det}}\ \cong\ \bigcup_{i=1}^{n}P_{1}^{\mathcal{B}_{1}}\times\ldots\times P_{i-1}^{\mathcal{B}_{i-1}}\times\widetilde{\operatorname{Det}}_{i}\times P_{i+1}^{\mathcal{B}_{i+1}}\times\ldots\times P_{n}^{\mathcal{B}_{n}}\]

_where \(\widetilde{\operatorname{Det}}_{i}\subset P_{i}^{\mathcal{B}_{i}}\) is the strict transform of the zero locus of the homogeneous polynomial map obtained by composing the restriction map (9.4) with the determinant:_

\[\frac{\mathcal{Q}(V/K_{i})}{\mathcal{Q}(V/K_{i+1})}\longrightarrow\mathcal{Q}\left(K_{i+1}/K_{i}\right)\stackrel{{\operatorname{det}}}{{\longrightarrow}}\mathbb{Q}\.\]

_Let \(\sigma^{\mathcal{B}}\) be the closure of the inverse image of the interior of \(\sigma_{Q}\), viewed inside \(\mathbb{P}(\mathcal{Q}(V))(\mathbb{R})\). The set \(\sigma^{\mathcal{B}}\cap\mathcal{E}(\mathbb{R})\) is either empty or a face which is canonically homeomorphic to a product_

\[\sigma_{1}^{\mathcal{B}_{1}}\times\sigma_{2}^{\mathcal{B}_{2}}\times\ldots\times\sigma_{n}^{\mathcal{B}_{n}} \tag{11.3}\]

_where \(\sigma_{i}^{\mathcal{B}_{i}}\subset P_{i}(\mathbb{R})\) is the blow-up of a polyhedral cone \(\sigma_{i}\subset\mathbb{P}\left(\mathcal{Q}(V/K_{i})/\mathcal{Q}(V/K_{i+1})\right)(\mathbb{R})\) which is strictly positive._

Proof.: The first part follows from the description (5.3) of exceptional divisors and their intersections in iterated blow-ups in terms of nested sequences of spaces in \(\mathcal{B}\). The rest follows from repeated application of lemma 9.8, remark 9.9 and proposition 9.10. The fact that \(\sigma^{\mathcal{B}}\cap\mathcal{E}(\mathbb{R})\) may be empty follows since some of the blow-ups may be extraneous to \(\sigma_{Q}\).

**Corollary 11.5**.: _Let \(\mathsf{c}_{Q}^{\mathcal{B}_{Q}}=(P^{\mathcal{B}_{Q}},L^{\mathcal{B}_{Q}},\sigma_{Q}^{\mathcal{B}})\) be the blow-up of \(\mathsf{c}_{Q}\) (definition 10.2).
Then_

\[\sigma_{Q}^{\mathcal{B}}\cap\widetilde{\operatorname{Det}}(\mathbb{C})=\emptyset\.\]

Proof.: One has \(\sigma_{Q}^{\mathcal{B}}\cong\sigma^{\mathcal{B}}\) since \(\mathbb{P}^{Q}\) is the Zariski closure of \(\sigma_{Q}\). Since the boundary of \(\sigma^{\mathcal{B}}\) is the union of the interiors of its faces, it suffices to show that the interior of each face does not meet the strict transform of the determinant locus. Since each face of \(\sigma_{Q}\) either lies at infinity, or is a strictly positive polyhedron of the form \(\sigma_{Q^{\prime}}\) for some \([Q^{\prime}]\leqslant[Q]\), every face of \(\sigma^{\mathcal{B}}\) is a product of blow-ups of strictly positive polyhedra by (11.3) (on applying proposition 11.4 to a polyhedron \(\sigma_{Q^{\prime}}\)). The statement follows from lemma 9.2.

**Definition 11.6**.: With the notations of proposition 11.4, define a morphism

\[\pi_{\operatorname{red}}:\mathcal{E}\backslash(\mathcal{E}\cap\widetilde{\operatorname{Det}})\longrightarrow\prod_{i=0}^{n}\left(\mathbb{P}(\mathcal{Q}(K_{i+1}/K_{i}))\setminus\operatorname{Det}_{K_{i+1}/K_{i}}\right) \tag{11.4}\]

which is the product of the restriction maps (9.7) on each component.

### Inverting extraneous blow-ups

We may define a category \(\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\) which has the same objects as \(\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\) but in which all morphisms corresponding to extraneous blow-ups are inverted. Thus, a face diagram in the category \(\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\) which is represented by

\[[Q^{\prime}]\longleftarrow F_{Q^{\prime}}[Q]\longrightarrow[Q]\]

is replaced, in the category \(\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\), by morphisms

\[[Q^{\prime}]\stackrel{{\sim}}{{\longleftarrow}}F_{Q^{\prime}}[Q]\longrightarrow[Q]\]

giving rise to a genuine face map \([Q^{\prime}]\rightarrow[Q]\) in \(\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\). The functor \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\) does not extend to \(\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\), but does if the target category \(\mathrm{BLC}_{\mathbb{Q}}\) is replaced by its localisation with respect to extraneous blow-ups. We shall not pursue this any further in this paper. However, by the final comments of remark 10.12, extraneous blow-ups are already isomorphisms in the topological realisation. As a result, the topological realisation functor \(\sigma L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}:\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\rightarrow\mathcal{T}op\) canonically extends to a functor

\[\sigma L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\ :\ \mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\longrightarrow\mathcal{T}op\]

whose associated topological space is precisely

\[\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\big{|}=\varinjlim_{x\in\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}}\sigma_{x}. \tag{11.5}\]

_Remark 11.7_.: The category \(\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\) can be described in terms of nested sequences of objects \(([Q_{1}],\dots,[Q_{r}])\), where the \([Q_{i}]\) are objects of the category \(\mathcal{D}_{g}^{\mathrm{perf}}\), in much the same way as definition 7.2 in the case of graphs. This will not be discussed further here.

### The boundary and interior

Let \(\partial\mathcal{D}_{g}^{\mathrm{perf}}\) denote the full subcategory of \(\mathcal{D}_{g}^{\mathrm{perf}}\) consisting of positive definite quadratic forms \(Q\) such that \(\sigma_{Q}\subset\mathrm{Det}(\mathbb{R})\).
**Definition 11.8**.: Denote the restriction of the functor \(L\mathcal{A}_{g}^{\mathrm{trop}}\) to \(\partial\mathcal{D}_{g}^{\mathrm{perf}}\subset\mathcal{D}_{g}^{\mathrm{perf}}\) by

\[\partial L\mathcal{A}_{g}^{\mathrm{trop}}:\partial\mathcal{D}_{g}^{\mathrm{perf}}\rightarrow\mathrm{PLC}_{\mathbb{Q}}\.\]

Its topological realisation \(\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\) is the union of all faces of cones in the Voronoi decomposition which lie at infinity, i.e., in \(P_{g}^{\mathrm{rt}}\backslash P_{g}\). Denote its complement by

\[\big{|}L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\big{|}=\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\setminus\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}. \tag{11.6}\]

By lemma 11.1, \(\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\cap\mathrm{Det}(\mathbb{R})=\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\) or, equivalently,

\[\big{|}L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\big{|}=\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\setminus\big{(}\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\cap|\mathrm{Det}(\mathbb{R})|\big{)}\cong L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\.\]

Now let us define \(\partial\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\) to be the full subcategory of \(\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\) consisting of all faces whose images under the canonical blow-downs are objects of \(\partial\mathcal{D}_{g}^{\mathrm{perf}}\).

**Definition 11.9**.: Denote the restriction of \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\) to \(\partial\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\) by

\[\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}:\partial\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\rightarrow\mathrm{BLC}_{\mathbb{Q}}\.\]

We now show that the open subspace \(\big{|}L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\big{|}\) embeds canonically into \(|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\).

**Proposition 11.10**.: _There is a morphism of complexes_

\[\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\rightarrow\partial L\mathcal{A}_{g}^{\mathrm{trop}}\]

_given by the pair \((f,\Phi)\), where the functor \(f:\partial\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\rightarrow\partial\mathcal{D}_{g}^{\mathrm{perf}}\) and the natural transformation \(\Phi\) are induced by the canonical blow-downs \(\mathsf{c}_{Q}^{\mathcal{B}}\rightarrow\mathsf{c}_{Q}\). There is a canonical embedding_

\[\big{|}L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\big{|}\stackrel{{\sim}}{{\longrightarrow}}\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\big{|}\setminus\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\big{|} \tag{11.7}\]

_whose inverse is the blow-down \(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\big{|}\setminus\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\big{|}\stackrel{{\sim}}{{\longrightarrow}}\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\setminus\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\)._

Proof.: Consider the full subcategory \(\mathcal{D}_{g}^{\circ,\mathrm{perf}}\) of \(\mathcal{D}_{g}^{\mathrm{perf}}\) whose objects are classes \([Q]\), where \(Q\) is positive definite, such that \(\sigma_{Q}>0\).
Consider the functor \(\mathcal{D}_{g}^{\circ,\mathrm{perf}}\to\mathcal{T}op\) defined by

\[[Q]\mapsto\sigma_{Q}\backslash\left(\sigma_{Q}\cap\mathrm{Det}(\mathbb{R})\right)\.\]

The associated topological space is

\[\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|=\varinjlim_{Q\in\mathcal{D}_{g}^{\circ,\mathrm{perf}}}\sigma_{Q}\backslash\left(\sigma_{Q}\cap\mathrm{Det}(\mathbb{R})\right)\.\]

Because \(\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\) does not have face maps, there is no natural functor from \(\mathcal{D}_{g}^{\circ,\mathrm{perf}}\) to \(\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\), but after inverting extraneous blow-ups (§11.4), there is a functor \(\iota:\mathcal{D}_{g}^{\circ,\mathrm{perf}}\to\mathcal{I}_{g}^{\mathrm{perf},\mathcal{B}}\). Imitating the proof of proposition 8.6, the blow-down induces a canonical isomorphism

\[\sigma_{Q}^{\mathcal{B}}\backslash\left(\sigma_{Q}^{\mathcal{B}}\cap\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right)\ \longrightarrow\ \sigma_{Q}\backslash\left(\sigma_{Q}\cap\mathrm{Det}(\mathbb{R})\right)\]

whose inverse we shall denote by \(j\). We deduce from (11.5) that there is a continuous map

\[\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|\longrightarrow\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\backslash\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\]

induced by the pair \((\iota,j)\). The last statement follows since the composition with the blow-down map \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\backslash\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\to\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\setminus\left|\partial L\mathcal{A}_{g}^{\mathrm{trop}}\right|=\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|\) is the identity.

## 12. The blow-up of the tropical Torelli map

### The tropical Torelli map

Let \(G\) be a connected graph with zero weights. To each edge \(e\in E_{G}\) we assign a variable \(x_{e}\). Recall that the graph Laplacian is a linear map

\[\Lambda_{G}:H_{1}(G;\mathbb{Z})\longrightarrow\mathrm{Hom}\left(H_{1}(G;\mathbb{Z}),\mathbb{Z}[x_{e},e\in E_{G}]\right)\]

which represents a quadratic form on \(H_{1}(G;\mathbb{Z})\) taking values in \(\mathbb{Z}[x_{e},e\in E_{G}]\). A graph Laplacian is a matrix representative with respect to a basis of \(H_{1}(G;\mathbb{Z})\). It is an \(h_{G}\times h_{G}\) symmetric matrix whose entries lie in \(\mathbb{Z}[x_{e},e\in E_{G}]\). Its determinant, which is well-defined, is equal to the graph polynomial:

\[\det\Lambda_{G}=\Psi_{G}. \tag{12.1}\]

In the case when \(G\) is a metric graph, with edge lengths \(\ell_{e}\in\mathbb{R}_{\geqslant 0}\), the corresponding graph Laplacian is the real quadratic form

\[\Lambda_{G}=\Lambda_{G}\Big{|}_{x_{e}=\ell_{e}}\quad\in\quad\mathcal{Q}(H_{1}(G;\mathbb{R}))\]

which is denoted, by abuse of notation, by the same symbol. One shows that the graph Laplacian is positive semi-definite, with rational kernel. The tropical Torelli map, defined and studied in [13, 2, 10, 11], is the map

\[\begin{array}{rcl}t_{g}:\mathcal{M}_{g}^{\mathrm{trop}}&\longrightarrow&\mathcal{A}_{g}^{\mathrm{trop}}\\ G&\mapsto&\left[\Lambda_{G}\oplus 0^{w(G)}\right]\end{array} \tag{12.2}\]

which sends a weighted metric graph \(G\) to the class of the Laplacian of the associated unweighted metric graph, where \(0^{w(G)}\) denotes the \(w(G)\times w(G)\) matrix whose entries are all zero. A matrix representative \(\Lambda_{G}\oplus 0^{w(G)}\) has \(g\) rows and columns, where \(g\) is the genus of \(G\).
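The identity (12.1) can be made concrete on a small example. The following sketch is illustrative only: it uses sympy, and the graph, edge orientations and cycle basis are those of the sunrise graph treated in example 12.1 below. It builds a graph Laplacian from a cycle basis and compares its determinant with the spanning-tree expansion of the graph polynomial \(\Psi_{G}\).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Sunrise graph: two vertices joined by three parallel edges e1, e2, e3,
# all oriented the same way.  Cycle basis of H_1(G;Z): c1 = e1 - e2, c2 = e2 - e3.
C = sp.Matrix([[1, -1, 0],
               [0, 1, -1]])          # rows = cycles, columns = edges
X = sp.diag(x1, x2, x3)              # edge variables

# Graph Laplacian in this basis: (Lambda_G)_{ij} = sum_e x_e c_i(e) c_j(e)
Lambda_G = C * X * C.T
# Lambda_G == Matrix([[x1 + x2, -x2], [-x2, x2 + x3]])

# Graph polynomial via the matrix-tree theorem: sum over spanning trees T
# of prod_{e not in T} x_e; here the spanning trees are the single edges.
Psi_G = x2*x3 + x1*x3 + x1*x2

assert sp.expand(Lambda_G.det() - Psi_G) == 0   # verifies (12.1) in this case
```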
It was proven in [1] that the tropical Torelli map is a morphism in a category of stacky fans. In this section, we extend it to a map on bordifications. Before doing so, we observe that the tropical Torelli map is degenerate on certain kinds of graphs.

**Example 12.1**.: Consider the sunrise diagram (figure 10, top right), with edges oriented to the right, and let \(c_{1}=e_{1}-e_{2}\), \(c_{2}=e_{2}-e_{3}\) denote cycles in \(\mathbb{Z}^{E_{G}}\) whose image in \(H_{1}(G;\mathbb{Z})\) is a basis. The graph Laplacian matrix with respect to this basis is

\[\Lambda_{G}=\begin{pmatrix}x_{1}+x_{2}&-x_{2}\\ -x_{2}&x_{2}+x_{3}\end{pmatrix}\]

with determinant \(\Psi_{G}=x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3}\). The vanishing of the matrix \(\Lambda_{G}\) implies that \(x_{1}=x_{2}=x_{3}=0\). The Laplacians of the remaining stable graphs of genus 2 are depicted in the same figure. The tropical Torelli map is not injective on the dumbbell graph (top left), and is even identically zero on the graph depicted in the bottom left. Thus the tropical Torelli map collapses the dumbbell cell into the boundary of the image of the sunrise cell.

Let \(G\) be a connected graph. The Laplacian \(\Lambda_{G}\) defines a linear map

\[\lambda_{G}:\mathbb{Q}^{E_{G}}\longrightarrow\mathcal{Q}(H_{1}(G;\mathbb{Q})) \tag{12.3}\]

which is not injective in general, as the previous example shows. When this is the case, the tropical Torelli map does not extend to a morphism in the category \(\mathrm{PLC}_{\mathbb{Q}}\), because it sends boundaries of polyhedral cones into the interiors of other polyhedral cones.

**Proposition 12.2**.: _The linear map (12.3) is injective if and only if \(G\) is 3-edge connected (the removal of any edge of \(G\) leaves a graph which is connected and bridgeless)._

Proof.: Suppose that \(G\) is 3-edge connected. Let \(e\in E_{G}\) be any edge of \(G\) which is not a self-edge. The graph \(G\backslash e\) is 2-edge connected. By Menger's theorem there exist two edge-disjoint paths \(p_{1},p_{2}\) between the endpoints of \(e\) which lie in \(G\backslash e\). It follows that, by choosing an orientation on the edges of \(G\) and adjoining the edge \(e\) to the paths \(p_{1},p_{2}\), we may find two cycles \(c_{1},c_{2}\in\mathbb{Z}^{E_{G}}\) which are independent homology classes in \(H_{1}(G;\mathbb{Z})\), and only overlap in the single edge \(e\). We may complete these to a family of cycles \(c_{1},c_{2},\ldots,c_{h}\) which represent a basis of \(H_{1}(G;\mathbb{Z})\). It follows from the definition of \(\Lambda_{G}\) that the entry \((\Lambda_{G})_{1,2}=\pm x_{e}\), where the sign depends on the choice of orientation of edges, and is immaterial. Consequently, the kernel of \(\lambda_{G}\) is contained in the subspace \(x_{e}=0\) of \(\mathbb{Q}^{E_{G}}\).

Now suppose that \(e\) is a self-edge. We may choose the first cycle \(c_{1}\) in a choice of representatives \(c_{1},\ldots,c_{h}\) for \(H_{1}(G;\mathbb{Z})\) to consist of the single edge \(e\). In this case, \((\Lambda_{G})_{1,1}=\pm x_{e}\) and again we conclude that the kernel of (12.3) is contained in \(x_{e}=0\) for all edges \(e\in E_{G}\). Hence \(\lambda_{G}\) is injective.

Conversely, if \(G\) is not 3-edge connected, there exist distinct edges \(e_{1},e_{2}\in E_{G}\) such that \(G^{\prime}=G\backslash\{e_{1},e_{2}\}\) is disconnected, and \(G/G^{\prime}\) is a bubble graph with two vertices, two edges and a single loop. Its Laplacian is represented by the \(1\times 1\) matrix with a single entry \(x_{e_{1}}+x_{e_{2}}\).
It follows that \(\lambda_{G/G^{\prime}}\), and a fortiori \(\lambda_{G}\), are not injective, since the linear subspace of \(\mathbb{Q}^{E_{G}}\) given by \(x_{e_{1}}=-x_{e_{2}}\) and all remaining \(x_{e}=0\) is contained in their kernels.

The answer to a more precise question, namely when two tropical curves have equivalent Laplacians, is provided in [10]. Note that for any bridge \(e\in E_{G}\), the variable \(x_{e}\) does not appear at all in the Laplacian \(\Lambda_{G}\) and one always has \(\mathbb{Q}e\subset\ker(\lambda_{G})\).

### Reduced moduli space of tropical curves

The set of 3-edge connected weighted graphs is stable under edge contractions.

Figure 10. The graph Laplacians of the stable graphs of genus 2

**Definition 12.3**.: Let \(I_{g}^{\mathrm{red}}\) be the full subcategory of \(I_{g}\) (definition 6.3) whose objects are isomorphism classes of weighted connected graphs with edge-connectivity three. Let

\[L\mathcal{M}_{g}^{\mathrm{red,trop}}:(I_{g}^{\mathrm{red}})^{\mathrm{opp}}\longrightarrow\mathrm{PLC}_{\mathbb{Q}}\]

denote the restriction of the functor \(L\mathcal{M}_{g}^{\mathrm{trop}}\) to \((I_{g}^{\mathrm{red}})^{\mathrm{opp}}\). Similarly, let

\[\mathcal{X}^{\mathrm{red}}:(I_{g}^{\mathrm{red}})^{\mathrm{opp}}\longrightarrow\mathrm{Sch}_{\mathbb{Q}}\]

denote the restriction of the graph locus functor to \((I_{g}^{\mathrm{red}})^{\mathrm{opp}}\). There is a natural map \(L\mathcal{M}_{g}^{\mathrm{red,trop}}\to L\mathcal{M}_{g}^{\mathrm{trop}}\) and an inclusion \(\left|L\mathcal{M}_{g}^{\mathrm{red,trop}}\right|\hookrightarrow\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\). In genus 2, for example, the reduced moduli space of tropical curves has only one top-dimensional cell, which is indexed by the sunrise graph (figure 10).

### The projective tropical Torelli map

Let \(G\) be a 3-edge connected weighted graph. By proposition 12.2, \(\lambda_{G}\) is injective and its projectivisation defines a linear morphism:

\[\begin{array}{rcl}\mathbb{P}^{E_{G}}&\stackrel{{[\lambda_{G}]}}{{\longrightarrow}}&\mathbb{P}(\mathcal{Q}(H_{1}(G;\mathbb{Q})))\\ (\alpha_{e})_{e\in E_{G}}&\mapsto&\left[\sum_{e}\alpha_{e}Q_{e}\right]\,\end{array} \tag{12.4}\]

where \(Q_{e}\) is the quadratic form \(Q_{e}(e_{1},e_{2})=\delta_{e,e_{1}}\delta_{e,e_{2}}\). A key theorem [1, Defn. 3.4 and Theorem 6.7], generalised to regular matroids in [13, Theorem 4.2.1], states that for any such \(G\), there exists a quadratic form \(Q_{G}\) whose minimal vectors are exactly the set of \(Q_{e}\), for \(e\in E_{G}\). Thus, for every 3-edge connected graph \(G\) of weight zero, the map \([\lambda_{G}]:\sigma_{G}\cong\sigma_{Q_{G}}\) is an isomorphism. Therefore \([\lambda_{G}]\) defines a morphism

\[\mathsf{c}_{G}=(\mathbb{P}^{E_{G}},L_{G},\sigma_{G})\quad\longrightarrow\quad\mathsf{c}_{Q_{G}}=(\mathbb{P}^{Q_{G}},L_{\sigma_{Q_{G}}},\sigma_{Q_{G}}) \tag{12.5}\]

in the category \(\mathrm{PLC}_{\mathbb{Q}}\). The following proposition is a paraphrase of the results of [1] in the category of stacky fans, transposed to the setting of polyhedral linear complexes.

**Proposition 12.4**.: _There is a canonical morphism of \(\mathrm{PLC}_{\mathbb{Q}}\)-complexes:_

\[\lambda:L\mathcal{M}_{g}^{\mathrm{red,trop}}\to L\mathcal{A}_{g}^{\mathrm{trop}} \tag{12.6}\]

_which maps the subscheme \(\mathcal{X}^{\mathrm{red}}\) to \(\mathrm{Det}\).
Its topological incarnation_

\[\left|L\mathcal{M}_{g}^{\mathrm{red,trop}}\right|\longrightarrow\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\]

_is the restriction of the tropical Torelli map (12.2) to the reduced moduli space._

Proof.: By the references quoted above, there exists a functor \(t_{g}:\left(I_{g}^{\mathrm{red}}\right)^{\mathrm{opp}}\longrightarrow\mathcal{D}_{g}^{\mathrm{perf}}\) which sends a 3-edge connected graph \(G\) to \([Q_{G}]\). The pair \((t_{g},[\lambda])\) defines the required morphism of \(\mathrm{PLC}_{\mathbb{Q}}\)-complexes, since \([\lambda]\) is functorial and defines a natural transformation. The compatibility of \(\mathcal{X}^{\mathrm{red}}\) and \(\mathrm{Det}\) follows since \(\Psi_{G}=\det\Lambda_{G}\), and hence \([\lambda_{G}]^{-1}(\mathrm{Det})=X_{G}\).

### The blow-up of the tropical Torelli map

**Proposition 12.5**.: _Let \(G\) be a 3-edge connected graph. Then the tropical Torelli map induces a homeomorphism:_

\[\lambda_{G}^{\mathcal{B}}:\sigma_{G}^{\mathcal{B}}\stackrel{{\sim}}{{\longrightarrow}}\sigma_{Q_{G}}^{\mathcal{B}}\.\]

Proof.: Because \(G\) is 3-edge connected, the map \(\lambda_{G}\) is injective. Since \(\lambda_{G}:\sigma_{G}\cong\sigma_{Q_{G}}\) is an isomorphism and since \(\mathbb{P}^{Q_{G}}\) is the Zariski-closure of \(\sigma_{Q_{G}}\), it follows that (12.5) is an isomorphism in \(\mathrm{PLC}_{\mathbb{Q}}\). It suffices to show that for every face \(\sigma_{F}\) of \(\sigma_{Q_{G}}\), one has \(\mathbb{P}(\mathcal{Q}(V/K_{\sigma_{F}}))(\mathbb{R})\cap\sigma_{Q_{G}}=\sigma_{\gamma}\) for some core subgraph \(\gamma\subset G\). To see this, note that since \(\sigma_{G}\) is a simplex, its faces \(\sigma_{G}\cap L_{\gamma}\cong\sigma_{G/\gamma}\) are in one-to-one correspondence with subgraphs \(\gamma\subset G\), where \(L_{\gamma}\) is the coordinate hyperplane defined by the vanishing of \(x_{e}\), for \(e\in E_{\gamma}\). Therefore let \(\gamma\subset G\) be such that \(\lambda:\sigma_{G/\gamma}\cong\sigma_{F}\). We first show that \(K_{\sigma_{G/\gamma}}=H_{1}(\gamma;\mathbb{Q})\). For this, choose an isomorphism \(H_{1}(G;\mathbb{Q})\cong H_{1}(\gamma;\mathbb{Q})\oplus H_{1}(G/\gamma;\mathbb{Q})\). The restriction of the graph Laplacian \(\Lambda_{G}\) to \(L_{\gamma}\) takes the block matrix form

\[\Lambda_{G}\Big{|}_{L_{\gamma}}=\begin{pmatrix}0&0\\ 0&\Lambda_{G/\gamma}\end{pmatrix}\.\]

Since \(\det(\Lambda_{G/\gamma})=\Psi_{G/\gamma}\) is positive on \(\sigma\), the kernel of this quadratic form is precisely \(H_{1}(\gamma;\mathbb{Q})\) and hence \(K_{\sigma_{G/\gamma}}=H_{1}(\gamma;\mathbb{Q})\).

Let \(\gamma^{\text{core}}\) denote the largest core subgraph of \(\gamma\). Since \(H_{1}(\gamma;\mathbb{Q})=H_{1}(\gamma^{\text{core}};\mathbb{Q})\), we deduce that \(\mathbb{P}(\mathcal{Q}(V/H_{1}(\gamma;\mathbb{Q})))(\mathbb{R})\cap\sigma_{G}\) contains \(\sigma_{\gamma^{\text{core}}}\). To show the reverse inclusion, note that the restriction of \(\Lambda_{G}\) to \(H_{1}(\gamma;\mathbb{Q})\) is \(\Lambda_{\gamma}\), whose determinant is \(\Psi_{\gamma}=\Psi_{\gamma^{\text{core}}}\). Thus the intersection of \(\sigma\) with the vanishing locus of \(\Lambda_{\gamma}\) is contained in the locus of \(\sigma\) such that \(\Psi_{\gamma^{\text{core}}}=0\). But this implies that \(x_{e}=0\) for all \(e\in E_{\gamma^{\text{core}}}\) by the positivity of core graph polynomials. Hence \(\mathbb{P}(\mathcal{Q}(V/H_{1}(\gamma;\mathbb{Q})))(\mathbb{R})\cap\sigma_{G}=\sigma_{\gamma^{\text{core}}}\).
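The injectivity criterion of proposition 12.2 is also easy to test in coordinates: \(\lambda_{G}\) is injective precisely when the rank-one quadratic forms \(c(e)c(e)^{T}\), for \(e\in E_{G}\) (where \(c(e)\) denotes the column of a cycle-basis matrix), are linearly independent. The following sketch is a hedged illustration only — the helper `lambda_rank` is ours — contrasting the sunrise and dumbbell graphs of example 12.1.

```python
import sympy as sp

def lambda_rank(C):
    """Rank of the linear map (12.3), x = (x_e) -> sum_e x_e c(e) c(e)^T,
    given a cycle-basis matrix C (rows = cycles, columns = edges)."""
    cols = []
    for e in range(C.shape[1]):
        v = C[:, e]
        cols.append(sp.Matrix((v * v.T).vec()))   # flatten the rank-one form
    return sp.Matrix.hstack(*cols).rank()

# Sunrise graph (3-edge connected): rank equals #edges, so lambda_G is injective.
C_sunrise = sp.Matrix([[1, -1, 0], [0, 1, -1]])
print(lambda_rank(C_sunrise))    # 3

# Dumbbell graph (bridge e2): x2 never appears in Lambda_G, so the rank drops.
C_dumbbell = sp.Matrix([[1, 0, 0], [0, 0, 1]])
print(lambda_rank(C_dumbbell))   # 2
```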
Define \(L\mathcal{M}_{g}^{\text{red},\mathcal{B}}\) to be the restriction of the functor \(L\mathcal{M}_{g}^{\text{trop},\mathcal{B}}\) to the (opposite of the) subcategory \(I_{g}^{\text{red},\mathcal{B}}\) of \(I_{g}^{\mathcal{B}}\) generated by the images of 3-edge connected graphs under admissible edge contractions and refinements (definition 7.4).

**Theorem 12.6**.: _The tropical Torelli map induces a continuous map_

\[\lambda^{\mathcal{B}}:|L\mathcal{M}_{g}^{\text{red},\mathcal{B}}|\longrightarrow|L\mathcal{A}_{g}^{\text{trop},\mathcal{B}}|\]

_which maps \(|\partial L\mathcal{M}_{g}^{\text{red},\mathcal{B}}|\) to \(|\partial L\mathcal{A}_{g}^{\text{trop},\mathcal{B}}|\) and is compatible, via the canonical blow-down maps, with the Torelli map (12.6)._

Proof.: This follows from proposition 12.5 and the morphism (12.6), which makes essential use of the theorem of [1] on the existence and functoriality of \([Q_{G}]\), for \(G\) a graph.

## 13. Canonical bi-invariant forms and their integrals

### Canonical forms

Employing the same notation as in [1], we define

\[\Omega_{\text{can}}=\bigwedge\bigoplus_{k\geqslant 1}\mathbb{Q}\,\omega^{4k+1} \tag{13.1}\]

to be the graded-commutative Hopf algebra generated by elements \(\omega^{4k+1}\) in degree \(4k+1\), where the coproduct \(\Delta^{\text{can}}:\Omega_{\text{can}}\to\Omega_{\text{can}}\otimes_{\mathbb{Q}}\Omega_{\text{can}}\) has the property that the \(\omega^{4k+1}\) are primitive, i.e., \(\Delta^{\text{can}}(\omega^{4k+1})=\omega^{4k+1}\otimes 1+1\otimes\omega^{4k+1}\).

Let \(R=\bigoplus_{n\geqslant 0}R^{n}\) be a differential graded algebra and let \(X\in\operatorname{GL}_{g}(R^{0})\) be an invertible \(g\times g\) symmetric matrix. Define

\[\omega_{X}^{4k+1}=\operatorname{tr}\left((X^{-1}dX)^{4k+1}\right)\quad\in\quad R^{4k+1}\.\]

More generally, any \(\omega\in\Omega_{\text{can}}\) is a polynomial in the \(\omega^{4k+1}\). We write \(\omega_{X}\) for the corresponding polynomial in the \(\omega_{X}^{4k+1}\). One can show that the \(\omega_{X}\) are:

1. _Closed_: \(d\omega_{X}=0\) for any \(X\) as above.
2. _Bi-invariant_: \(\omega_{AXB}=\omega_{X}\) for all \(A,B\in\operatorname{GL}_{g}(R^{0})\) such that \(dA=dB=0\).
3. _Additive_: if we write \(\Delta^{\text{can}}\omega=\sum\omega^{\prime}\otimes\omega^{\prime\prime}\) and \(X_{1},X_{2}\in\operatorname{GL}_{g}(R^{0})\), then \[\omega_{X_{1}\oplus X_{2}}=\sum\omega_{X_{1}}^{\prime}\wedge\omega_{X_{2}}^{\prime\prime}\.\]
4. _Projective_: \(\omega_{\lambda X}=\omega_{X}\) for all \(\lambda\in(R^{0})^{\times}\),
5. _Vanish in low rank_: \(\omega_{X}^{4k+1}=0\) if \(4k+1>2g\).

See [1] for proofs; a numerical sanity check of the bi-invariance (2) is sketched after definition 13.1 below. Borel showed in [1] that \(\Omega_{\text{can}}\overset{\sim}{\to}\varprojlim H^{\bullet}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\).

### Forms of 'compact type'

Let \(g>1\) be odd. By the vanishing property (5), only a finite-dimensional quotient of \(\Omega_{\text{can}}\) is non-zero on \(g\times g\) symmetric matrices.

**Definition 13.1**.: Let \(\Omega(g)\subset\Omega_{\text{can}}\) denote the graded exterior algebra generated by \(1\), \(\omega^{5}\),..., \(\omega^{2g-1}\). It is naturally a quotient of \(\Omega_{\text{can}}\) via the map \(\Omega_{\text{can}}\to\Omega(g)\) which sends \(\omega^{4k+1}\) to zero for all \(2k>g\). We shall call a form _simple_ if it is a monomial in the generators of \(\Omega(g)\), i.e., a product of primitive elements. An element of \(\Omega(g)\) is a finite linear combination of simple forms.
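As promised, here is the numerical sanity check of the bi-invariance property §13.1 (2). It is a sketch only: the helper names and the overall normalisation of the alternating sum are ours. It evaluates \(\omega^{5}_{X}=\operatorname{tr}((X^{-1}dX)^{5})\) on five symmetric tangent matrices, and confirms that the value is unchanged under \(X\mapsto AXB\) for constant invertible \(A,B\), for which the tangent vectors transform as \(T\mapsto ATB\).

```python
import itertools
import numpy as np

def perm_sign(p):
    # sign of a permutation, counted via inversions
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def omega5(X, tangents):
    """Evaluate omega^5_X = tr((X^{-1} dX)^5) on five tangent matrices,
    as the alternating sum over S_5 of tr(X^{-1}A_{s(1)} ... X^{-1}A_{s(5)})."""
    Xinv = np.linalg.inv(X)
    total = 0.0
    for p in itertools.permutations(range(5)):
        M = np.eye(X.shape[0])
        for i in p:
            M = M @ Xinv @ tangents[i]
        total += perm_sign(p) * np.trace(M)
    return total

rng = np.random.default_rng(1)
g = 3
sym = lambda M: M + M.T
X = sym(rng.standard_normal((g, g))) + 10 * np.eye(g)   # invertible symmetric
tangents = [sym(rng.standard_normal((g, g))) for _ in range(5)]
A, B = rng.standard_normal((g, g)), rng.standard_normal((g, g))  # dA = dB = 0

v = omega5(X, tangents)
w = omega5(A @ X @ B, [A @ T @ B for T in tangents])     # tangents transform too
print(v, w)   # the two values agree up to rounding error
```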
Call a form \(\omega\in\Omega(g)\) of _compact type_ relative to \(g\) if it is divisible by \(\omega^{2g-1}\):

\[\omega=\eta\wedge\omega^{2g-1}\quad\text{ for some }\ \eta\in\Omega(g). \tag{13.2}\]

Otherwise, a simple form \(\omega\) will be called of _non-compact type_. Denote the space spanned by forms of compact (resp. non-compact) type by \(\Omega_{c}(g)\) (resp. \(\Omega_{nc}(g)\)). We have

\[\Omega(g)=\Omega_{c}(g)\oplus\Omega_{nc}(g)\.\]

The subspace \(\Omega_{c}(g)\subset\Omega(g)\) is an ideal. Consider the volume form

\[\text{vol}_{g}=\omega^{5}\wedge\ldots\wedge\omega^{2g-1}\quad\in\quad\Omega_{c}(g) \tag{13.3}\]

which is of degree \(d_{g}-1\), where \(d_{g}=\frac{g(g+1)}{2}\). It is simple, and of compact type.

**Definition 13.2**.: Let \(\omega\in\Omega(g)\) be simple. There is a unique \(\star\omega\in\Omega(g)\) such that

\[\omega\wedge\star\omega=\text{vol}_{g}\.\]

The map \(\star\) extends by linearity to an endomorphism (the Hodge star operator) on \(\Omega(g)\) which interchanges the subspaces of forms of compact and non-compact types:

\[\star:\Omega_{c}(g)\stackrel{{\sim}}{{\longrightarrow}}\Omega_{nc}(g)\.\]

It is an involution up to sign: \(\star\star\omega=\pm\omega\).

**Example 13.3**.: Let \(g=7\). Then \(\text{vol}_{7}=\omega^{5}\wedge\omega^{9}\wedge\omega^{13}\). The following table lists the simple elements in \(\Omega_{nc}(7)\), and below them, their images in \(\Omega_{c}(7)\) under the map \(\star\):

\[\begin{matrix}1&\omega^{5}&\omega^{9}&\omega^{5}\wedge\omega^{9}\\ \downarrow&\downarrow&\downarrow&\downarrow\\ \omega^{5}\wedge\omega^{9}\wedge\omega^{13}&\omega^{9}\wedge\omega^{13}&-\,\omega^{5}\wedge\omega^{13}&\omega^{13}\end{matrix}\]

### Canonical forms on \(L\mathcal{M}_{g}^{\text{trop}}\) and \(L\mathcal{A}_{g}^{\text{trop}}\)

**Lemma 13.4**.: _Let \(\omega\in\Omega_{\text{can}}\) be a canonical form of degree \(d\), and let \(V\) be a vector space over a field \(k\subset\mathbb{R}\). Then \(\omega\) defines a closed projective differential form_

\[\omega\quad\in\quad\Omega^{d}(\mathbb{P}(\mathcal{Q}(V))\backslash\text{Det})\]

_which is invariant under the action of \(\text{GL}(V)\)._

Proof.: The projectivity follows from §13.1 (4), the invariance from (2).

**Proposition 13.5**.: _A canonical form \(\omega\in\Omega^{d}(g)\) defines an algebraic differential form (§2.4):_

\[\omega\quad\in\quad\Omega^{d}(L\mathcal{A}_{g}^{\text{trop}}\backslash\text{Det}) \tag{13.4}\]

_It restricts to a smooth differential form, also denoted by \(\omega\), on_

\[\omega\quad\in\quad\mathcal{A}^{d}\left(\left|L\mathcal{A}_{g}^{\circ,\text{trop}}\right|\right)=\mathcal{A}^{d}\left(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})\right)\]

Proof.: By definition 2.9, a differential form must verify the compatibility relations (2.8). A canonical form \(\omega\) satisfies the compatibilities for the two types of morphisms in the category \(\mathcal{D}_{g}^{\text{perf}}\) of definition 10.5: for face morphisms, the compatibility is automatic, and for isomorphisms, this follows from the bi-invariance §13.1 (2). The fact that \(\omega\) only has poles along the determinant locus follows immediately from its definition, since for any square matrix \(X\), the one-form \(X^{-1}dX\) is regular outside the locus \(\det(X)=0\).

In [10], we studied the canonical forms \(\omega_{G}=\omega_{\Lambda_{G}}\) on the polyhedral linear configuration associated to a graph \(G\). A result in that paper may be paraphrased as follows.
**Proposition 13.6**.: _A form \(\omega\in\Omega^{d}(g)\) defines an algebraic form \(\omega\in\Omega^{d}(L\mathcal{M}_{g}^{\mathrm{trop}}\backslash\mathcal{X})\). It restricts to a smooth differential form, also denoted by \(\omega\), on_

\[\omega\quad\in\quad\mathcal{A}^{d}\left(\left|L\mathcal{M}_{g}^{\mathrm{trop}}\right|\backslash\left|\partial L\mathcal{M}_{g}^{\mathrm{trop}}\right|\right)\.\]

_The restriction of \(\omega\) to the reduced moduli space \(L\mathcal{M}_{g}^{\mathrm{red,trop}}\) is the pull-back \(\lambda_{G}^{\ast}\omega\) of the corresponding canonical form (13.4) on \(L\mathcal{A}_{g}^{\mathrm{trop}}\) by the tropical Torelli map._

### Behaviour at infinity

A key property of canonical forms is their behaviour on blow-ups and, in particular, a factorisation property at infinity.

**Proposition 13.7**.: _Let \(0\neq K\subset V\) be a subspace and let \(\pi:\mathbb{P}\to\mathbb{P}(\mathcal{Q}(V))\) denote the blow-up along \(\mathbb{P}(\mathcal{Q}(V/K))\). Let \(\omega\in\Omega^{d}_{\mathrm{can}}\). Denote its coproduct by_

\[\Delta^{\mathrm{can}}\omega=\sum_{i}\omega_{i}^{\prime}\otimes\omega_{i}^{\prime\prime}\.\]

_The pull-back \(\pi^{\ast}\omega\) to \(\mathbb{P}\) has no poles along the exceptional divisor, and hence defines a regular differential form on the complement of the strict transform \(\widetilde{\mathrm{Det}}\) of the determinant locus:_

\[\pi^{\ast}(\omega)\quad\in\quad\Omega^{d}\left(\mathbb{P}\setminus\widetilde{\mathrm{Det}}\right)\.\]

_Its restriction to the exceptional divisor \(\mathcal{E}\), which we recall by proposition 9.10 is canonically isomorphic to the product \(\mathcal{E}=\mathbb{P}\left(\mathcal{Q}(V/K)\right)\times\mathbb{P}\left(\mathcal{Q}(V)/\mathcal{Q}(V/K)\right)\), satisfies_

\[\pi^{\ast}(\omega)\big{|}_{\mathcal{E}}=\sum_{i}\omega_{i}^{\prime}\wedge\pi_{K}^{\ast}(\omega_{i}^{\prime\prime}) \tag{13.5}\]

_where \(\omega_{i}^{\prime}\in\Omega^{\bullet}\left(\mathbb{P}(\mathcal{Q}(V/K))\backslash\mathrm{Det}_{V/K}\right)\), \(\omega_{i}^{\prime\prime}\in\Omega^{\bullet}\left(\mathbb{P}\left(\mathcal{Q}(K)\right)\backslash\mathrm{Det}_{K}\right)\), and \(\pi_{K}\) is the map (9.7)._

Proof.: The proof is almost identical to that of theorem 7.4 in [10], to which we refer for further details. The key point is to write formula (9.11) in the form:

\[s_{K}^{\ast}Q=\begin{pmatrix}zQ_{K}&zP\\ zP^{T}&Q_{C}\end{pmatrix}=\Lambda U\quad\text{ where we set }\quad\Lambda=\begin{pmatrix}zQ_{K}&0\\ 0&Q_{C}\end{pmatrix}\,\]

and where

\[U=\Lambda^{-1}s_{K}^{\ast}Q=\begin{pmatrix}1&Q_{K}^{-1}P\\ 0&1\end{pmatrix}\pmod{z}\.\]

For simplicity, assume that \(\omega=\omega^{4k+1}\) is primitive. Then

\[\omega_{\Lambda U}^{4k+1}=\mathrm{tr}\left((U^{-1}\Lambda^{-1}(d\Lambda)U+U^{-1}dU)^{4k+1}\right)=\mathrm{tr}\left(\left(\Lambda^{-1}d\Lambda+dU.U^{-1}\right)^{4k+1}\right)\]

The argument follows exactly as in theorem 7.4 of _loc. cit._ to conclude that

\[\omega_{s_{K}^{\ast}Q}\Big{|}_{z=0}=\omega_{Q_{C}}^{4k+1}+\omega_{Q_{K}}^{4k+1}\]

and hence \(\pi^{\ast}(\omega)\big{|}_{\mathcal{E}}=\omega_{Q_{C}}^{4k+1}\wedge 1+1\wedge\omega_{Q_{K}}^{4k+1}\). The case when \(\omega\) is a simple form follows by multiplying primitive forms together; the general case follows by linearity.

_Remark 13.8_.: This implies that the restriction of \(\pi^{\ast}(\omega)\) to the exceptional divisor is actually just the pre-image of canonical forms under the collapsing map §9.3 of the normal bundle.
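On simple forms, the Hodge star of definition 13.2 is purely combinatorial: pass to the complementary set of generators and record the sign of the shuffle permutation, since distinct odd-degree generators anticommute. The following sketch (function names are ours) reproduces the table of example 13.3.

```python
def gens(g):
    # generators omega^5, omega^9, ..., omega^{2g-1} of Omega(g), for g > 1 odd
    return list(range(5, 2 * g, 4))

def perm_sign(seq):
    # sign of the permutation sorting seq, counted via inversions
    sign, s = 1, list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

def star(omega, g):
    """Hodge star of a simple form, encoded as a sorted tuple of generator
    degrees.  Returns (sign, tuple) with omega ^ star(omega) = vol_g; the
    sign is the shuffle sign, since odd-degree generators anticommute."""
    comp = tuple(k for k in gens(g) if k not in omega)
    return perm_sign(tuple(omega) + comp), comp

# reproduce example 13.3 (g = 7, vol_7 = w5 ^ w9 ^ w13):
for omega in [(), (5,), (9,), (5, 9)]:
    print(omega, '->', star(omega, 7))
# () -> (+1, (5, 9, 13));  (5,) -> (+1, (9, 13));
# (9,) -> (-1, (5, 13));   (5, 9) -> (+1, (13,))
```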
It may often happen that the restriction of \(\pi^{\ast}(\omega)\) to the exceptional divisor \(\mathcal{E}\) vanishes. There is one important situation in which this always occurs.

**Corollary 13.9**.: _Let \(\omega\in\Omega^{d}_{c}(g)\) be of compact type, where \(g=\dim(V)>1\) is odd. Then, in the situation of proposition 13.7, \(\pi^{\ast}\omega\) vanishes along the exceptional divisor \(\mathcal{E}\)._

Proof.: We may assume that \(\omega\) is simple. Since \(\omega\) is of compact type, it follows that in the expression \(\Delta^{\mathrm{can}}\omega=\sum_{i}\omega_{i}^{\prime}\otimes\omega_{i}^{\prime\prime}\) either \(\omega_{i}^{\prime}\) or \(\omega_{i}^{\prime\prime}\) is of compact type for each \(i\), since one of these two forms must be divisible by \(\omega^{2g-1}\). Since \(K\neq 0\), the vanishing property §13.1 (5) implies that either \((\omega_{i}^{\prime})_{Q_{C}}=0\) or \((\omega_{i}^{\prime\prime})_{Q_{K}}=0\) for every \(i\), since both \(Q_{C}\) and \(Q_{K}\) have rank strictly smaller than \(g\). It follows from the formula (13.5) that \(\pi^{*}(\omega)\) vanishes along \(\mathcal{E}\).

The essential part of the proof is the fact that \(\Omega_{c}(g)\) is a Hopf ideal in \(\Omega(g)\):

\[\Delta^{\mathrm{can}}(\Omega_{c}(g))\ \subset\ (\Omega_{c}(g)\otimes_{\mathbb{Q}}\Omega_{nc}(g))+\left(\Omega_{nc}(g)\otimes_{\mathbb{Q}}\Omega_{c}(g)\right)\.\]

### Canonical forms on \(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\) and \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\)

**Theorem 13.10**.: _Let \(\omega\in\Omega_{\mathrm{can}}^{d}\). It defines an algebraic differential form_

\[\widetilde{\omega}\in\Omega^{d}\left(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\backslash\widetilde{\mathrm{Det}}\right) \tag{13.6}\]

_which restricts to a smooth form, also denoted by the same notation,_

\[\widetilde{\omega}\in\mathcal{A}^{d}\left(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\right) \tag{13.7}\]

_on the topological realisation. In the case when \(g>1\) is odd and \(\omega\in\Omega_{c}^{d}(g)\) is of compact type, then the restriction of (13.6) to the boundary \(\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\ \backslash\ (\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\cap\widetilde{\mathrm{Det}})\) vanishes._

Proof.: Let \(Q\) be a positive definite quadratic form such that \(\sigma_{Q}>0\). Let \(\widetilde{\omega}=\pi^{*}\omega\) denote the pull-back of the form \(\omega\) along the iterated blow-up \(\pi:P^{\mathcal{B}_{Q}}\to\mathbb{P}^{Q}\). By repeated application of proposition 13.7 for each blow-up in the formation of \(P^{\mathcal{B}_{Q}}\), we deduce that \(\widetilde{\omega}\) is a meromorphic differential form with no poles along the exceptional divisor. It follows that \(\widetilde{\omega}\in\Omega^{d}(P^{\mathcal{B}_{Q}}\setminus\widetilde{\mathrm{Det}})\). By restricting it to faces, we obtain a family of forms in \(\Omega^{d}((\mathsf{S}\,L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\backslash\widetilde{\mathrm{Det}})(x))\) for every object \(x\) of \(\mathcal{D}_{g}^{\mathrm{perf},\mathcal{B}}\). By the universal property of strict transforms (proposition 5.2 (i)) and the fact that restriction of differential forms is compatible with pull-back along blow-ups, the compatibility conditions (2.8) for \(\widetilde{\omega}\) follow from those of \(\omega\), established in proposition 13.5.
This is because the objects in a face diagram (10.5) are the blow-ups of a face map in \(\mathcal{D}_{g}^{\mathrm{perf}}\), and the form \(\widetilde{\omega}\) on each component is the pull-back of \(\omega\) along a suitable blow-up. The statement (13.7) is a consequence of the fact that \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) does not meet the strict transform of the determinant locus (theorem 11.3). When \(\omega\) is of compact type, \(\widetilde{\omega}\) vanishes along exceptional divisors by the second part of proposition 13.7.

The following statement for \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) may be proved using [1].

**Theorem 13.11**.: _Let \(\omega\in\Omega_{\mathrm{can}}^{d}\). It defines both a holomorphic differential form and a smooth form, which we denote respectively by:_

\[\widetilde{\omega}\in\Omega^{d}\left(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\backslash\widetilde{\mathcal{X}}\right)\qquad\text{resp.}\qquad\widetilde{\omega}\in\mathcal{A}^{d}\left(\left|L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\right|\right). \tag{13.8}\]

Note that in the case of \(L\mathcal{M}_{g}^{\mathrm{trop},\mathcal{B}}\) the existence of the dual graph Laplacian implies that most forms of compact type actually vanish (see [1], Lemma 8.4), except for the primitive ones. See the discussion in §14.6.

### Integrals of canonical forms

Canonical integrals define new invariants associated to the minimal vectors of quadratic forms.

**Corollary 13.12**.: _Let \(Q\) be a positive definite quadratic form such that \(\sigma_{Q}\) is strictly positive. Then for any canonical form \(\omega\in\Omega_{\mathrm{can}}^{d}\) where \(d=\dim\sigma_{Q}\), the integral_

\[I_{Q}(\omega)=\int_{\sigma_{Q}}\omega\]

_is finite. In the case where \(Q=Q_{G}\) is the image of a graph, the integral \(I_{Q_{G}}(\omega)\) reduces to the canonical integrals \(I_{G}(\omega)\) studied in [1]._

Proof.: By passing to the blow-up \(P^{\mathcal{B}_{Q}}\to\mathbb{P}^{Q}\), we may write

\[I_{Q}(\omega)=\int_{\sigma_{Q}^{\mathcal{B}}}\widetilde{\omega}\]

since \(\overset{\circ}{\sigma}_{Q}\) is contained in \(\sigma_{Q}^{\mathcal{B}}\), and has complement of Lebesgue measure zero. By theorem 11.3, \(\widetilde{\omega}\) has no poles along the compact region \(\sigma_{Q}^{\mathcal{B}}\) and therefore the integral is finite.

These integrals satisfy many interesting relations. The analogous relations for graphs were studied in [10] and related to the 'waterfall' spectral sequence of [11].

**Theorem 13.13**.: _(Stokes relation). Let \(Q\) be a positive definite quadratic form such that \(\sigma_{Q}>0\), and let \(\omega\in\Omega_{\mathrm{can}}\) have degree \(\dim\sigma_{Q}-1\). Let \(\Delta^{\mathrm{can}}\omega=\sum_{i}\omega_{i}^{\prime}\otimes\omega_{i}^{\prime\prime}\). Then_

\[0=\sum_{Q^{\prime}}\!\int_{\sigma_{Q^{\prime}}}\omega+\sum_{i}\sum_{F}\int_{\sigma_{F}}\omega_{i}^{\prime}\int_{\sigma_{Q}/\sigma_{F}}\pi_{K_{\sigma_{F}}}^{*}(\omega_{i}^{\prime\prime}) \tag{13.9}\]

_where \(Q^{\prime}\leqslant Q\) denote the facets of \(\sigma_{Q}\) which are strictly positive (not contained in \(\mathrm{Det}(\mathbb{R})\)), and \(\sigma_{F}\times\sigma_{Q}/\sigma_{F}\) denote the set of facets of its blow-up \(\sigma_{Q}^{\mathcal{B}_{Q}}\) which are at infinity (in other words, \(\sigma_{F}\) is an essential face of \(\sigma_{Q}\) at infinity, and \(\sigma_{Q}/\sigma_{F}\) is defined in (9.8)).
The map \(\pi_{K_{\sigma_{F}}}\) is defined in lemma 9.7._

Proof.: This follows from an application of Stokes' formula:

\[0=\int_{\sigma_{Q}^{\mathcal{B}}}d\widetilde{\omega}=\int_{\partial\sigma_{Q}^{\mathcal{B}}}\widetilde{\omega}\,\]

the description of the boundary facets of \(\sigma_{Q}^{\mathcal{B}}\), and the factorisation formula (13.5).

### Compactly-supported representatives

Every form of compact type has a representative which vanishes identically in a neighbourhood of the boundary.

**Corollary 13.14**.: _Let \(g>1\) be odd. The cohomology class of the differential form \(\omega^{2g-1}\) in \(H^{2g-1}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|,\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|)\) has a compactly supported representative_

\[[\omega_{c}^{2g-1}]\;\in\;H^{2g-1}_{dR,c}(\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|)\.\]

_Having chosen \(\omega_{c}^{2g-1}\), any other form \(\omega=\omega^{2g-1}\wedge\eta\in\Omega_{c}(g)\) of compact type has a canonical compactly supported representative \(\omega_{c}=\omega_{c}^{2g-1}\wedge\eta\)._

Proof.: By (3.9) and (11.7), we have \(H^{\bullet}_{dR,c}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|)\cong H^{\bullet}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|,\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|)\). The form \(\omega^{2g-1}\), being of compact type, vanishes on \(\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\) by theorem 13.10 and therefore defines a class in the latter cohomology group. The fact that \(\omega^{2g-1}\) has a compactly supported representative on \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) was announced in [12].

Since one also has \(\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|=\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\setminus\left|\partial L\mathcal{A}_{g}^{\mathrm{trop}}\right|\), the blow-down map \(\pi:\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|\to\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\) induces an isomorphism:

\[\pi^{*}:H^{\bullet}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|,\left|\partial L\mathcal{A}_{g}^{\mathrm{trop}}\right|)\xrightarrow{\sim}H^{\bullet}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|,\left|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\right|)\]

since both sides may be identified with the compactly supported cohomology of the interior \(\left|L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\right|\). In particular, any compactly supported form \(\omega_{c}\) as in the corollary may be viewed as a closed differential form on \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\). Its cohomology class

\[[\omega_{c}]\in H^{\bullet}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|) \tag{13.10}\]

does not depend on the choice of representative for \(\omega_{c}^{2g-1}\), since it is the image of \((\pi^{*})^{-1}[\widetilde{\omega}]\) under the natural map \(H^{\bullet}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|,\left|\partial L\mathcal{A}_{g}^{\mathrm{trop}}\right|)\to H^{\bullet}_{dR}(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|)\).

## 14. Cohomology classes in \(\left|L\mathcal{A}_{g}^{\mathrm{trop}}\right|\)

In this section we prove the theorems announced in the introduction.

### Canonical classes in \(\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\)

Let \(g>1\) be odd and \(\ell_{g}=\frac{g(g+1)}{2}-1\).
**Lemma 14.1**.: _The form \(\mathrm{vol}_{g}\) (13.3) defines a non-trivial relative cohomology class_

\[0\neq[\mathrm{vol}_{g}]\ \in\ H_{dR}^{\ell_{g}}\left(|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|,|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\right).\]

Proof.: Since \(g\) is odd, the space \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) is orientable and has a relative fundamental class \([\mathcal{F}_{g}^{\mathrm{lf}}]\in H_{\ell_{g}}^{\mathrm{lf}}\left(\big{|}L\mathcal{A}_{g}^{\circ,\mathrm{trop}}\big{|}\right)\) in locally finite homology, or equivalently,

\[[\mathcal{F}_{g}^{\mathrm{lf}}]\in H_{\ell_{g}}\left(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\big{|}\,,|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\right)\.\]

A volume form \(\eta_{g}\) on \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) is defined on symmetric matrices \(M=(M_{ij})_{ij}\) by

\[\eta_{g}(M)=\partial\left(\frac{1}{\det(M)^{\frac{g+1}{2}}}\bigwedge_{1\leq i\leq j\leq g}dM_{ij}\right)\]

where \(\partial=\sum_{i,j}M_{ij}\frac{\partial}{\partial M_{ij}}\) is the Euler vector field, acting by contraction. By the theory of invariant differential forms, there exists \(\alpha\in\mathbb{Q}^{\times}\) such that \(\eta_{g}=\alpha\,\mathrm{vol}_{g}=\alpha\,\omega^{5}\wedge\omega^{9}\wedge\ldots\wedge\omega^{2g-1}\). We have

\[\mathrm{vol}(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z}))=\int_{L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})}\eta_{g}=\alpha\int_{[\mathcal{F}_{g}^{\mathrm{lf}}]}\mathrm{vol}_{g}\.\]

Since \(\mathrm{vol}(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z}))\neq 0\) (for a more precise statement see (14.1)), we deduce that the cohomology class of \([\mathrm{vol}_{g}]\) in \(H_{dR}^{\ell_{g}}\left(|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|,|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\right)\) is non-zero.

_Remark 14.2_.: It follows from a theorem of Minkowski (see [10]) that

\[\int_{L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})}\eta_{g}\ \in\ \zeta(3)\zeta(5)\ldots\zeta(2g-1)\,\mathbb{Q}^{\times}. \tag{14.1}\]

The point is that the volume of the symmetric space \(\mathrm{SL}_{g}(\mathbb{R})/\mathrm{SL}_{g}(\mathbb{Z})\) is proportional, by a rational number, to the product of consecutive zeta values \(\zeta(2)\ldots\zeta(2g-1)\). One proves [20] that the volume of \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) is obtained by dividing by the volume of \(\mathrm{SO}_{g}(\mathbb{R})\), which is proportional to the product of the even zeta values \(\zeta(2)\ldots\zeta(2g-2)\), giving (14.1).
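One small bookkeeping point used implicitly in the proof of lemma 14.1: \(\mathrm{vol}_{g}\) has exactly the top degree \(\ell_{g}=\dim L\mathcal{P}_{g}\), so it can be paired against the fundamental class. Indeed, for odd \(g\) the sum \(5+9+\cdots+(2g-1)\) telescopes to \(\frac{g(g+1)}{2}-1\); a two-line check (illustrative only):

```python
def ell(g):
    # dimension of L P_g, i.e. g(g+1)/2 - 1
    return g * (g + 1) // 2 - 1

def deg_vol(g):
    # degree of vol_g = w^5 ^ w^9 ^ ... ^ w^{2g-1}: generator degrees 5, 9, ..., 2g-1
    return sum(range(5, 2 * g, 4))

for g in (3, 5, 7, 9, 11, 13):   # odd g > 1
    assert deg_vol(g) == ell(g)
```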
**Theorem 14.3**.: _The map \(\omega\mapsto[\omega_{c}]\) (13.10) induces an injective map of graded algebras_

\[\Omega_{c}^{\bullet}(g)\hookrightarrow H_{dR}^{\bullet}\left(\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\right)\.\]

_It factors through an injective map \(\Omega_{c}^{\bullet}(g)\hookrightarrow H_{dR}^{\bullet}\left(\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\,,\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop}}\big{|}\right)\)._

Proof.: We prove a stronger result, namely that the classes \(\Omega_{c}^{\bullet}(g)\) are non-zero in a larger complex of differential forms with 'reduced' boundary:

\[\Omega^{\bullet,\mathrm{red}}(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}})\quad\subset\quad\Omega^{\bullet}(L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}\setminus\widetilde{\mathrm{Det}})\ \oplus\ \bigoplus_{\mathcal{E}}\Omega^{\bullet}\left(\prod_{i=0}^{n}\mathbb{P}\left(\mathcal{Q}(K_{i+1}/K_{i})\backslash\mathrm{Det}_{K_{i+1}/K_{i}}\right)\right)\]

where the direct sum is over all components of the boundary \(\partial L\mathcal{A}_{g}^{\mathcal{B}}\) indexed by flags (11.2). It consists of pairs \((\omega,(\eta_{\mathcal{E}})_{\mathcal{E}})\) such that the restriction of \(\omega\) to a face \(\mathcal{E}\cap\partial L\mathcal{A}_{g}^{\mathcal{B}}\) satisfies

\[\omega\big{|}_{\mathcal{E}}=\pi_{\mathrm{red}}^{*}\,\eta_{\mathcal{E}} \tag{14.2}\]

where \(\pi_{\mathrm{red}}\) was defined in (11.4). There is a corresponding complex \(\mathcal{A}^{\bullet,\mathrm{red}}(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|)\) of smooth differential forms defined in the same way: it consists of smooth forms \((\omega,(\eta_{\mathcal{E}})_{\mathcal{E}})\), where \(\omega\in\mathcal{A}^{\bullet}(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|)\) and the \(\eta_{\mathcal{E}}\) are smooth forms on a neighbourhood of \(\pi_{\mathrm{red}}(\big{|}\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\cap\mathcal{E}(\mathbb{R}))\) which satisfy (14.2). Let us denote its cohomology by \(H^{\bullet,\mathrm{red}}(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|)\).

Since the restriction of the blow-down to the boundary \(|\partial L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\to|\partial L\mathcal{A}_{g}^{\mathrm{trop}}|\) factors through \(\pi_{\mathrm{red}}\) (it factors through the map which projects a product (11.3) onto its final component), it follows that the map on cohomology induced by \(\pi_{\mathcal{B}}:|L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|\to|L\mathcal{A}_{g}^{\mathrm{trop}}|\) factors through the reduced cohomology. More precisely, there is a factorisation

\[H_{dR}^{\bullet}(\big{|}L\mathcal{A}_{g}^{\mathrm{trop}}\big{|})\longrightarrow H_{dR}^{\bullet,\mathrm{red}}(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|)\longrightarrow H_{dR}^{\bullet}(\big{|}L\mathcal{A}_{g}^{\mathrm{trop},\mathcal{B}}|) \tag{14.3}\]

where the first map is induced by \(\pi^{*}_{\mathcal{B}}\) and the second is \((\omega,\eta)\mapsto\omega\). We will show that

\[\omega\mapsto(\omega,0)\ :\ \Omega^{\bullet}(g)\longrightarrow H^{\bullet,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\]

is an injection. This map is well-defined by equation (13.5), which implies that the restriction of a canonical form to an exceptional divisor \(\mathcal{E}\) is always of the form \(\pi^{*}_{\text{red}}(\eta)\). We first show that the volume form is non-zero in \(H^{\ell_{g},\text{red}}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\).
If \(\text{vol}_{g}\) were exact, there would exist forms \(\alpha\) on \(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|\) and \((\beta_{\mathcal{E}})_{\mathcal{E}}\) of degree \(\ell_{g}-1\) such that

\[\text{vol}_{g}=d\alpha\quad\text{ and }\quad\alpha\Big{|}_{\mathcal{E}(\mathbb{R})}=\pi^{*}_{\text{red}}\beta_{\mathcal{E}}. \tag{14.4}\]

However, since \(\pi_{\text{red}}\) decreases the dimension by at least \(g-1\), such a form \(\beta_{\mathcal{E}}\) is necessarily zero since it has degree \(>\ell_{g}-g\). Equation (14.4) then implies that \(\text{vol}_{g}\) is zero in relative cohomology \(H^{\ell_{g}}_{dR}\left(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|,|\partial L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|\right)\), which contradicts lemma 14.1.

Now let \(\omega\in\Omega(g)\). Since the Hodge star operator is non-degenerate, the element \(\star\omega\in\Omega(g)\) satisfies \(\omega\wedge\star\omega=\lambda\,\text{vol}_{g}\) for some non-zero \(\lambda\in\mathbb{Q}^{\times}\). Since (14.2) is compatible with wedge products of differential forms, the complex \(\mathcal{A}^{\bullet,\text{red}}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\) inherits a multiplicative structure giving rise to a product on reduced cohomology:

\[H^{m,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\otimes H^{n,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\longrightarrow H^{m+n,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\.\]

Since \([\star\omega]\wedge[\omega]=\lambda[\text{vol}_{g}]\) is non-zero, it follows that \([\omega]\in H^{\bullet,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\) is non-zero and hence \(\Omega(g)\) indeed injects into \(H^{\bullet,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\) as claimed.

By (14.3), the image of the subspace of compact type \(\Omega_{c}(g)\) in \(H^{\bullet,\text{red}}_{dR}(|L\mathcal{A}^{\text{trop},\mathcal{B}}_{g}|)\) factors through its image in \(H^{\bullet}(|L\mathcal{A}^{\text{trop}}_{g}|)\). It follows that the map \(\Omega_{c}(g)\to H^{\bullet}(|L\mathcal{A}^{\text{trop}}_{g}|)\) given by (13.10) must already be injective. The last part follows from the fact that the map \(\Omega_{c}(g)\to H^{\bullet}(|L\mathcal{A}^{\text{trop}}_{g}|)\) factors through \(H^{\bullet}(|L\mathcal{A}^{\text{trop}}_{g}|,|\partial L\mathcal{A}^{\text{trop}}_{g}|)\) by definition.

### Cohomology of \(\text{GL}_{g}(\mathbb{Z})\)

The previous theorem has the following corollaries. First, some notation. Since \(\mathcal{P}_{g}\) is contractible and an \(\mathbb{R}_{>0}\)-bundle over the space \(L\mathcal{P}_{g}\), we have

\[\begin{array}{rcl}H^{n}(\text{GL}_{g}(\mathbb{Z});\mathbb{R})&=&H^{n}(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z}))\,\\ H^{n+1}_{c}(\text{GL}_{g}(\mathbb{Z});\mathbb{R})&=&H^{n}_{c}(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})).\end{array} \tag{14.5}\]

The reason for the shift in indices is that \(H^{n}_{c}(\mathbb{R}_{>0})=0\) for \(n\neq 1\) and \(H^{1}_{c}(\mathbb{R}_{>0})=\mathbb{R}\).

**Corollary 14.4**.: _Let \(g>1\) be odd. The forms of non-compact (resp. compact) type embed into the cohomology (resp. compactly supported cohomology) of \(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})\):_

\[\begin{array}{rcl}\Omega^{\bullet}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}&\hookrightarrow&H^{\bullet}(\text{GL}_{g}(\mathbb{Z});\mathbb{R})\\ \Omega^{\bullet}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}&\hookrightarrow&H^{\bullet+1}_{c}(\text{GL}_{g}(\mathbb{Z});\mathbb{R}).\end{array} \tag{14.6}\]
_Poincaré duality gives natural sections of these maps._

Proof.: Recall that \(\ell_{g}=\dim L\mathcal{P}_{g}\). For any \(\omega\in\Omega_{c}(g)\), the cup product
\[H^{n}_{c,dR}\left(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})\right)\otimes_{\mathbb{R}}H^{\ell_{g}-n}_{dR}\left(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})\right)\longrightarrow H^{\ell_{g}}_{c,dR}\left(L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})\right) \tag{14.7}\]
sends \([\omega_{c}]\otimes[\star\omega]\mapsto[\omega_{c}\wedge\star\omega]=[\text{vol}^{c}_{g}]\neq 0\). It follows that both \([\omega_{c}]\) and \([\star\omega]\) are non-zero classes in the respective cohomology groups. The statement follows from (14.5).

A splitting \(H^{\bullet}(\text{GL}_{g}(\mathbb{Z});\mathbb{R})\to\Omega^{\bullet}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\) may be defined as follows. For any \([\alpha]\in H^{n}(\text{GL}_{g}(\mathbb{Z});\mathbb{R})\), it follows from Stokes' theorem that the map
\[\omega_{c}\mapsto\int_{L\mathcal{P}_{g}/\text{GL}_{g}(\mathbb{Z})}[\alpha]\wedge\omega_{c}\]
is a well-defined element of \(\text{Hom}(\Omega^{\ell_{g}-n}_{c}(g),\mathbb{R})\), which is isomorphic to \(\Omega^{n}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\) via the star operator since \(\star\omega\wedge\omega_{c}\) pairs non-trivially with the fundamental class \([\mathcal{F}^{\text{H}}_{g}]\). This defines a linear map \(H^{\bullet}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\to\Omega^{\bullet}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\) which, after rescaling, defines the required splitting. A splitting of the second map of (14.6) may be defined similarly.

**Corollary 14.5**.: _Let \(g>1\) be odd. Then for all \(h\geqslant g\), we have:_
\[\Omega^{\bullet}_{nc}(g)\quad\hookrightarrow\quad H^{\bullet}(\operatorname{GL}_{h}(\mathbb{Z});\mathbb{R}). \tag{14.8}\]

Proof.: Let \(k\geqslant 1\) and consider the maps
\[\Omega^{\bullet}_{nc}(g)\to\Omega^{\bullet}_{nc}(g+2k)\to H^{\bullet}(\operatorname{GL}_{g+2k}(\mathbb{Z});\mathbb{R})\to H^{\bullet}(\operatorname{GL}_{g+2k-1}(\mathbb{Z});\mathbb{R})\to H^{\bullet}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\]
where \(H^{\bullet}(\operatorname{GL}_{g+m}(\mathbb{Z});\mathbb{R})\to H^{\bullet}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\) is induced by the stabilisation map \(X\mapsto X\oplus I_{m}\), where \(I_{m}\) is the identity matrix of rank \(m\). The composition of these maps is the injection \(\Omega^{\bullet}_{nc}(g)\to H^{\bullet}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\) of corollary 14.4, since \(\omega_{X\oplus I_{m}}=\omega_{X}\) for any canonical form \(\omega\). Therefore \(\Omega^{\bullet}_{nc}(g)\) injects into all intermediate spaces, and hence into \(H^{\bullet}(\operatorname{GL}_{h}(\mathbb{Z});\mathbb{R})\) for all \(h\geqslant g\).

_Remark 14.6_.: A brief discussion of the history of this result is in the introduction. Note that the previous corollaries are strictly weaker than theorem 14.3.

### Compactly supported cohomology

The following proposition is essentially a statement, in a different form, of results of [10].
**Proposition 14.7**.: _For all \(n,g>1\), there are short exact sequences:_ \[0\longrightarrow H^{n-1}_{dR}(|L\mathcal{A}^{\operatorname{trop}}_{g-1}|) \longrightarrow H^{n}_{dR,c}(|L\mathcal{A}^{\operatorname{o,trop}}_{g}|) \longrightarrow H^{n}_{dR}(|L\mathcal{A}^{\operatorname{trop}}_{g}|)\longrightarrow 0\] Proof.: Consider the long exact relative de Rham cohomology sequence: \[\cdots\longrightarrow H^{n-1}_{dR}(|\partial L\mathcal{A}^{\operatorname{ trop}}_{g}|)\longrightarrow H^{n}_{dR}(|L\mathcal{A}^{\operatorname{ trop}}_{g}|,|\partial L\mathcal{A}^{\operatorname{trop}}_{g}|) \longrightarrow H^{n}_{dR}(|L\mathcal{A}^{\operatorname{trop}}_{g}|) \longrightarrow\cdots\.\] The statement follows on interpreting the relative cohomology group in the middle as compactly supported cohomology (3.9); from the homeomorphism \(\left|\partial L\mathcal{A}^{\operatorname{trop}}_{g}\right|\cong|L\mathcal{A} ^{\operatorname{trop}}_{g-1}|\), which follows from [10, Corollary 4.12]; and from the fact that the restriction maps \(H^{n}_{dR}(|L\mathcal{A}^{\operatorname{trop}}_{g}|)\to H^{n}_{dR}(|\partial L \mathcal{A}^{\operatorname{trop}}_{g}|)\) are zero. This last fact is dual to the statement that the inclusion of the boundary \(\left|\partial L\mathcal{A}^{\operatorname{trop}}_{g}\right|\hookrightarrow \left|L\mathcal{A}^{\operatorname{trop}}_{g}\right|\) induces the zero map in homology \(H_{n}(\left|\partial L\mathcal{A}^{\operatorname{trop}}_{g}\right|;\mathbb{R}) \to H_{n}(\left|L\mathcal{A}^{\operatorname{trop}}_{g}\right|;\mathbb{R})\). The reason for this is that the image of a chain in the boundary \(\left|\partial L\mathcal{A}^{\operatorname{trop}}_{g}\right|\) only involves cones \(\sigma_{Q}\) which are equivalent to the cone of a positive definite quadratic form of rank strictly less than \(g\), and therefore lies in the inflation subcomplex \(I^{(g)}\) of the perfect cone complex \(P^{(g)}\). Since the inflation complex is acyclic, by [10, Theorem 5.15], any such chain is trivial in homology. **Corollary 14.8**.: _Let \(g>1\) be odd. Then there is a natural injective map_ \[\Omega^{\bullet}_{c}(g)\hookrightarrow H^{\bullet+2}_{c}(\operatorname{GL}_{g+ 1}(\mathbb{Z});\mathbb{R}). \tag{14.9}\] Proof.: By proposition 14.7, there is an injective map \[H^{\bullet}_{dR}(|L\mathcal{A}^{\operatorname{trop}}_{g}|)\longrightarrow H ^{\bullet+1}_{dR,c}(|L\mathcal{A}^{\operatorname{o,trop}}_{g+1}|)\cong H^{ \bullet+1}_{dR,c}(L\mathcal{P}_{g+1}/\operatorname{GL}_{g+1}(\mathbb{Z}))\] The result follows from theorem 14.3 and (14.5). ### Unstable cohomology of \(\operatorname{SL}_{g}(\mathbb{Z})\) The following lemma is standard [10]. **Lemma 14.9**.: _Let \(n\geqslant 0\). Then_ \[H^{n}(\operatorname{SL}_{g}(\mathbb{Z});\mathbb{R})\cong\begin{cases}H^{n}( \operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})&\text{ if $g$ is odd}\\ H^{n}(\operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})\oplus H^{d_{g}-n}_{c}( \operatorname{GL}_{g}(\mathbb{Z});\mathbb{R})^{\vee}&\text{ if $g$ is even}\end{cases}\] Proof.: Consider the short exact sequence \[1\longrightarrow\operatorname{SL}_{g}(\mathbb{Z})\longrightarrow\operatorname{ GL}_{g}(\mathbb{Z})\stackrel{{\det}}{{\longrightarrow}} \operatorname{GL}_{1}(\mathbb{Z})\longrightarrow 1\] where \(\operatorname{GL}_{1}(\mathbb{Z})=\mathbb{Z}^{\times}\cong\mathbb{Z}/2\mathbb{Z}\). 
When \(g\) is odd, it splits: in fact, since the diagonal matrix \(-I_{g}\) of rank \(g\) is central, \(\operatorname{GL}_{g}(\mathbb{Z})\) is isomorphic to the direct product \(\operatorname{SL}_{g}(\mathbb{Z})\times\mathbb{Z}/2\mathbb{Z}\). Since \(\mathbb{Z}/2\mathbb{Z}\) is torsion we deduce that \(H^{n}(\mathrm{SL}_{g}(\mathbb{Z});\mathbb{R})\cong H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\) when \(g\) is odd.

Now suppose that \(g\) is even, and consider the rank one \(\mathbb{R}\)-vector space \(\mathbb{R}^{-}\) equipped with the action of \(h\in\mathrm{GL}_{g}(\mathbb{Z})\) given by \(h.v=\det(h)v\). By Shapiro's lemma one has:
\[H^{n}(\mathrm{SL}_{g}(\mathbb{Z});\mathbb{R})\cong H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathrm{Ind}_{\mathrm{SL}_{g}(\mathbb{Z})}^{\mathrm{GL}_{g}(\mathbb{Z})}\,\mathbb{R})\cong H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R}\oplus\mathbb{R}^{-})\.\]
Denote by \(\mathcal{O}\) the orientation bundle on \(L\mathcal{P}_{g}\), which is non-trivial when \(g\) is even. We have
\[H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R}^{-})=H^{n}(\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathcal{O})\cong H^{d_{g}-n}_{c}(\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})^{\vee}\]
where the first equality follows from the argument of [13, Lemma 7.2] and the second from Poincaré duality for non-orientable homology manifolds.

The previous lemma and (14.9) imply the following result on the cohomology of \(\mathrm{SL}_{g}(\mathbb{Z})\). Choose a surjective map \(s:H^{\bullet+2}_{c}(\mathrm{GL}_{g+1}(\mathbb{Z});\mathbb{R})\to\Omega^{\bullet}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\) which is a section of the injective map \(\Omega^{\bullet}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{\bullet+2}_{c}(\mathrm{GL}_{g+1}(\mathbb{Z});\mathbb{R})\) of (14.9).

**Corollary 14.10**.: _Let \(g>1\) be odd and \(n\geq 0\). Then_
\[\Omega^{n}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R} \hookrightarrow H^{n}(\mathrm{SL}_{g}(\mathbb{Z});\mathbb{R})\]
\[(\Omega^{n}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R})\oplus\left(\Omega^{d_{g+1}-n-2}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\right)^{\vee} \hookrightarrow H^{n}(\mathrm{SL}_{g+1}(\mathbb{Z});\mathbb{R})\]
_The first map is natural. The second map depends on the choice of section \(s\): it is given by its dual \(s^{\vee}\) on the second component \((\Omega^{d_{g+1}-n-2}_{c}(g)\otimes\mathbb{R})^{\vee}\)._

If we identify \((\Omega^{\ell_{g}-n}_{c}(g))^{\vee}\) with \(\Omega^{n}_{nc}(g)\) via the Hodge star operator, the second map in the corollary may be written (non-canonically) in the numerologically more convenient form
\[\left(\Omega^{n}_{nc}(g)\oplus\Omega^{n-g}_{nc}(g)\right)\otimes_{\mathbb{Q}}\mathbb{R}\hookrightarrow H^{n}(\mathrm{SL}_{g+1}(\mathbb{Z});\mathbb{R})\.\]

**Examples 14.11**.: For \(g=3\), we have \(\Omega_{nc}(3)=\mathbb{Q}\) and \(\Omega_{c}(3)=\omega^{5}\mathbb{Q}\). The previous corollary produces classes in \(H^{0}(\mathrm{SL}_{3}(\mathbb{Z});\mathbb{R})\), as well as \(H^{0}(\mathrm{SL}_{4}(\mathbb{Z});\mathbb{R})\), \(H^{3}(\mathrm{SL}_{4}(\mathbb{Z});\mathbb{R})\).

For \(g=5\), we have forms of non-compact type \(\Omega_{nc}(5)=\mathbb{Q}\oplus\mathbb{Q}\omega^{5}\). They give rise, via the previous corollary, to classes in \(H^{0}(\mathrm{SL}_{5}(\mathbb{Z});\mathbb{R})\) and \(H^{5}(\mathrm{SL}_{5}(\mathbb{Z});\mathbb{R})\), and also \(H^{0}(\mathrm{SL}_{6}(\mathbb{Z});\mathbb{R})\) and \(H^{5}(\mathrm{SL}_{6}(\mathbb{Z});\mathbb{R})\).
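The degree bookkeeping in these examples is mechanical and can be checked with a few lines of code. The following sketch (Python; the generator lists are transcribed from the discussion above, and the only other input is \(d_{g}=g(g+1)/2\) from lemma 14.9) prints the cohomology groups produced by corollary 14.10.

```python
# Degree bookkeeping for corollary 14.10 -- a sketch, not part of any proof.
# Degrees of generators: the unit has degree 0, omega^5 degree 5, etc.

def d(g):
    # d_g = g(g+1)/2, the dimension shift appearing in lemma 14.9
    return g * (g + 1) // 2

generators = {
    3: {"nc": [0], "c": [5]},   # Omega_nc(3) = Q, Omega_c(3) = omega^5 Q
    5: {"nc": [0, 5]},          # Omega_nc(5) = Q + Q omega^5
}

for g, degs in generators.items():
    for n in degs.get("nc", []):
        # non-compact type in degree n: classes in H^n(SL_g) and H^n(SL_{g+1})
        print(f"g={g}: H^{n}(SL_{g}(Z);R) and H^{n}(SL_{g + 1}(Z);R)")
    for m in degs.get("c", []):
        # compact type in degree m: a dual class in H^{d_{g+1}-m-2}(SL_{g+1})
        print(f"g={g}: H^{d(g + 1) - m - 2}(SL_{g + 1}(Z);R)")
```

Running it reproduces exactly the groups listed above: \(H^{0}(\mathrm{SL}_{3})\), \(H^{0}(\mathrm{SL}_{4})\) and \(H^{3}(\mathrm{SL}_{4})\) for \(g=3\), and the classes in degrees \(0\) and \(5\) for \(g=5\).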
The forms of compact type \(\Omega_{c}(5)=\omega^{9}\mathbb{Q}\oplus(\omega^{5}\wedge\omega^{9})\mathbb{Q}\) give rise to two further classes in \(H^{5}(\mathrm{SL}_{6}(\mathbb{Z});\mathbb{R})\) and \(H^{10}(\mathrm{SL}_{6}(\mathbb{Z});\mathbb{R})\).

_Remark 14.12_.: A natural way to phrase the corollary is to say that there is a complex
\[\Omega^{n}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\longrightarrow H^{n}(\mathrm{SL}_{g+1}(\mathbb{Z});\mathbb{R})\longrightarrow\left(\Omega^{d_{g+1}-n-2}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\right)^{\vee}\]
where the first map is injective, and the second map is surjective.

### Stability

Let \(g>1\) be odd. By Borel [1], there exists a \(\kappa(g)\) which grows linearly in \(g\) such that
\[\Omega^{n}_{nc}(g)\otimes_{\mathbb{Q}}\mathbb{R}\xrightarrow{\sim}H^{n}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\quad\text{ for all }\,n<\kappa(g). \tag{14.10}\]
Borel showed that \(\kappa(g)\) is of the order of \(g/4\). Using the recent work [13, Theorem A] and (14.8), it may be improved to \(\kappa(g)=g\). For \(n\) in this range, \(\Omega^{n}_{nc}(g)=\Omega^{n}(g)\).

**Corollary 14.13**.: _Let \(g>1\) be odd and let \(\kappa(g)\) be such that (14.10) holds. Then_
\[H^{n}_{dR}(|L\mathcal{A}^{\mathrm{trop}}_{g}|) \cong \Omega^{n}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\]
\[H^{n-1}_{dR}(|L\mathcal{A}^{\mathrm{trop}}_{g-1}|) = 0\,\]
_for all \(n\geq d_{g}-\kappa(g)\)._

Proof.: Let \(g>1\) be odd. Since \(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\) is orientable, Poincaré duality implies that \(H^{n}_{c}(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\cong H^{\ell_{g}-n}(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})^{\vee}\). By (14.10), (14.6) and duality, the compactly supported cohomology is given in the stable range by:
\[H^{n}_{c}(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z});\mathbb{R})\cong\Omega^{n}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\quad\text{ for }n>\ell_{g}-\kappa(g)\]
In this range, proposition 14.7 becomes
\[0\longrightarrow H^{n-1}_{dR}(|L\mathcal{A}^{\mathrm{trop}}_{g-1}|)\longrightarrow\Omega^{n}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\stackrel{{(*)}}{{\longrightarrow}}H^{n}_{dR}(|L\mathcal{A}^{\mathrm{trop}}_{g}|)\longrightarrow 0\]
By theorem 14.3, the map \(\Omega^{n}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\xrightarrow{}H^{n}_{dR}(|L\mathcal{A}^{\mathrm{trop}}_{g}|)\) is injective, hence \((*)\) is an isomorphism, and its kernel \(H^{n-1}_{dR}(|L\mathcal{A}^{\mathrm{trop}}_{g-1}|)\) vanishes.

### Canonical forms and the tropical Torelli map

The composition
\[\Omega^{n}_{c}(g)\otimes_{\mathbb{Q}}\mathbb{R}\longrightarrow H^{n}_{dR}\left(|L\mathcal{A}^{\mathrm{trop}}_{g}|\right)\stackrel{{\lambda^{*}}}{{\longrightarrow}}H^{n}_{dR}\left(|L\mathcal{M}^{\mathrm{trop}}_{g}|\right)\]
sends the majority of forms to zero. Indeed, if \(\eta\in\Omega^{d}(g)\) has degree \(d\), then
\[\lambda^{*}(\omega^{2g-1}\wedge\eta)=0\quad\text{ if }\quad d>g-2. \tag{14.11}\]
This is because the left-hand side restricts to the form \(\omega^{2g-1}_{G}\wedge\eta_{G}\) in the notation of [1] on the simplex \(\sigma_{G}\) associated to a graph \(G\). By [1, Corollary 6.19], it vanishes if its degree is greater than \(3g-3\); here the degree is \((2g-1)+d\), which exceeds \(3g-3\) precisely when \(d>g-2\). It might be possible to reduce the degree \(d\) in (14.11) by a more detailed analysis of vanishing of graph forms [1, §6.5].
Evidence suggests that \(\lambda^{*}(\omega)\) is non-zero if and only if \(\omega\) is primitive, or more precisely, that the image of
\[\lambda^{*}:H^{n}_{dR}(L\mathcal{P}_{g}/\mathrm{GL}_{g}(\mathbb{Z}))\longrightarrow H^{n}_{dR}(|L\mathcal{M}^{\mathrm{trop}}_{g}|)\]
is one-dimensional: \(\mathrm{Im}(\lambda^{*})=\mathbb{Q}[\omega^{2g-1}]\), and is dual to the class of the wheel with \(g\) spokes. Nevertheless, a conjecture in _loc. cit._ states that the free Lie algebra embeds:
\[\mathrm{Lie}\left(\Omega^{\mathrm{can}}\right)\hookrightarrow\bigoplus_{g\geqslant 1}H^{\bullet}\left(|L\mathcal{M}^{\mathrm{trop}}_{g}|\right)\.\]
It is corroborated by all known evidence in low degrees, and suggests that all canonical forms do appear in \(H^{\bullet}_{dR}\left(|L\mathcal{M}^{\mathrm{trop}}_{g}|\right)\), but at a higher genus than expected.

## 15. Periods

We work in a suitable neutral Tannakian category \(\mathcal{H}_{\mathbb{Q}}\) of motivic realisations over \(\mathbb{Q}\). There are many possible variants [1]. We assume that it has two fiber functors
\[M\mapsto M_{B},M_{dR}\ :\ \mathcal{H}_{\mathbb{Q}}\longrightarrow\mathrm{Vec}_{\mathbb{Q}}\]
and that there is a canonical element \(\mathrm{comp}_{B,dR}\in\mathrm{Isom}^{\otimes}_{\mathcal{H}_{\mathbb{Q}}}(\omega_{dR},\omega_{B})(\mathbb{C})\). A convenient option is to let \(\mathcal{H}_{\mathbb{Q}}\) be the category of mixed Hodge structures with additional \(\mathbb{Q}\)-de Rham structure studied in [1]. The element \(\mathrm{comp}_{B,dR}\) defines an isomorphism
\[M_{dR}\otimes_{\mathbb{Q}}\mathbb{C}\stackrel{{\sim}}{{\longrightarrow}}M_{B}\otimes_{\mathbb{Q}}\mathbb{C}\,\]
which is natural in \(M\) and may be interpreted as a period pairing \(M_{dR}\otimes_{\mathbb{Q}}M^{\vee}_{B}\to\mathbb{C}\). Consider the affine group scheme:
\[G_{dR}=\mathrm{Aut}^{\otimes}_{\mathcal{H}_{\mathbb{Q}}}(\omega_{dR})\]
given by the automorphisms of the fiber functor \(\omega_{dR}:M\mapsto M_{dR}\). By the Tannaka theorem [1], the fiber functor \(\omega_{dR}\) defines an equivalence of categories:
\[\omega_{dR}:\mathcal{H}_{\mathbb{Q}}\stackrel{{\sim}}{{\longrightarrow}}\mathrm{Rep}\,G_{dR}\.\]

### 'Motives' of quadratic forms

**Definition 15.1**.: Let \(Q\) be a positive definite quadratic form on \(V\) such that \(\sigma_{Q}>0\). Recall that \(\mathsf{c}_{Q}^{\mathcal{B}}=(P^{\mathcal{B}_{Q}},L^{\mathcal{B}_{Q}},\sigma_{Q}^{\mathcal{B}})\). Let \(d=\dim\sigma_{Q}\). Define
\[\operatorname{mot}_{Q}=H^{d}\left(P^{\mathcal{B}_{Q}}\backslash\widetilde{\operatorname{Det}}\,,\ L^{\mathcal{B}_{Q}}\backslash(L^{\mathcal{B}_{Q}}\cap\widetilde{\operatorname{Det}})\right). \tag{15.1}\]
By corollary 11.5, we have \([\sigma_{Q}^{\mathcal{B}}]\in(\operatorname{mot}_{Q})_{B}^{\vee}\). Any canonical form \(\omega\in\Omega_{\operatorname{can}}^{d}\) defines a class \([\omega]\in(\operatorname{mot}_{Q})_{dR}\), and so we may define the 'motivic period':
\[I_{Q}^{\mathfrak{m}}(\omega)=[\operatorname{mot}_{Q},[\sigma_{Q}],[\omega]]^{\mathfrak{m}}\quad\in\quad\mathcal{O}(\operatorname{Isom}_{\mathcal{H}_{\mathbb{Q}}}^{\otimes}(\omega_{dR},\omega_{B})). \tag{15.2}\]
Note that for cones \(\sigma_{Q}\) which are contained in the determinant locus, [10, Lemma 4.9] implies that they are equivalent to the cone of a quadratic form of smaller rank, so definition 15.1 in fact covers all cases.
The period of \(I_{Q}^{\mathfrak{m}}(\omega)\) is the convergent integral
\[\operatorname{per}\left(I_{Q}^{\mathfrak{m}}(\omega)\right)=\int_{\sigma_{Q}}\omega\]
which is finite by corollary 13.12. Furthermore, the relation of theorem 13.13 is motivic: it holds verbatim for the objects \(I_{Q}^{\mathfrak{m}}(\omega)\). In the case when \(Q=Q_{G}\) comes from a graph, the object \(I_{Q_{G}}^{\mathfrak{m}}(\omega)\) is equal to the 'canonical motivic Feynman integral' \(I_{G}^{\mathfrak{m}}(\omega)\) defined in [1].

_Remark 15.2_.: Every isomorphism \([Q]\cong[Q^{\prime}]\) in the category \(\mathcal{D}^{\operatorname{perf},\mathcal{B}}\) gives rise to an isomorphism \(\operatorname{mot}_{Q}\cong\operatorname{mot}_{Q^{\prime}}\). Similarly, face diagrams give rise to a diagram of morphisms in the category \(\mathcal{H}_{\mathbb{Q}}\) (see [1] for the case of graphs), but extraneous blow-ups are _not_ necessarily isomorphisms (although they do induce equivalences of motivic periods (15.2)).

**Example 15.3**.: We refer to [1] for many examples of graphs for which \(I_{G}(\omega)\) are known. The most interesting was computed by Borinsky and Schnetz [1]:
\[I_{K_{6}}(\omega^{5}\wedge\omega^{9})=3\,\frac{10!}{16}\left(12\,\zeta(3,5)-29\,\zeta(8)+23\,\zeta(3)\zeta(5)\right). \tag{15.3}\]
It was proven in [1, lemmas 3.7, 6.8] that the canonical integrals are stable under duality, which implies that the volume of the graphic cell \(\sigma_{K_{6}^{\vee}}\), where \(K_{6}^{\vee}\) is the matroid dual to \(K_{6}\), is given by the same integral. It is the 'principal cone' [10, Def. 2.13] in the fundamental domain \(\mathcal{F}_{5}\) for \(\operatorname{GL}_{5}(\mathbb{Z})\). By (14.1) the volume \(\int_{\mathcal{F}_{5}}\omega^{5}\wedge\omega^{9}\) is proportional to the product \(\zeta(3)\zeta(5)\). The appearance, therefore, of the non-trivial multiple zeta value \(\zeta(3,5)\) in (15.3) suggests that the principal cone does not span the fundamental domain: \(\sigma_{K_{6}^{\vee}}\subsetneq\mathcal{F}_{5}\), and that the tropical Torelli map is not surjective. Indeed, it is known (e.g. [11]) that the fundamental domain involves two further cones. We expect from (15.3) that their volumes, which are not yet known, will involve non-trivial multiple zeta values.

As noted in [1, §10.3.1], the particular linear combination \(12\,\zeta(3,5)-29\,\zeta(8)\) is the same one which occurs in quantum field theory as the Feynman residue of \(K_{3,4}\). It is a very striking fact that \(\zeta(3,5)\) is only ever observed in this particular combination, and never in isolation, amongst the vast array of periods in quantum field theory [15]. This surprising phenomenon is one particular consequence of the 'Cosmic' Galois group. The geometric interpretation (15.3) of this linear combination points to an unexplored and deep connection between quantum fields and the reduction theory of quadratic forms.

### Locally finite homology 'motive' of \(\operatorname{GL}_{g}(\mathbb{Z})\)

Let \(\mathcal{H}_{\mathbb{Q}}\), as above, be a Tannakian category of realisations which contains every object \(\operatorname{mot}_{Q}\) (15.1).
Consider the cohomology \[H_{c}^{d}(g)=H^{d}\left(\ker\left(\Omega^{\bullet}\left(L\mathcal{A}_{g}^{ \operatorname{trop},\mathcal{B}}\setminus\widetilde{\operatorname{Det}} \right)\longrightarrow\Omega^{\bullet}\left(\partial L\mathcal{A}_{g}^{ \operatorname{trop},\mathcal{B}}\setminus(\partial L\mathcal{A}_{g}^{ \operatorname{trop},\mathcal{B}}\cap\widetilde{\operatorname{Det}})\right) \right)\right)\] of the complex of compatible systems (definition 2.8) of algebraic forms on \(L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\backslash\widetilde{\mathrm{Det}}\) whose restriction to the boundary \(\partial L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\) vanishes. There is a natural map \[H_{c}^{d}(g)\longrightarrow H_{dR}^{d}\left(\left|L{\mathcal{A}}_{g}^{\mathrm{ trop},{\mathcal{B}}}\right|\;,\;\left|\partial L{\mathcal{A}}_{g}^{\mathrm{ trop},{\mathcal{B}}}\right|\right)\cong H_{c}^{d}(L{\mathcal{P}}_{g}/\mathrm{GL}_{g}( \mathbb{Z});\mathbb{R})\] via which elements of \(H_{c}^{d}(g)\) may be interpreted as smooth compactly supported differential forms on \(L{\mathcal{P}}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\). There is a natural map \(\Omega_{c}^{d}(g)\to H_{c}^{d}(g)\) (which is injective). **Theorem 15.4**.: _For every \(g>1\), and \(d\geq 0\), there exists a minimal object \({\mathsf{M}}_{g}^{d}\) of \({\mathcal{H}}_{\mathbb{Q}}\) which is equipped with a pair of canonical linear maps_ \[H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q}) \longrightarrow ({\mathsf{M}}_{g}^{d})^{\vee}_{B}\] \[H_{c}^{d}(g) \longrightarrow ({\mathsf{M}}_{g}^{d})_{dR}\] _such that the integration pairing_ \[H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q})\otimes_{\mathbb{Q} }H_{c}^{d}(g)\longrightarrow\mathbb{C}\] _factors through the period pairing: \(\langle\;,\;\rangle:({\mathsf{M}}_{g}^{d})^{\vee}_{B}\otimes_{\mathbb{Q}}({ \mathsf{M}}_{g}^{d})_{dR}\rightarrow\mathbb{C}\)._ Proof.: First consider the object of \({\mathcal{H}}_{\mathbb{Q}}\) defined by: \[M_{g}^{d}=\bigoplus_{\dim\sigma_{Q}=d}\mathrm{mot}_{Q}^{\mathrm{Aut}(\sigma_{ Q})}\,\] where the sum is over isomorphism classes of cones \(\sigma_{Q}\) of dimension \(d\). Consider the subspace \(Z_{g}^{d}\subset\Omega^{d}(L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}} \backslash\widetilde{\mathrm{Det}})\) consisting of compatible systems of closed forms whose restriction to \(\partial L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\) vanishes, and let \(B_{g}^{d}\subset Z_{g}^{d}\) be the subspace spanned by \(d\omega\), where \(\omega\) are compatible systems of forms vanishing on \(\partial L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\). There is a natural map \(Z_{g}^{d}\rightarrow(M_{g}^{d})_{dR}\). By the Tannaka theorem, there are unique subobjects \(\langle B_{g}^{d}\rangle\subset\langle Z_{g}^{d}\rangle\subset M_{g}^{d}\) such that \[\langle Z_{g}^{d}\rangle_{dR}=G_{dR}\,Z_{g}^{d}\qquad\text{and}\qquad\langle B _{g}^{d}\rangle_{dR}=G_{dR}\,B_{g}^{d}\.\] Let \(N_{g}^{d}=\langle Z_{g}^{d}\rangle/\langle B_{g}^{d}\rangle\). By definition, there is a natural map \(H_{c}^{d}(g)\rightarrow\left(N_{g}^{d}\right)_{dR}\). 
By theorem 11.3, inclusion defines a natural map
\[H_{d}(\sigma_{Q},\partial\sigma_{Q})\longrightarrow(\mathrm{mot}_{Q})^{\vee}_{B}=H_{d}\left((P^{{\mathcal{B}}_{Q}}\backslash\widetilde{\mathrm{Det}})(\mathbb{C})\,,\;(L^{{\mathcal{B}}_{Q}}\backslash(L^{{\mathcal{B}}_{Q}}\cap\widetilde{\mathrm{Det}}))(\mathbb{C});\mathbb{Q}\right)\]
which induces a map
\[\bigoplus_{\dim\sigma_{Q}=d}H_{d}(\sigma_{Q},\partial\sigma_{Q})/\mathrm{Aut}(\sigma_{Q})\longrightarrow(M_{g}^{d})^{\vee}_{B}\,\]
where the direct sum is over all isomorphism classes of cones \(\sigma_{Q}\) of dimension \(d\). Via the cellular interpretation of homology (theorem 3.2), we deduce a map from closed cellular chains \(\gamma\) in \(L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\) to \((M_{g}^{d})^{\vee}_{B}\) and thence to its quotient \(\langle Z_{g}^{d}\rangle^{\vee}_{B}\). Since Stokes' theorem (13.9) holds for motivic periods, the period pairing \(\langle\gamma,g\omega\rangle\) vanishes for all \(g\in G_{dR}\) if \(\omega\in B_{g}^{d}\); and for all \(g\in G_{dR}\) and \(\omega\in Z_{g}^{d}\) if \(\gamma\) is supported on \(\partial L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\), or if \(\gamma=\partial\alpha\) is a boundary. This proves, at the same time, that the image of a cellular chain \(\gamma\) lands in the subspace \((\langle Z_{g}^{d}\rangle/\langle B_{g}^{d}\rangle)^{\vee}_{B}\) of \(\langle Z_{g}^{d}\rangle^{\vee}_{B}\), and that it is zero if \(\gamma\) is a boundary, or supported on \(\partial L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\). Thus by theorem 3.6, there is a well-defined map
\[H_{d}\left(\left|L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\right|\;,\;\left|\partial L{\mathcal{A}}_{g}^{\mathrm{trop},{\mathcal{B}}}\right|\right)\longrightarrow(N_{g}^{d})^{\vee}_{B}\.\]
The left-hand group is isomorphic to the locally finite homology \(H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q})\), and so the object \(N_{g}^{d}\) satisfies the conditions of the theorem. This proves the existence.

For the minimality, one can repeat the argument of [10] to show that any object \(M\) of \({\mathcal{H}}_{\mathbb{Q}}\) equipped with maps \(V\hookrightarrow M_{B}^{\vee}\) and \(W\hookrightarrow M_{dR}\), where \(V,W\) are \(\mathbb{Q}\)-vector spaces, has a unique smallest subquotient with this property (the case when \(V\) and \(W\) have dimension \(1\) is proven in [10, §2.4], but the general case is similar).

It follows from (14.6) and the relative de Rham theorem 3.6 that the composition
\[\Omega_{c}^{d}(g)\longrightarrow H_{c}^{d}(g)\longrightarrow(\mathsf{M}_{g}^{d})_{dR}\]
is injective. Theorem 15.4 implies that any integral of a canonical form of compact type over a locally finite homology class in \(H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z}))\) is motivic. In particular, we deduce a motivic interpretation of Minkowski's volume integrals (14.1).

_Remark 15.5_.: An interesting question is whether the map \(H_{d}^{\mathrm{lf}}(\mathrm{GL}_{g}(\mathbb{Z});\mathbb{Q})\to(\mathsf{M}_{g}^{d})_{B}^{\vee}\) is injective or not: is it the case that the entire locally finite homology of \(\mathrm{GL}_{g}(\mathbb{Z})\) is motivic, or only a quotient of it? Note that one can capture the entire cohomology by enlarging \(H_{c}^{d}(g)\), for instance by replacing the determinant locus with suitable linear subspaces, since every cohomology class can be represented by a compatible family of algebraic forms by remark 3.4.
A deeper question is to describe the action of \(G_{dR}\) on \((\mathsf{M}_{g}^{d})_{dR}\).

**Examples 15.6**.: In the case \(g=3,d=5\), the map
\[\mathbb{Q}\cong H_{5}^{\mathrm{lf}}(\mathrm{GL}_{3}(\mathbb{Z});\mathbb{Q})\longrightarrow(M_{3}^{5})_{B}^{\vee}\]
is injective. In fact, \(M_{3}^{5}\) is given by the graph 'motive' [1] of the wheel with \(3\) spokes \(W_{3}\), which, one can show, is of rank two, and is a non-trivial extension of \(\mathbb{Q}(-3)\) by \(\mathbb{Q}(0)\). The space \(\mathbb{Q}(-3)_{dR}\) is spanned by the class of the canonical form \(\omega^{5}\); the group \(H_{5}^{\mathrm{lf}}(\mathrm{GL}_{3}(\mathbb{Z});\mathbb{Q})\) is spanned by the fundamental class, which is the relative homology class of \(\sigma_{W_{3}}^{\mathcal{B}}\).

## 16. An example

Let \(G\) be the wheel with \(3\) spokes, with inner edges oriented outwards from the center and outer edges oriented counter-clockwise. Consider a basis \(h_{1},h_{2},h_{3}\) for its homology given by the cycles consisting of edges \(\{1,2,4,5\}\), \(\{2,4,6\}\), \(\{3,5,4\}\). With respect to this basis and orientation, the graph Laplacian is represented by the \(3\times 3\) matrix
\[\Lambda_{G}=\mathcal{H}_{G}^{T}\,D_{G}\,\mathcal{H}_{G}=\begin{pmatrix}x_{1}+x_{2}+x_{4}+x_{5}&-x_{2}-x_{4}&-x_{4}-x_{5}\\ -x_{2}-x_{4}&x_{2}+x_{4}+x_{6}&-x_{4}\\ -x_{4}-x_{5}&-x_{4}&x_{3}+x_{4}+x_{5}\end{pmatrix}\,.\]
Consider the subgraph \(\gamma\) spanned by the edges \(1,2,4,5\). The quotient graph \(G/\gamma\) has a single vertex, and two edges \(3,6\). Setting \(x_{e}=0\) for \(e\in E_{\gamma}\) one obtains
\[\Lambda_{G}\big{|}_{\gamma}=\begin{pmatrix}0&0&0\\ 0&x_{6}&0\\ 0&0&x_{3}\end{pmatrix}\,,\qquad\text{ with Zariski closure}\qquad V_{\sigma_{G/\gamma}}=\left\{\begin{pmatrix}0&0&0\\ 0&*&0\\ 0&0&*\end{pmatrix}\right\}\,\,.\]
It defines a quadratic form on \(V=\bigoplus_{i=1}^{3}h_{i}\mathbb{Q}\cong H^{1}(G;\mathbb{Q})\) whose kernel is the one-dimensional subspace \(K=h_{1}\mathbb{Q}\). The space of quadratic forms \(\mathcal{Q}(V/K)\) with null space \(K\) may be identified with the space of symmetric matrices of the form
\[\mathcal{Q}(V/K)\cong\left\{\begin{pmatrix}0&0&0\\ 0&*&*\\ 0&*&*\end{pmatrix}\right\}\,\,.\]
The image of \(\Lambda_{G}|_{\gamma}\) has codimension \(1\) inside it, and therefore is not Zariski-dense.

This example illustrates the difference between \(L\mathcal{M}_{g}^{\operatorname{trop},\mathcal{B}}\) and \(L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}\). The former involves blowing up the subspace \(L_{\gamma}=\{x_{e}=0,e\in E_{\gamma}\}\), corresponding to the core subgraph \(\gamma\), whose image under the Torelli map \(\lambda\) is the projective space \(\mathbb{P}(V_{\sigma_{G/\gamma}})\). However, in \(L\mathcal{A}_{g}^{\operatorname{trop},\mathcal{B}}\) it is the space \(\mathbb{P}(\mathcal{Q}(V/K))\) which is blown up, which strictly contains \(\mathbb{P}(V_{\sigma_{G/\gamma}})\). Indeed, its preimage under \(\lambda\) equals
\[V(x_{1}+x_{2}+x_{4}+x_{5},x_{2}+x_{4},x_{4}+x_{5})\cong\{(1:-1:x_{3}:1:-1:x_{6})\}\subset\mathbb{P}(\mathbb{Q}^{E_{G}})\]
which strictly contains the locus \(V(x_{1},x_{2},x_{4},x_{5})\) corresponding to \(\gamma\), and hence \(\lambda\) does not extend to a morphism of blow-ups. Nevertheless \(\mathbb{P}(V_{\sigma_{G/\gamma}})(\mathbb{R})\) and \(\mathbb{P}(\mathcal{Q}(V/K))(\mathbb{R})\) both meet \(\sigma_{G}\) along the same face \(\sigma_{G/\gamma}\), and so \(\lambda\) does extend to an isomorphism on \(\sigma_{G}^{\mathcal{B}}\).
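The linear algebra in this example is easily verified by machine. The following sympy sketch takes the displayed matrix \(\Lambda_{G}\) as given, and checks the specialisation at \(x_{e}=0\) for \(e\in E_{\gamma}\), the kernel \(K=h_{1}\mathbb{Q}\), and the preimage computation for \(\mathbb{P}(\mathcal{Q}(V/K))\).

```python
# A sympy check of the computations in section 16, with Lambda_G as displayed.
import sympy as sp

x1, x2, x3, x4, x5, x6 = sp.symbols('x1:7')

L = sp.Matrix([
    [x1 + x2 + x4 + x5, -x2 - x4,      -x4 - x5],
    [-x2 - x4,           x2 + x4 + x6, -x4],
    [-x4 - x5,          -x4,            x3 + x4 + x5],
])

# Setting x_e = 0 for e in E_gamma = {1, 2, 4, 5} gives the diagonal form:
L_gamma = L.subs({x1: 0, x2: 0, x4: 0, x5: 0})
assert L_gamma == sp.diag(0, x6, x3)

# Its kernel is spanned by h_1 = (1, 0, 0):
assert L_gamma * sp.Matrix([1, 0, 0]) == sp.zeros(3, 1)

# Preimage of P(Q(V/K)): the first row and column of Lambda_G must vanish.
sol = sp.solve([x1 + x2 + x4 + x5, x2 + x4, x4 + x5], [x1, x2, x5], dict=True)
print(sol)  # [{x1: x4, x2: -x4, x5: -x4}]: the line (1 : -1 : x3 : 1 : -1 : x6)
```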
## 17. Appendix: Blow-ups and Borel-Serre for \(\operatorname{GL}_{n}\)

### Statements

Let \(V\) be a vector space of dimension \(n\) over \(\mathbb{Q}\). Let \(\mathcal{Q}(V)\) denote the space of quadratic forms on \(V\) and \(\mathbb{P}(\mathcal{Q}(V))\) the associated projective space. Let
\[X\quad\subset\quad\mathbb{P}(\mathcal{Q}(V))(\mathbb{R}) \tag{17.1}\]
denote the space of positive definite real quadratic forms. It is contained in the complement of the determinant hypersurface \(\operatorname{Det}\subset\mathbb{P}(\mathcal{Q}(V))\). For any \(0\neq K\subset V\), the space \(\mathbb{P}(\mathcal{Q}(V/K))\subset\operatorname{Det}\) is the subspace of quadratic forms with null space \(K\).

**Definition 17.1**.: Let \(\pi_{\mathfrak{B}}:\mathfrak{B}\to\mathbb{P}(\mathcal{Q}(V))\) denote the space obtained by blowing up all (infinitely many) linear subspaces of \(\mathbb{P}(\mathcal{Q}(V))\) of the form \(\mathbb{P}(\mathcal{Q}(V/K))\) for \(0\neq K\subset V\), in increasing order of dimension. Let \(\widetilde{\operatorname{Det}}\subset\mathfrak{B}\) denote the strict transform of \(\operatorname{Det}\).

Let \(X^{\operatorname{BS}}\) denote the Borel-Serre compactification of \(X\). It has a stratification
\[X^{\operatorname{BS}}=X\cup\bigcup_{P}e(P)\]
where \(P\) ranges over rational parabolics (see §17.2). The exceptional divisor \(\mathcal{E}\) for \(\pi_{\mathfrak{B}}\) defines a stratification on \(\mathfrak{B}\) by taking intersections of irreducible components. Since \(\pi_{\mathfrak{B}}(\mathcal{E})\subset\operatorname{Det}\), the inclusion (17.1) induces an injective map:
\[X\ \longrightarrow\ (\mathbb{P}(\mathcal{Q}(V))\backslash\operatorname{Det})\left(\mathbb{R}\right)\stackrel{{\pi_{\mathfrak{B}}^{-1}}}{{\longrightarrow}}(\mathfrak{B}\backslash\mathcal{E})\left(\mathbb{R}\right)\,. \tag{17.2}\]

**Theorem 17.2**.: _There is a continuous injective map of stratified spaces_
\[f:X^{\operatorname{BS}}\longrightarrow\left(\mathfrak{B}\,\setminus\,\widetilde{\operatorname{Det}}\right)(\mathbb{R}) \tag{17.3}\]
_whose restriction to the big open stratum is the embedding (17.2). The image of (17.3) is the closure of \(X\), for the analytic topology, inside \(\mathfrak{B}(\mathbb{R})\). The map \(f\) is equivariant with respect to the natural actions of \(\operatorname{GL}(V)\) on both \(X^{\operatorname{BS}}\) and \(\mathfrak{B}\)._

_Let \(P\) denote the rational parabolic associated to a nested sequence of strict subspaces \(0\subset V_{d}\subset V_{d-1}\subset\ldots\subset V_{1}\subset V\). The map (17.3) restricts to a map_
\[f:e(P)\longrightarrow\mathcal{E}_{P}\backslash\left(\widetilde{\operatorname{Det}}\cap\mathcal{E}_{P}\right)\,(\mathbb{R}) \tag{17.4}\]
_where \(\mathcal{E}_{P}\subset\mathfrak{B}\) is the exceptional locus associated to the iterated blow-up of_
\[\mathbb{P}(\mathcal{Q}(V/V_{1}))\subset\ldots\subset\mathbb{P}(\mathcal{Q}(V/V_{d}))\]
_in increasing order of dimension._
_The complement of the strict transform of \(\operatorname{Det}\) is canonically isomorphic to a product of hypersurface complements:_
\[\mathcal{E}_{P}\backslash\left(\widetilde{\operatorname{Det}}\cap\mathcal{E}_{P}\right)\cong\prod_{k=0}^{d}\left(\mathbb{P}\left(\frac{\mathcal{Q}(V/V_{k+1})}{\mathcal{Q}(V/V_{k})}\right)\,\,\backslash\,\operatorname{Det}\bigr{|}_{V_{k}/V_{k+1}}\right)\]
_where we write \(V_{0}=V\) and \(V_{d+1}=0\), and \(\operatorname{Det}\bigr{|}_{V_{k}/V_{k+1}}\) is the vanishing locus of the determinant of the restriction of a quadratic form on \(V/V_{k+1}\) to the subspace \(V_{k}/V_{k+1}\)._

In short, the space \(\mathfrak{B}\) is an algebraic incarnation of the Borel-Serre compactification with identical combinatorial structure. The space \(X^{\operatorname{BS}}\) is identified with a semi-algebraic subset of its real points \(\mathfrak{B}(\mathbb{R})\). It is defined by infinitely many algebraic inequalities of the form \(u>0\), where \(u\) is a homogeneous polynomial given by the determinant of a matrix minor. Note that although \(\mathfrak{B}\) is defined by infinitely many blow-ups, and is therefore not of finite type, the local structure of \(X^{\operatorname{BS}}\) in terms of spaces \(X(P)\) (see §17.2) may be studied by embedding each \(X(P)\) into a space obtained by performing only finitely many blow-ups of \(\mathbb{P}(\mathcal{Q}(V))\). This is how we shall prove theorem 17.2.

**Corollary 17.3**.: _Let \(|L\mathcal{A}_{g}|^{\operatorname{trop},\mathfrak{B}}\) be the topological space defined in (11.5). Then the map (17.3) induces a canonical homeomorphism_
\[f:X^{\operatorname{BS}}_{g}/\mathrm{GL}_{g}(\mathbb{Z})\stackrel{{\sim}}{{\longrightarrow}}|L\mathcal{A}_{g}|^{\operatorname{trop},\mathfrak{B}}\.\]

### Reminders on the Borel-Serre compactification for \(\mathrm{GL}_{n}\)

Let \(V\) be a vector space of dimension \(n\) over \(\mathbb{R}\) equipped with an inner product, or equivalently, an isomorphism \(V\cong V^{\vee}\). The group of automorphisms \(\mathrm{GL}(V)\) is isomorphic to \(\mathrm{GL}_{n}(\mathbb{R})\), and the subgroup preserving the inner product defines a maximal compact subgroup \(K=O(V)\) of \(\mathrm{GL}(V)\). It is isomorphic to \(O_{n}(\mathbb{R})\). Given any filtration \(F\) of \(V\) of length \(d\) by subspaces
\[F:\qquad 0=V_{d+1}\subset V_{d}\subset\ldots\subset V_{1}\subset V_{0}=V\, \tag{17.5}\]
where all inclusions are strict, denote by \(P_{F}\leq\mathrm{GL}(V)\) the parabolic subgroup of automorphisms of \(V\) which preserve \(F\). A filtration \(F^{\prime}\)
\[F^{\prime}:\qquad 0\subset V_{i_{k}}\subset\ldots\subset V_{i_{1}}\subset V\]
where \(1\leq i_{1}<\ldots<i_{k}\leq d\), obtained by omitting elements from the filtration \(F\), is denoted by \(F^{\prime}\leq F\). The set of filtrations \(F^{\prime}\) such that \(F^{\prime}\leq F\) forms a finite poset. Since for \(F^{\prime}\leq F\), one has \(P_{F}\leq P_{F^{\prime}}\), the set \(\{P_{F^{\prime}}:F^{\prime}\leq F\}\) with respect to inclusion of groups is isomorphic to the opposite poset. For any such \(P=P_{F}\), as above, denote by \(K_{P}=P\cap K\). The natural map
\[P\longrightarrow\prod_{i=0}^{d}\mathrm{GL}(V_{i}/V_{i+1})\]
induces an isomorphism \(K_{P}\cong\prod_{i=0}^{d}O(V_{i}/V_{i+1})\), the product of orthogonal groups with respect to the induced inner product on \(V_{i}\to V\cong V^{\vee}\to V^{\vee}_{i}\).
Let \(Z_{P}\leq P\) denote the subgroup of central elements acting via scalar multiplication by \(\mathbb{R}_{>0}^{\times}\) on each quotient \(V_{i}/V_{i+1}\) for \(0\leq i\leq d\). Thus \(Z_{P}\cong(\mathbb{R}_{>0}^{\times})^{d+1}\).

**Example 17.4**.: Let \(e_{1},\ldots,e_{n}\) denote a basis of \(V\) compatible with \(F\). In this basis, automorphisms of \(V\) may be identified with \(n\times n\) matrices, and the groups \(P\) and \(Z_{P}\) are block-lower triangular, and block diagonal respectively:
\[P=\begin{pmatrix}P_{00}&&&\\ P_{01}&P_{11}&&\\ \vdots&&\ddots&\\ P_{0k}&P_{1k}&\ldots&P_{kk}\end{pmatrix}\qquad,\qquad Z_{P}=\begin{pmatrix}\mu_{0}I_{0}&&&\\ &\mu_{1}I_{1}&&\\ &&\ddots&\\ &&&\mu_{k}I_{k}\end{pmatrix}\]
where \(k=d\), \(I_{0},\ldots,I_{k}\) are block identity matrices and \(\mu_{0},\ldots,\mu_{k}>0\). The \(P_{ij}\) are likewise block matrices. The group \(K_{P}\) is isomorphic to the group of block diagonal matrices where the matrix \(O_{i}\) in the \(i^{\text{th}}\) block is orthogonal: \(O_{i}^{T}O_{i}=I_{i}\).

The group \(Z_{P}\) acts upon \(P\) by multiplication on the left. Denote the group of central elements by \(H=Z_{P}\cap Z(\operatorname{GL}(V))\cong\mathbb{R}_{>0}^{\times}\). In example 17.4, \(H\leqslant Z_{P}\) is the subgroup of scalar matrices. Let \(X\) denote the space of projective equivalence classes of real positive definite quadratic forms on \(V\).

#### 17.2.1. Geodesic action

For any \(P=P_{F}\) as above there is a homeomorphism:
\[HK_{P}\backslash P \stackrel{{\sim}}{{\longrightarrow}} X\]
\[g \mapsto g^{T}g. \tag{17.6}\]
The action of \(Z_{P}\) on \(P\) passes to a well-defined action on \(HK_{P}\backslash P\). We write \(A_{P}=H\backslash Z_{P}\). The action of \(A_{P}\) on \(X\) induced by (17.6) is called the geodesic action. Borel and Serre consider the following space of orbits for the geodesic action:
\[e(P)=A_{P}\backslash X\stackrel{{(17.6)}}{{\cong}}Z_{P}K_{P}\backslash P\.\]
It is homeomorphic to a Euclidean space \(\mathbb{R}^{\binom{n+1}{2}-d-1}\).

The compactification of \(X\) is defined as follows. Consider the isomorphism
\[H\backslash Z_{P} \cong A_{P}\cong(\mathbb{R}_{>0}^{\times})^{d}\]
\[(\mu_{0},\mu_{1},\dots,\mu_{d}) \mapsto (\lambda_{1},\dots,\lambda_{d}) \tag{17.7}\]
where \(\lambda_{i}=\mu_{i}/\mu_{i-1}\) for \(1\leqslant i\leqslant d\). Consider the multiplicative monoid
\[\overline{A}_{P}=(\mathbb{R}_{\geqslant 0})^{d}\]
with coordinates \(\lambda_{i}\geqslant 0\). The subgroup \(A_{P}\leqslant\overline{A}_{P}\) has coordinates \(\lambda_{i}>0\). By (17.7), the natural map \(Z_{P}\to\overline{A}_{P}\) has kernel exactly \(H\). Borel and Serre define
\[X(P)=A_{P}\backslash(\overline{A}_{P}\times X)\.\]
The space \(\overline{A}_{P}\) is stratified by the closed subspaces \(\lambda_{i}=0\) for \(i=1,\dots,d\). The corresponding open stratification is a union of copies of \((\mathbb{R}_{>0}^{\times})^{a}\) for \(a\leqslant d\), which may be identified with the subgroups \(A_{P_{F^{\prime}}}\), for all \(F^{\prime}\leqslant F\). Thus
\[X(P)=X\sqcup\coprod_{P^{\prime}}A_{P^{\prime}}\backslash X=X\sqcup\coprod_{P^{\prime}}e(P^{\prime})\]
is a disjoint union of spaces \(e(P^{\prime})\), where \(P^{\prime}=P_{F^{\prime}}\) for all \(F^{\prime}\leqslant F\).

#### 17.2.2. Borel-Serre compactification

Now suppose that \(V\) has a \(\mathbb{Q}\)-structure and that its inner product is defined over \(\mathbb{Q}\).
A parabolic \(P=P_{F}\) is called rational if the vector spaces \(V_{i}\) in the corresponding sequence (17.5) are all defined over \(\mathbb{Q}\). The Borel-Serre compactification is defined to be
\[X^{\mathrm{BS}}=\bigcup_{P}X(P) \tag{17.8}\]
where the union is over all rational parabolics \(P\).

**Example 17.5**.: Let \(V=\mathbb{R}^{3}\), \(d=2\), and \(V_{i}\cong\mathbb{R}^{3-i}\), for \(0\leqslant i\leqslant 2\). Write
\[P=\begin{pmatrix}p_{00}&&\\ p_{01}&p_{11}&\\ p_{02}&p_{12}&p_{22}\end{pmatrix}\quad,\quad Z=\begin{pmatrix}\mu_{0}&&\\ &\mu_{1}&\\ &&\mu_{2}\end{pmatrix}\]
where \(p_{ij}\in\mathbb{R}\) and \(\det(P)=p_{00}p_{11}p_{22}\neq 0\). We set
\[X=P^{T}P=\begin{pmatrix}p_{00}^{2}+p_{01}^{2}+p_{02}^{2}&p_{02}p_{12}+p_{01}p_{11}&p_{02}p_{22}\\ p_{02}p_{12}+p_{01}p_{11}&p_{12}^{2}+p_{11}^{2}&p_{12}p_{22}\\ p_{02}p_{22}&p_{12}p_{22}&p_{22}^{2}\end{pmatrix}\]
Let \(X_{ij}=x_{ij}\). The geodesic action \(X\mapsto Z\circ X\) where \(Z\circ X=P^{T}Z^{T}ZP\), may, as one can check, be written intrinsically as a function of \(X\) via
\[Z\circ X=\mu_{0}^{2}\left(\begin{matrix}\frac{\det(X)}{\det(X^{11})}\\ \end{matrix}\right)+\mu_{1}^{2}\left(\begin{matrix}\frac{\det(X^{21})^{2}}{\det(X^{11})x_{33}}&\frac{\det(X^{21})}{x_{33}}\\ \frac{\det(X^{21})}{x_{33}}&\frac{\det(X^{11})}{x_{33}}\\ \end{matrix}\right)+\mu_{2}^{2}\left(\begin{matrix}\frac{x_{13}^{2}}{x_{33}}&\frac{x_{13}x_{23}}{x_{33}}&x_{13}\\ \frac{x_{13}x_{23}}{x_{33}}&\frac{x_{23}^{2}}{x_{33}}&x_{23}\\ x_{13}&x_{23}&x_{33}\\ \end{matrix}\right)\]
where the first two matrices are understood as \(3\times 3\) matrices with the displayed blocks in the upper-left corner and zeros elsewhere, and where \(\det(X^{11})=x_{22}x_{33}-x_{23}^{2}\) and \(\det(X^{21})=x_{12}x_{33}-x_{13}x_{23}\) are determinants of minors of \(X\). The action \(X\mapsto Z\circ X\) is clearly algebraic in the entries of \(X\) and is well-defined on the locus where \(\det(X)\det(X^{11})x_{33}\neq 0\).

This example illustrates that the coordinates given by the geodesic action are perhaps not ideally suited to studying the asymptotic behaviour of canonical differential forms at infinity: the coordinates in the blow-up are much easier to work with, since they correspond to scalar multiplication on matrix blocks (see (9.11)).
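The phrase 'as one can check' hides a slightly tedious computation; the following sympy sketch (with ad hoc variable names, not taken from the text) verifies the displayed identity, reading the first two matrices as \(3\times 3\) matrices padded with zeros as noted above.

```python
# Symbolic verification of the intrinsic formula for Z o X in example 17.5.
import sympy as sp

p00, p01, p02, p11, p12, p22 = sp.symbols('p00 p01 p02 p11 p12 p22')
mu0, mu1, mu2 = sp.symbols('mu0 mu1 mu2')

P = sp.Matrix([[p00, 0, 0], [p01, p11, 0], [p02, p12, p22]])
Z = sp.diag(mu0, mu1, mu2)

X = P.T * P
ZoX = P.T * Z.T * Z * P                    # the geodesic action Z o X

x13, x23, x33 = X[0, 2], X[1, 2], X[2, 2]
detX11 = X[1, 1] * X[2, 2] - X[1, 2]**2    # minor on rows/columns {2, 3}
detX21 = X[0, 1] * X[2, 2] - X[0, 2] * X[1, 2]

T0 = sp.zeros(3, 3)
T0[0, 0] = X.det() / detX11
T1 = sp.zeros(3, 3)
T1[0, 0] = detX21**2 / (detX11 * x33)
T1[0, 1] = T1[1, 0] = detX21 / x33
T1[1, 1] = detX11 / x33
T2 = sp.Matrix([[x13**2/x33, x13*x23/x33, x13],
                [x13*x23/x33, x23**2/x33, x23],
                [x13, x23, x33]])

assert sp.simplify(ZoX - (mu0**2*T0 + mu1**2*T1 + mu2**2*T2)) == sp.zeros(3, 3)
```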
### Linear monoidal actions and blow-ups

The Borel-Serre construction is topological, and relies on the geodesic action which is only defined on the space of positive definite matrices, since it makes crucial use of the surjectivity of (17.6). In order to reformulate it in the language of algebraic geometry, we first need some preliminaries on monoidal actions on vector space schemes and their quotients by multiplicative groups. A possible relation between Borel-Serre's construction and blow-ups is hinted at in [10, pg 437, line 5], but does not seem to have been developed further.

Let \(k\) be a field. All schemes will be over \(k\).

#### 17.3.1. Multiplicative monoid

The multiplicative group is defined by \(\mathbb{G}_{m}=\operatorname{Spec}k[t,t^{-1}]\), where \(\mathcal{O}(\mathbb{G}_{m})=k[t,t^{-1}]\) is the Hopf algebra whose coproduct \(\Delta:\mathcal{O}(\mathbb{G}_{m})\to\mathcal{O}(\mathbb{G}_{m})\otimes_{k}\mathcal{O}(\mathbb{G}_{m})\) satisfies \(\Delta t=t\otimes t\). The multiplication on \(\mathbb{G}_{m}\) is dual to \(\Delta\). We define the multiplicative monoid scheme \(\overline{\mathbb{G}}_{m}\), containing \(\mathbb{G}_{m}\), to be
\[\overline{\mathbb{G}}_{m}=\operatorname{Spec}k[t]\]
where \(k[t]\) is the coalgebra equipped with the coproduct \(\Delta:k[t]\to k[t]\otimes_{k}k[t]\) satisfying the same formula \(\Delta t=t\otimes t\). Thus, as a scheme, \(\overline{\mathbb{G}}_{m}\) is simply the affine line \(\mathbb{A}^{1}\), but has a multiplication morphism \(\overline{\mathbb{G}}_{m}\times\overline{\mathbb{G}}_{m}\to\overline{\mathbb{G}}_{m}\). It has a distinguished point \(0\in\overline{\mathbb{G}}_{m}\) such that \(\overline{\mathbb{G}}_{m}\backslash 0=\mathbb{G}_{m}\), and has the property that the multiplication satisfies \(0\times\overline{\mathbb{G}}_{m}\to 0\).

#### 17.3.2. Linear monoidal action

A vector space of dimension \(n\) over a field \(k\) will be viewed as the \(k\)-points of an affine scheme \(\mathbb{A}^{n}\) in the usual manner. Consider a finite-dimensional graded vector space \(W\) over a field \(k\)
\[W=W_{n}\oplus W_{n-1}\oplus\ldots\oplus W_{1}\oplus W_{0}\]
where \(\dim W_{i}\geqslant 1\) for all \(i\). The multiplicative group \(\mathbb{G}_{m}\) acts on each component by scalar multiplication \(\mathbb{G}_{m}\times W_{i}\to W_{i}\), denoted by \((\lambda,w)\mapsto\lambda w\). It extends to an action \(\overline{\mathbb{G}}_{m}\times W_{i}\to W_{i}\) which sends \((0,w)\) to the origin of \(W_{i}\). Consider the monoid action
\[m:\overline{\mathbb{G}}_{m}^{n}\times W\longrightarrow W\, \tag{17.9}\]
where the \(i^{\text{th}}\) component of \(\overline{\mathbb{G}}_{m}\) for \(i=1,\ldots,n\), acts via
\[\begin{array}{ccc}\overline{\mathbb{G}}_{m}\times W&\longrightarrow&W\\ (\lambda_{i},(w_{n},\ldots,w_{0}))&\mapsto&(\lambda_{i}w_{n},\ldots,\lambda_{i}w_{i},w_{i-1},\ldots,w_{0})\.\end{array} \tag{17.10}\]
The action (17.9) restricts to an action of \(\mathbb{G}_{m}^{n}\) on \(W\). Furthermore, let \(\mathbb{G}_{m}^{n}\) act on \(\overline{\mathbb{G}}_{m}^{n}\) componentwise via the map \(\mathbb{G}_{m}\times\overline{\mathbb{G}}_{m}\to\overline{\mathbb{G}}_{m}\) given by \((\mu,\lambda)\mapsto\lambda\mu^{-1}\), where \(\mu^{-1}\) is the inverse in the group \(\mathbb{G}_{m}\). We deduce that (17.9) is equivariant for the diagonal action of \(\mathbb{G}_{m}^{n}\) on \(\overline{\mathbb{G}}_{m}^{n}\times W\), in other words, the following diagram commutes
\[\begin{array}{ccc}\mathbb{G}_{m}^{n}\times(\overline{\mathbb{G}}_{m}^{n}\times W)&\longrightarrow&\overline{\mathbb{G}}_{m}^{n}\times W\\ \downarrow&&\downarrow_{m}\\ \overline{\mathbb{G}}_{m}^{n}\times W&\stackrel{{ m}}{{\longrightarrow}}&W\end{array}\]
where the vertical map on the left is projection onto \(\overline{\mathbb{G}}_{m}^{n}\times W\), and the horizontal map is the diagonal action of \(\mathbb{G}_{m}^{n}\) on both \(\overline{\mathbb{G}}_{m}^{n}\) and \(W\). In the case \(n=1\) this diagram merely expresses the trivial fact that \(m(\mu.(\lambda,w))=m(\lambda\mu^{-1},\mu w)=\lambda w=m(\lambda,w)\).

#### 17.3.3. Quotients and blow-ups

For any finite dimensional vector space \(W\) over \(k\) let \(W^{\times}=W\backslash\{0\}\). It is an open subscheme of \(W\) stable under the action of \(\mathbb{G}_{m}\), but not under the extended action of \(\overline{\mathbb{G}}_{m}\), since the action \(\overline{\mathbb{G}}_{m}\times W^{\times}\to W\) does not preserve \(W^{\times}\). Let us denote by:
\[W^{\star}=W_{n}^{\times}\oplus W_{n-1}^{\times}\oplus\ldots\oplus W_{1}^{\times}\oplus W_{0}\,\]
(component \(W_{0}\)_sic_). The morphism (17.9) restricts to a morphism \(m:\overline{\mathbb{G}}_{m}^{n}\times W^{\star}\to W\), which, by the previous paragraph, is equivariant with respect to the action of \(\mathbb{G}_{m}^{n}\). We wish to define its quotient by \(\mathbb{G}_{m}^{n}\) in the category of schemes.

**Example 17.6**.: We warm up with the construction of projective space as a quotient.
Let \(x_{i}^{1},\ldots,x_{i}^{n}\) denote coordinates on the \(k\)-vector space \(W_{i}\cong\mathbb{A}^{n}\), where \(n=\dim W_{i}\), and write
\[W_{i}^{\times}=\bigcup_{j=1}^{n}U_{i}^{j}\]
where \(U_{i}^{j}\subset\mathbb{A}^{n}\) is the open complement of \(x_{i}^{j}=0\). Let \(U_{i}^{jk}=U_{i}^{j}\cap U_{i}^{k}\). The opens \(U_{i}^{j},U_{i}^{jk}\) are stable under the action of \(\mathbb{G}_{m}\), and the inclusions \(U_{i}^{jk}\hookrightarrow U_{i}^{j}\) are \(\mathbb{G}_{m}\)-equivariant. One may define the quotient by \(\mathbb{G}_{m}\):
\[\mathbb{G}_{m}\backslash W_{i}^{\times}:=\bigcup_{j=1}^{n}\mathbb{G}_{m}\backslash U_{i}^{j}\]
to be the union of schemes \(\mathbb{G}_{m}\backslash U_{i}^{j}\) glued along the \(\mathbb{G}_{m}\backslash U_{i}^{jk}\), where \(\mathbb{G}_{m}\backslash U_{i}^{j}=\operatorname{Spec}\left(\mathcal{O}(U_{i}^{j})^{\mathbb{G}_{m}}\right)\) (and similarly \(\mathbb{G}_{m}\backslash U_{i}^{jk}\)) is the affine scheme whose coordinate ring consists of \(\mathbb{G}_{m}\)-invariant polynomials. Since \(\mathcal{O}(U_{i}^{j})^{\mathbb{G}_{m}}=k[\frac{x_{i}^{1}}{x_{i}^{j}},\ldots,\frac{x_{i}^{n}}{x_{i}^{j}}]\), we recover the standard affine covering of projective space and thus
\[\mathbb{G}_{m}\backslash W_{i}^{\times}\cong\mathbb{P}(W_{i})\]
is canonically isomorphic to the projective space of \(W_{i}\).

**Definition 17.7**.: Consider a \(\mathbb{G}_{m}\)-equivariant covering \(W_{i}^{\times}=\bigcup_{1\leqslant j\leqslant\dim W_{i}}U_{i}^{j}\) of \(W_{i}^{\times}\) for every \(i=1,\ldots,n\) as in the previous example, and define a scheme:
\[\mathbb{G}_{m}^{n}\backslash(\overline{\mathbb{G}}_{m}^{n}\times W^{\star}):=\bigcup_{i_{1},\ldots,i_{n}}\mathbb{G}_{m}^{n}\backslash\left(\overline{\mathbb{G}}_{m}^{n}\times U_{n}^{i_{n}}\times\ldots\times U_{1}^{i_{1}}\times W_{0}\right)\, \tag{17.11}\]
where each \(1\leqslant i_{k}\leqslant\dim W_{k}\). It does not depend on the choice of coordinates (since \(\mathbb{G}_{m}^{n}\) is central in the product of general linear groups \(\operatorname{GL}(W_{i})\)). Since it is \(\mathbb{G}_{m}^{n}\)-equivariant, the map \(m\) defines a morphism of schemes
\[m\ :\ \mathbb{G}_{m}^{n}\backslash(\overline{\mathbb{G}}_{m}^{n}\times W^{\star})\longrightarrow W. \tag{17.12}\]

**Proposition 17.8**.: _The scheme (17.11) canonically embeds as an open subscheme of the iterated blow-up \(B\to W\) of the affine space \(W\) along the linear subspaces_
\[W_{0}\ \subset\ W_{0}\oplus W_{1}\ \subset\ \ldots\ \subset\ W_{0}\oplus W_{1}\oplus\ldots\oplus W_{n-1}\]
_in increasing order of dimension. It is Zariski-dense in \(B\)._

Proof.: The most direct proof is via explicit coordinates generalising example 17.6. Consider the coordinate ring of a particular open in (17.11)
\[\mathcal{O}\left(\overline{\mathbb{G}}_{m}^{n}\times U_{n}^{i_{n}}\times\ldots\times U_{1}^{i_{1}}\times W_{0}\right)\]
where \(1\leq i_{k}\leq\dim W_{k}\). With the labelling of coordinates of example 17.6, it is
\[k\left[\lambda_{1},\ldots,\lambda_{n},\left(x_{i}^{j}\right)_{\begin{subarray}{c}0\leq i\leq n\\ 1\leq j\leq\dim W_{i}\end{subarray}},\frac{1}{x_{1}^{i_{1}}},\ldots,\frac{1}{x_{n}^{i_{n}}}\right]\]
where \(\lambda_{i}\) is the coordinate on the \(i\)th component of \(\overline{\mathbb{G}}_{m}\) (and the \(x_{0}^{j}\) are coordinates on \(W_{0}\)).
The action of the \(i\)th component of \(\mathbb{G}_{m}\) is equivalent to the action
\[\lambda_{j}\mapsto\begin{cases}\lambda_{j}\mu_{i}^{-1}&\text{ if }i=j\,\\ \lambda_{j}&\text{ if }i\neq j\end{cases}\quad\text{ and }\quad x_{k}^{j}\mapsto\begin{cases}\mu_{i}x_{k}^{j}&\text{ if }k\geq i\\ x_{k}^{j}&\text{ if }k<i\end{cases}\]
for all \(j\geq 1\). The \(\mathbb{G}_{m}^{n}\)-invariants of \(\mathcal{O}\left(\overline{\mathbb{G}}_{m}^{n}\times U_{n}^{i_{n}}\times\ldots\times U_{1}^{i_{1}}\times W_{0}\right)\) are
\[\mathcal{O}(W_{0})\Bigg{[}\left(\frac{\alpha_{1}^{j_{1}}}{\alpha_{1}^{i_{1}}}\right)_{j_{1}},\alpha_{1}^{i_{1}},\left(\frac{\alpha_{2}^{j_{2}}}{\alpha_{2}^{i_{2}}}\right)_{j_{2}},\frac{\alpha_{2}^{i_{2}}}{\alpha_{1}^{i_{1}}},\quad\ldots,\quad\left(\frac{\alpha_{n}^{j_{n}}}{\alpha_{n}^{i_{n}}}\right)_{j_{n}},\frac{\alpha_{n}^{i_{n}}}{\alpha_{n-1}^{i_{n-1}}}\Bigg{]}\]
where \(1\leq j_{r}\leq\dim W_{r}\), for \(r=1,\ldots,n\) and where \(\alpha_{j}^{i}=\lambda_{1}\lambda_{2}\ldots\lambda_{j}x_{j}^{i}\), which are \(\mathbb{G}_{m}^{n}\)-invariant. These are identical to a system of local coordinates on the iterated blow-up described in §5.2 (after reversing the indexation of the subscripts of the \(\alpha\)'s) along
\[V(\alpha_{i}^{j}:i\geq 1)\quad\subset\quad V(\alpha_{i}^{j}:i\geq 2)\quad\subset\quad\ldots\quad\subset\quad V(\alpha_{i}^{j}:i\geq n)\.\]
These coordinates define a canonical open immersion of this affine chart of (17.11) into \(B\). Since these coordinate charts are compatible they glue together to form an explicit open immersion of schemes from (17.11) to \(B\).

Inspection of the proof shows that the scheme (17.11) is canonically isomorphic to the complement in \(B\) of strict transforms of some linear subspaces of \(W\) which may be written in terms of the \(W_{i}\), although we will not need such an explicit description here.
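The invariance assertion at the heart of this proof can be tested directly. Here is a small sympy sketch (one coordinate \(x_{j}\) per graded piece and \(n=3\), which suffices to illustrate the pattern) implementing the action displayed above and checking that each \(\alpha_{j}=\lambda_{1}\cdots\lambda_{j}x_{j}\) is \(\mathbb{G}_{m}^{n}\)-invariant.

```python
# Checking G_m^n-invariance of alpha_j = lambda_1 ... lambda_j x_j for n = 3.
import sympy as sp

n = 3
lam = sp.symbols(f'lambda1:{n + 1}')
x = sp.symbols(f'x1:{n + 1}')
mu = sp.symbols(f'mu1:{n + 1}')

def act(i, expr):
    # i-th copy of G_m: lambda_i -> lambda_i mu_i^(-1), and x_k -> mu_i x_k for k >= i
    subs = {lam[i - 1]: lam[i - 1] / mu[i - 1]}
    subs.update({x[k - 1]: mu[i - 1] * x[k - 1] for k in range(i, n + 1)})
    return expr.subs(subs, simultaneous=True)

for j in range(1, n + 1):
    alpha_j = sp.Mul(*lam[:j]) * x[j - 1]
    for i in range(1, n + 1):
        assert sp.simplify(act(i, alpha_j) - alpha_j) == 0
```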
_Remark 17.9_.: The recursive structure of the iterated blow-up may be interpreted using the monoidal action. The blow-down morphism (17.12) factors through
\[\pi_{n}:\overline{\mathbb{G}}_{m}^{n}\times W^{\star} \longrightarrow W_{n}\times\left(\overline{\mathbb{G}}_{m}^{n-1}\times(W_{n-1}^{\times}\oplus\ldots\oplus W_{1}^{\times}\oplus W_{0})\right)\]
\[(\lambda_{1},\ldots,\lambda_{n}),(w_{n},\ldots,w_{0}) \mapsto \lambda_{n}w_{n},(\lambda_{1},\ldots,\lambda_{n-1}),(w_{n-1},\ldots,w_{1},w_{0})\.\]
It is \(\mathbb{G}_{m}^{n}\)-equivariant and hence defines a morphism
\[\mathbb{G}_{m}^{n}\backslash\left(\overline{\mathbb{G}}_{m}^{n}\times W^{\star}\right)\longrightarrow W_{n}\times\mathbb{G}_{m}^{n-1}\backslash\left(\overline{\mathbb{G}}_{m}^{n-1}\times(W_{n-1}^{\times}\oplus\ldots\oplus W_{1}^{\times}\oplus W_{0})\right)\.\]
The proposition implies that this is the blow-up along the subscheme defined by
\[Z_{n}=0\times\mathbb{G}_{m}^{n-1}\backslash\left(\overline{\mathbb{G}}_{m}^{n-1}\times(W_{n-1}^{\times}\oplus\ldots\oplus W_{1}^{\times}\oplus W_{0})\right)\]
where \(0\) is the origin in \(W_{n}\). To justify this, observe that \(\pi_{n}\) restricts to an isomorphism from the open locus of the left-hand side defined by \(\lambda_{n}\neq 0\) to the complement of \(Z_{n}\). This is because if \(w_{n}\neq 0\), the condition \(\lambda_{n}w_{n}\neq 0\) is equivalent to \(\lambda_{n}\neq 0\). Thus \(\pi_{n}\) is indeed an isomorphism on the complement of the exceptional divisor. The exceptional divisor \(\pi_{n}^{-1}(Z_{n})\cong V(\lambda_{n})\) is isomorphic to
\[\left(\mathbb{G}_{m}\backslash W_{n}^{\times}\right)\times\mathbb{G}_{m}^{n-1}\backslash\left(\overline{\mathbb{G}}_{m}^{n-1}\times(W_{n-1}^{\times}\oplus\ldots\oplus W_{1}^{\times}\oplus W_{0})\right)\.\]
By example 17.6 this is isomorphic to \(\mathbb{P}(W_{n})\times Z_{n}\), the projectivised normal bundle of \(Z_{n}\).

#### 17.3.4. Variant: projective version

With notation as above, let
\[W^{\circ}=W^{\times}_{n}\oplus W^{\times}_{n-1}\oplus\ldots\oplus W^{\times}_{1}\oplus W^{\times}_{0}\]
with the action of \(\mathbb{G}^{n+1}_{m}=\mathbb{G}^{n}_{m}\times\mathbb{G}_{m}\) where \(\mathbb{G}^{n}_{m}\) acts as above, but with an additional copy of \(\mathbb{G}_{m}\) (with coordinate \(\lambda_{0}\)) acting by scalar multiplication on \(W^{\circ}\). Then we may define the quotient \(\mathbb{G}^{n+1}_{m}\backslash(\overline{\mathbb{G}}^{n}_{m}\times W^{\circ})\) as before. By (17.10), all components of \(\mathbb{G}^{n}_{m}\) act trivially on \(W_{0}\) and the morphism (17.12) becomes:
\[\mathbb{G}^{n+1}_{m}\backslash(\overline{\mathbb{G}}^{n}_{m}\times W^{\circ})\longrightarrow\mathbb{G}_{m}\backslash(W\backslash\{0\})=\mathbb{P}(W)\.\]
Note that its image is the open complement of \(\mathbb{P}(W_{n}\oplus\ldots\oplus W_{1}\oplus\{0\})\) in \(\mathbb{P}(W)\). The projective version of proposition 17.8 is:

**Proposition 17.10**.: _The scheme \(\mathbb{G}^{n+1}_{m}\backslash(\overline{\mathbb{G}}^{n}_{m}\times W^{\circ})\), relative to the morphism:_
\[\mathbb{G}^{n+1}_{m}\backslash(\overline{\mathbb{G}}^{n}_{m}\times W^{\circ})\longrightarrow\mathbb{P}(W)\]
_canonically embeds into the iterated blow-up \(B_{W}\rightarrow\mathbb{P}(W)\) along the linear subspaces_
\[\mathbb{P}(W_{0})\ \subset\ \mathbb{P}(W_{0}\oplus W_{1})\ \subset\ \ldots\ \subset\ \mathbb{P}(W_{0}\oplus W_{1}\oplus\ldots\oplus W_{n-1})\]
_in increasing order of dimension. It is Zariski-dense in \(B_{W}\)._

#### 17.3.5. Exceptional divisors

The exceptional divisors of \(\mathbb{G}^{n+1}_{m}\backslash(\overline{\mathbb{G}}^{n}_{m}\times W^{\circ})\) are in one-to-one correspondence with the divisors \(V(\lambda_{i})\) defined by \(\lambda_{i}=0\) for \(i=1,\ldots,n\). Furthermore, the divisor \(V(\lambda_{i})\) is canonically isomorphic to a product
\[V(\lambda_{i})\ \simeq\ \mathbb{G}^{n-i+1}_{m}\backslash(\overline{\mathbb{G}}^{n-i}_{m}\times W^{\times}_{n}\oplus\ldots\oplus W^{\times}_{i})\ \times\ \mathbb{G}^{i}_{m}\backslash(\overline{\mathbb{G}}^{i-1}_{m}\times W^{\times}_{i-1}\oplus\ldots\oplus W^{\times}_{0})\]
which, if we write \(W^{\prime}=W_{i-1}\oplus\ldots\oplus W_{0}\), embeds into the product
\[V(\lambda_{i})\ \hookrightarrow\ B_{W/W^{\prime}}\,\mathbb{P}(W/W^{\prime})\times B_{W^{\prime}}\,\mathbb{P}(W^{\prime}) \tag{17.13}\]
where the blow-ups are with respect to the flags of subspaces defined by the filtration \(F_{i}=\bigoplus_{k=0}^{i}W_{k}\) on \(W\) and the filtrations it induces on \(W^{\prime}\) and \(W/W^{\prime}\). The irreducible components \(\mathcal{S}_{i}\) of the exceptional divisor \(\mathcal{S}\) define a stratification on the blow-up \(B_{W}\) of \(\mathbb{P}(W)\).
Consider the corresponding quasi-projective stratification
\[B_{W}=\bigcup_{I}\mathcal{S}^{\circ}_{I} \tag{17.14}\]
where the union is over all subsets \(I\subset\{1,\ldots,n\}\), and where, as is customary, we denote \(\mathcal{S}_{I}=\bigcap_{i\in I}\mathcal{S}_{i}\), with \(\mathcal{S}_{\emptyset}=\mathbb{P}(W)\), and define
\[\mathcal{S}^{\circ}_{I}=\mathcal{S}_{I}\backslash\left(\bigcup_{j\notin I}\mathcal{S}_{I}\cap\mathcal{S}_{j}\right)\.\]
It follows from the description (17.13) above that each stratum of (17.14), and hence each stratum of \(\mathbb{G}^{n+1}_{m}\backslash(\overline{\mathbb{G}}^{n}_{m}\times W^{\circ})\), is embedded in a product of projective spaces:
\[\mathcal{S}^{\circ}_{I} \subset \prod_{k=1}^{|I|}\mathbb{P}\left(\bigoplus_{i_{k-1}\leqslant j\leqslant i_{k}}W_{j}\right)=\prod_{k=1}^{|I|}\mathbb{P}\left(\mathrm{gr}^{F^{\prime}}_{k}W\right)\,\]
where \(F^{\prime}W\) is the filtration on \(W\) whose \(k^{\text{th}}\) graded is \(\bigoplus_{i_{k-1}\leqslant j\leqslant i_{k}}W_{j}\). The strata of (17.14) are in one-to-one correspondence with filtrations \(F^{\prime}\leqslant F\).

### Spaces of parabolic and symmetric matrices

Using the previous construction, we may define an algebraic version of the Borel-Serre construction \(A_{P}\backslash(\overline{A}_{P}\times P)\) associated to a rational parabolic \(P\) and embed it in an iterated blow-up. We then provide an algebraic interpretation of the space \(X(P)\) by embedding it into the real points of a scheme of finite type. We shall denote algebraic groups with calligraphic letters to distinguish them from the topological groups considered in §17.2.

Let \(V\) be a finite-dimensional vector space over \(\mathbb{Q}\) and consider a flag of subspaces
\[F\ :\ 0\subset V_{d}\subset V_{d-1}\subset\ldots\subset V_{1}\subset V_{0}=V\]
which is split: \(V\cong\bigoplus_{i\geq 0}V_{i}/V_{i+1}\). Let \(\mathcal{P}_{F}=\mathrm{End}_{F}(V)\) denote the affine variety over \(\mathbb{Q}\) of endomorphisms of \(V\) which preserve \(F\). Using the splitting of \(V\) we may write
\[\mathrm{End}_{F}(V)=\mathrm{Hom}_{F}(V,\ \bigoplus_{i}V_{i}/V_{i+1})=\bigoplus_{i}\mathrm{Hom}_{F}(V,V_{i}/V_{i+1})\]
which, upon setting \(\mathcal{P}_{i}=\mathrm{Hom}_{F}(V,V_{i}/V_{i+1})\), gives a decomposition
\[\mathcal{P}_{F}=\mathcal{P}_{d}\oplus\mathcal{P}_{d-1}\oplus\ldots\oplus\mathcal{P}_{0}. \tag{17.15}\]
Define a subscheme \(\mathcal{P}_{F}^{\circ}\) of \(\mathcal{P}_{F}\) as in §17.3.4. Let
\[\mathcal{Z}_{P}=\mathbb{G}_{m}^{d+1}\ \hookrightarrow\ \mathrm{GL}(\bigoplus_{i}V_{i}/V_{i+1})=\prod_{i}\mathrm{GL}(V_{i}/V_{i+1})\]
be the product of multiplicative groups whose \(k^{\mathrm{th}}\) component, where \(0\leq k\leq d\), acts by scalar multiplication on \(V_{k}/V_{k+1}\).
Let \(\overline{\mathcal{A}}_{P}=\overline{\mathbb{G}}_{m}^{d}\) denote the multiplicative monoid whose \(k^{\mathrm{th}}\) component, with coordinate \(\lambda_{k}\) where \(1\leq k\leq d\), acts by scalar multiplication on \(V_{k}\) as in §17.3, i.e., for \(v_{i}\in V_{i}/V_{i+1}\), for \(0\leq i\leq d\) we have: \[(\lambda_{1},\ldots,\lambda_{d})\times(v_{d},\ldots,v_{1},v_{0})\mapsto( \lambda_{d}\ldots\lambda_{1}v_{d},\ldots,\lambda_{2}\lambda_{1}v_{2},\lambda_{ 1}v_{1},v_{0})\.\] The morphism of monoids \(\mathcal{Z}_{P}\to\overline{\mathcal{A}}_{P}\) is given in coordinates by: \[\mathcal{O}(\overline{\mathcal{A}}_{P})\cong k[\lambda_{1},\ldots, \lambda_{d}] \to \mathcal{O}(\mathcal{Z}_{P})\cong k[\mu_{0}^{\pm},\ldots,\mu_{d}^ {\pm}]\] \[\lambda_{i} \mapsto \mu_{i-1}^{-1}\mu_{i} \tag{17.16}\] and has kernel \(\mathbb{G}_{m}\). Let \(\mathcal{A}_{P}\leq\overline{\mathcal{A}}_{P}\) be the subgroup defined by \(\lambda_{1}\cdots\lambda_{d}\neq 0\). Thus \(\mathcal{Z}_{P}/\mathbb{G}_{m}\cong\mathcal{A}_{P}\subset\overline{\mathcal{A} }_{P}\). **Proposition 17.11**.: _There is a canonical Zariski-dense embedding:_ \[\mathcal{Z}_{P}\backslash\left(\overline{\mathcal{A}}_{P}\times\mathcal{P}_{F }^{\circ}\right)\longrightarrow B_{F}\,\mathbb{P}(\mathcal{P}_{F}) \tag{17.17}\] _where \(B_{F}\mathbb{P}(\mathcal{P}_{F})\) is the iterated blow-up of \(\mathbb{P}(\mathcal{P}_{F})\) along the subspaces_ \[\mathbb{P}(\mathcal{P}_{0})\ \subset\ \mathbb{P}(\mathcal{P}_{0}\oplus \mathcal{P}_{1})\ \subset\ \ldots\ \subset\ \mathbb{P}(\mathcal{P}_{0}\oplus\ldots\oplus\mathcal{P}_{d-1})\.\] _Let \(I=\{i_{1},\ldots,i_{k}\}\) with \(1\leq i_{1}<\ldots<i_{k}\leq d\). The image of the locus \(V(\lambda_{i},i\in I)\) under the morphism (17.17) is contained in the codimension \(k\) stratum \(\mathcal{S}_{I}\) of the exceptional divisor of \(B_{F}\mathbb{P}(\mathcal{P}_{F})\) which corresponds to the flag \(F^{\prime}\leq F\):_ \[F^{\prime}\ :\ 0=V_{i_{k+1}}\subset V_{i_{k}}\subset V_{i_{k-1}}\subset\ldots \subset V_{i_{1}}\subset V_{i_{0}}=V. \tag{17.18}\] _Both \(V(\lambda_{i},i\in I)\) and its Zariski-closure \(\mathcal{S}_{I}\) are canonically isomorphic to products, and the morphism (17.17) restricts to the embedding:_ \[V(\lambda_{i},i\in I)\cong\prod_{m=0}^{k}\left(\mathcal{Z}_{P_{F_{m}}} \backslash\left(\overline{\mathcal{A}}_{P_{F_{m}}}\times\mathcal{P}_{F_{m}}^{ \circ}\right)\right)\longrightarrow\mathcal{S}_{I}\cong\prod_{m=0}^{k}B_{F_{ m}}\mathbb{P}(\mathcal{P}_{F_{m}}) \tag{17.19}\] _where \(\mathcal{P}_{F_{m}}=\bigoplus_{i_{m}\leq j<i_{m+1}}\mathcal{P}_{j}\), \(\mathcal{P}_{F_{m}}^{\circ}=\bigoplus_{i_{m}\leq j<i_{m+1}}\mathcal{P}_{j}^{\times}\) and \(F_{m}\) is the filtration induced by \(F\) on the quotients \(V_{i_{m}}/V_{i_{m+1}}\). The restriction of (17.19) to the open locus \(\{\lambda_{j}\neq 0:j\notin I\}\) gives an embedding_ \[\mathcal{Z}_{P_{F^{\prime}}}\backslash\mathcal{P}_{F}^{\circ}\quad \hookrightarrow\quad\mathcal{S}_{I}^{\circ}\subseteq\prod_{m=0}^{k}\mathbb{P}( \mathcal{P}_{F_{m}}). \tag{17.20}\] Proof.: Apply proposition 17.10 to \(\mathcal{P}_{F}\). The second part follows by iterating the description of a codimension one exceptional divisor given in §17.3.5. The only novelty is (17.20). 
It follows from the fact that the locus \(\{\lambda_{j}\neq 0:j\notin I\}\) is isomorphic to \[\left(\prod_{m=0}^{k}\mathcal{A}_{P_{F_{m}}}\backslash\left(\overline{ \mathcal{A}}_{P_{F_{m}}}\times\mathcal{P}_{F_{m}}^{\circ}\right)\right) \backslash\bigcup_{j\notin I}V(\lambda_{j})\cong\mathcal{A}_{P_{F^{\prime}}} \backslash\mathcal{P}_{F}^{\circ}\.\] This follows from \(\mathcal{P}_{F}^{\circ}=\bigoplus_{m=0}^{k}\mathcal{P}_{F_{m}}^{\circ}\), and the fact that the natural morphism of multiplicative monoids \(\mathcal{Z}_{P}\to\overline{\mathcal{A}}_{P}\) given by (17.16) has kernel \(\mathbb{G}_{m}\). Since the determinant is a homogeneous polynomial in matrix entries, we may define the determinant hypersurface \(\mathrm{Det}\subset\mathbb{P}(\mathcal{P}_{F})\) to be its vanishing locus. Let \[\widetilde{\mathrm{Det}}\quad\subset\quad B_{F}\mathbb{P}(\mathcal{P}_{F})\] denote its strict transform. Let \(P=\mathrm{GL}_{F}(V\otimes_{\mathbb{Q}}\mathbb{R})\) denote the (topological) group of automorphisms considered in §17.2, and likewise \(A_{P}\leq P\). Then we may interpret \[P\ \cong\ (\mathcal{P}_{F}\backslash\mathrm{Det})\left(\mathbb{R}\right)\] and it follows from proposition 17.11 that there are injections: \[Z_{P}\backslash\left(\overline{A}_{P}\times P\right)\longrightarrow\left( \mathcal{Z}_{P}\backslash\left(\left(\overline{\mathcal{A}}_{P}\times\mathcal{P}_{F}^ {\circ}\right)\backslash\mathrm{Det}\right)\right)\left(\mathbb{R}\right) \longrightarrow\left(B_{F}\mathbb{P}(\mathcal{P}_{F})\backslash\widetilde{ \mathrm{Det}}\right)\left(\mathbb{R}\right)\,. \tag{17.21}\] **Example 17.12**.: Consider the case when \(d=2\) and \(F\) is \(0\subset V_{2}\subset V_{1}\subset V\). By choosing a basis of \(V\) adapted to this filtration, the points of \(\mathcal{P}_{F}\) are given by block lower triangular matrices, and \(\overline{\mathcal{A}}_{P}\cong\overline{\mathbb{G}}_{m}^{2}\) acts by \[\left(\lambda_{1},\lambda_{2}\right)\times\begin{pmatrix}P_{0}&&\\ P_{01}&P_{1}&\\ P_{02}&P_{12}&P_{2}\end{pmatrix}\mapsto\begin{pmatrix}P_{0}&&\\ \lambda_{1}P_{01}&\lambda_{1}P_{1}&\\ \lambda_{1}\lambda_{2}P_{02}&\lambda_{1}\lambda_{2}P_{12}&\lambda_{1}\lambda_{ 2}P_{2}\end{pmatrix}\] where \(\lambda_{1},\lambda_{2}\) are points of \(\overline{\mathbb{G}}_{m}^{2}\). The points of \(\mathcal{P}_{i}\), for \(i=0,1,2\), are given by the subsets of matrices which vanish everywhere except in row \(i+1\), and the subspaces \(\mathcal{P}_{0}\subset\mathcal{P}_{0}\oplus\mathcal{P}_{1}\) which are to be blown up are the spaces of matrices of the form: \[\begin{pmatrix}P_{0}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\,\subset\,\begin{pmatrix}P_{0}&0&0\\ P_{01}&P_{1}&0\\ 0&0&0\end{pmatrix}\,.\] The points of the subspace \(\mathcal{P}_{F}^{\circ}\) satisfy \(P_{0}\neq 0\), \(P_{1}\neq 0\) and \(P_{2}\neq 0\). This strictly contains the complement of the locus \(\mathrm{Det}\) defined by \(\det(P_{0})\det(P_{1})\det(P_{2})=0\). The stratification of \(\mathbb{G}_{m}^{3}\backslash(\overline{\mathbb{G}}_{m}^{2}\times\mathcal{P}_{F}^{ \circ})\) is generated by the equations \(\lambda_{1}=0\) and \(\lambda_{2}=0\) (below left). 
The strata themselves are described as quotients (below, middle), which embed into the following products of blow-ups (below, right): \[\begin{array}{rcl}V(\lambda_{1})&:&(\mathbb{G}_{m}\backslash\mathcal{P}_{0} ^{\circ})\times\mathbb{G}_{m}^{2}\backslash\left(\overline{\mathbb{G}}_{m} \times(\mathcal{P}_{1}^{\circ}\oplus\mathcal{P}_{2}^{\circ})\right)&\subseteq& \mathbb{P}(\mathcal{P}_{0})\times B_{\mathbb{P}(\mathcal{P}_{1})}\mathbb{P}( \mathcal{P}_{2}\oplus\mathcal{P}_{1})\\ V(\lambda_{2})&:&\mathbb{G}_{m}^{2}\backslash\left(\overline{\mathbb{G}}_{m} \times(\mathcal{P}_{1}^{\circ}\oplus\mathcal{P}_{0}^{\circ})\right)\times( \mathbb{G}_{m}\backslash\mathcal{P}_{2}^{\circ})&\subseteq&B_{\mathbb{P}( \mathcal{P}_{0})}\mathbb{P}(\mathcal{P}_{0}\oplus\mathcal{P}_{1})\times \mathbb{P}(\mathcal{P}_{2})\\ V(\lambda_{1},\lambda_{2})&:&(\mathbb{G}_{m}\backslash\mathcal{P}_{0}^{\circ}) \times(\mathbb{G}_{m}\backslash\mathcal{P}_{1}^{\circ})\times(\mathbb{G}_{m} \backslash\mathcal{P}_{2}^{\circ})&\subseteq&\mathbb{P}(\mathcal{P}_{0})\times \mathbb{P}(\mathcal{P}_{1})\times\mathbb{P}(\mathcal{P}_{2})\.\end{array}\] These strata, respectively, correspond to the flags: \(0\subset V_{1}\subset V\) for \(\lambda_{1}=0\); \(0\subset V_{2}\subset V\) for \(\lambda_{2}=0\); and the full flag \(0\subset V_{2}\subset V_{1}\subset V\) for \(\lambda_{1}=\lambda_{2}=0\). ### Spaces of quadratic forms, blow-ups and \(X(P)\) Let \(F\), \(P\) be as in §17.4. There is a natural morphism of algebraic varieties \[\mathbb{P}(\mathcal{P}_{F}) \longrightarrow \mathbb{P}(\mathcal{Q}(V))\] \[M \mapsto M^{T}M \tag{17.22}\] which sends the subspace \(\mathbb{P}(\mathcal{P}_{0}\oplus\ldots\oplus\mathcal{P}_{i})\) to \(\mathbb{P}(\mathcal{Q}(V/V_{i+1}))\subset\mathbb{P}(\mathcal{Q}(V))\), the projective space of the subspace of quadratic forms which vanish on \(V_{i+1}\). Consequently, by the universal property of blow-ups, (17.22) induces a morphism \[B_{F}\mathbb{P}(\mathcal{P}_{F})\longrightarrow B_{F}\mathbb{P}(\mathcal{Q}(V)) \tag{17.23}\] where \(B_{F}\mathbb{P}(\mathcal{Q}(V))\) is the iterated blow-up of \(\mathbb{P}(\mathcal{Q}(V))\) along the linear subspaces \[\mathbb{P}\left(\mathcal{Q}(V/V_{d})\right)\;\subset\;\mathbb{P}\left( \mathcal{Q}(V/V_{d-1})\right)\;\subset\;\ldots\;\subset\;\mathbb{P}\left( \mathcal{Q}(V/V_{1})\right)\] associated to the flag \(F\). Combining with proposition 17.11 we deduce a natural map: \[\mathcal{Z}_{P}\backslash\left(\overline{\mathcal{A}}_{P}\times\mathcal{P}_{ F}^{\circ}\right)\longrightarrow B_{F}\mathbb{P}(\mathcal{P}_{F})\longrightarrow B _{F}\mathbb{P}(\mathcal{Q}(V)). \tag{17.24}\] Recall from §9.5 that the exceptional divisor \(\mathcal{E}\) of \(B_{F}\mathbb{P}(\mathcal{Q}(V))\) defines a stratification whose strata \(\mathcal{E}_{F^{\prime}}\) are in one-to-one correspondence with flags \(F^{\prime}\leq F\) (17.18), and are canonically isomorphic to a product of iterated blow-ups \[\mathcal{E}_{F^{\prime}}\cong\prod_{m=0}^{k}B_{F_{m}}\mathbb{P}\left(\frac{ \mathcal{Q}(V/V_{i_{m+1}})}{\mathcal{Q}(V/V_{i_{m}})}\right)\,\] where \(F_{m}\) is the induced filtration on the successive quotients \(V_{i_{m}}/V_{i_{m+1}}\) of (17.18). Let \(\mathrm{Det}\subset\mathbb{P}(\mathcal{Q}(V))\) denote the vanishing locus of the determinant, and \(\widetilde{\mathrm{Det}}\) its strict transform in \(B_{F}\mathbb{P}(\mathcal{Q}(V))\). 
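The compatibility of (17.22) with the flag subspaces, which drives the existence of (17.23), can be sanity-checked numerically in the setting of example 17.12. The following is a minimal sketch, not part of the original argument, with arbitrary illustrative block dimensions: it verifies over \(\mathbb{R}\) that a matrix \(M\in\mathcal{P}_{0}\oplus\mathcal{P}_{1}\) produces a quadratic form \(M^{T}M\) which vanishes on \(V_{2}\).

```python
import numpy as np

# Illustrative check (d = 2, as in example 17.12) that M -> M^T M sends
# P_0 (+) P_1 into the quadratic forms vanishing on V_2.  The block sizes
# n0, n1, n2 are arbitrary choices for the demonstration.
rng = np.random.default_rng(0)
n0, n1, n2 = 2, 3, 2                   # dims of V/V_1, V_1/V_2, V_2
n = n0 + n1 + n2

M = np.zeros((n, n))
M[:n0, :n0] = rng.standard_normal((n0, n0))                   # P_0 block
M[n0:n0 + n1, :n0 + n1] = rng.standard_normal((n1, n0 + n1))  # P_01, P_1 blocks

Q = M.T @ M                            # the associated quadratic form

# V_2 is spanned by the last n2 basis vectors; Q must vanish there.
assert np.allclose(Q[-n2:, :], 0) and np.allclose(Q[:, -n2:], 0)
print("M^T M vanishes on V_2, as claimed for (17.22)")
```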
It was proven in proposition 11.4 that \[\mathcal{E}_{F^{\prime}}\backslash\left(\mathcal{E}_{F^{\prime}}\cap \widetilde{\mathrm{Det}}\right)\cong\prod_{m=0}^{k}\left(\mathbb{P}\left( \frac{\mathcal{Q}(V/V_{i_{m+1}})}{\mathcal{Q}(V/V_{i_{m}})}\right)\; \backslash\;\mathrm{Det}\big{|}_{V_{i_{m}}/V_{i_{m+1}}}\right) \tag{17.25}\] where \(\mathrm{Det}\big{|}_{V_{i_{m}}/V_{i_{m+1}}}\) is the zero locus of the homogeneous map \[\frac{\mathcal{Q}(V/V_{i_{m+1}})}{\mathcal{Q}(V/V_{i_{m}})} \longrightarrow\mathcal{Q}(V_{i_{m}}/V_{i_{m+1}})\stackrel{{ \mathrm{det}}}{{\longrightarrow}}\mathbb{Q},\] where the first map is restriction of quadratic forms. **Theorem 17.13**.: _The map (17.24) induces an injective continuous map_ \[X(P)\longrightarrow\left(B_{F}\,\mathbb{P}(\mathcal{Q}(V))\;\backslash\;\widetilde{ \mathrm{Det}}\right)(\mathbb{R}). \tag{17.26}\] _For every flag \(F^{\prime}\) of the form (17.18) the map (17.26) restricts to a map_ \[e(P_{F^{\prime}})\longrightarrow\mathcal{E}_{F^{\prime}}\backslash\left( \mathcal{E}_{F^{\prime}}\cap\widetilde{\mathrm{Det}}\right)(\mathbb{R}) \tag{17.27}\] _to the corresponding stratum in the exceptional locus of \(B_{F}\,\mathbb{P}(\mathcal{Q}(V))\). The image of the map (17.26) is the closure inside \(B_{F}\mathbb{P}(\mathcal{Q}(V))(\mathbb{R})\), for the analytic topology, of the set of positive definite quadratic forms._ Proof.: The equation \(\mathrm{det}(M^{T}M)=\mathrm{det}(M)^{2}\) implies that (17.22) preserves the determinant loci \(\mathrm{Det}\) in \(\mathbb{P}(\mathcal{P}_{F})\) and \(\mathbb{P}(\mathcal{Q}(V))\), and so the morphism (17.23) preserves their strict transforms. It follows that (17.24) defines a natural map \[\mathcal{Z}_{P}\backslash\left(\overline{\mathcal{A}}_{P}\times\mathcal{P}_{ F}^{\circ}\right)\longrightarrow B_{F}\mathbb{P}(\mathcal{Q}(V))\backslash\; \widetilde{\mathrm{Det}} \tag{17.28}\] whose restriction to the open locus \(\{\lambda_{i}\neq 0,\text{ for }1\leq i\leq d\}\) is the morphism \[\mathbb{G}_{m}\backslash\mathcal{P}_{F}^{\circ}\longrightarrow\mathbb{P}( \mathcal{Q}(V))\backslash\mathrm{Det}\] which, in coordinates given by a basis of \(V\), sends a matrix \(M\) to the projective class of \(M^{T}M\). This map is invariant under left-multiplication by the subscheme \(\mathcal{K}_{P_{F}}\leq\mathcal{P}_{F}^{\circ}\) whose points are matrices \(O\) such that \(O^{T}O=1\), since one has \((OM)^{T}(OM)=M^{T}M\). Since this equation is algebraic, it remains true on the Zariski closure, and hence (17.28) is also invariant under left multiplication by \(\mathcal{K}_{P_{F}}\). 
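The two algebraic identities underlying this invariance, and the \(\mathrm{GL}(V)\)-equivariance used in §17.6.1 below, are elementary and can be confirmed numerically; the snippet below is an illustrative check only, not part of the proof.

```python
import numpy as np

# Sanity check of (OM)^T (OM) = M^T M for O^T O = 1 (invariance under K),
# and of (Mg)^T (Mg) = g^T (M^T M) g (compatibility of the GL(V)-actions).
rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
g = rng.standard_normal((n, n))
O, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix

assert np.allclose((O @ M).T @ (O @ M), M.T @ M)
assert np.allclose((M @ g).T @ (M @ g), g.T @ (M.T @ M) @ g)
```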
We deduce that the map \[Z_{P}\backslash\left(\overline{A}_{P}\times P\right) \longrightarrow\left(B_{F}\mathbb{P}(\mathcal{Q}(V))\backslash\;\widetilde{ \mathrm{Det}}\right)(\mathbb{R})\] which is induced by taking real points of the map (17.28) and composing with the first map in (17.21), is invariant under \(K_{P}\leqslant\mathcal{K}_{P_{F}}(\mathbb{R})\) and so passes to the quotient \[X(P)=Z_{P}K_{P}\backslash\left(\overline{A}_{P}\times P\right) \longrightarrow\left(B_{F}\mathbb{P}(\mathcal{Q}(V))\backslash\ \widetilde{\text{Det}}\right)(\mathbb{R})\.\] This gives a well-defined continuous map (17.26) such that the following diagram \[\begin{array}{ccc}Z_{P}\backslash\left(\overline{A}_{P}\times P\right)& \longrightarrow&\left(B_{F}\mathbb{P}(\mathcal{P}_{F})\backslash\ \widetilde{\text{Det}}\right)(\mathbb{R})\\ \downarrow&&\downarrow\\ X(P)=A_{P}\backslash\left(\overline{A}_{P}\times X\right)&\longrightarrow& \left(B_{F}\mathbb{P}(\mathcal{Q}(V))\backslash\ \widetilde{\text{Det}}\right)(\mathbb{R})\end{array}\] commutes, where the horizontal map along the top is the morphism (17.17) and the vertical maps are \(M\mapsto M^{T}M\) on the left, and (17.23) on the right. Now compute the restriction of the map (17.26) on each stratum indexed by a flag \(F^{\prime}\) of the form (17.18). The map (17.28) restricts to a map on exceptional divisors \[\mathcal{Z}_{P_{F^{\prime}}}\backslash\mathcal{P}_{F}^{\circ}\ \xrightarrow{(17.20)}\ \mathcal{S}_{F^{\prime}}^{\circ}\ \xrightarrow{(17.23)}\ \mathcal{E}_{F^{\prime}}\.\] Since it is left-invariant by \(\mathcal{K}_{P_{F}}\), on taking real points it descends to the quotient \[e(P_{F^{\prime}})=A_{P_{F^{\prime}}}\backslash X\cong Z_{P_{F^{\prime}}}K_{P_{ F^{\prime}}}\backslash P_{F^{\prime}}\longrightarrow\left(\mathcal{E}_{F^{\prime}} \backslash\left(\mathcal{E}_{F^{\prime}}\cap\widetilde{\text{Det}} \right)\right)(\mathbb{R})\.\] This morphism is injective on each stratum \(e(P_{F^{\prime}})\) since \(M^{T}M=N^{T}N\) for two invertible real matrices if and only if \(M=ON\) for \(O\) an orthogonal matrix. The final statement follows from the fact that (17.26) identifies the open subset \(X\subset X(P)\) with the subspace of projective classes of positive-definite matrices inside \(\mathbb{P}(\mathcal{Q}(V))(\mathbb{R})\). **Examples 17.14**.: As in example 17.12, use the splitting \(V\cong V_{2}\oplus V_{1}/V_{2}\oplus V/V_{1}\) to write endomorphisms of \(V\) in block matrix form. Consider the deepest stratum, which is associated to the full flag \(F\) and corresponds to \(\mathcal{E}_{12}\). The morphism (17.27) \[e(P)\longrightarrow\mathbb{P}(\mathcal{Q}(V/V_{1}))(\mathbb{R})\times \mathbb{P}\left(\frac{\mathcal{Q}(V/V_{2})}{\mathcal{Q}(V/V_{1})}\right)( \mathbb{R})\times\mathbb{P}\left(\frac{\mathcal{Q}(V)}{\mathcal{Q}(V/V_{2})} \right)(\mathbb{R})\] is given explicitly on matrix representatives in \(P\) of classes in \(e(P)=A_{P}K_{P}\backslash P\) by: \[\begin{pmatrix}P_{0}&&\\ P_{01}&P_{1}&\\ P_{02}&P_{12}&P_{2}\end{pmatrix}\mapsto\begin{pmatrix}P_{0}^{T}P_{0}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix},\ \begin{pmatrix}0&P_{01}^{T}P_{1}&0\\ P_{1}^{T}P_{01}&P_{1}^{T}P_{1}&0\\ 0&0&0\end{pmatrix},\ \begin{pmatrix}0&0&P_{02}^{T}P_{2}\\ 0&0&P_{12}^{T}P_{2}\\ P_{2}^{T}P_{02}&P_{2}^{T}P_{12}&P_{2}^{T}P_{2}\end{pmatrix}\] Now consider the stratum associated to the flag \(F^{\prime}:0\subset V_{1}\subset V\), which corresponds to the exceptional divisor \(\mathcal{E}_{1}\). 
The morphism (17.27) \[e(P^{\prime})\longrightarrow\mathbb{P}(\mathcal{Q}(V/V_{1}))(\mathbb{R}) \times\mathbb{P}\left(\frac{\mathcal{Q}(V)}{\mathcal{Q}(V/V_{1})}\right)( \mathbb{R})\] is given on matrix representatives in \(P\) of classes in \(e(P^{\prime})=A_{P^{\prime}}K_{P}\backslash P\) by: \[\begin{pmatrix}A&0\\ U&B\end{pmatrix}\ \ \mapsto\ \ \begin{pmatrix}A^{T}A&0\\ 0&0\end{pmatrix},\begin{pmatrix}0&U^{T}B\\ B^{T}U&B^{T}B\end{pmatrix}\] where \(A=P_{0}\), \(U=\left(\begin{smallmatrix}P_{01}\\ P_{02}\end{smallmatrix}\right)\), and \(B=\left(\begin{smallmatrix}P_{1}&0\\ P_{12}&P_{2}\end{smallmatrix}\right)\). #### 17.6.1. Proof of theorem 17.2 Theorem 17.2 follows from theorem 17.13 together with the description (17.8) of \(X^{\mathrm{BS}}\) as a union of \(X(P)\), for \(P\) rational parabolics. It follows from theorem 11.3 that the closure of \(X\) in \(\mathfrak{B}(\mathbb{R})\) is equal to its closure in \((\mathfrak{B}\backslash\widetilde{\mathrm{Det}})(\mathbb{R})\), since the analytic closure of any polyhedron \(\sigma\) in an admissible decomposition of \(X\) does not meet the strict transform of the determinant locus. The final part of the theorem is simply (17.25). Finally, the group of rational points \(\mathrm{GL}(V)(\mathbb{Q})\) acts on \(X\) via \(P\mapsto Pg\). Correspondingly, the algebraic group \(\mathrm{GL}(V)\) acts upon \(\mathbb{P}(\mathcal{Q}(V))\) via \((g,M)\mapsto g^{T}Mg\). Since it permutes the set of flags (17.5) and hence the centers of all the blow-ups which define \(\mathfrak{B}\), it extends to a natural action of \(\mathrm{GL}(V)\) upon \(\mathfrak{B}\). Its group of \(\mathbb{Q}\)-points \(\mathrm{GL}(V)(\mathbb{Q})\) acts algebraically upon \(\mathfrak{B}\). This action is compatible, via \(P\mapsto P^{T}P\), with the action of \(\mathrm{GL}(V)(\mathbb{Q})\) upon \(X\). #### 17.6.2. Proof of corollary 17.3 Consider a \(\mathrm{GL}_{g}(\mathbb{Z})\)-equivariant admissible decomposition \(X=\bigcup_{\sigma}\sigma\). Consider the image of a polyhedron \(\sigma\) under the map (17.3) \[f:X^{\mathrm{BS}}{\longrightarrow}\mathfrak{B}(\mathbb{R})\.\] Denote the closure of \(\sigma\subset X\) in \(X^{\mathrm{BS}}\) by \(\overline{\sigma}\). By invariance under extraneous blow-ups (proposition 5.15), the closure of \(f(\sigma)\) in \(\mathfrak{B}(\mathbb{R})\) (or equivalently, in \((\mathfrak{B}\backslash\widetilde{\mathrm{Det}})(\mathbb{R})\)) is canonically homeomorphic to the polyhedron \(\sigma^{\mathcal{B}}=\sigma^{\mathcal{B}^{\min}}\) in the minimal blow-up of \(\sigma\). The continuous map \(f\) defines a homeomorphism of stratified spaces \(f:\overline{\sigma}\stackrel{{\sim}}{{\to}}\sigma^{\mathcal{B}}\) since it is injective on each open stratum and \(\overline{\sigma}\) is compact. Conclude using the fact that, by definition (11.5), \(\left|L\mathcal{A}_{g}^{\mathrm{trop},\mathfrak{B}}\right|\) is the quotient of the union of all \(\sigma^{\mathcal{B}}\) by the action of \(\mathrm{GL}_{g}(\mathbb{Z})\), since the gluing relations in the diagram category \(\mathcal{D}_{g}^{\mathrm{trop},\mathcal{B}}\) are those induced by \(\mathrm{GL}_{g}(\mathbb{Z})\).
2309.04131
Efimov states in excited nuclear halos
Universality -- an essential concept in physics -- implies that different systems show the same phenomenon and can be described by a unified theory. A prime example of the universal quantum phenomena is the Efimov effect, which is the appearance of multiples of low-energy three-body bound states with progressively large sizes dictated by the discrete scale invariance. The Efimov effect, originally proposed in the nuclear physics context, has been observed in cold atoms and $^4\mathrm{He}$ molecules. The search for the Efimov effect in nuclear physics, however, has been a long-standing challenge owing to the difficulty in identifying ideal nuclides with a large $s$-wave scattering length; such nuclides can be unambiguously considered as Efimov states. Here, we propose a systematic method to identify nuclides that exhibit Efimov states in their excited states in the vicinity of the neutron separation threshold. These nuclei are characterised by their enormous low-energy neutron capture cross-sections, hence giant $s$-wave scattering length. Using our protocol, we identified $^{90}$Zr and $^{159}$Gd as novel candidate nuclides that show the Efimov states. They are well inside the valley of stability in the nuclear chart, and are suited for experimental realisation of the Efimov states in nuclear physics.
Shimpei Endo, Junki Tanaka
2023-09-08T05:09:54Z
http://arxiv.org/abs/2309.04131v2
# Efimov states in excited nuclear halos ###### Abstract Universality -- an essential concept in physics -- implies that different systems show the same phenomenon and can be described by a unified theory. A prime example of the universal quantum phenomena is the Efimov effect, which is the appearance of multiples of low-energy three-body bound states with progressively large sizes dictated by the discrete scale invariance. The Efimov effect, originally proposed in the nuclear physics context, has been observed in cold atoms and \({}^{4}\)He molecules. The search for the Efimov effect in nuclear physics, however, has been a long-standing challenge owing to the difficulty in identifying ideal nuclides with a large \(s\)-wave scattering length; such nuclides can be unambiguously considered as Efimov states. Here, we propose a systematic method to identify nuclides that exhibit Efimov states in their excited states in the vicinity of the neutron separation threshold. These nuclei are characterised by their enormous low-energy neutron capture cross-sections, hence giant \(s\)-wave scattering length. Using our protocol, we identified \({}^{90}\)Zr and \({}^{159}\)Gd as novel candidate nuclides that show the Efimov states. They are well inside the valley of stability in the nuclear chart, and are suited for experimental realisation of the Efimov states in nuclear physics. At low energies, seemingly different quantum systems show the same behaviours independent of the microscopic details and can be described by the universal theory, as in quantum phase transitions [1]. The Efimov states [2; 3; 4; 5], weakly bound three-body states featuring the discrete scale invariance (Fig. 1 (c)), are prime examples of the universal quantum phenomena. The Efimov states and their discrete scale invariance are observed in cold atoms [3; 5; 6; 7; 8; 9]. Interestingly, the Efimov states observed in different atomic species and spin states are found to behave universally [10; 11; 12; 13; 14]. The Efimov states are also predicted to appear in a condensed matter system of quantum spins with the same discrete scale-invariant pattern [15]. A key to the universality of the Efimov states is a large \(s\)-wave scattering length, which can be controllably realized in cold atoms with the Feshbach resonance [16; 17]. In contrast, the Efimov states in nuclear physics have remained elusive. The halo states with small binding energy and large spatial size appearing in the neutron-rich nuclei near the neutron drip-line [3; 18; 19; 20; 21; 22; 23; 24; 25] have been deemed as major candidates to show Efimov-like states; if a neutron-rich nucleus \({}^{A+1}_{Z}X\) has a one-neutron halo structure with the neutron predominantly occupying an outermost \(s\)-orbital, the \({}^{A+2}_{Z}X\) nucleus may have a two-neutron halo structure of Efimov character. Indeed, the halo state of \({}^{A+1}_{Z}X\) may be considered as a two-body state of the \({}^{A}_{Z}X\) core and a neutron if their spatial extent is much larger than the radius of \({}^{A}_{Z}X\). If the neutron is in an \(s\) orbital, the core \({}^{A}_{Z}X\) and the neutron \(n_{\uparrow}\) (assumed here to be in the spin-up state for later purposes) can be considered as a two-body system with a large \(s\)-wave scattering length \(a\). 
The halo \({}^{A+1}_{Z}X\) nucleus can then be universally described with \(a\): \[E_{\rm 2B}=-\hbar^{2}/2\mu a^{2},\ \ \ \ \ \sqrt{\langle r^{2}\rangle}=a/ \sqrt{2}, \tag{1}\] where \(\mu\) is the reduced mass between \({}^{A}_{Z}X\) and the neutron. The two-body energy \(E_{\rm 2B}\) corresponds to the energy of the \({}^{A+1}_{Z}X\) nucleus measured from the \({}^{A}_{Z}X+n_{\uparrow}\) separation threshold, and the root-mean-square radius \(\sqrt{\langle r^{2}\rangle}\) is the distance between \({}^{A}_{Z}X\) and \(n_{\uparrow}\). Because of the large \(s\)-wave scattering length between \({}^{A}_{Z}X\) and \(n_{\uparrow}\), and that between the neutrons in the spin \(\uparrow\) and \(\downarrow\) states (i.e. the \({}^{1}S_{0}\) channel), we can consider a system of the \({}^{A}_{Z}X\) nucleus with the two neutrons \(n_{\uparrow}\) and \(n_{\downarrow}\) as a three-body system interacting with large \(s\)-wave scattering lengths. Such a mass-imbalanced three-body system shows the Efimov states when the \(s\)-wave scattering length is large. As \(|a|\) gets larger, they tend to follow the discrete scale invariance, with a scale factor \(e^{\pi/|s|}\approx 16-20\) universally determined by the mass ratios between the particles [3; 4; 5]. Although this scenario seems plausible, finding a neutron-rich nucleus that unambiguously shows the universal Efimov features is challenging; the neutron-rich halo nuclei are often not in the low-energy regime to be described by the universal Efimov theory. The finite-range effects of the nuclear forces are relevant, and the higher orbital angular-momentum channels play crucial roles in some nuclei, such as \({}^{11}\)Li [26; 27]. Furthermore, the neutron-rich nuclei are not easy to synthesise, especially extremely neutron-rich ones such as \({}^{62}\)Ca [28; 29] and \({}^{72}\)Ca [30]. With these challenges, there are only a handful of candidates for the nuclear Efimov states. For most of them, however, the \(s\)-wave scattering length is at most of order 10 fm. The best candidate to realize universal Efimov physics with the current experimental technique is \({}^{19}\)B [31; 32], whose scattering length is \(\gtrsim 100\) fm. Here, we propose an alternative route to search for nuclear systems with extremely large \(s\)-wave scattering lengths, and three-body bound states that can distinctly be identified as Efimov states. In contrast to neutron-rich nuclei, we focus on nuclei well inside the valley of stability in the nuclear chart (Fig. 1 (a)). While their ground states are stable with a large binding energy per nucleon \(\sim 8\) MeV, we focus on their excited states in the vicinity of the neutron separation threshold. Suppose the excitation energy \(E_{x}\) of the nucleus \({}^{A+1}_{Z}X\) is very close to the one-neutron separation energy \(S^{\rm(GS)}_{1n}\). We may describe the excited state of \({}^{A+1}_{Z}X\) as a weakly bound two-body state of the \({}^{A}_{Z}X\) nucleus and a neutron \(n_{\uparrow}\) with its relative two-body energy \(S^{\rm(GS)}_{1n}-E_{x}\) (centre of Fig. 1 (b)). If this halo state mainly occupies an \(s\)-wave orbit, \({}^{A+1}_{Z}X\) can be considered as a two-body system with a large \(s\)-wave scattering length. If we then add another neutron with an opposite spin, we have a three-body system \({}^{A}_{Z}X\)-\(n_{\uparrow}\)-\(n_{\downarrow}\) interacting with large \(s\)-wave scattering lengths, which is expected to show the Efimov states, as an 'Efimov favoured system' [3; 4]. 
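As a quick numerical illustration of Eq. (1) (not part of the original text), the snippet below converts a two-body energy \(|E_{\rm 2B}|=|S_{1n}^{\rm(GS)}-E_{x}|\) into \(|a|\), assuming standard values of \(\hbar c\) and the neutron mass; the choice \(A=88\) anticipates the \({}^{88}\)Zr core discussed below.

```python
import numpy as np

hbarc = 197.327            # MeV fm
mn_c2 = 939.565            # MeV, neutron rest energy
A = 88                     # illustrative core mass number (88Zr, see below)
mu_c2 = mn_c2 * A / (A + 1)    # core-neutron reduced mass, in energy units

def scattering_length(E2B_MeV):
    """|a| in fm from |E_2B| = hbar^2 / (2 mu a^2), i.e. Eq. (1)."""
    return hbarc / np.sqrt(2.0 * mu_c2 * abs(E2B_MeV))

a = scattering_length(25e-9)   # |S_1n - E_x| = 25 meV, expressed in MeV
print(f"|a| = {a:.2e} fm, rms radius = {a / np.sqrt(2):.2e} fm")
# -> |a| ~ 2.9e4 fm: an |S_1n - E_x| of order 10 meV gives |a| of order 1e4 fm
```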
With smaller \(|S^{\rm(GS)}_{1n}-E_{x}|\), the \(s\)-wave scattering lengths are larger (see Eq. (1)). As shown later, there are nuclear excited states with \(|S^{\rm(GS)}_{1n}-E_{x}|\sim 10\) meV, corresponding to \(|a|\sim 10^{4}\) fm. Because the number of Efimov states increases as \(|a|\) increases and the states show clearer universal features [2; 3; 4], we expect more Efimov states with better universal features for such nuclear systems; for example, they behave as Borromean states for \(1/a<0\) and break up into a halo dimer state of \({}^{A}_{Z}X\) and a neutron for \(1/a>0\) as the inverse scattering length \(1/a\) between \({}^{A}_{Z}X\) and a neutron is continuously varied (Fig. 1 (c)). Because the Efimov states are weakly bound, the Efimov states in the \({}^{A+2}_{Z}X\) nucleus should appear as the excited states in the vicinity of the two-neutron separation threshold (the rightmost part of Fig. 1 (b)). The spin-parity \(J^{P}\) of the Efimov states is the same as that of the core \({}^{A}_{Z}X\) because the anti-parallel neutrons in the \(s\)-wave orbit (\(0^{+}\)) couple to the core. Consequently, as summarized in Fig. 1 (b), the Efimov states in \({}^{A+2}_{Z}X\) appear as sub-threshold excited states with \(J^{P}\), along with a sub-threshold excited state of \({}^{A+1}_{Z}X\) with \((J+1/2)^{P}\), adjacent to the \({}^{A}_{Z}X\) ground-state nucleus with \(J^{P}\). The Efimov states are distinct from the ground state of \({}^{A+2}_{Z}X\), which has a large binding energy per nucleon \(\sim 8\) MeV and can have angular momentum and parity \(J'^{P'}\) distinct from the Efimov states' \(J^{P}\). Similarly, the ground state of \({}^{A+1}_{Z}X\) in \(J'^{P'}\) is distinct from the excited halo state in \((J+1/2)^{P}\). The feature of the Efimov states having the same \(J^{P}\) as the core because of the anti-parallel neutrons sounds akin to the two-neutron pairing. However, the neutron pairing with a conventional BCS-like mechanism is a many-body phenomenon requiring a finite background neutron density, occurring inside or at the surface of the core nucleus. Contrarily, the Efimov states are few-body phenomena that occur far away from the core. Further, the neutron pairing can occur even when the neutrons occupy a non-\(s\)-wave orbit such that the resulting states can take various angular-momentum values allowed by the addition rule, while the Efimov states are restricted to the same \(J^{P}\) as the core. ## I Diagnosis of large \(s\)-wave scattering length with neutron capture cross-section One naive way to find good candidates that would show the Efimov states in their sub-threshold excited states is to look for the nuclear data of \(S_{1n}^{\rm(GS)}\) and \(E_{x}\) and search for nuclei with \(|S_{1n}^{\rm(GS)}-E_{x}|\ll 1\) MeV. Although this approach provides a lot of seemingly good candidates, most of them do not show the Efimov states; this is because the sub-threshold states often do not exhibit halo \(s\)-wave characteristics: it can be a conventional nuclear excited state coincidentally appearing around the neutron separation threshold; it can be a low-energy collective excitation, such as giant resonances; even if it is a halo state of \({}^{A}_{Z}X\) and a neutron, it can be in the non-\(s\)-wave orbitals. Therefore, the \(|S_{1n}^{\rm(GS)}-E_{x}|\) values alone would give us too many 'spurious' candidates.

Figure 1: Schematic illustration of the Efimov states in the excited nuclei. 
(a) Excited states of the nuclei in the vicinity of the two-neutron separation threshold \(S_{2n}^{\rm(GS)}\) (light blue 3D surface) are proposed to show the Efimov states. The near-threshold excited states of \({}^{90}\)Zr and \({}^{159}\)Gd (red circles) are specific candidates, whose ground states are well inside the valley of stability in the nuclear chart (National Nuclear Data Centre). These nuclei have outermost 3s (red) and 4s orbitals (purple), respectively. The proton-rich side is not shown because \(S_{2n}^{\rm(GS)}\) is complicated there, and it is out of the scope of this paper. (b) The \(s\)-wave halo state of \({}^{A}_{Z}X\) and a neutron \(n_{\uparrow}\), appearing as the excited state of the \({}^{A+1}_{Z}X\) nucleus (blue level) in the vicinity of the neutron separation threshold (black dashed line), is the crucial ingredient for Efimov physics. It is in \((J+1/2)^{P}\), in conjunction with the \(J^{P}\) state of \({}^{A}_{Z}X\), with very small two-body energy \(S_{1n}^{\rm(GS)}-E_{x}\), which can be estimated from the one-neutron separation energy \(S_{1n}^{\rm(GS)}\) and the excitation energy \(E_{x}\) as measured from the ground state. The Efimov states appear as excited states of \({}^{A+2}_{Z}X\) (red levels) in the vicinity of the two-neutron separation threshold \(S_{2n}^{\rm(GS)}\) with \(J^{P}\), the same spin-parity as that of \({}^{A}_{Z}X\) but distinct from the ground state of \({}^{A+2}_{Z}X\) (black bottom level). The rest mass of the neutrons is subtracted in these diagrams. (c) Schematic illustration of the energy spectrum as the \({}^{A}_{Z}X\)-neutron \(s\)-wave scattering length (horizontal axis) is varied. Energy is measured from the neutron separation threshold (black dashed line in (b)). The weakly bound Efimov states (two red curves) and the tightly bound ground state (black curve) correspond to the excited states (red levels) and the ground state (black level) in the rightmost part of the level diagram of \({}^{A+2}_{Z}X\) in (b), whereas the dimer curve (blue curve) corresponds to the \(s\)-wave halo state (blue level) of \({}^{A+1}_{Z}X\) in (b). The physical value of the core-neutron \(s\)-wave scattering length (green vertical dotted line) depends on nuclides, and can be estimated from thermal neutron capture cross-section \(\sigma_{\rm TNC}\) or \(S_{1n}^{\rm(GS)}-E_{x}\) data.

Herein, we propose that the thermal neutron capture cross-section provides much more direct access to the nuclei with the excited Efimov states. The thermal neutron capture is a reaction where the target nucleus captures an incident neutron, whose rate is represented by a cross-section \(\sigma_{\rm TNC}\). If the incident energy is small, as is the case for experiments performed at meV to eV [33; 34], the \(s\)-wave capture should be dominant. The incident neutron would silently come in close contact with the static core, rather than abruptly exciting its collective motions. This expectation should be particularly true when there is a weakly bound \(s\)-wave halo state: a neutron can be captured through such a _doorway state_ without much affecting the core nucleus (Fig. 2(b)). The capture cross-section gets larger as the radius of the _doorway state_ increases. 
We can therefore expect that nuclei with a large \(\sigma_{\rm TNC}\) are likely to possess a sub-threshold \(s\)-wave state with small binding energy and large spatial size, hence a large \(s\)-wave scattering length (see Eq. (1)). Using the conventional formula of the low-energy elastic scattering \(\sigma_{\rm el}=\pi a^{2}\) and assuming that it would be the same order of magnitude as the neutron capture cross-section, as is the case for the 'black-body' scattering [35], we obtain \[|a|\approx\sqrt{\frac{\sigma_{\rm TNC}}{\pi}} \tag{2}\] as a crude but useful estimate of the \(s\)-wave scattering length. As notable examples, extremely large neutron capture cross-sections have been reported: \(\sigma_{\rm TNC}=8.27(64)\times 10^{5}\) b for \({}^{88}\)Zr [33], and \(\sigma_{\rm TNC}=2.54(2)\times 10^{5}\) b for \({}^{157}\)Gd [33; 34], respectively. From Eq. (2), the \(s\)-wave scattering lengths for \({}^{88}\)Zr-\(n\) and \({}^{157}\)Gd-\(n\) are estimated to be \(|a|=5.1(2)\times 10^{3}\) fm and \(|a|=2.84(1)\times 10^{3}\) fm, respectively. These values are about two orders of magnitude larger than those in the neutron-rich nuclei. Therefore, we expect that \({}^{88}\)Zr and \({}^{157}\)Gd plus two neutrons, namely the excited states in \({}^{90}\)Zr and \({}^{159}\)Gd, are three-body systems with very large \(s\)-wave scattering lengths, which are conducive to observing clear signatures of Efimov states. From the \(S_{1n}^{(\rm GS)}\) and \(E_{x}\) data, we can further strengthen the evidence for the Efimov states. Indeed, \({}^{89}\)Zr possesses a sub-threshold state with an extremely small energy \(|S_{1n}^{(\rm GS)}-E_{x}|\leq 25\) meV [36; 37], too small to identify whether it is a bound (\(S_{1n}^{(\rm GS)}-E_{x}>0\)) or virtual state (\(S_{1n}^{(\rm GS)}-E_{x}<0\)). In either case, we can estimate the \(s\)-wave scattering length of \({}^{88}\)Zr-\(n\) as \(|a|\geq 2.9\times 10^{4}\) fm (Eq. (1) is valid even for a virtual state). Although this value differs by a factor of a few from that estimated from \(\sigma_{\rm TNC}\), because Eq. (2) is a crude estimate, the two different physical quantities both suggest enormous \(s\)-wave scattering lengths. The same is true for Gd, i.e., \({}^{158}\)Gd has a virtual state with \(S_{1n}^{(\rm GS)}-E_{x}=31.4\) meV [34; 38]. This suggests a large negative \(s\)-wave scattering length \(a=-2.6\times 10^{4}\) fm. \({}^{88}\)Zr and \({}^{157}\)Gd should therefore be good candidates for the Efimov states in the sub-threshold excited states. These are not the only candidates; with further neutron capture measurements, other nuclei with large \(a\) may be discovered. Some remarks are in order. First, the \(s\)-wave scattering length of the core and a neutron should be simultaneously large for \(n_{\uparrow}\) and \(n_{\downarrow}\). If this condition is not satisfied, then one of the \(s\)-wave scattering lengths may take a naturally small value \(|a|\sim 1\) fm. The excited states of \({}^{A+2}_{Z}X\) would then be a three-body system with only two resonant interactions. It corresponds to an 'Efimov unfavoured system' [3; 4], where the binding energies are too small and the discrete scale factor is too large to observe in experiments. It is, therefore, crucial to have the same or approximately the same \(s\)-wave scattering lengths for \({}^{A}_{Z}X\)-\(n_{\uparrow}\) and \({}^{A}_{Z}X\)-\(n_{\downarrow}\). 
This is true for Zr: \({}^{88}\)Zr is in \(J^{P}=0^{+}\), dictating that the \(s\)-wave scattering lengths of \({}^{88}\)Zr-\(n_{\uparrow}\) and \({}^{88}\)Zr-\(n_{\downarrow}\) are the same and simultaneously large. By contrast, \({}^{157}\)Gd is in \(J^{P}=3/2^{-}\), so that its scattering with \(n_{\uparrow}\) and \(n_{\downarrow}\) can be different. Because \(\sigma_{\rm TNC}\) is enormous, at least one of the scattering lengths should be large. However, with current nuclear data, we cannot conclude whether or not \({}^{157}\)Gd-\(n_{\uparrow}\) (\(J^{P}=2^{-}\)) and \({}^{157}\)Gd-\(n_{\downarrow}\) (\(J^{P}=1^{-}\)) both have large \(s\)-wave scattering lengths. Indeed, the spin-parity of the sub-threshold state of \({}^{158}\)Gd is reported as \(J^{P}=2^{-},1^{-}\) [39], and there is no clear knowledge on the relative strengths of the interactions. Therefore, as a working hypothesis, we suppose in the following that the \(s\)-wave scattering lengths of \({}^{157}\)Gd-\(n_{\uparrow}\) and \({}^{157}\)Gd-\(n_{\downarrow}\) are the same. Second, the above argument of the thermal neutron capture is not rigorous, and there are nuclei in which the neutron capture is enhanced by other factors (e.g. fission of the U nucleus). Nevertheless, our approach is suitable for efficiently pinning down possible candidate nuclei. The other approach, based on \(S_{1n}^{(\rm GS)}\) and \(E_{x}\), is often contaminated by spurious states of complicated collective nature or higher orbitals. Furthermore, \(S_{1n}^{(\rm GS)}\) and \(E_{x}\) near the neutron separation threshold often get blurred by a high level density or broad experimental widths. The neutron capture cross-section would give us a simple but useful guideline for which nuclei may have a possible sub-threshold state of a halo nature. This is exemplified by \({}^{42-44}\)Ca: the \({}^{43}\)Ca nucleus has an excited state at \(E_{x}=7.93270(3)\) MeV, very close to \(S_{1n}^{(\rm GS)}=7.93289(17)\) MeV [40]. It is then tempting to expect an \(s\)-wave halo of \({}^{42}\)Ca and a neutron with an extremely small binding energy \(S_{1n}^{(\rm GS)}-E_{x}=0.19(17)\) keV and a large \(s\)-wave scattering length \(a=340^{+760}_{-100}\) fm. However, \(\sigma_{\rm TNC}\) of \({}^{42}\)Ca is small. This is because the sub-threshold state of \({}^{43}\)Ca is not well described by the neutron plus \({}^{42}\)Ca core picture; it may be a complicated superposition of various states including collective excitations of \({}^{43}\)Ca. Our approach can efficiently exclude such nuclei with dominant non-halo contributions. ## II Efimov states in the excited nuclei We perform three-body calculations to demonstrate Efimov physics in the excited states of \({}^{90}\)Zr and \({}^{159}\)Gd (see Methods for details). The nuclei \({}^{88}\)Zr and \({}^{157}\)Gd are modelled as single particles interacting with the two neutrons by the Woods-Saxon potentials. The interaction strength is varied around the third and fourth \(s\)-wave resonances, which correspond to the dominant 3s and 4s orbitals around the thresholds in \({}^{90}\)Zr and \({}^{159}\)Gd, respectively. The interaction between the neutrons is modelled by the AV4 potential with \(a_{nn}=-19.5\) fm [41]. Our three-body model is particularly suited for low-energy sub-threshold states, where their size is large and the core excitations are irrelevant, while it cannot describe their ground states owing to their large binding energies (\(\sim 16\) MeV). 
To compare it with a hypothetical neutron-neutron interaction at the unitary limit \(a_{nn}=\pm\infty\), an ideal situation for the Efimov states, we also consider a scaled neutron-neutron potential \(\lambda V_{nn}(r)\), where \(\lambda=1.07\ldots\) is fine-tuned so that \(a_{nn}=\pm\infty\). In Fig. 3, we show the three-body energies \(E_{n}^{\rm(3B)}\) for (a) \({}^{88}\)Zr+2n and (b) \({}^{157}\)Gd+2n, respectively.

Figure 2: (a) When the \(s\)-wave scattering length \(a\) is much larger than the typical range of the interaction \(R\), \(|a|\gg R\), there is a halo state (bound state for \(a>0\) and virtual state for \(a<0\)), whose wavefunction \(\psi\) between the nucleus core and a neutron extends up to a distance much larger than \(R\) (solid curve). This is in sharp contrast with a typical bound state with \(|a|\sim R\) (orange dotted curve), whose spatial size is as small as \(R\). (b) Schematic illustration of the thermal neutron capture process via the halo state. While \(\sigma_{\rm TNC}\sim\pi R^{2}\sim\) 1-10 fm\({}^{2}\) for an ordinary process in which a neutron is directly captured by the core nucleus, in a resonant capture process the low-energy neutron capture cross-section is enhanced by a halo state around the core as follows: (i) An incident neutron, typically at much smaller energy than the nuclear energy scales, approaches to a distance \(r\sim|a|\). (ii) For typical nuclei, the neutron cannot get captured by the nucleus (red region) as it is far away from the nucleus, whose size is \(R\ll r\). Because of the \(s\)-wave halo state with a very large spatial size (blue region), the neutron can be virtually captured into the halo state, even when it is much further away from the core. (iii) The captured neutron may, after a while, decay into an energetically more stable state by emitting energy, e.g. multiple \(\gamma\)-rays. Thus, the halo state acts as a _doorway_ state, through which the neutron capture rate is significantly enhanced.

When the neutron-neutron interaction is strictly at the unitary limit \(a_{nn}=\pm\infty\) (orange dashed-dotted curves), an infinite tower of the Efimov states appears around the unitary limit of the core-neutron scattering length \(1/a=0\). In particular, the excited states excellently show the universal Efimov features: they start to appear as Borromean states for \(1/a<0\) and dissociate into a particle and a dimer for \(1/a>0\) as \(1/a\) is varied. They also show the discrete scale-invariant pattern dictated by the Efimov theory: the ratios of the binding energies at the unitary limit \(1/a=0\) are \(\sqrt{E_{n+1}^{\rm(3B)}/E_{n}^{\rm(3B)}}=15.26(5),15.95(5)\) for Zr and \(\sqrt{E_{n+1}^{\rm(3B)}/E_{n}^{\rm(3B)}}=16.05(5),15.86(5)\) for Gd, which are in excellent agreement with the universal values of the Efimov theory \(e^{\pi/|s|}=15.95...\) and \(15.86...\), respectively. By contrast, the ground state in Fig. 3 exhibits only marginal Efimov features; it neither dissociates into three particles for \(1/a<0\) nor into a particle plus a dimer for \(1/a>0\), and the ratios of the binding energies at the unitary limit are \(\sqrt{E_{2}^{\rm(3B)}/E_{1}^{\rm(3B)}}=2.4(1)\) for Zr and \(\sqrt{E_{2}^{\rm(3B)}/E_{1}^{\rm(3B)}}=4.2(2)\) for Gd. The ground state with a large binding energy is affected by the finite-range effects and is less likely to show the Efimov behaviour dictated by the universal theory. 
Thus, the ground state with energy \(E_{1}^{\rm(3B)}=0.466(2)\) MeV (Zr, \(1/a=0\)) and \(E_{1}^{\rm(3B)}=0.556(2)\) MeV (Gd, \(1/a=0\)) is much more affected by the finite-range effects than the first excited state, which has much smaller binding energies \(E_{2}^{\rm(3B)}=81.6(4)\) keV (Zr) and \(E_{2}^{\rm(3B)}=31.9(1)\) keV (Gd). We note that these ground states in Fig. 3 should not be confused with the true ground state of \({}^{90}\)Zr and \({}^{159}\)Gd; the binding energy \(E_{1}^{\rm(3B)}\) of the ground states in Fig. 3 is considerably smaller than the true ground-state binding energy \(S_{2n}^{\rm(GS)}\approx 16\) MeV, which cannot be described by our low-energy three-body model. The ground states in Fig. 3 should be considered as sub-threshold excited states of \({}^{90}\)Zr and \({}^{159}\)Gd.

Figure 3: Energy spectra of the three-body systems for variable core-neutron \(s\)-wave scattering length \(a\) for (a) \({}^{88}\)Zr+2n and (b) \({}^{157}\)Gd+2n. The zero energy is set at the three-body threshold energy; e.g. the \({}^{88}\)Zr+n+n threshold energy (with the rest mass of the neutron subtracted), i.e. the energy of the \({}^{88}\)Zr nucleus. The thick solid curves (red) correspond to the physical neutron-neutron potential, the AV4 potential, with \(a_{nn}=-19.5\) fm, and the thick dashed-dotted curves (orange) correspond to a fictitious neutron-neutron potential, namely the scaled AV4 potential tuned to the unitary limit \(1/a_{nn}=0\) (we only show up to the fourth trimer state). The thin curves (blue) are the two-body energy of the core nucleus plus one neutron, and the thin dashed lines (blue) are the universal dimer energy in Eq. (1). The green regions and the green vertical dotted lines are the physical \(s\)-wave scattering length \(a\) of the nucleus and a neutron estimated from the thermal neutron capture cross-section data (Eq. (2)) and the neutron separation energy data (Eq. (1)), respectively: (a) As \(|S_{1n}^{\rm(GS)}-E_{x}|\leq 25\) meV [36, 37] and the sign is unknown for Zr, we draw two vertical dotted lines, each corresponding to \(a=\pm 2.9\times 10^{4}\) fm. (b) For Gd, \(S_{1n}^{\rm(GS)}-E_{x}=31.4\) meV [34, 38], which suggests a large negative \(s\)-wave scattering length \(a=-2.6\times 10^{4}\) fm, denoted as a vertical dotted line.

In Fig. 3, we also show the results with a realistic neutron-neutron interaction with \(a_{nn}=-19.5\) fm (red solid curves). In contrast to the unitary one, we only find two three-body bound states because the realistic neutron-neutron interaction is slightly shallower than the unitary one. The excited trimer is extremely weakly bound, \(E_{2}^{\rm(3B)}=0.868(4)\) keV (Zr, \(1/a=0\)) and \(E_{2}^{\rm(3B)}=4.52\) keV (Gd, \(1/a=0\)), with their sizes estimated as \(\sqrt{\langle r^{2}\rangle}\sim\sqrt{\hbar^{2}/2m_{n}|E_{2}^{\rm(3B)}|}=155\) fm 
Thus, the weaker bound state of Zr is akin to the second excited Efimov state of \(a_{nn}=\pm\infty\), whereas that of Gd behaves similarly to the first excited Efimov state of \(a_{nn}=\pm\infty\), pushed up because the neutron-neutron scattering length is not infinite. The tighter bound state shows relatively less clear Efimovian characters due to finite-range effects, but it still may be considered as the Efimov states for most regions; it behaves as a Borromean state for \(1/a<0\) and the energy spectrum behaves in an analogous manner as \(1/a\) is varied from the unitary limit toward \(1/a>0\). The exceptions are \(1/a\ll 0\) and \(1/a\gg 0\) where they neither dissociate into three particles nor a particle plus a dimer due to finite-range effects. Except for these regions, the weaker and tighter bound sub-threshold states of Zr and Gd seems highly likely to be Efimov-like states. The ratios between the tighter and weaker bound states are \(\sqrt{E_{2}^{\rm(3B)}/E_{1}^{\rm(3B)}}=14.2\) for Zr and \(\sqrt{E_{2}^{\rm(3B)}/E_{1}^{\rm(3B)}}=7.6(3)\) for Gd at \(1/a=0\); these values are slightly better than those at \(1/a_{nn}=0\) (orange dashed-dotted curves) because of the smaller binding energies, which result in weaker finite-range effects. By using the \(\sigma_{\rm TNC}\) data, we can more specifically determine the binding energy of the Efimov trimers. From the \(\sigma_{\rm TNC}\) data, we can estimate the absolute values of \(a\) from Eq. (2), which are represented as left-most and right-most boundaries of the green regions in Fig. 3. Although these are not the accurate values of the \(s\)-wave scattering length, they are still useful for evaluating the order of magnitude of \(a\) in the Efimov plot (shown by coloured regions in Fig. 3) and estimating the binding energies of the sub-threshold trimer states as \(E_{1}^{\rm(3B)}=-176(1)\) keV and \(E_{2}^{\rm(3B)}=-0.87(1)\) keV for \({}^{90}\)Zr and \(E_{1}^{\rm(3B)}=-261(2)\) keV and \(E_{2}^{\rm(3B)}=-4.51(24)\) keV for \({}^{159}\)Gd. Better estimates of the value and the sign of \(a\) can alternatively be obtained if accurate data of \(S_{1n}^{\rm(GS)}\) and \(E_{x}\) are available; the \(s\)-wave scattering length estimated from Eq. (1), dimer binding energy (blue curve), and \(S_{1n}^{\rm(GS)}-E_{x}\)[36; 37; 38; 34] is denoted as vertical dotted lines (green), giving us the estimate \(E_{1}^{\rm(3B)}=-176(1)\) keV and \(E_{2}^{\rm(3B)}=-0.868(2)\) keV for \({}^{90}\)Zr and \(E_{1}^{\rm(3B)}=-261.7(1)\) keV and \(E_{2}^{\rm(3B)}=-4.49(2)\) keV for \({}^{159}\)Gd. These values are in excellent agreement with those with \(\sigma_{\rm TNC}\), vindicating our approach to search for the large \(s\)-wave scattering length nuclei with \(\sigma_{\rm TNC}\). To examine the dependence of these results on inter-particle interactions, we also perform our three-body calculations for different choices of neutron-neutron interactions (see Appendix for details). Most of the qualitative features are found to remain the same as Fig. 3. One notable difference is that the presence of the weaker bound state is sensitive to the value of \(a_{nn}\). We found that only the tighter bound trimer of \(|E_{1}^{\rm(3B)}|\sim 100\) keV appear for the neutron -neutron interaction with slightly smaller \(|a_{nn}|\). 
While accurate inputs of \(a_{nn}\) and \(a\) are necessary to conclude the presence/absence of the second trimer, we universally find that at least one trimer with very small binding energy \(E_{1}^{\rm(3B)}\sim 100\) keV and large spatial size \(\sqrt{\langle r^{2}\rangle}\sim 10-20\) fm exists irrespective of the specific choice of inter-nucleon potentials. Therefore, we conclude that at least one, and possibly two, Efimov trimers appear as sub-threshold excited states of \({}^{90}\)Zr and \({}^{159}\)Gd. As the neutron-neutron interaction is common for all nuclei, we may also conjecture that there can also be one or at best two Efimov states for the other candidates in the ridge stripes in Fig. 1(a), owing to a finite value of \(a_{nn}\). However, this hypothesis must be tested in future studies. ## III 'Ridge Stripes' for the Efimov States Above the Nuclear Chart \({}^{90}\)Zr and \({}^{159}\)Gd are not the only nuclei, but should be a few examples of a broader series of halo states. Let us give a global viewpoint on where in the nuclear chart our sub-threshold excited Efimov states should appear. Consider starting from light balanced nuclei and increasing the number of nucleons along the valley of stability, i.e. the diagonal direction in the nuclear chart. As the nucleon number increases, the nucleons in the ground-state nuclei progressively occupy higher \(nl\) states as dictated by the shell model, which means an increase of the Fermi energy. The energy of the excited \(ns\) states as measured from the Fermi energy, i.e. \(E_{x}\) of \(ns\) states, would therefore decrease. Conversely, the one-neutron separation energy for the stable nuclei inside the valley of stability is almost constant, \(S_{1n}^{\rm(GS)}\sim 8\) MeV. Therefore, \(E_{x}-S_{1n}^{\rm(GS)}\) tends to decrease as the number of nucleons increases. We then arrive at the conjecture that there are optimal numbers of nucleons for each of the 2s, 3s, 4s states which realise a small \(E_{x}-S_{1n}^{\rm(GS)}\). This suggests that there are regions in the nuclear chart, each representing the \(n\)-th \(s\) state, where the nuclei in those regions are likely to possess a halo \(s\)-wave state near the neutron separation threshold, rendering a large \(\sigma_{\rm TNC}\). Due to the periodic appearance of \(ns\) states with different principal quantum numbers, those regions should appear repeatedly, just like 'stripes' when represented in the nuclear chart as in Fig. 1(a). Notably, the halo \(s\)-wave state and the Efimov states appear as excited states in the vicinity of the neutron separation thresholds. If we add an excitation energy as an additional axis on top of the nuclear chart as shown in Fig. 1(a), and represent the one- and two-neutron separation thresholds as the stability boundaries of the nuclei, then the \(s\)-wave halo state of \({}^{A+1}X\) and the Efimov state in \({}^{A+2}X\) appear in the vicinity of the curved surfaces representing \(S_{1n}^{\rm(GS)}\) and \(S_{2n}^{\rm(GS)}\), respectively. Consequently, the \(s\)-wave halos of \({}^{A+1}X\) (the Efimov states in \({}^{A+2}X\)) are likely to appear in the 3D stripe regions along the curved surfaces of \(S_{1n}^{\rm(GS)}\) (\(S_{2n}^{\rm(GS)}\)), each corresponding to 2s, 3s, 4s... stripes. \({}^{90}\)Zr and \({}^{159}\)Gd belong to the 3s and 4s stripe, respectively. When we fix the number of neutrons and decrease the number of protons, moving toward the neutron-rich nuclei, the neutron separation energy decreases. 
It finally vanishes, \(S_{1n}^{\rm(GS)}=0\), when approaching the neutron drip-line, and so do the 3D stripes. The stripe regions along the curved surfaces of \(S_{1n}^{\rm(GS)}\) and \(S_{2n}^{\rm(GS)}\) therefore appear just like 'ridges' along the mountains: starting from their feet at the neutron drip-line and increasing their heights up to plateaus of \(S_{1n}^{\rm(GS)}\sim 8\) MeV and \(S_{2n}^{\rm(GS)}\sim 16\) MeV for nuclei in the valley of stability. Notably, the ridge stripes for the \(s\)-wave halos of \({}^{A+1}X\) on \(S_{1n}^{\rm(GS)}\) as well as the Efimov states of \({}^{A+2}X\) on \(S_{2n}^{\rm(GS)}\) are likely to appear in a parallel direction along the proton number axis. This is because the change of \(S_{1n}^{\rm(GS)}-E_{x}\) caused by the change of the neutron number tends to be much more significant than that caused by the change of the proton number; if the neutron number is changed, the neutron Fermi energy changes dramatically, leading to a significant change of \(E_{x}\). By contrast, if the proton number is changed with a fixed neutron number, then the energy levels of the neutrons shift. Unless this shift is so significant that the nuclei suddenly change the neutron occupation, the changes in \(S_{1n}^{\rm(GS)}\) and \(E_{x}\) are compensated by each other, resulting in a minor change in \(S_{1n}^{\rm(GS)}-E_{x}\). We therefore arrive at the schematic picture in Fig. 1(a), where the Efimov states likely appear in the ridge stripe regions, straddling above the nuclear chart and valley of stability, along the \(S_{2n}^{\rm(GS)}\) surface. The ridge stripes picture can naturally encompass conventional candidates of the \(s\)-wave halo and the Efimov states in the neutron-rich nuclei: the halo nuclei close to the neutron drip-line have a small \(S_{1n}^{\rm(GS)}\). In most candidates for Efimov states, the \(ns\) state should be close to the threshold. Therefore, these nuclei can be considered as belonging to the ridge stripe for the \(ns\) state around the foot of the ridge \(S_{1n}^{\rm(GS)},S_{2n}^{\rm(GS)}\approx 0\). As specific examples, \({}^{11}\)Li and \({}^{19}\)B, \({}^{20}\)C, and \({}^{22}\)C [22; 23; 25] may be qualitatively considered as belonging to the edge of the \(2s\) ridge. The super-neutron-rich nuclei \({}^{62}\)Ca and \({}^{72}\)Ca have been theoretically studied as candidates for the Efimov-like halo states [28; 29; 30; 42], and can be considered as belonging to the \(3s\) ridge. ## IV Conclusions & Outlook We studied nuclei excited in the vicinity of the neutron separation threshold, and proposed a systematic way to search for nuclei with a large \(s\)-wave scattering length using the thermal neutron capture cross-section data. The candidate nuclei found by our protocol, \({}^{88}\)Zr and \({}^{157}\)Gd, have enormous neutron capture cross-sections, suggesting doorway halo states with gigantic spatial sizes and hence extremely large \(s\)-wave scattering lengths, which are 1-2 orders of magnitude larger than those in previous low-energy nuclear studies. Because of the large \(s\)-wave scattering length, these nuclei are ideal testbeds for studying Efimov physics in nuclear systems. We demonstrate that at least one, and possibly two, three-body bound states of a nucleus plus two neutrons of Efimov nature appear as the sub-threshold excited states in \({}^{90}\)Zr and \({}^{159}\)Gd. These are not the only candidates, but rather a few examples of richer varieties of halo nuclear states that appear in the vicinity of the neutron separation threshold. 
We have argued that the \(s\)-wave sub-threshold halo states and the Efimov states should appear in the 'ridge stripes' regions excited above the nuclear chart, as schematically depicted in Fig. 1, each stripe region representing an \(ns\) valence state (\(n=2,3,4,...\)). Our ridge stripes picture naturally encompasses both the conventional Efimov candidates of neutron-rich nuclei, as the special case \(S_{1n},S_{2n}\to 0\), and our newly proposed Zr and Gd in the middle of the valley of stability, with \(S_{2n}\approx 16\) MeV. Further investigations into the properties of the sub-threshold excited states are necessary to unveil the nature of the nuclear Efimov states: e.g., their structures and reactions, and detailed many-body calculations with more sophisticated interactions. \({}^{90}\)Zr and \({}^{159}\)Gd are good candidates with which to start exploring nuclear Efimov physics in experiments, and will open a novel avenue for studies of halo nuclei and their universality. In particular, we expect that their signatures can be observed via inelastic scattering or a two-neutron transfer reaction; because \({}^{90}\)Zr is a stable nucleus, inelastic scattering may excite the ground state to the Efimov states. Alternatively, the Efimov states in \({}^{159}\)Gd can be observed by a two-neutron transfer reaction on the stable nucleus \({}^{157}\)Gd. Other nuclei with a large neutron capture cross-section, such as \({}^{135}\)Xe [33; 43], would also be notable candidates, as shown in Fig. 1. However, they are more challenging owing to the unstable nature of \({}^{135}\)Xe and \({}^{137}\)Xe, similar to the unstable \({}^{88}\)Zr and \({}^{159}\)Gd nuclei. We anticipate that the Efimov states in such nuclei may be observed in near-future nuclear experiments with radio-isotope beams.

The halo nuclear states are indispensable building blocks for understanding the origin of the elements and stellar reactions in the universe, and are at the frontier of nuclear studies. In addition to the neutron-rich nuclei, proton-rich nuclei close to the proton drip-line are equally relevant. There are even more exotic cluster nuclei with \(\alpha\) particles: \({}^{8}\)Be, which is considered a two-body bound state of two \(\alpha\) particles with a large \(s\)-wave scattering length in the vicinity of the \(\alpha\) breakup threshold. The Hoyle state [44], an excited state of \({}^{12}\)C, has been speculated to be a three-body Efimov-like state of three \(\alpha\) particles. In contrast to the sub-threshold states around the neutron separation threshold, these states are significantly affected by the Coulomb interaction, some of them becoming resonant states pushed up into the continuum. Although the relationship between these charged halo nuclei and our excited halos needs to be clarified, our study has added a new avenue, with specific candidate nuclei, for halo nuclear physics investigations. Furthermore, there are possibilities of \(p\)-wave or \(d\)-wave sub-threshold excited states, which may also be explored by looking at the neutron capture cross-section at small but moderately large incident energies. The \(p\)-wave and \(d\)-wave halos may also appear as ridge stripes of \(np\) and \(nd\) states above the nuclear chart, interweaving with each other and with the \(s\)-wave stripes presented in this paper.

###### Acknowledgements.

We thank Tokuro Fukui, Tomohiro Uesaka, Susumu Shimoura, Kazuyuki Ogata, and Emiko Hiyama for fruitful discussions.
This work was supported by JSPS KAKENHI Grant Numbers JP21H00116, JP22K03492, JP23H01174, and also by the RCNP Collaboration Research Network program as project number COREnet-050.

## Appendix A Methods

### Three-body model

We consider a three-body problem of a core nucleus plus two neutrons in opposite spin states. We assume that the interaction potential between the core nucleus and a neutron is the same for spin-up and spin-down neutrons. This is exactly true for \({}^{88}\)Zr (\(0^{+}\) state) owing to its symmetry, but a working hypothesis for \({}^{157}\)Gd (\(3/2^{-}\) state) owing to the lack of nuclear data. The interaction between the nucleus core and a neutron is modelled as the Woods-Saxon potential \[V(r)=\frac{V_{0}}{1+\exp\left(\frac{r-R}{r_{0}}\right)}, \tag{10}\] with \(R=1.25A^{1/3}\) fm, \(r_{0}=0.65\) fm, and \(A=88\) and \(157\) the mass numbers for Zr and Gd, respectively. The depth of the potential \(V_{0}\) is varied to change the \(s\)-wave scattering length \(a\) between the core nucleus and the neutron, so as to obtain the Efimov spectrum as a function of \(a^{-1}\). More specifically, the depth \(V_{0}\) is tuned to be close to the third and fourth \(s\)-wave resonances, which correspond to the dominant threshold 3s and 4s orbitals in \({}^{90}\)Zr and \({}^{159}\)Gd, respectively. Indeed, the selected \(V_{0}\) lie in the ranges of \(-35\) to \(-65\) MeV and \(-45\) to \(-75\) MeV, respectively, which are consistent with the realistic Woods-Saxon potential depth \(V_{0}=-40\) to \(-50\) MeV [45]. For the interaction between the neutrons in opposite spin states, we adopt the AV4 potential with \(a_{nn}=-19.5\) fm [41] as a realistic neutron-neutron potential in Fig. 3, because at low energy we can essentially focus on the central part of the interaction. To compare it with an idealized neutron-neutron interaction strictly at the unitary limit \(a_{nn}=\pm\infty\), we also perform the calculation with a scaled AV4 potential \(\lambda V_{nn}(r)\): \(\lambda\) is fine-tuned to give \(a_{nn}=\pm\infty\) around the first \(s\)-wave resonance, which renders \(\lambda=1.07...\) for the AV4 potential. In solving the three-body problem, we use the separable potential method as introduced in Refs. [13; 14]. Namely, we construct separable functions \(g|\chi\rangle\langle\chi|\) to exactly reproduce the zero-energy two-body wavefunctions for the Woods-Saxon and neutron-neutron potentials, respectively, and solve the Skorniakov-Ter-Martirosian-like equation. This method does not require the introduction of an artificial three-body parameter; the scales of the Efimov spectrum are naturally set by the short-range correlations introduced in the separable functions.
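To illustrate how the scattering length responds to the potential depth, the following is a minimal Python sketch that integrates the zero-energy radial Schrödinger equation for the Woods-Saxon potential of Eq. (10) and reads off \(a\) from the asymptotic form \(u(r)\propto(r-a)\). The specific depths and the 30 fm matching radius are illustrative choices, not the values used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbarc = 197.327          # MeV fm
mn = 939.565             # MeV, neutron mass
A = 88                   # core mass number (the Zr case; illustrative)
mu = mn * A / (A + 1)    # neutron-core reduced mass (MeV)
fac = 2.0 * mu / hbarc**2                 # 2*mu/hbar^2 in MeV^-1 fm^-2

def scattering_length(V0, R=1.25 * A**(1 / 3), r0=0.65, r_max=30.0):
    """Integrate u'' = (2mu/hbar^2) V(r) u at zero energy and extract a
    from the asymptotic form u(r) ~ C (r - a) outside the potential."""
    V = lambda r: V0 / (1.0 + np.exp((r - R) / r0))   # Woods-Saxon, Eq. (10)
    rhs = lambda r, y: [y[1], fac * V(r) * y[0]]
    sol = solve_ivp(rhs, (1e-6, r_max), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    u, up = sol.y[0, -1], sol.y[1, -1]
    return r_max - u / up

for V0 in (-40.0, -45.0, -50.0):          # depths within the realistic range
    print(f"V0 = {V0:6.1f} MeV  ->  a = {scattering_length(V0):9.2f} fm")
```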
### Different neutron-neutron interaction

To understand how universal our results are with respect to the inter-particle interactions, we also perform our three-body calculations with two other neutron-neutron interactions: the Minnesota two-range Gaussian potential with \(a_{nn}=-16.9\) fm [46], and the charge-dependent Malfliet-Tjon two-range Yukawa potential (CDMT13) [47], which is adapted to reproduce well the low-energy scattering characteristics of the neutrons, \(a_{nn}=-18.59\) fm [32]. These potentials have different values of the \(s\)-wave scattering length, in addition to behaving rather differently at short distance. The results are summarized in Tables 1 and 2, together with those of the AV4 potential discussed in the main text and in Fig. 3.

First, the precise values of the binding energies differ between the potentials, owing to the differences in the \(s\)-wave scattering lengths and in the short-range form of the potentials. The latter effect can be interpreted as changing the value of the three-body parameter, which affects the absolute value of the binding energy of the Efimov states while the other universal features remain unaffected. The first and second excited states \(E_{2}^{(3\mathrm{B})},E_{3}^{(3\mathrm{B})}\) in the unitary-limit case \(a_{nn}=\pm\infty\) vary by \(\sim 15\%\), which is comparable to the variance of the three-body parameter found in the case of identical bosons [13; 14]. In all other respects, the trimer energies for the different nn interactions with \(a_{nn}=\pm\infty\) behave almost identically, with the higher excited states well reproducing the scaling ratio dictated by the universal Efimov theory. The second and third excited states in Tables 1 and 2 seem to have large errors due to the uncertainty in the sign and value of the \(s\)-wave scattering length estimated from the thermal neutron capture cross-section data. If we compare \(E_{n}^{(3\mathrm{B})}\) strictly at \(1/a=0\), we find no such errors, and the higher excited states reproduce the universal scale factor even better.

The difference in the value of the \(s\)-wave scattering length \(a_{nn}\) has a much more significant impact, because the number of Efimov states becomes smaller for a smaller value of \(|a_{nn}|\). For the AV4 potential, there are two trimer states on both the \(a_{nn}>0\) and \(a_{nn}<0\) sides. The ratio of the binding energies for \({}^{90}\)Zr is \(14.3(1)\), which is in good agreement with the universal Efimov value \(15.95\), even though \(a_{nn}\) and \(a\) are not strictly at the unitary limit. This indicates the possibility of testing the discrete scale invariance of the Efimov states in nuclear physics; however, some remarks are in order. First, the binding energy of the excited state is \(\lesssim\) keV, which is extremely small, so a minute change in the value of the \(s\)-wave scattering length can significantly affect the presence of this excited Efimov state. For the Minnesota and CDMT13 potentials, we only find one trimer state. This is consistent with the number of Efimov periods estimated from the universal Efimov theory, \(\frac{|s|}{\pi}\ln(|a_{nn}|/R)\approx 0.4-0.5\): due to the finite value of \(a_{nn}\), there can be one or at best two Efimov states. The precise value of the trimer binding energy \(E_{1}^{(3\text{B})}\) is also affected by the value of \(a_{nn}\), because the trimer's spatial size \(\sqrt{\langle r^{2}\rangle}\) is comparable to \(|a_{nn}|\). While the precise values of the binding energies are affected by the detailed shape of the potential and by \(a_{nn}\), we find that the other features of the more tightly bound trimer state remain the same as those of the AV4 potential.
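As a quick numerical illustration of the discrete scale invariance discussed above, the following sketch converts the quoted universal energy ratio into the scaling exponent \(s\) and compares it with the computed \({}^{90}\)Zr ratio; all input numbers are taken from the text.

```python
import numpy as np

E_ratio_obs = 14.3     # binding-energy ratio found for 90Zr (AV4 potential)
E_ratio_univ = 15.95   # universal Efimov value quoted in the text

s = 2 * np.pi / np.log(E_ratio_univ)   # scaling exponent: E_n / E_{n+1} = e^{2 pi / s}
print(f"scaling exponent s             : {s:.3f}")
print(f"length scaling factor e^(pi/s) : {np.exp(np.pi / s):.2f}")
print(f"deviation from universality    : {100 * (1 - E_ratio_obs / E_ratio_univ):.1f} %")
```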
2309.10144
Human Gait Recognition using Deep Learning: A Comprehensive Review
Gait recognition (GR) is a growing biometric modality used for person identification from a distance through visual cameras. GR provides a secure and reliable alternative to fingerprint and face recognition, as it is harder to distinguish between false and authentic signals. Furthermore, its resistance to spoofing makes GR suitable for all types of environments. With the rise of deep learning, steadily improving strides have been made in GR technology with promising results in various contexts. As video surveillance becomes more prevalent, new obstacles arise, such as ensuring uniform performance evaluation across different protocols, reliable recognition despite shifting lighting conditions, fluctuations in gait patterns, and protecting privacy. This survey aims to give an overview of GR and analyze the environmental elements and complications that could affect it in comparison to other biometric recognition systems. The primary goal is to examine the existing deep learning (DL) techniques employed for human GR that may generate new research opportunities.
Muhammad Imran Sharif, Mehwish Mehmood, Muhammad Irfan Sharif, Md Palash Uddin
2023-09-18T20:47:57Z
http://arxiv.org/abs/2309.10144v1
# Human Gait Recognition using Deep Learning: A Comprehensive Review

###### Abstract

Gait recognition (GR) is a growing biometric modality used for person identification from a distance through visual cameras. GR provides a secure and reliable alternative to fingerprint and face recognition, as it is harder to distinguish between false and authentic signals. Furthermore, its resistance to spoofing makes GR suitable for all types of environments. With the rise of deep learning, steadily improving strides have been made in GR technology with promising results in various contexts. As video surveillance becomes more prevalent, new obstacles arise, such as ensuring uniform performance evaluation across different protocols, reliable recognition despite shifting lighting conditions, fluctuations in gait patterns, and protecting privacy. This survey aims to give an overview of GR and analyze the environmental elements and complications that could affect it in comparison to other biometric recognition systems. The primary goal is to examine the existing deep learning (DL) techniques employed for human GR that may generate new research opportunities.

**Keywords:** Human Gait Recognition, Deep Learning, Biometric, Action Recognition

## 1 Introduction

Gait Recognition (GR) is a method of identifying individuals by their walking patterns [1]. There are different types of human recognition, such as face recognition, iris recognition, voice recognition, hand geometry, signature, vein, and gait recognition [2, 3, 4, 5, 6, 7]. DL algorithms have been widely used in various biometric tasks such as human face and iris recognition [8]. However, very few studies focus on GR using DL. GR is a challenging problem due to the subtle differences in the way people walk [9]. Computers find it difficult to recognize human gaits because subtle movements are hard to detect and because of the variations between people, body types, and conditions [10]. DL has significantly improved many image recognition tasks, like object detection and image segmentation [11], and it is also showing promising results in GR. GR is used for identification and authentication applications such as user authentication, access control [12], etc. It is a biometric measure that captures and analyzes gait patterns from video or images to authenticate a person [13]. A GR system can be used for two different purposes, which are discussed below.

### Identification based on Gait Recognition

In this use case, the system has to determine whether a person matches a sample taken earlier and stored in the database [14]. The sample may be any video or images of the person's gait, and the gait features are required to match those of the stored image or video [15]. To make identification more accurate, the gait samples can be taken under various conditions, such as different walking speeds, walking on different surfaces, carrying baggage, and indoor or outdoor settings [16]. The robustness of the model depends on the number of samples and the variations among the samples collected to train the system. A more robust model is more likely to identify the same person consistently, even when walking at different speeds, carrying different types of luggage [17], etc.

### Verification based on Gait Recognition

Unlike identification, where the system has to identify a person, in verification [18] the system has to match a sample taken at one time with another sample taken at another time. This is more challenging because the system has to account for all factors that impact an individual's gait, such as mood, age, health, fitness level, etc. [19]. A minimal sketch of both decision modes is given below.
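The two modes differ only in the decision rule applied to distances between gait feature vectors. The following is a minimal NumPy sketch under the assumption that feature extraction has already produced fixed-length embeddings; the embeddings, threshold, and subject count are illustrative stand-ins, not values from any surveyed system.

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 128))              # enrolled embeddings, 50 subjects
probe = gallery[7] + 0.1 * rng.normal(size=128)   # unknown walker's embedding

# Identification (one-to-many): pick the closest enrolled subject.
dists = np.linalg.norm(gallery - probe, axis=1)
print("identified as subject", int(dists.argmin()))   # -> 7

# Verification (one-to-one): accept or reject a claimed identity.
claimed = 7
threshold = 2.0                        # tuned on validation data in practice
print("claim accepted:", dists[claimed] < threshold)
```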
Fig 1 below depicts the human GR block diagram.

Figure 1: Human GR block diagram [20]. The output is obtained by applying background subtraction to the video or images of a moving person, extracting features from the resulting data, and then comparing the results to those in the database for recognition.

GR systems are beneficial when other biometric systems, such as fingerprint recognition or facial recognition, do not work well because of their ineffectiveness in challenging environments [21].

## 2 Overview of Gait Recognition

GR has appeared in science fiction for decades, from the automated doors in Star Trek to the futuristic police identification systems in Minority Report [22]. Despite that, the technology to perform GR was not available or reliable until the last decade [23]. Visual pattern recognition is a challenging job in computer science [24]. It is also subject to variations and changes depending on the person, the environment, and the time of day [25]. It has been performed with a variety of different techniques [26]. Some of the earliest work was based on computer vision and analyzed the movement of limbs or body shapes, while more recent work has used acoustic techniques [27].

### Distinctive Properties of Gait Recognition

GR has distinctive properties that make it challenging compared to other biometrics [28]. Due to the variation in gait, it is an unusual biometric and is often referred to as a "fool-proof" biometric [29]. Additionally, gait cannot be modified, aside from cases of paralysis or amputation [30]. Furthermore, since individuals walk daily, the data used for ML algorithms can be collected easily, and GR is classified as a passive biometric requiring no active user effort [31]. These factors make GR an especially valuable biometric.

### Importance of Gait Recognition

Gait Recognition (GR) is a valuable tool in various situations and uses, yet it can be difficult for computers to perform this task effectively [33]. Humans are hard enough to recognize from videos or pictures, making GR even more challenging [34]. It consists of using data obtained through visual sensors such as cameras, then collecting and processing the information to identify the individual's gait [35]. This is complicated by the two-dimensional nature of visual data, which means the computer only has two dimensions to work with. Additionally, gait tends to vary slightly depending on who is performing it and the conditions they are walking in, so computers must be able to recognize minor alterations while maintaining a high degree of accuracy [36].

### Human Gait Representation

The human gait cycle is a series of phases in which the body moves from one position to another [37]. The stance phase starts with the flat foot on the ground and ends with the body in a fully upright position [38]. The swing phase follows when the body is in motion. The additional phases include toe-off, heel-off, mid-swing, final swing, and pause [39]. Most sports and activities aim to move from one point to another as quickly and efficiently as possible. The human gait cycle comprises two significant phases, stance and swing, which are discussed below.

1. **Stance Phase**: The stance phase lasts from just before foot contact to when the ball of your foot hits the ground again after toe-off. This time allows you to prepare your muscles and joints for movement by stretching and lengthening them.
2. **Swing Phase**: The swing phase starts immediately after foot contact and ends when your ankle comes back under control, or you lift off the ground by pushing off with your toes. During this time, you are using the small muscles in your legs to push against the ground with each step you take.

The cycle of these two phases is depicted in Fig 2.

Figure 2: Human gait cycle from stance to swing phase [40]. The green color shows the movement of the left leg and the yellow color shows the movement of the right leg. We can also see single support, in which the person is supported by a single leg while moving, and double support, in which both legs of the person are on the ground.

## 3 Environmental Factors that Affect Gait Recognition

An individual's gait is affected by many factors, including lighting, hearing, stress levels, exhaustion, and diseases like Parkinson's [41]. If you plan to implement GR in your business or organization, you must understand how various environmental factors may impact its effectiveness [42]. Several environmental factors can affect the precision of GR [43]. These include:

1. Lighting Conditions - GR is 25% less accurate in low-light conditions than in bright indoor light.
2. Scarring - Traditionally, GR is used to identify and distinguish suspects in criminal proceedings. If an individual has a distinctive scar not located on the side of their face, it can be employed as a form of identification.
3. Clothing - Certain types of apparel, such as large bags or backpacks, can negatively influence the accuracy of GR.
4. Environmental Conditions - GR is more precise when used indoors and less precise when utilized outdoors.

### Security Strength of Gait Bio-metric

GR can be valuable for security and medical purposes, like recognizing people who enter a facility or monitoring a patient's progress [44]. Additionally, it can distinguish persons in crowded areas, which may be advantageous for law enforcement, intelligence gathering, and other safety-related domains [23, 45]. Furthermore, GR is usually more precise in indoor settings with sufficient lighting than in outdoor and dimly lit conditions [46]. A GR algorithm must be trained on many subjects' walking styles before it can accurately identify people [47]. It is better to train the algorithm on more subjects than fewer: this way, the algorithm is trained on a wider variety of walking patterns and can identify a wider variety of people. GR can serve as a biometric modality because it is not easily spoofed [48]. There have been instances where people have tried to fake a person's gait, but these have been unsuccessful, because learning how to walk like another person is challenging [49].

## 4 Gait Recognition Methods

Human GR is a field of research at an early-to-mid stage [50]. The computational complexity of most methods remains high, however; for example, many approaches require multiple cameras and viewpoint normalization, and most methods still have room for improvement. Regardless of their limitations [51], these technologies have the potential to revolutionize gait analysis. To perform the GR task, two broad families of approaches are used, appearance-based and model-based, as depicted in Fig 3.

1. Appearance-based approaches for object recognition seek to find objects by their visual appearance [53]. This contrasts with other methods that attempt to find objects by their geometrical, physical, or functional properties. These methods are generally computationally intensive and are thus often used in combination with other methods to create a more efficient object recognition system. (A minimal silhouette-extraction sketch, the usual first step of these pipelines, follows this list.)

2. Model-based approaches have become increasingly popular in a variety of different fields [54]. These approaches are particularly effective in the domain of ML. There are several advantages to using model-based approaches, including the ability to improve generalization, the ability to capture relationships between variables, and the ability to learn from incomplete data.
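Silhouette-based appearance methods typically begin with background subtraction, as in the block diagram of Figure 1. Below is a minimal OpenCV sketch assuming a static camera and a video file named `walk.mp4` (a hypothetical path); real pipelines additionally align and size-normalize the silhouettes.

```python
import cv2

cap = cv2.VideoCapture("walk.mp4")            # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)               # foreground mask for this frame
    # MOG2 marks shadows as 127; keep only confident foreground (255).
    _, sil = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    sil = cv2.morphologyEx(sil, cv2.MORPH_OPEN,
                           cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    silhouettes.append(sil)
cap.release()
print(f"extracted {len(silhouettes)} binary silhouettes")
```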
Furthermore, some papers that have used these approaches are summarized in Table 1 below.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Authors (et al.)** & \multicolumn{2}{c|}{**Appearance-based approaches**} & \multicolumn{3}{c|}{**Model-based approaches**} \\ \hline & **Silhouette-Based** & **Non-Silhouette** & **2D Structural** & **3D Structural** & **Motion** \\ \hline **Iwama** [55] & ✓ & & & & \\ \hline **Bashir** [56] & ✓ & & & & \\ \hline **Hayfron-Acquah** [57] & ✓ & & & & \\ \hline **Sivapalan** [58] & ✓ & & & & \\ \hline **Li** [59] & ✓ & & & & \\ \hline **Wu** [60] & ✓ & & & & \\ \hline **Zeng** [61] & ✓ & & & & \\ \hline **Kusakunniran** [62] & & ✓ & & & \\ \hline **Brox** [63] & & ✓ & & & \\ \hline **Hu** [64] & & ✓ & & & \\ \hline **Akita** [65] & & & ✓ & & \\ \hline **Guo** [66] & & & ✓ & & \\ \hline **Rohr** [67] & & & ✓ & & \\ \hline **Karaulova** [68] & & & ✓ & & \\ \hline **Niyogi** [69] & & & ✓ & & \\ \hline **Cunado** [70] & & & & & ✓ \\ \hline **Yam** [71] & & & & & ✓ \\ \hline **Wagg** [72] & & & & & ✓ \\ \hline **Bouchrika** [73, 74] & & & & & ✓ \\ \hline **Ariyanto** [75] & & & & ✓ & \\ \hline **Zhao** [76] & & & & ✓ & \\ \hline **Tang** [77] & & & & ✓ & \\ \hline **Lopez-Fernández** [78] & & & & ✓ & \\ \hline **Kastaniotis** [79] & & & & ✓ & \\ \hline \end{tabular} \end{table} Table 1: Various approaches to gait recognition used in different studies. A check mark indicates the approach used in the respective paper.

Figure 3: GR methods are divided into appearance-based and model-based approaches [52]. The former is further divided into two methods: silhouette methods and non-silhouette methods. The latter is divided into three models: 2D structural models, 3D structural models, and motion models.

There are many other methods that can improve the GR process, such as parametric eigenspace transformation, canonical space analysis, and deep learning methods [80].

### Parametric Eigenspace Transformation

Parametric eigenspace transformation, based on PCA [81], is an effective method to accelerate GR and identification methods [82]. It can detect differences in gait patterns from an image sequence [83]. This way, it is possible to recognize different gaits without using a database. It is also helpful for automatic attendance systems, and it is an effective technique for automatic gait analysis and face recognition [84]. The transformation reduces the dimensionality of the data by mapping each image template from a high-dimensional to a low-dimensional space [85]. Gait is considered a sequence of images providing rich perceptual information [86]. Moreover, it can reveal a walker's identity, gender, and emotional state [87]. Therefore, it is essential to identify a gait signature that is consistent across multiple viewing conditions [88].

### Canonical Space Analysis

Canonical space analysis is a valuable technique for reducing the dimensionality of data [89]. It is a powerful technique that can be used in GR methods. It works by transforming each image template from a high-dimensional to a lower-dimensional canonical space [90]. This transformation makes GR easier and faster. This technique can achieve high recognition accuracy even for relatively small data sets [91]. It is also capable of handling changing walking conditions and camera viewing angles [92].
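Both transformations above are linear projections learned from training templates. A minimal scikit-learn sketch of the PCA step, with LDA standing in for the canonical/discriminant projection, on randomly generated stand-in data (the template size and subject count are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Stand-in data: 200 flattened 64x44 gait templates from 10 subjects.
X = rng.random((200, 64 * 44))
y = np.repeat(np.arange(10), 20)

pca = PCA(n_components=50).fit(X)          # eigenspace transformation
X_eig = pca.transform(X)
lda = LinearDiscriminantAnalysis(n_components=9).fit(X_eig, y)  # canonical space
X_can = lda.transform(X_eig)
print(X.shape, "->", X_eig.shape, "->", X_can.shape)  # (200, 2816) -> (200, 50) -> (200, 9)
```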
The approach most in demand among researchers nowadays is the deep neural network, which is discussed below.

## 5 Overview of Deep Learning

DL is a subset of ML that uses many layers of neural networks. The first papers about DL were published in the 1980s [93], but today it has become increasingly important in computer science, especially artificial intelligence [94]. Among the most notable achievements of this technology are computer programs that play games better than any human, speech recognition systems that outperform humans, and cars that can drive themselves [95]. DL is based on artificial neural networks [96, 97]. DL algorithms are generally trained using massive amounts of data, e.g., image datasets, sound recordings, or data about human behavior [98]. A large number of deep learning techniques have been used by researchers in recent years [99, 100, 101, 102, 103, 104, 105].

### Recognizing Human Gait using Deep Learning and Computer Vision

DL is used for various computer vision tasks, e.g., object detection and image captioning [106]. These algorithms are trained to learn from data and perform specific tasks [107]. They can be used for image recognition, GR, and other visual recognition tasks [108]. Computer Vision (CV) deals with enabling computers to see and understand visual content [109]. It is a vast subject that can be applied to various fields, including medical science, food security, traffic monitoring, robotics, and human motion recognition [110]. There are several ways that computer vision can be implemented; the most common are image recognition, object detection, and image segmentation. Image recognition is the ability to look at an image and understand it [111]. Object detection is the capability to identify individual objects in an image, even if the objects are incomplete or partially obstructed from view [112]. Segmentation allows an image to be divided into parts that can be described individually.
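Several of the works reviewed next operate on the Gait Energy Image (GEI), the pixel-wise temporal average of aligned binary silhouettes over a gait cycle. A minimal NumPy sketch, assuming the silhouettes are already aligned and size-normalized (the random frames stand in for real data):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a stack of aligned binary silhouettes into one GEI template."""
    stack = np.stack(silhouettes).astype(np.float32) / 255.0  # (T, H, W) in [0, 1]
    return stack.mean(axis=0)                                  # (H, W) energy map

# Stand-in input: 30 random binary 64x44 frames in place of real silhouettes.
frames = [np.random.randint(0, 2, (64, 44), dtype=np.uint8) * 255 for _ in range(30)]
gei = gait_energy_image(frames)
print(gei.shape, gei.min(), gei.max())   # (64, 44) with values in [0, 1]
```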
In [113], the authors presented a thorough overview of the gait recognition framework to enhance accuracy across several gait databases. Palla et al. [114] proposed a novel FTS method that utilized the Firefly Algorithm to choose boundaries. PCA is used to reduce dimensionality, and multiple discriminant analysis (MDA) is used to increase class separability. The proposed technique was tested on the CASIA-B database, with the results showing excellent performance compared to other gait recognition systems discussed in the published literature. Rao et al. [115] proposed an algorithm that utilized the Gait Energy Image as a template to select key features from the input gait data. Afterward, the selected traits are submitted to Principal Component Analysis (PCA) and Multi-Class Linear Discriminant Analysis (LDA) to enhance gait recognition accuracy. The proposed approach can markedly improve the efficacy of gait recognition compared to existing SOTA techniques.

### Application of Deep Learning to Aging or Disabled Subjects

A DL model can be trained to recognize the faces of aging or disabled subjects [116]. One popular CNN model is the orthogonal Embedding-CNN model [117]. This method uses two CNNs instead of one for face recognition. The main steps required to perform the human GR task are discussed below [118].

#### 5.2.1 Feature Selection

Currently, researchers have been working on the subject of human GR using DL [119, 120]. The idea is to create a system that can recognize human gaits from video. The process of recognizing human gaits is based on a deep neural network (DNN) and the features of a person's gait. The DNN can be trained to recognize human gait patterns and is highly effective in identifying an individual's walking style [121].

#### 5.2.2 Classification

Almost all artificial neural network architectures can be traced back to similar input relevance values [122], meaning a network makes predictions based on the same input features it picked up during training. This is an important aspect when studying human gait, as the symmetries of the left and right body movements can help identify an individual [123]. Furthermore, this approach can automatically classify diseases and gait disorders [124, 125, 126].

#### 5.2.3 Accuracy

Several factors must be considered to improve the accuracy of human GR using DL [127]. One of these factors is the amount of variability in the data, which is inherent to human movements. In addition, it is essential to build a robust and reliable model [128].

### Limitations and challenges of Deep Learning for Human Gait Recognition

The following are some limitations of applying DL for GR [129]. It may not work well:

1. In low-light or dark environments.
2. In noisy environments with much background noise.
3. In environments where the person is wearing a heavy object or an outfit that affects how they walk.
4. If the person is walking in a way that is not natural.
5. If the person is walking unusually or oddly.
6. If the person is walking at a different speed from the sample that was used to train the model.
7. If the person is walking in a different pose or posture than the sample used for model training.
8. If the person has undergone significant changes since the sample was taken, such as losing weight, gaining weight, etc.

Human gait recognition is a promising biometric for human identification, but it still faces challenges due to factors such as the intrinsic variability of human walking styles, environmental factors, clothing, and lighting conditions [130]. This technology's difficulty lies in its ability to accurately identify humans from movement patterns alone, which is also affected by age differences and disability status [131]. Furthermore, algorithms are often required to be robust enough to handle multiple views of a person's walking style over time and across different environments [32]. Finally, gait recognition algorithms must effectively deal with outliers - people who walk differently than most others - without negatively impacting accuracy [132].
## 6 Datasets

DL is a powerful approach within ML for modeling the connection between inputs and outputs. A key advantage of DL is that it can be trained very efficiently using large datasets [133]. This can be exploited in various applications, including recognizing people at airports or helping robots walk in environments with rugged terrain [134].

### OU-ISIR LP Bag Dataset

The OU-ISIR LP Bag dataset was first introduced in 2018 and has attracted considerable attention from the community [135]. In less than two years, 18% of GR methods have utilized it [136]. However, there are some caveats to using it. First of all, the dataset only includes gait data with carried objects. Second, it is intended for specific applications, such as single-viewpoint applications [137]. The OU-ISIR dataset contains the gait silhouettes of 4,007 subjects. The subjects are aged from 0 to 94 years and thus span a wide age range. The data were acquired in two acquisition sessions with each subject. The subjects were recorded from 14 angles, with a 15-degree angle change at every step. The dataset has been extensively tested using cross-view testing protocols [138]. It comprises two databases: the OU-ISIR Gait Database and the CASIA Gait Database B [139]. It has a population of over 4,000 subjects, sufficient to test and improve the algorithms. The data are also used for age estimation. Some well-known publicly available datasets for evaluating different approaches to human GR systems are described in Table 2, together with the number of subjects in each.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Authors (et al.)** & **Year** & **Approaches** & **Subjects** & **Datasets** \\ \hline Chalidabhongse [140] & 2001 & UMD & 55 & Video-Based Dataset \\ \hline Phillips [141] & 2002 & NIST & 74 & Video-Based Dataset \\ \hline Wang [9] & 2003 & CASIA & 124 & Video-Based Dataset \\ \hline Kale [142] & 2004 & CMU & 25 & Video-Based Dataset \\ \hline Sarkar [143] & 2005 & USF & 122 & Video-Based Dataset \\ \hline Nixon [144] & 2006 & Small/large & 12 & Video-Based Dataset \\ \hline Hofmann [145] & 2014 & TUM GAID & 305 & Video-Based Dataset \\ \hline Mantyjarvi [146] & 2005 & Speed & 36 & Accelerometer-Based Dataset \\ \hline Gafurov [147] & 2007 & Motion-recording & 50 & Accelerometer-Based Dataset \\ \hline Casale [148] & 2012 & Walking pattern & 22 & Accelerometer-Based Dataset \\ \hline Muaaz [149] & 2012 & Android phone Google G1 & 51 & Accelerometer-Based Dataset \\ \hline Ngo [150] & 2014 & Inertial sensor & 744 & Accelerometer-Based Dataset \\ \hline Reyes [151] & 2016 & Postural transitions & 30 & Accelerometer-Based Dataset \\ \hline Orr [152] & 2000 & First floor-sensor & 15 & Floor-Sensor-Based Dataset \\ \hline Suutala [153] & 2004 & Footsteps of both feet & 11 & Floor-Sensor-Based Dataset \\ \hline Middleton [154] & 2005 & Walk without footwear & 15 & Floor-Sensor-Based Dataset \\ \hline Jenkins [155] & 2007 & Children & 62 & Floor-Sensor-Based Dataset \\ \hline Otero [156] & 2005 & First wave-radar & 49 & Radar-Based Dataset \\ \hline Wang [157] & 2011 & One-arm or two-arm & 1 & Radar-Based Dataset \\ \hline \end{tabular} \end{table} Table 2: Publicly available human gait datasets with different approaches.
## 7 Deep Learning-based Approaches for Human Gait Recognition

Every human gait is unique and challenging for computer vision algorithms to identify. One practical way to implement a CNN is with Google's pre-trained TensorFlow model Inception-ResNet-v2 [158]. Various DL models have been explored for GR. CNNs and RNNs (Recurrent Neural Networks) are ML models commonly used in computer vision tasks [159]. For human GR, CNNs and RNNs are used because they can accurately recognize patterns in images: a CNN detects spatial patterns, like shapes and their orientations [160], whereas an RNN can detect temporal patterns, like the sequential order of shapes [161]. CNNs and RNNs are used for human GR in the following ways:

* a CNN for GR with an image as input;
* a CNN for GR with video as input;
* an RNN for GR with sequential data as input.

In DL, a model comprises several layers of nodes (neurons) that recognize data patterns by comparing them to existing models. Deep networks have several advantages over traditional supervised ML methods [162]. First, they can be trained to learn complex tasks more efficiently by leveraging their inherent representation strength. Second, they can effectively generalize across different domains by adapting their architecture and training procedure to the particular task. Third, they can be employed for large-scale assessment and forecasting on datasets too big or intricate for classic strategies. The ability to recognize human gait could be advantageous in various applications. For instance, it could permit people with disabilities to traverse obstacles or control their wheelchairs. Moreover, it could enhance automated vehicles' proficiency by enabling pedestrians to be detected more accurately. Visual GR is one approach used to train neural networks to recognize human gait; this entails taking pictures of people walking and using a deep-learning model to analyze the images.
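To make the CNN-plus-RNN pattern concrete, here is a minimal PyTorch sketch that applies a small CNN to each silhouette frame and an LSTM over the resulting sequence. The architecture, the 64x44 frame size (CASIA-B-style silhouettes), and the 124-subject output are illustrative choices, not a model from the surveyed papers.

```python
import torch
import torch.nn as nn

class GaitCNNRNN(nn.Module):
    """Per-frame CNN features followed by an LSTM over the gait sequence."""
    def __init__(self, num_subjects=124, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame feature extractor
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),      # -> 32 * 4 * 4 = 512 features
        )
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_subjects)

    def forward(self, x):                # x: (batch, time, 1, H, W) silhouettes
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)    # per-frame features
        _, (h, _) = self.rnn(f)                         # final hidden state
        return self.fc(h[-1])                           # subject logits

model = GaitCNNRNN()
dummy = torch.rand(2, 30, 1, 64, 44)     # 30-frame clips of 64x44 silhouettes
print(model(dummy).shape)                # torch.Size([2, 124])
```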
This entails taking pictures of people walking and using a deep-learning model to analyze the images. This approach is advantageous due to its relatively low cost and ease of implementation; however, there are some drawbacks [163][164]. For instance, individuals must constantly film and upload video footage of their surroundings, which could be problematic in specific scenarios. Additionally, this method does not provide information about the person's health condition or other factors that might influence gait. Researchers have created alternative approaches to tackle these issues that combine image processing with extra features extracted from videos. These additional elements enable them to gain insight into an individual's health status or activity level to enhance the accuracy of the prediction model [165]. Khan et. al. [166] proposed that the HAR system is a novel approach that combines conventional hand-crafted features with histograms of oriented gradients (HoG) and in-depth features. In the first phase, a human silhouette is detected using a saliency-based method. This model achieved excellent results as compared to the existing techniques. Das et. al. [167] proposed an improved CNN architecture (WMsCNN). A weight update subnetwork (Ws) is designed to adjust the weights of certain features based on how much of a contribution they made to the final classification task. The weight-updated method is used by lowering these features' sensitivity to covariate factors. Global features are subsequently generated from these factors as a result of fusion. Chiu et. al. [168] presented a method for GR using a point cloud Using Light Detection and Ranging (LiDAR) technology to detect human gaits. DL architecture is then used to improve human GR accuracy, which handles time-series data. Bayat et. al. [169] proposed method developed a bag-of-words feature representation based on a GR procedure. Its performance is evaluated by comparing the classification results with extracted features, using human gait data collected from 93 individuals walking comfortably between two endpoints in two separate sessions. The proposed technique produced far more accurate classification results than standard statistical features in all the used classifiers. Shopon et. al. [170] introduced a new approach for recognizing people walking down an unconstrained pathway. The architecture input is joint body coordinates and an adjacency matrix representing the skeleton joints. The graph neural network framework incorporates a residual connection to smooth the input feature. This framework used kinematic relations, spatial and temporal variables, and joint body variables to identify gait. State-of-the-art GR methods outperformed the study's proposed method on unconstrained paths. The CASIA-B and multi-view gait AVA datasets assessed the method's efficacy. Li et. al. [171] proposed a new method, GaitSlice, to capture Spatio-temporal slice features from human motion. Slice Extraction Device (SED) develops slice features from the top down. For example, the residual frame attention mechanism (RFAM) focuses on keyframes. In order to mimic real life more closely, GaitSlice combines parallel RFAMs with slice information to identify critical Spatio-temporal components. The method is evaluated on CASIA-B and OU-MVLP datasets and compared with six GR algorithms, taking rank-1 accuracy as the benchmark. Also compared, were the results with cross-view and walking circumstances. 
In [172], the authors proposed a residual structure to preserve more identity information while implementing the view transformation. A fusion model is used to combine the outcomes of the three perspectives at the recognition and decision-making stage to assess the method's effectiveness. The CASIA-B gait dataset was employed to assess the algorithm's performance. This model performed better than prior networks and can detect more accurately than prior models, primarily when clothing and viewing angles differ.

Table 3 shows some DL-based approaches along with the well-known datasets used to evaluate human GR systems; the results of each method are compared in terms of accuracy.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Ref No.** & **Year** & **Approaches** & **Methods** & **Datasets** & **Results (Acc. \%)** \\ \hline [178] & 2022 & Multilevel paradigm for CNN feature selection & Classification & AVAMVG gait, CASIA A, B and C & 99.8, 99.7, 93.3 and 92.2 \\ \hline [174] & 2019 & Multi-level feature fusion and selection & Selection & AVA multi-view gait (AVAMVG), CASIA A, B and C & 100, 98.8, 87.7, and 91.6 \\ \hline [175] & 2020 & Feature selection by Firefly algorithm and Skewson & Selection & 18.0, 36.0 and 54.0 degree angles of CASIA B & 94.3, 93.8 and 94.7 \\ \hline [176] & 2021 & Euclidean Norm and Geometric Mean Maximization & Selection & CASIA B & 96.0 \\ \hline [177] & 2022 & Mean absolute deviation extended serial fusion & Classification & CASIA B & 89.0 \\ \hline [178] & 2022 & Improved Ant Colony Optimization & Classification & 0, 18, and 180 degree angles of CASIA B & 95.2, 93.9, and 98.2 \\ \hline [179] & 2022 & Kurtosis-controlled entropy, ResNet 101 & Selection and Classification & CASIA B and real-time captured videos & 95.2 and 96.6 \\ \hline [180] & 2022 & VGG-16, PSBK and OAMSVM & Selection and Classification & 0, 18, 36, 54, 72, and 90 degree angles of CASIA B & 95.8, 96.0, 95.5, 96.2 and 95.6 \\ \hline [178] & 2022 & Improved moth-flame optimization & Classification & CASIA B and TUM GAID & 91.20 and 98.00 \\ \hline [181] & 2020 & Threshold Based Feature Fusion and Multi-class SVM & Selection & CASIA A, B and C & 98.6, 93.5 and 97.3 \\ \hline [182] & 2021 & ML and Neuro Fuzzy Classifier & Classification & HMDB51 and Hollywood2 & 82.5 and 91.9 \\ \hline [183] & 2022 & Hybrid recognition technique & Selection & NTU RGB+D, HMDB51, and UCF-101 & 97.4, 80.5, and 97.4 \\ \hline [184] & 2022 & Seven hoyer fully connected layers & Fusion & CASIA B & 90.0 \\ \hline \end{tabular} \end{table} Table 3: DL-based approaches and datasets for human gait recognition.

## 8 Conclusion

GR (Gait Recognition) is an exciting application of neural networks and deep learning that has long been seen in science fiction but is now starting to be applied in the real world. It is an inspiring example of how deep learning can be used for more than just image recognition. GR remains challenging because of the range of variables involved, such as a person's mood, age, fitness level, clothing type, and way of walking. Although not perfect, it still serves as a valuable tool with potential for further applications as the technology improves. Physically or behaviorally measurable aspects of an individual, called biometrics, are utilized in this visual pattern recognition task, which may be one of the most complex tasks in computer science. Since individuals walk daily, it is comparatively easy to collect data for ML algorithms. GR is even more useful because it is a passive biometric.
This means the data is automatically collected by visual sensors rather than requiring a person to act. There has been a lot of research on GR in recent years, and it is used in biometric technology. Although it has a high accuracy rate, it is hard to measure and validate an algorithm's performance. Some efforts have been made to push the boundaries of GR, including the use of DL models that can recognize people based on more than one modality, combining visual and acoustic data with the person's walking pattern. Various neural network architectures have been tested, including Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), and GRU networks.

## 9 Future Direction

Deep Learning (DL) is a widely-used Artificial Intelligence (AI) technique for Human Gait Recognition (GR). DL algorithms, such as deep neural networks, are trained to detect patterns from large datasets. Recent breakthroughs in DL have enabled the recognition of complex concepts, such as human gait. However, there are certain areas for improvement with existing DL methods that restrict their application to human GR. One limitation is the lack of ground truth; it can be difficult to determine whether an object is a person. Another challenge is low accuracy; current DL techniques often suffer from low accuracy levels, particularly when trying to recognize humans, which makes them unsuitable for practical implementation in the real world. New ways of training and assessing DL models need to be developed to address these issues and make DL more suitable for real-world use. Researchers are exploring the use of DL models to detect abnormalities in gait patterns. These systems can be taught to recognize typical ailments such as Parkinson's or stroke by analyzing video footage of people walking. By distinguishing between normal and abnormal motion patterns, it may be possible to detect cases that require urgent medical attention. With DL, computers can process and interpret human movement from videos and images, enabling them to comprehend people's gestures and activities and making them more interactive and responsive. A primary challenge in this field is obtaining abundant training data for deep learning models, which could originate from different sources like motion capture systems, video recordings, or treadmill tests. Consequently, obtaining high-grade data from various sources is essential for guaranteeing robustness and precision when applying deep learning methods to human gait recognition. Another difficulty is scaling deep learning architectures to large-scale human gait identification.

## Conflict of Interest Statement

The authors declare that they have no conflicts of interest related to this work. The research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## References

* [1] Alom, M.Z., Taha, T.M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M.S., Hasan, M., Van Essen, B.C., Awwal, A.A., Asari, V.K.: A state-of-the-art survey on deep learning theory and architectures. Electronics **8**(3), 292 (2019) * [2] Shah, J.H., Sharif, M., Raza, M., Azeem, A.: Face recognition across pose variation and the 3s problem. Turkish Journal of Electrical Engineering and Computer Sciences **22**(6), 1423-1436 (2014) * [3] Murtaza, M., Sharif, M., Raza, M., Shah, J.: Face recognition using adaptive margin fisher's criterion and linear discriminant analysis.
International Arab Journal of Information Technology **11**(2), 1-11 (2014) * [4] Murtaza, M., Sharif, M., Raza, M., Shah, J.H.: Analysis of face recognition under varying facial expression: a survey. Int. Arab J. Inf. Technol. **10**(4), 378-388 (2013) * [5] Sharif, M., Mohsin, S., Javed, M.Y., Ali, M.A.: Single image face recognition using laplacian of gaussian and discrete cosine transforms. Int. Arab J. Inf. Technol. **9**(6), 562-570 (2012) * [6] Aisha, A., Muhammad, S., Hussain, S.J., Mudassar, R.: Face recognition invariant to partial occlusions. KSII Transactions on Internet and Information Systems (TIIS) **8**(7), 2496-2511 (2014) * [7] Sharif, M., Raza, M., Shah, J.H., Yasmin, M., Fernandes, S.L.: An overview of biometrics methods. Handbook of Multimedia Information Security: Techniques and Applications, 15-35 (2019) * [8] De Marsico, M., Galdi, C., Nappi, M., Riccio, D.: Firme: Face and iris recognition for mobile engagement. Image and Vision Computing **32**(12), 1161-1172 (2014) * [9] Wang, L., Tan, T., Ning, H., Hu, W.: Silhouette analysis-based gait recognition for human identification. IEEE transactions on pattern analysis and machine intelligence **25**(12), 1505-1518 (2003) * [10] Zhang, R., Vogler, C., Metaxas, D.: Human gait recognition at sagittal plane. Image and vision computing **25**(3), 321-330 (2007) * [11] Ghaffarian, S., Valente, J., Van Der Voort, M., Tekinerdogan, B.: Effect of attention mechanism in deep learning-based remote sensing image processing: A systematic literature review. Remote Sensing **13**(15), 2965 (2021) * [12] Gafurov, D., Snekkenes, E., Bours, P.: Improved gait recognition performance using cycle matching. In: 2010 IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, pp. 836-841 (2010). IEEE * [13] Kumar, M., Singh, N., Kumar, R., Goel, S., Kumar, K.: Gait recognition based on vision systems: A systematic survey. Journal of Visual Communication and Image Representation **75**, 103052 (2021) * [14] Wang, X., Yan, W.Q.: Human identification based on gait manifold. Applied Intelligence, 1-12 (2022) * [15] Chao, H., Wang, K., He, Y., Zhang, J., Feng, J.: Gaitset: Cross-view gait recognition through utilizing gait as a deep set. IEEE transactions on pattern analysis and machine intelligence (2021) * [16] Mu, Z., Castro, F.M., Marin-Jimenez, M.J., Guil, N., Li, Y.-R., Yu, S.: Resgait: The real-scene gait dataset. In: 2021 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-8 (2021). IEEE * [17] Filipi Goncalves dos Santos, C., Oliveira, D.d.S., A. Passos, L., Goncalves Pires, R., Felipe Silva Santos, D., Pascotti Valem, L., P. Moreira, T., Cleison S. Santana, M., Roder, M., Paulo Papa, J., _et al._: Gait recognition based on deep learning: a survey. ACM Computing Surveys (CSUR) **55**(2), 1-34 (2022) * [18] Batool, F.E., Attique, M., Sharif, M., Javed, K., Nazir, M., Abbasi, A.A., Iqbal, Z., Riaz, N.: Offline signature verification system: a novel technique of fusion of glcm and geometric features using svm. Multimedia Tools and Applications, 1-20 (2020) * [19] Topham, L., Khan, W., Al-Jumeily, D., Hussain, A.: Human body pose estimation for gait identification: A comprehensive survey of datasets and models. ACM Computing Surveys (2022) * [20] Hayder, A., Dargham, J., Chekima, A., Ervin, G.: Person identification using gait. 
2309.06726
Enhancing Keyphrase Generation by BART Finetuning with Splitting and Shuffling
Keyphrase generation is the task of identifying a set of phrases that best represent the main topics or themes of a given text. Keyphrases are divided into present and absent keyphrases. Recent approaches utilizing sequence-to-sequence models show effectiveness on absent keyphrase generation, but performance is still limited by the difficulty of finding absent keyphrases. In this paper, we propose Keyphrase-Focused BART, which exploits the differences between present and absent keyphrase generation, and performs finetuning of two separate BART models for present and absent keyphrases. We further show effective approaches of shuffling keyphrases and ranking candidate keyphrases. For absent keyphrases, our Keyphrase-Focused BART achieved a new state-of-the-art F1@5 score on two out of five keyphrase generation benchmark datasets.
Bin Chen, Mizuho Iwaihara
2023-09-13T05:02:11Z
http://arxiv.org/abs/2309.06726v1
# Enhancing Keyphrase Generation by BART Finetuning with Splitting and Shuffling ###### Abstract Keyphrase generation is the task of identifying a set of phrases that best represent the main topics or themes of a given text. Keyphrases are divided into present and absent keyphrases. Recent approaches utilizing sequence-to-sequence models show effectiveness on absent keyphrase generation, but performance is still limited by the difficulty of finding absent keyphrases. In this paper, we propose Keyphrase-Focused BART, which exploits the differences between present and absent keyphrase generation, and performs finetuning of two separate BART models for present and absent keyphrases. We further show effective approaches of shuffling keyphrases and ranking candidate keyphrases. For absent keyphrases, our Keyphrase-Focused BART achieved a new state-of-the-art F1@5 score on two out of five keyphrase generation benchmark datasets. Keywords: keyphrase generation, deep learning, BART finetuning, generative language model ## 1 Introduction Keyphrase generation is an important task that involves identifying a set of terms or phrases that best represent the main topics or themes of a given text, having applications in information retrieval, document classification, and summarization. A **present keyphrase** is such that its word sequence appears in the document with its order preserved. Present keyphrases can be extracted from the document. An **absent keyphrase** is not present in the text but relevant to the topic of the document. Keyphrase extraction has been extensively studied [1, 6, 9]. However, extractive methods cannot find absent keyphrases that do not appear in the article. Recent generative methods, such as CopyRNN [5] and CatSeq [13], can directly generate candidate present and absent keyphrases from input document representations. BART is a pre-trained generative language model based on a denoising autoencoder [7], which can directly perform sequence generation tasks through finetuning and can thus be applied to keyphrase generation [6]. We point out that in most of the previous work based on generative language models, finetuning is done on present and absent keyphrases together [6]. However, we argue that there exist considerable differences between the tasks of extracting present keyphrases and generating absent keyphrases, which motivates us to split keyphrase generation into two parts and train two different generative models, with different hyperparameters used for finetuning. The main contributions of this paper are: (1) A new model _Keyphrase-Focused BART_ is proposed, in which two BART models are finetuned separately on present and absent keyphrases, with different hyperparameter settings. (2) Shuffling keyphrase lists for promoting order independence and augmenting samples is proposed. (3) A keyphrase ranker by a BERT cross-encoder combined with TF-IDF is introduced to improve keyphrases generated by the BART models. (4) Our experimental evaluation confirms the effectiveness of these approaches. Our proposed Keyphrase-Focused BART sets new state-of-the-art records for absent keyphrases on the SemEval and KP20K datasets on F1@5. The improvement in F1@5 over the previous state-of-the-art ranges from 9 to 37 percent, a wide margin.
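The separate-finetuning idea in contribution (1) can be realized directly with an off-the-shelf sequence-to-sequence library. The following is a minimal sketch, not the authors' released implementation: the `facebook/bart-base` checkpoint, the `" ; "` keyphrase separator, and the training-loop details are illustrative assumptions, while the learning rate, batch size, and epoch counts follow the settings reported later in Section 4.

```python
# Minimal sketch of splitting keyphrase generation into two finetuned BART
# models (one for present, one for absent keyphrases). Checkpoint, separator,
# and data handling are assumptions; lr/batch/epochs follow Section 4.
import torch
from torch.utils.data import DataLoader
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")

def finetune(docs, keyphrase_lists, epochs, lr=1e-5, batch_size=12):
    """Finetune one BART model to map a document to its keyphrase sequence."""
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    pairs = [(d, " ; ".join(kps)) for d, kps in zip(docs, keyphrase_lists)]
    loader = DataLoader(pairs, batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        for src, tgt in loader:
            enc = tokenizer(list(src), truncation=True, padding=True,
                            return_tensors="pt")
            tgt_ids = tokenizer(list(tgt), truncation=True, padding=True,
                                return_tensors="pt").input_ids
            # Mask pad positions so they do not contribute to the loss.
            labels = tgt_ids.masked_fill(tgt_ids == tokenizer.pad_token_id, -100)
            loss = model(**enc, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

# Two models, two optimization budgets (absent keyphrases converge more slowly):
# present_model = finetune(docs, present_kps, epochs=4)
# absent_model  = finetune(docs, absent_kps,  epochs=8)
```

At inference time, each model would decode its own keyphrase sequence, and the present and absent lists would then be merged into the final output.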
## 2 Related Work The following models are representative generative models, which are compared against our proposed model in our evaluations:

* **CatSeq** [13]: An RNN-based sequence-to-sequence model with a copy mechanism, trained under the ONE2SEQ paradigm.
* **CatSeqTG-2RF1** [2]: Based on CatSeq, with title encoding and cross-attention.
* **GANMR** [10]: An RL-based finetuning extension of CatSeq.
* **Fast and Constrained Absent KG** [11]: A prompt-based keyphrase generation method, in which a prompt is created around a keyword and a mask-predict decoder is applied.
* **ONE2SET** [12]: A sequence-to-sequence model based on transformers. ONE2SET generates a set of keyphrases, where the keyphrase order is ignored.
* **ONE2SET+KPDrop-a** [4]: KPDrop randomly drops present keyphrases to enhance absent keyphrase generation.
* **ChatGPT** [8]: The large language model ChatGPT is instructed to generate keyphrases.

## 3 Keyphrase-Focused BART Fig. 1 shows our proposed model **Keyphrase-Focused BART**, which has two generative pretrained language models finetuned separately on present and absent keyphrases.

Figure 1: Model Architecture of Keyphrase-Focused BART

**Language model separation**: In the existing approaches [2, 4, 10, 11, 12, 13] of keyphrase generation by generative language models, a single language model is trained over the union of present and absent keyphrases. KPDrop [4] randomly masks present keyphrases to be used as augmentation for absent keyphrases, where absent prediction can be enhanced when the absent and masked phrases are semantically similar. But keyphrases are often topically distinct from each other. Also, absent phrases need to be chosen from a candidate space that is vastly larger than that of present phrases. The imbalanced candidate spaces for present and absent keyphrases cause differences in the optimal training processes for the two types. To resolve the above issues, we introduce an architecture in which two separate BART models are trained independently, where one model is trained only on present keyphrases, while the other model is trained only on absent keyphrases. Different hyperparameter settings are used for these BART models, to separately optimize the learning processes for the two tasks. **Shuffling and expanding**: Keyphrase lists should be order-independent. In [12], it is mentioned that the BART model might try to generate keyphrases by considering contextual relationships between the keyphrases. To reduce such contextual dependence when learning output sequences, we apply shuffling to the training keyphrase lists and add the shuffled sequences to the training dataset. **Ranking by BERT cross-encoder**: We formulate ranking candidate keyphrases as a binary classification task such that the reference keyphrases are labeled as 1, otherwise 0. The confidence score of a finetuned BERT cross-encoder [3] is coupled with the TF-IDF score as \(\log Score=\alpha\log Cross+(1-\alpha)\log tf\_idf\), where the parameter \(\alpha\) is set to \(0.7\) in this paper. Note that the TF-IDF score is not applicable to absent keyphrases. A brief code sketch of the shuffling and ranking steps is shown after the results analysis in Section 4. ## 4 Experiments ### Experimental settings We perform experiments on five widely-used benchmark keyphrase datasets [12]: Inspec, Krapivin, NUS, SemEval, and KP20K. The baseline models we compare against are those listed in bold fonts in Section 2. Below we list the variations of our model, evaluated as ablations in the experiments:

* **Basic BART**: A BART model finetuned on the union of present and absent keyphrases.
* **A-P Separate**: Two BART models are finetuned, 1) on present keyphrases only, with 4 training epochs, and 2) on absent keyphrases only, with 8 training epochs, where more epochs are allocated than for the present-keyphrase model to deal with slower convergence. The learning rate and batch size are 1e-5 and 12, respectively, for both BART models. No shuffling of keyphrase lists is done.
* **A-P Separate+Shuffle(1)**: A-P Separate, with keyphrase lists shuffled once and the new lists added to the training dataset. The final KP20K training set increases from 514,154 to 848,684 samples.
* **A-P Separate+Shuffle(2)**: A-P Separate, with keyphrase lists shuffled twice and the new lists added to the training dataset. The final KP20K training set increases from 514,154 to 1,086,979 samples.
* **A-P Separate+Shuffle(1)+Rank**: A-P Separate+Shuffle(1), followed by ranking with the BERT cross-encoder. Its hyperparameter settings are: learning rate 5e-6, batch size 24, and 3 training epochs. Negative filtering is used, which removes correctly predicted negatives after each epoch.

We follow [2, 4] on evaluation metrics. For present and absent keyphrases, we use macro-averaged _F1@5_ and _F1@M_. _F1@M_ takes into account all the keyphrases generated by the model and compares them to the reference keyphrases. The results are shown in Table 1 and Table 2. All the results of our models are obtained by averaging four runs. The results of the baselines are taken from the cited papers. ### Results and analysis **Results on Present Keyphrases:** From Table 1, we can see that A-P Separate, which splits the training dataset, improves over Basic BART. Adding shuffling on top of A-P Separate further improves the F1 scores on present keyphrases compared to using the basic BART model directly, but a gap to ONE2SET [12] remains. The model A-P Separate \(+\) Shuffle(1) \(+\) Rank, which uses the ranking unit based on the BERT cross-encoder and TF-IDF, improves performance further and achieves the highest F1@5 on the Inspec dataset. We find that shuffling twice falls behind shuffling once, so we choose A-P Separate \(+\) Shuffle(1) \(+\) Rank as our best model for present keyphrases. **Results on Absent Keyphrases:** The results on absent keyphrases are shown in Table 2. We find that shuffling keyphrases once and expanding the dataset yields improvements of 0 to 1.0 points in F1 score. The ranking unit, on the other hand, shows little or no benefit, changing results by -0.3 to +0.1 points relative to the model without it. Overall, our Keyphrase-Focused BART, with the configuration A-P Separate \(+\) Shuffle(1), achieves new state-of-the-art results on SemEval and KP20K on F1@5. The improvement in F1@5 over ONE2SET+KPDrop-a ranges from 9 to 37 percent, a wide margin. ChatGPT shows the highest scores on Inspec, but the scores reported in [8] fall behind our proposed model on the other three datasets.
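As referenced in Section 3, the shuffling augmentation and the cross-encoder/TF-IDF score combination reduce to a few lines of code. The following is a minimal sketch under stated assumptions: the `" ; "` separator and the deduplication of repeated orderings (suggested by the reported KP20K sizes, which grow by less than a factor of two per shuffle) are our guesses rather than the authors' implementation; only \(\alpha=0.7\) is taken from the paper.

```python
import math
import random

def shuffle_expand(samples, n_shuffles=1, sep=" ; ", seed=0):
    """Augment (document, keyphrase-list) pairs with shuffled target orderings,
    keeping the original ordering and skipping duplicate orderings."""
    rng = random.Random(seed)
    out = []
    for doc, kps in samples:
        seen = {sep.join(kps)}
        out.append((doc, sep.join(kps)))
        for _ in range(n_shuffles):
            shuffled = kps[:]
            rng.shuffle(shuffled)
            tgt = sep.join(shuffled)
            if tgt not in seen:          # short lists often reshuffle identically
                seen.add(tgt)
                out.append((doc, tgt))
    return out

def combined_log_score(cross_conf, tfidf, alpha=0.7):
    """log Score = alpha * log Cross + (1 - alpha) * log tf-idf (Section 3).
    cross_conf: confidence of the finetuned BERT cross-encoder in (0, 1];
    tfidf: TF-IDF score of the candidate (present keyphrases only)."""
    return alpha * math.log(cross_conf) + (1.0 - alpha) * math.log(tfidf)

# For absent keyphrases TF-IDF is undefined, so candidates would be ranked by
# the cross-encoder confidence alone.
```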
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c}
\hline \hline
 & \multicolumn{2}{c|}{**Inspec**} & \multicolumn{2}{c|}{**NUS**} & \multicolumn{2}{c|}{**Krapivin**} & \multicolumn{2}{c|}{**SemEval**} & \multicolumn{2}{c}{**KP20K**} \\
**Model** & F1@5 & F1@M & F1@5 & F1@M & F1@5 & F1@M & F1@5 & F1@M & F1@5 & F1@M \\
\hline
CatSeq [13] & 22.5 & 26.2 & 32.3 & 39.7 & 26.9 & 35.4 & 24.2 & 28.3 & 29.1 & 36.7 \\
\hline
CatSeqTG-2RF1 [2] & 25.3 & 30.1 & 37.5 & 43.3 & 30.0 & 36.9 & 28.7 & 32.9 & 32.1 & 38.6 \\
\hline
GANMR [10] & 25.8 & 29.9 & 34.8 & 41.7 & 28.8 & 36.9 & - & - & 30.3 & 37.8 \\
\hline
Fast and Constrained [11] & 26.0 & 29.4 & 41.2 & 43.9 & - & - & 32.9 & 35.6 & 35.1 & 35.5 \\
\hline
SET-TRANS (ONE2SET) [12] & 28.5 & 32.4 & 40.6 & **45.1** & 32.6 & **36.4** & 33.1 & **35.7** & 35.9 & 39.2 \\
\hline
ONE2SET+KPDrop-a [4] & 29.8 & 30.6 & **42.6** & 44.4 & **34.0** & 35.3 & **33.6** & 34.4 & **38.5** & **39.6** \\
\hline
ChatGPT [8] & 32.5 & **40.3** & - & 20.0 & - & - & - & 18.6 & 23.2 & 25.1 \\
\hline
\multicolumn{11}{c}{Proposed **Keyphrase-Focused BART** finetuned on present keyphrases} \\
\hline
Basic BART & 29.5 & 29.5 & 27.1 & 27.1 & 19.9 & 19.9 & 21.4 & 21.4 & 30.7 & 30.7 \\
\hline
A-P Separate + NoShuffle & 30.9 & 30.9 & 34.7 & 34.7 & 25.2 & 25.2 & 22.7 & 22.7 & 29.9 & 29.9 \\
\hline
A-P Separate + Shuffle(1) & 33.1 & 33.1 & 37.9 & 37.9 & 27.8 & 27.8 & 27.6 & 27.8 & 31.5 & 31.5 \\
\hline
A-P Separate + Shuffle(2) & 32.9 & 32.9 & 37.2 & 37.2 & 32.3 & 32.3 & 23.9 & 23.9 & 30.3 & 30.3 \\
\hline
**A-P Separate + Shuffle(1) + Rank** & **35.8** & 35.8 & 41.2 & 41.2 & 29.0 & 29.0 & 28.3 & 28.3 & 33.7 & 33.7 \\
\hline
\end{tabular}
\end{table} Table 1: Results on Present Keyphrases (F1-score \(\times\) 100)

\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c}
\hline \hline
 & \multicolumn{2}{c|}{**Inspec**} & \multicolumn{2}{c|}{**NUS**} & \multicolumn{2}{c|}{**Krapivin**} & \multicolumn{2}{c|}{**SemEval**} & \multicolumn{2}{c}{**KP20K**} \\
**Model** & F1@5 & F1@M & F1@5 & F1@M & F1@5 & F1@M & F1@5 & F1@M & F1@5 & F1@M \\
\hline
CatSeq [13] & 0.4 & 0.8 & 1.6 & 2.8 & 1.8 & 3.6 & 1.6 & 2.8 & 1.5 & 3.2 \\
\hline
CatSeqTG-2RF1 [2] & 1.2 & 2.1 & 1.9 & 3.1 & 3.0 & 5.3 & 2.1 & 3.0 & 2.7 & 5.0 \\
\hline
GANMR [10] & 1.3 & 1.9 & 2.6 & 3.8 & 4.2 & 5.7 & - & - & 3.2 & 4.5 \\
\hline
Fast and Constrained [11] & 1.7 & 2.2 & 3.6 & 4.2 & - & - & 2.8 & 3.2 & 3.2 & 4.2 \\
\hline
SET-TRANS (ONE2SET) [12] & 2.1 & 3.4 & 4.2 & 6.0 & 4.8 & **7.3** & 2.6 & 3.5 & 3.6 & 5.8 \\
\hline
ONE2SET+KPDrop-a [4] & 3.2 & 3.2 & **7.4** & **7.4** & **7.2** & 7.2 & 4.6 & 4.7 & 6.5 & 6.6 \\
\hline
ChatGPT [8] & **4.9** & **5.9** & - & 4.2 & - & - & - & 2.1 & 4.4 & 5.6 \\
\hline
\multicolumn{11}{c}{Proposed **Keyphrase-Focused BART** finetuned on absent keyphrases} \\
\hline
Basic BART & 2.4 & 2.4 & 3.8 & 3.8 & 3.8 & 3.8 & 2.9 & 2.9 & 6.5 & 6.5 \\
\hline
A-P Separate + NoShuffle & 2.4 & 2.4 & 5.6 & 5.6 & 6.1 & 6.1 & 4.5 & 4.5 & 7.9 & 7.9 \\
\hline
**A-P Separate + Shuffle(1)** & 2.4 & 2.4 & 5.6 & 5.6 & 6.4 & 6.4 & 4.9 & 4.9 & **8.9** & **8.9** \\
\hline
A-P Separate + Shuffle(1) + Rank & 2.3 & 2.3 & 5.6 & 5.6 & 6.0 & 6.0 & **5.0** & **5.0** & 8.8 & 8.8 \\
\hline
\end{tabular}
\end{table} Table 2: Results on Absent Keyphrases (F1-score \(\times\) 100)

## 5 Conclusion and Future Work In this paper, we proposed a generative language model approach for keyphrase generation.
We show that splitting keyphrase generation into the two tasks of absent keyphrase generation and present keyphrase extraction, and training a separate model for each, brings considerable performance improvements. Overall, for absent keyphrase generation, our Keyphrase-Focused BART improves F1@5 over the previous state-of-the-art model by 9 and 37 percent on two datasets. In future work, we will consider integrating prompt-based approaches for ranking candidate keyphrases.
2310.12156
Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors
This paper examines the use of operator-theoretic approaches to the analysis of chaotic systems through the lens of their unstable periodic orbits (UPOs). Our approach involves three data-driven steps for detecting, identifying, and stabilizing UPOs. We demonstrate the use of kernel integral operators within delay coordinates as an innovative method for UPO detection. For identifying the dynamic behavior associated with each individual UPO, we utilize the Koopman operator to present the dynamics as linear equations in the space of Koopman eigenfunctions. This allows for characterizing the chaotic attractor by investigating its principal dynamical modes across varying UPOs. We extend this methodology into an interpretable machine learning framework aimed at stabilizing strange attractors on their UPOs. To illustrate the efficacy of our approach, we apply it to the Lorenz attractor as a case study.
Ali Tavasoli, Heman Shakeri
2023-09-07T13:58:58Z
http://arxiv.org/abs/2310.12156v1
# Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors ###### Abstract This paper examines the use of operator-theoretic approaches to the analysis of chaotic systems through the lens of their unstable periodic orbits (UPOs). Our approach involves three data-driven steps for detecting, identifying, and stabilizing UPOs. We demonstrate the use of kernel integral operators within delay coordinates as an innovative method for UPO detection. For identifying the dynamic behavior associated with each individual UPO, we utilize the Koopman operator to present the dynamics as linear equations in the space of Koopman eigenfunctions. This allows for characterizing the chaotic attractor by investigating its principal dynamical modes across varying UPOs. We extend this methodology into an interpretable machine learning framework aimed at stabilizing strange attractors on their UPOs. To illustrate the efficacy of our approach, we apply it to the Lorenz attractor as a case study. ## I Introduction Over the past few decades, machine learning and deep learning techniques have dramatically transformed various fields of science and engineering [1]. As dynamic systems evolve, nonlinear interactions can result in chaotic and mixing behaviors, giving rise to uncorrelated spaces. This complexity is not limited by the form of interactions; even simple nonlinear systems can exhibit complex behavior, as seen when a quadratic term leads to chaotic dynamics in the Rossler model [2]. This underscores the concept of natural subspaces for embedding complex nonlinear systems within finite, closed spaces, thus bypassing the need for implicit function spaces in machine learning tools [3, 4]. Considerable progress has been made toward fulfilling this requirement in recent years by revisiting the _Koopman operator_ through efficient data-driven approaches [5, 6, 7, 8, 9, 10]. The Koopman operator constructs linear models in the observable space, bypassing the need for nonlinear models in the state space. Koopman eigenfunctions offer an optimal coordinate set for reconstructing the dynamics, and reduced-order models based on these rich dynamical elements encapsulate maximum information about the intricate nonlinear interactions in interconnected systems. Studying the dynamics projected onto each Koopman eigenspace unveils a portion of the overall dynamics by highlighting independent nonlinear features specific to that eigenspace. By collating all these projected images, we can construct a comprehensive view of the large-scale dynamics in a flexible manner. This global linear representation is analogous to a generalization of the Hartman-Grobman theorem that encompasses the entire basin of equilibrium points or periodic orbits [11]. Alternatively, it can be considered a counterpart to the principal modes of SVD approaches in linear theory [4]. Accordingly, the Koopman operator provides a theoretical framework for the spectral analysis and mode decomposition of nonlinear systems [5, 12]. Unlike traditional methods such as Fourier analysis and POD, which are commonly used to decompose signals into their frequency contents [13], the spectra of the Koopman operator facilitate the decomposition of the dynamics on an optimal nonlinear basis [14] and reveal spatial patterns corresponding to each temporal mode.
Utilizing this spatiotemporal mode decomposition approach in nonlinear systems allows for investigating the roles and interactions of constituents within different temporal modes. Hence, it facilitates the discovery of mechanisms and phenomena governing the operation of the overall system. More specifically, Koopman eigenfunctions recover coherent patterns with associated timescales [8], offering a unique setting for studying complex systems. These systems can exhibit diverse dynamical properties interacting across varying scales with a broad spectral range. Therefore, this spectral analysis provides deep insights into the dynamics, such as invariant sets and partitions, basins of attraction for limit cycles, fixed points, and stability [15]. One way to characterize chaotic attractors is to examine the unstable periodic orbits (UPOs) that are embedded densely within them [16]. UPOs make up the skeleton of a chaotic attractor, and the attractor is the closure of the set of UPOs. The trajectory can be thought of as hopping from one UPO to the next [17]. The set of UPOs in an attractor is a dynamical invariant; their number, distribution, and properties unfold the structure of chaotic orbits, and they can be used to calculate other invariants, such as fractal dimension and topological entropy [16, 17]. Though UPOs in time series data may be identified by watching for close returns on a plane of section, this procedure is quite time-consuming due to the involvement of an ensemble of nearest-neighbor searches to reduce noise [17]. One can accelerate matters somewhat by using estimates of the local dynamics [18, 19, 20], but the computational complexity is largely inescapable [17]. The detection of UPOs leads to a natural way of controlling chaotic systems towards regular dynamics while retaining the original dynamical features. Indeed, by making only small control perturbations, chaotic dynamics can be tamed by exploiting the UPOs that exist without control, thereby avoiding the creation of new orbits with different properties [18, 19, 20, 21, 22, 23]. As the uncontrolled orbit wanders ergodically over the attractor, it will eventually approach every (selected) UPO. When this happens, a small kick is sufficient to align the orbit with the UPO. Although the need for dynamical equations and the detection of UPOs in high-dimensional data spaces pose substantial challenges [18, 19, 20, 21, 22], as this paper demonstrates, the operator-theoretic framework provides a practical data-driven approach for detecting, identifying, and stabilizing UPOs. Our approach to detecting UPOs is based on examining kernel integral operators in delay coordinates for the given data set. Recent studies have demonstrated the application of kernel operators for discovering various inherent dynamical properties in data-driven settings [24, 25, 26, 27, 10]. Further works on kernel operators in delay coordinates [8, 9] and reproducing kernel Hilbert spaces (RKHS) [28, 29, 30, 31] have established a solid theoretical foundation for recovering Koopman invariant subspaces and devising advanced machine learning techniques for complex nonlinear dynamics. After detecting UPOs using the kernel operator, we use the Koopman mode decomposition to identify the evolving dynamics on them. Hence, each detected UPO can be represented through principal modes that evolve linearly in the space of Koopman eigenfunctions.
While this interpretable learning of UPOs might be used to explore complex strange attractors, we demonstrate its further use for chaos control in the next step. This involves two methods, based on tracking the identified UPO's dynamics and on small control perturbations derived from the UPO's Koopman eigenfunctions. The rest of the paper is organized as follows. The next section outlines our approach based on the Koopman and kernel operators. Section IV demonstrates the use of this approach for detecting, identifying, and controlling UPOs. Section V presents the results of applying this approach to the Lorenz attractor. Section VI is devoted to concluding remarks. ## II Koopman operator and generator The Koopman approach considers the available data set as observables of a dynamical system with state space \(\mathcal{X}\) and flow map \(\Phi^{t}:\mathcal{X}\mapsto\mathcal{X}\), where \(t\in\mathbb{R}\) denotes time. The system's state at time \(t\) is given by \(x_{t}=\Phi^{t}(x)\), where \(x\in\mathcal{X}\). Signals recorded at \(d\) sensors are considered as an observation function \(F:\mathcal{X}\mapsto\mathbb{R}^{d}\). The dynamical system \((\mathcal{X},\Phi^{t})\) possesses ergodic measures; hence there exists a probability measure \(\mu\) on \(\mathcal{X}\), invariant under the flow map \(\Phi^{t}\), such that for every integrable function \(f:\mathcal{X}\mapsto\mathbb{C}\), the time average \(\bar{f}\) of \(f\) converges to the expectation value \(\bar{f}=\int_{\mathcal{X}}fd\mu\). Associated with the triplet \((\mathcal{X},\Phi^{t},\mu)\), we consider a Hilbert space \(\mathcal{H}=L^{2}(\mathcal{X},\mu)\) of square-integrable observables with respect to \(\mu\). The group of unitary Koopman operators \(U^{t}:\mathcal{H}\mapsto\mathcal{H}\) governs the evolution of observables under \(\Phi^{t}\). That means given \(f\in\mathcal{H}\), \(g=U^{t}f\) is defined as the observable satisfying \(g(x)=f(\Phi^{t}(x))\) for \(x\in\mathcal{X}\). An observable \(\psi_{j}\in\mathcal{H}\) is a Koopman eigenfunction if it satisfies the eigenvalue equation \[U^{t}\psi_{j}=e^{i\omega_{j}t}\psi_{j} \tag{1}\] for all \(t\in\mathbb{R}\). The eigenfrequency \(\omega_{j}\) is a real-valued frequency associated with the eigenfunction \(\psi_{j}\). Therefore, in measure-preserving dynamical systems, the Koopman eigenvalues remain on the unit circle in the complex plane, and the corresponding eigenfunctions evolve periodically under the dynamics. This is the key to the predictability of coherent patterns of the dynamics. The Koopman eigenvalues and eigenfunctions appear as complex-conjugate pairs, and Koopman eigenfunctions that correspond to different eigenfrequencies are orthogonal in the Hilbert space \(\mathcal{H}\). For every continuous flow \(\Phi^{t}\), the family of operators \(U^{t}\) has a generator \(V\), which is a skew-adjoint operator defined as \[Vf:=\lim_{t\to 0}\frac{1}{t}(U^{t}f-f),\quad f\in D(V)\subset L^{2}(\mathcal{X},\mu) \tag{2}\] Operators \(U^{t}\) and \(V\) share the same eigenfunctions, \[V\psi_{j}=i\omega_{j}\psi_{j} \tag{3}\] For ergodic systems, all eigenvalues of \(V\) are simple. ## III Dynamics adapted kernel integral operators ### _Kernel operators and delay coordinates_ Consider a collection of \(N\) samples \(F(x_{1}),...,F(x_{N})\), organized in a time-ordered manner, where each \(F(x_{i})\in\mathbb{R}^{d}\). The value of \(x_{n}\) is given by \(x_{n}=\Phi^{n\Delta t}(x_{0})\), where \(\Delta t\) is the interval at which the data is sampled.
A delay coordinate map is constructed from \(F\) by embedding \(\mathcal{X}\) in a manifold in \(\mathbb{R}^{Qd}\) as \[F_{Q}(x)=(F(x),F(\Phi^{-\Delta t}x),\cdots,F(\Phi^{-(Q-1)\Delta t}(x))) \tag{4}\] where the integer \(Q\) is the number of delays. Next, a kernel function \(k_{Q}:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}_{+}\) is defined to measure the similarity of points in \(\mathcal{X}\) based on the observation function \(F_{Q}\). In this work, we use the radial Gaussian kernel with variable bandwidth defined as: \[k_{Q}(x,x^{\prime})=\exp(-\frac{\parallel F_{Q}(x)-F_{Q}(x^{\prime})\parallel^{2}}{\epsilon}) \tag{5}\] where \(\epsilon\) is a positive bandwidth parameter that can vary based on the available data density in \(\mathcal{X}\). In this work, we use a class of variable bandwidth kernels, also known as self-tuning kernels, introduced in [26]. Associated with the square-integrable kernel \(k_{Q}\) is a compact integral operator, \[K_{Q}f(x):=\int_{\mathcal{X}}k_{Q}(x,y)f(y)d\mu(y) \tag{6}\] A Markov kernel is then constructed by normalizing \(K_{Q}\), \[P_{Q}f:=\frac{\tilde{K}_{Q}f}{\tilde{K}_{Q}1_{\mathcal{X}}} \tag{7}\] where \(\tilde{K}_{Q}f=K_{Q}(\frac{f}{K_{Q}1_{\mathcal{X}}})\) and \(1_{\mathcal{X}}\) is the indicator function of the set \(\mathcal{X}\). Evaluated on the \(N\) samples, this construction yields a Markov matrix \(P\) with real eigenvalues ordered as \(1=\lambda_{1}>\lambda_{2}\geq\lambda_{3}\geq...\), and real eigenvectors \(\varphi_{j}\) that are mutually orthogonal in \(\mathbb{R}^{N}\). Note that the first eigenvector, corresponding to \(\lambda_{1}=1\), is the constant eigenvector \(\varphi_{1}=(1,...,1)^{T}\in\mathbb{R}^{N}\). As \(Q\) approaches infinity, the kernel integral operator \(P\) commutes with \(U^{t}\) [9]. Commuting operators have a common eigenspace, which allows us to compute eigenfunctions of \(U^{t}\) through expansions in the basis obtained from \(P\). Compactness of \(P\) makes a wide variety of data-driven tools available for the approximation of integral operators. ### _Galerkin approximation of Koopman spectra_ To solve the numerically ill-posed Koopman eigenvalue problem, we replace finding the eigenvalues of \(V\) with approximating them through a regularized operator \(L_{\theta}\) containing a small amount \(\theta\) of judiciously added diffusion [8, 9]. Eigenfunctions of the Koopman operator are sought in a Sobolev subspace \(\mathcal{H}^{2}\subset\mathcal{H}\) [9], on which \(V\) is a bounded operator. Following the standard Galerkin approach, the regularized Koopman eigenvalue problem is to find \(\gamma\in\mathbb{C}\) and \(z\in\mathcal{H}^{2}\), such that for every \(f\in\mathcal{H}\) the following weak formulation holds: \[\left\langle z,L_{\theta}f\right\rangle=\gamma\left\langle f,z\right\rangle,\quad L_{\theta}=V-\theta\Delta \tag{8}\] where the operator \(\Delta\) is defined based on the eigenfunctions of the Laplace-Beltrami operator [8, 9]. In (8), \(\gamma\) and \(z\) are weak eigenvalues and eigenfunctions of \(L_{\theta}\), and \(f\) is a test function. To solve the Galerkin problem (8), we use the eigenfunctions \(\varphi_{j}\) of the kernel operator \(P\) as a basis. The regularized operator \(L_{\theta}\) and the generator \(V\) share the same eigenfunctions, and the eigenvalues of \(L_{\theta}\) are parameterized as \(\gamma_{\theta}=i\omega-\theta\eta\), where \(i\omega\) is an eigenvalue of \(V\) and \(\eta\) an eigenvalue of \(\Delta\).
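The construction in (4)-(7) is straightforward to realize numerically. The following is a minimal NumPy sketch, assuming a fixed bandwidth \(\epsilon\) and dense matrices for clarity; the variable-bandwidth (self-tuning) kernels of [26] and the nearest-neighbor sparsification used in Section V would replace these simplifications in practice.

```python
import numpy as np

def delay_embed(F, Q):
    """Delay-coordinate map (4): stack Q lagged copies of the observable
    F (array of shape (N_total, d)) into delay vectors in R^{Q d}."""
    N = F.shape[0] - Q + 1
    return np.hstack([F[Q - 1 - q : Q - 1 - q + N] for q in range(Q)])

def markov_matrix(F, Q, eps):
    """Gaussian kernel (5) on delay vectors, followed by the normalization
    in (7) that yields a row-stochastic Markov matrix P_Q."""
    X = delay_embed(F, Q)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # dense O(N^2)
    K = np.exp(-sq / eps)
    d = K.sum(axis=1)                  # K 1_X evaluated at the samples
    K_tilde = K / d[None, :]           # K~ f = K (f / (K 1_X))
    P = K_tilde / K_tilde.sum(axis=1)[:, None]  # P f = K~ f / (K~ 1_X)
    return P                           # rows sum to one; leading eigenvalue 1

# The eigenvectors of P, ordered by decreasing eigenvalue (e.g., via
# np.linalg.eig), supply the data-driven basis phi_j for the Galerkin problem (8).
```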
Next, we order the eigenfunctions by increasing Dirichlet energy, given by \(E(f)=\frac{\left\langle f,\Delta f\right\rangle}{\left\|f\right\|^{2}}\). ### _Nystrom extension of Koopman eigenfunctions_ To evaluate the Koopman eigenfunctions at out-of-sample points, we first apply the Nystrom approach to extend the eigenfunctions \(\varphi_{k}\) of the Markov kernel operator [32]. Then, we use the results of the Galerkin approach for out-of-sample evaluation of the Koopman eigenfunctions. Assume that the Markov integral operator \(P\) described in Section III-A has the kernel \(p:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}_{+}\), and take the data \(X_{s}=\{x_{1},x_{2},...,x_{N}\}\), sampled from the manifold \(\mathcal{X}\), as \(N\) landmark points at which the eigenfunction \(\varphi\) is sampled. Let \(\hat{\varphi}_{i}(x)\) be an approximation to the true \(\varphi_{i}(x)\), with corresponding eigenvalue \(\hat{\lambda}_{i}\). Then for an unsampled point \(x\), we have \[\hat{\varphi}_{i}(x)=\frac{1}{\hat{\lambda}_{i}}\sum_{j=1}^{N}p(x,x_{j})\hat{\varphi}_{i}(x_{j}) \tag{9}\] ## IV Extracting, identifying, and controlling UPOs ### _Extracting UPOs_ In Figure 1, we show the main idea of mapping between chaotic and periodic Markov kernel operators. The Markov kernel \(P_{Q}\) indicates a regular pattern for periodic orbits, where recurrent system states are encoded as high-probability transitions (black patches). The chaotic Markov kernel exhibits an irregular pattern, with transition probabilities scattered sporadically. Thus, transitions between different chaotic states are random, resulting in unpredictable and mixing dynamics. To implement our approach, we leverage the fact that chaotic attractors are dense with UPOs. When a trajectory nears a UPO's stable manifold, it evolves (almost) periodically according to the UPO properties [18]. By zooming into parts of \(P_{Q}\) corresponding to periodic time intervals, we recover the regular pattern in Figure 1. Our goal is to discover the regular-pattern patches around the diagonal of \(P_{Q}\) (see Figure 2). This sets out the application of modern machine learning techniques to discover UPOs via kernel operators in delay coordinates, and serves as an alternative to traditional approaches reliant on low-order maps or underlying model equations [23]. ### _Identification and control_ We formulate the system's dynamics on a detected UPO by computing the Koopman eigenfunctions for the trajectory on that UPO. We consider the system state on a UPO as an observable \(\mathbf{x}_{d}\in\mathcal{D}\), where \(\mathcal{D}\) is the subspace of Koopman eigenfunctions that is invariant under \(U^{t}\) and is the closure of the span of the \(\psi_{j}\). Every \(f\in\mathcal{D}\) can be decomposed as \(f=\sum_{j}\hat{f}_{j}\psi_{j}\), where \(\hat{f}_{j}=\left\langle f,\psi_{j}\right\rangle_{\mathcal{H}}\) is the inner product in \(\mathcal{H}\) [8].
### _Identification and control_ We formulate the system's dynamics on a detected UPO by computing the Koopman eigenfunctions for the trajectory on that UPO. We consider the system state on a UPO as an observable \(\mathbf{x}_{d}\in\mathcal{D}\) in the space of Koopman eigenfunctions that is invariant under \(U^{t}\), and is the closure of the span of \(\psi_{j}\). Every \(f\in\mathcal{D}\) can be decomposed as \(f=\sum_{j}\hat{f}_{j}\psi_{j}\), where \(\hat{f}_{j}=\left\langle f,\psi_{j}\right\rangle_{\mathcal{H}}\) is the inner product in \(\mathcal{H}\)[8]. Moreover, the dynamical evolution of \(f\) can be computed in a closed form via \[U^{t}f=\sum_{j}\hat{f}_{j}e^{i\omega_{j}t}\psi_{j} \tag{10}\] This allows us to represent the system trajectory on a UPO as the following linear output dynamics, \[\dot{\mathbf{\psi}}=\Lambda\mathbf{\psi},\quad\quad\mathbf{x}_{d}=C\mathbf{\psi} \tag{11}\] where \(\mathbf{\psi}=[\psi_{1},\dots,\psi_{N}]^{T}\) is the vector of \(N\) Koopman eigenfunctions in the invariant subspace, \(\Lambda_{N\times N}\) is a diagonal matrix with the generator eigenvalues on the main diagonal, and \(C_{d\times N}\) is the regression matrix projecting the state space variables onto the Koopman invariant subspace [33]. Thus, we collect data over the UPO and extract the Koopman eigenfunctions \(\psi_{j}\) using the Galerkin formulation (8). The extracted dynamics may serve as a reference for closed-loop control. We consider two control architectures. The first control law tracks the UPO-induced trajectory \(\mathbf{x}_{d}(t)\) by simply computing the error \(\mathbf{e}(t)=\mathbf{x}(t)-\mathbf{x}_{d}(t)\), where \(\mathbf{x}(t)\) represents the actual system trajectory. The second method leverages the ergodic property of chaotic attractors and employs a small-perturbation policy, activating the control only if the system trajectory remains within a sufficiently small distance of the UPO's stable manifold [18]. To test this condition, the proposed algorithm monitors how the Koopman eigenfunctions extend to the evolving system trajectory (see Section V-B). ## V Application to Lorenz attractor The Lorenz equations for fluid convection in a two-dimensional layer heated from below are \[\dot{x}=\sigma(y-x),\ \ \dot{y}=-xz+rx-y,\ \ \dot{z}=xy-bz \tag{12}\] where \(\sigma\), \(r\), and \(b\) are system parameters. The system state is \(\mathbf{x}=[x\ y\ z]^{T}\in\mathbb{R}^{3}\). For the Lorenz-63 model, we set \(\sigma=10\), \(r=28\), and \(b=8/3\). This is a chaotic attractor with the mixing property [34]. ### _Detecting UPOs in Lorenz model_ Figure 2 illustrates examples of detected UPOs for the Lorenz attractor, obtained by examining the Markov kernel \(P_{Q}\) and searching for regular patches (Figure 1) near different diagonal elements. We ran the Lorenz model from a random initial condition and sampled \(N=10^{4}\) data points at a sampling rate \(\Delta t=10^{-2}\)s. The data was collected along a single long trajectory, allowing for a spin-up time to ensure the trajectory had settled onto the attractor before data collection. To reduce the computational burden for large sample sizes \(N\), we sparsify \(P_{Q}\) by selecting a cutoff value \(k_{nn}\ll N\), setting all but the largest \(k_{nn}\) elements in each row of \(K_{Q}\) to zero, and symmetrizing the resulting sparse matrix. For the simulations we set \(k_{nn}=0.1N\). In Figure 2, each regular patch indicates periodic behavior over the corresponding time interval. We examine different values for the delay horizon, and Figure 2 shows the results for two values: \(Q=1000,2000\). According to Figure 2, the delay horizon \(Q\) allows for the recovery of UPOs at different scales: increasing the delay horizon recovers UPOs with longer periods. In the limit of a large delay horizon (as \(Q\rightarrow\infty\)), the entire attractor is recovered as a UPO with a significantly long period.
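The data-generation and sparsification steps just described might look as follows. This is a sketch under stated assumptions: the spin-up length and integrator tolerances are not given in the text, and the symmetrization rule for the sparsified kernel is taken here to be an entrywise maximum.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, r=28.0, b=8.0/3.0):
    return [sigma*(x[1] - x[0]), -x[0]*x[2] + r*x[0] - x[1], x[0]*x[1] - b*x[2]]

dt, N, spinup = 1e-2, 10_000, 2_000      # Delta t and N from the text; spin-up assumed
t_eval = dt*np.arange(N + spinup)
sol = solve_ivp(lorenz, (0.0, t_eval[-1]), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-12)
X = sol.y.T[spinup:]                     # samples settled onto the attractor

def sparsify(K, knn):
    """Keep the knn largest entries per row of the kernel matrix, then symmetrize."""
    drop = np.argsort(K, axis=1)[:, :-knn]   # indices of all but the knn largest
    Ks = K.copy()
    np.put_along_axis(Ks, drop, 0.0, axis=1)
    return np.maximum(Ks, Ks.T)

# K = ...  (kernel matrix of Section III-A);  Ks = sparsify(K, knn=N//10)
```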
Recently, UPOs similar to those displayed in Figure 2 were identified using a variational approach [35]. Unlike that approach, we operate in a purely data-driven setting and require no prior information on the model equations. This is the advantage of diffusion maps [36], enhanced by delay-coordinate spaces, which boost their ability to extract intrinsic dynamical features and time scales. We take further advantage of this approach in the eigenfunction space and identify each detected UPO based on its principal Koopman modes. For example, Figure 3 shows the reconstruction of the UPO on the left of Figure 2(a). It shows the first 6 Koopman eigenfunctions with the least Dirichlet energy among the 14 Koopman eigenfunctions reconstructing the UPO. We ran the Lorenz model on the detected UPO and sampled \(N_{s}=1.4\times 10^{3}\) data points with a sampling rate of \(\Delta t=10^{-2}\)s. The Koopman eigenfunctions on the UPO were computed by the Galerkin approximation, using the kernel operator eigenfunctions as a basis; a sketch of this step is given below. This was achieved with a delay horizon that was twice the trajectory length along the UPO. The Koopman eigenfunction set provides a multiscale family of geometric representations of the data, corresponding to dynamical features at different scales. From the random walk point of view of Markov processes [36], each eigenfunction scales with the probability of escaping different regions in state space. In this manner, the first eigenfunction in Figure 3 is associated with recurrent transitions between the right and left lobes of the UPO. The second eigenfunction reveals another bipartite splitting of the UPO. After the trivial constant Koopman eigenfunction, representing no transition over the UPO, the first two (nontrivial) eigenfunctions in Figure 3 pertain to the slowest transitions between different sets. Put another way, the probability of transitioning between the yellow and blue sets in the first row of Figure 3 is the lowest within the Markov process. The third eigenfunction, on the left of the second row in Figure 3, indicates the next slowest transition. It shows a slowly evolving trajectory at the bottom of the UPO (the lower blue part in the third eigenfunction). A trajectory reaching the lower section of the UPO undergoes a slow evolution and needs a longer time to escape this region. Therefore, slow transitions between the right and left lobes of the UPO, or between its upper and lower parts, are captured by the principal Koopman modes, while subsequent eigenfunctions discover faster transitions. Detecting UPOs in the operator-theoretic framework thus benefits from an interpretable, data-driven setting that identifies the dynamics by decomposing them into their most basic mechanisms, or coherent patterns. This approach has initiated a new chapter in studying chaotic systems in the presence of complexities and challenges such as the convergence of nearby trajectories or continuous spectra [5, 7, 8, 10]. Fig. 2: Detecting UPOs in the Lorenz attractor by examining the Markov kernel operator \(P_{Q}\). (a) Q=1000. (b) Q=2000.
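Before turning to control, here is a minimal sketch of the Galerkin step used above: the generator is approximated by a centered finite difference of the kernel eigenfunctions along the sampled trajectory, and a small diffusion term is added as in (8). The empirical inner product and the diagonal form of \(\Delta\) in this basis are simplifying assumptions, not the authors' exact construction.

```python
import numpy as np

def koopman_galerkin(phi, dt, theta, energies):
    """Approximate the regularized eigenvalue problem (8) in the basis of
    kernel eigenfunctions phi (shape (N, m): one column per basis function
    sampled along the trajectory); `energies` are assumed eigenvalues of Delta."""
    dphi = (phi[2:] - phi[:-2]) / (2.0 * dt)   # V phi_j via centered differences
    A = phi[1:-1].T @ dphi / (len(phi) - 2)    # empirical <phi_i, V phi_j>
    L = A - theta * np.diag(energies)          # L_theta = V - theta * Delta
    gam, c = np.linalg.eig(L)                  # weak eigenvalues gamma_theta
    psi = phi @ c                              # Koopman eigenfunctions on samples
    return gam, psi
```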
### _Controlling UPOs_ We consider the nonlinear system \[\dot{\mathbf{x}}=F(\mathbf{x},\mathbf{u}) \tag{13}\] with \(\mathbf{x}\in\mathbb{R}^{d}\), \(\mathbf{u}\in\mathbb{R}^{m}\), and \(F:\mathbb{R}^{d}\times\mathbb{R}^{m}\mapsto\mathbb{R}^{d}\) representing the system state, control input, and nonlinear map, respectively. We consider a controlled Lorenz model \(F(\mathbf{x},\mathbf{u})=f(\mathbf{x})+\mathbf{u}\), with \(\mathbf{u}=[u_{1}\ u_{2}\ u_{3}]^{T}\) the control vector applied linearly in each state direction. Our first control architecture utilizes the UPO's Koopman model (11) as a reference trajectory. Detecting UPOs within strange attractors and maintaining bounded trajectories in an ergodic setting enable us to achieve practical stabilization of UPOs through linear control actions [18, 23]. In the setting of this paper, this results in a trajectory-tracking control law involving the proportional error feedback \(\mathbf{u}=-K_{u}\mathbf{e}(t)\), where \(K_{u}\in\mathbb{R}^{d\times m}\). The effectiveness of this approach, when applied to the Lorenz model, is illustrated in Figure 4. It shows that, for \(K_{u}=5I_{3\times 3}\), the initial chaotic trajectory converges to the selected UPO shortly after the simulation begins. Figure 4 also shows instances of larger control actions, characterized by the greater effort needed to adhere to faster-evolving trajectories. Specifically, at the upper segment of the Lorenz attractor, substantial control actions are required to follow the selected UPO. In our second control architecture, to guarantee stability with minimal control action, the trajectories are allowed to evolve freely until they approach sufficiently close to the UPO (which happens infinitely many times in a chaotic attractor). At that point, control is strategically activated when the trajectory is adequately close to the UPO's stable manifold [18]. The challenge lies in detecting the stable manifold in cases where a precise dynamical model is lacking or the system is not effectively low-dimensional. Identifying the evolving dynamics of UPOs using the approach developed in this paper offers an opportunity to empirically realize the positioning of trajectories near the UPO with respect to the stable manifold. It is important to note that not all trajectories near the UPO are accurately reconstructed by the UPO's Koopman eigenfunctions; only trajectories dwelling near the UPO for a significant time can be reconstructed. More precisely, when considering dynamics-adapted kernel operators in delay coordinates (see III-A), the Nystrom extension of Koopman eigenfunctions is restricted to trajectory points that maintain a reliable trace along the UPO. Indeed, trajectories near the stable manifold are attracted to and remain close to the UPO before being repelled by the unstable manifold. These trajectories inherit the dynamical properties of the UPO, and the eigenfunctions extend to them with reduced error. Therefore, we replace the physical distance (Euclidean distance in state space) with a measure in the UPO's eigenfunction space. Crucially, we only trigger the control if the Koopman eigenfunctions of the UPO extend to the evolving trajectory with minimal error. Equation (11) is used to assess this error based on the Koopman eigenfunction set \(\hat{\mathbf{\psi}}(\mathbf{x})=[\hat{\psi}_{1}(\mathbf{x}),\ldots,\hat{\psi}_{N}(\mathbf{x})]^{T}\), extended to the point \(\mathbf{x}\) in the vicinity of the UPO. The control law is defined based on the error \(\mathbf{e}=\mathbf{x}-\hat{\mathbf{x}}\), with the reconstructed state \(\hat{\mathbf{x}}=C\hat{\mathbf{\psi}}(\mathbf{x})\), and the distance \(d_{x}\) of \(\mathbf{x}\) from the UPO, as \[\mathbf{u}=\begin{cases}-K_{u}d_{x},&\text{ if }\|\mathbf{e}\|<\varepsilon\\ 0,&\text{ otherwise}\end{cases} \tag{14}\] where \(K_{u}\) is the control gain matrix and \(\varepsilon\) a small positive scalar. See Figure 5 for the results of this approach applied to the Lorenz attractor. Here, we have restricted the control vector to satisfy \(-0.5<u_{i}<0.5\), \(i=1,2,3\). Figure 5 indicates the convergence of the chaotic trajectory to the UPO with only a small control perturbation. Compared to the trajectory shown in Figure 4, the trajectory in Figure 5 takes longer to settle onto the UPO, but the control action in Figure 5 is significantly smaller. Thus, detecting and utilizing the ergodic properties of the attractor in an operator setting offers an effective, practical approach to taming chaos.
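A minimal sketch of the first (tracking) architecture follows; it assumes the reference \(\mathbf{x}_{d}(t)\) is stored as a periodic lookup table sampled along the detected UPO, and uses a forward-Euler step for brevity. The second architecture would additionally gate \(\mathbf{u}\) on the eigenfunction-reconstruction error \(\|\mathbf{e}\|<\varepsilon\) and clip each \(u_{i}\) to \((-0.5,0.5)\).

```python
import numpy as np

def lorenz_rhs(x, sigma=10.0, r=28.0, b=8.0/3.0):
    return np.array([sigma*(x[1] - x[0]), -x[0]*x[2] + r*x[0] - x[1],
                     x[0]*x[1] - b*x[2]])

def track_upo(x0, upo, dt, nsteps, Ku=5.0):
    """Proportional tracking u = -K_u e(t) of a UPO-induced reference,
    with K_u = 5 I as in the text; `upo` is a (M, 3) lookup table."""
    x, path = np.array(x0, float), []
    for k in range(nsteps):
        xd = upo[k % len(upo)]               # periodic reference state on the UPO
        u = -Ku*(x - xd)                     # proportional error feedback
        x = x + dt*(lorenz_rhs(x) + u)       # forward-Euler step, for brevity
        path.append(x.copy())
    return np.array(path)
```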
Fig. 3: The periodic orbit reconstructed by 14 Koopman modes. Here the first six modes with the least Dirichlet energy are displayed. Fig. 4: The controlled Lorenz attractor: tracking the UPO. Fig. 5: The controlled Lorenz attractor: small control perturbation. ## VI Conclusions This paper demonstrates the use of operator settings to detect, identify, and stabilize UPOs in strange attractors. Delay embedding of complex dynamical data into diffusion map coordinates offers a flexible approach for extracting UPOs across different time scales. The detection of UPOs is followed by a Koopman operator approach, identifying the principal dynamical modes on individual UPOs. This provides a more straightforward, interpretable tool to study complex strange attractors, based on the Koopman eigenfunctions of the different UPOs that are densely embedded within the chaotic attractor. It also introduces a unique method for stabilizing chaotic attractors using the eigenfunctions of individual UPOs. An extended version of this work will delve into more features of strange attractors in data-driven operator settings and their role in constructing more robust predictive and control models. Exploring the possible connection between the continuous spectra and pseudospectra [7, 10, 28] of strange attractors and the spectra of different UPOs is a particularly compelling subject.
2309.10622
Quasinormal Modes of Optical Solitons
Quasinormal modes (QNMs) are essential for understanding the stability and resonances of open systems, with increasing prominence in black hole physics. We present here the first study of QNMs of optical potentials. We show that solitons can support QNMs, deriving a soliton perturbation equation and giving exact analytical expressions for the QNMs of fiber solitons. We discuss the boundary conditions in this intrinsically dispersive system and identify novel signatures of dispersion. From here, we discover a new analogy with black holes and describe a regime in which the soliton is a robust black hole simulator for light-ring phenomena. Our results invite a range of applications, from the description of optical pulse propagation with QNMs to the use of state-of-the-art technology from fiber optics to address questions in black hole physics, such as QNM spectral instabilities and the role of nonlinearities in ringdown.
Christopher Burgess, Sam Patrick, Theo Torres, Ruth Gregory, Friedrich Koenig
2023-09-19T14:00:21Z
http://arxiv.org/abs/2309.10622v2
# Quasinormal Modes of Optical Solitons ###### Abstract Quasinormal modes (QNMs) are essential for understanding the stability and resonances of open systems, with increasing prominence in black hole physics. We present here the first study of QNMs of optical potentials. We show that solitons can support QNMs, deriving a soliton perturbation equation and giving exact analytical expressions for the QNMs of fiber solitons. We discuss the boundary conditions in this intrinsically dispersive system and identify novel signatures of dispersion. From here, we discover a new analogy with astrophysical black holes and describe a regime in which the soliton is a robust black hole simulator for light-ring phenomena. Our results invite a range of applications, from the description of optical pulse propagation with QNMs to the use of state-of-the-art technology from fiber optics to address questions in black hole physics, such as QNM spectral instabilities and the role of nonlinearities in ringdown. _Introduction._--Quasinormal modes (QNMs) are an area of high activity and interest following the discovery of gravitational radiation from black hole mergers [1]. After a merger, the resulting black hole _rings down_ to its final state in a characteristic fashion, described by a damped waveform with complex frequency. This phenomenon of ringdown is a generic feature of open systems, allowing us to identify natural resonances and address questions of stability [2; 3; 4]. Examples of ringdown abound, in optical cavities [5], plasmonic nanoresonators [6; 7], polariton superfluids [8], surface water gravity waves [9], supergravity [10], and even the ringing of church bells. This diversity of settings is crucial as it provides many perspectives on the ringdown phenomenon, whose significance in the context of black holes is increasingly recognised [11; 12; 13]. The ringdown of a perturbed open system is readily understood using its QNMs. These are eigenmodes of the evolution operator, with a discrete complex spectrum, \(\Omega_{n}=\omega_{n}-i\Gamma_{n}\), where the overtone index \(n\) orders the modes by increasing decay rate \(\Gamma_{n}\). The signal of ringdown is a superposition of damped QNM oscillations, with the ringdown spectrum a feature of the system, independent of the initial perturbation [14]. The fundamental mode (\(n=0\)) is longest-lived, eventually dominating the linear response and providing immediate access to characteristic information [15]. QNMs describe both perturbative field evolution around fixed bulk media with well-defined boundaries and open systems with time-independent repulsive potentials. In optics, they efficiently reconstruct the mode shapes of electromagnetic fields in optical resonators and plasmonic cavities, both of the former kind [5]. By contrast, black hole oscillations are of the latter type [9; 15]. To date, QNMs of _optical_ potentials have not been reported. Fiber optical solitons provide a remarkable way of creating such potentials. Indeed, suitable perturbations to the soliton are known to obey a Schrodinger equation with repulsive potential in the comoving frame [16; 17; 18]. In this Letter, we derive a perturbation equation where the soliton acts as a potential, due to a nonlinear polarization of the medium. We clarify the notion of QNMs in this intrinsically dispersive system, showing the soliton can support a discrete set. Analytic expressions for the QNMs and their complex frequencies--the QNM spectrum--are provided.
We then consider a weak dispersive pulse co-propagating with and perturbing the soliton. Simulating the response, we observe the predicted ringdown. We identify the complex frequency of the fundamental mode, finding agreement with our theory. Finally, we establish a mathematical analogy between the ringdown of solitons and black holes, and discuss prospects for developing our analysis in this and other soliton-supporting systems. _Soliton perturbations._--To investigate dispersive pulse interactions with a soliton, we first describe the soliton in single-mode fibers. A linearly polarized soliton, given by a complex envelope function \(A=A_{s}(z,t)\) around a carrier of frequency \(\omega_{s}\), obeys the nonlinear Schrodinger equation (NLS) for pulse propagation [19]. The soliton has a stationary \(\mathrm{sech}^{2}(\tau)\) intensity profile in the comoving frame, which is related to the laboratory frame by \[\tau=\frac{t-z/v}{T_{0}},\quad\zeta=\frac{z|\beta_{s2}|}{T_{0}^{2}}, \tag{1}\] with \(T_{0}\) the temporal width of the soliton, \(v\) its group velocity, and \(\beta_{s2}\) its group velocity dispersion [19]. To describe dispersive pulses, we consider the NLS with additional terms for higher-order dispersion [20]. In the laboratory frame, this reads \[\frac{\partial A}{\partial z}-i\big{[}\beta(i\partial_{t}+\omega_{s})-\beta( \omega_{s})\big{]}A-i\gamma|A|^{2}A=0, \tag{2}\] where \(\beta(\omega)\) is a Taylor series for the propagation constant, and \(\gamma\) the fiber nonlinear parameter. Stable soliton propagation survives weak higher-order dispersion, with only the soliton velocity \(v\) and phase affected. The perturbed soliton solution is given in [21; 22]. We consider a dispersive pulse, \(a\), as a linear perturbation co-propagating with the soliton, \(A_{s}\). The dispersive pulse envelope is defined with a carrier frequency, \(\omega_{a}\), so the overall envelope is \[A=A_{s}+a\,e^{i(\beta(\omega_{a})-\beta(\omega_{s}))z-i(\omega_{a}-\omega_{s})t}. \tag{3}\] Inserting into Eq. (2) yields the linearized equation of motion for the dispersive pulse. We neglect fast-oscillating terms arising from frequency mixing between the soliton and dispersive pulse, given their spectral separation and no phase matching. In the comoving frame, the dispersive pulse envelope satisfies \[\frac{|\beta_{s2}|}{T_{0}}\frac{\partial a}{\partial\zeta}-\left( \beta_{s1}-\frac{\beta_{s3}}{6T_{0}^{2}}-\beta_{a1}\right)\frac{\partial a}{ \partial\tau}\] \[\qquad\qquad+T_{0}\sum_{k=2}^{\infty}\beta_{ak}\frac{(i\partial_ {t})^{k}}{k!}a-2i\gamma T_{0}|A_{s}|^{2}a=0, \tag{4}\] with \(\beta_{sk}\) and \(\beta_{ak}\) the expansion coefficients of \(\beta(\omega)\) about \(\omega_{s}\) and \(\omega_{a}\), respectively. Requiring the dispersive pulse carrier to be group-velocity matched (GVM) to the soliton, so that \(\beta_{a1}=\beta_{s1}-\beta_{s3}/6T_{0}^{2}=v^{-1}\), the equation simplifies. For narrowband perturbations around \(\omega_{a}\), we can also neglect higher-order dispersion. We thus focus on near-GVM perturbations. This is natural in the context of QNMs, which are associated with perturbations that remain coincident with a background potential until late times. These perturbations satisfy \[i\frac{\partial a}{\partial\zeta}-\frac{\beta_{a2}}{2|\beta_{s2}|}\frac{ \partial^{2}a}{\partial\tau^{2}}+2\,\mathrm{sech}^{2}(\tau)a=0, \tag{5}\] having inserted the soliton [21; 22]. Eq. 
(5) is our soliton perturbation equation, which may be cast as a time-reversed Schrodinger equation with an inverted Poschl-Teller potential [23]. This agrees with previous work in which the soliton acted as a repulsive potential, creating classical turning points for slow dispersive light [24; 25], and exhibiting light tunneling [17]. _Soliton quasinormal modes._--As ringdown arises in the linear response of a system, soliton ringdown can be explored using our soliton perturbation equation. To derive the soliton QNMs, we begin with mode solutions, \(a(\zeta,\tau)=u(\tau)\exp(-i\Omega\zeta)\). We obtain \[-\frac{d^{2}u}{d\tau^{2}}+\frac{2|\beta_{s2}|}{\beta_{a2}}\bigg{[}\Omega+2\, \mathrm{sech}^{2}(\tau)\bigg{]}u=0, \tag{6}\] solvable through exact methods [26]. Next, we must impose boundary conditions on the general solution to Eq. (6). In gravitational physics, one requires outgoing waves at the boundaries, as phase and group velocities are equivalent in relativistic systems, guaranteeing energy-dissipating modes. However, the situation is not so simple in the presence of dispersion. Moreover, laboratory frame energy is not conserved by Eq. (5), as time translation is not a symmetry. The relevant symmetry is translation in \(\zeta\), generating the conserved current, \[j=-\frac{\beta_{a2}}{2|\beta_{s2}|}\bigg{[}\frac{\partial a}{\partial\zeta} \frac{\partial a^{*}}{\partial\tau}+\frac{\partial a^{*}}{\partial\zeta}\frac{ \partial a}{\partial\tau}\bigg{]}. \tag{7}\] It is then natural that we require \(j\) to be asymptotically outgoing, i.e. \(\mathrm{sgn}(j)=\mathrm{sgn}(\tau)\) as \(\tau\rightarrow\pm\infty\). In the absence of dispersion, these boundary conditions coincide with those of relativistic systems. Far from the soliton potential, \(a(\zeta,\tau)\sim\exp(iK_{\pm}\tau-i\Omega\zeta)\), with asymptotic wavenumbers \(K_{\pm}\) given by the fiber's underlying dispersion relation, \(\Omega=-\frac{\beta_{a2}}{2|\beta_{s2}|}K_{\pm}^{2}\), due to Eq. (5). For decaying modes, i.e. \(\mathrm{Im}(\Omega)<0\), the boundary conditions thus set the signs of \(\mathrm{Im}(K_{+})\) and \(\mathrm{Im}(K_{-})\) so that solutions grow exponentially towards the boundaries. For complex \(\Omega\), the asymptotic form of the general mode solution contains waves that both grow and decay with separation from the soliton. QNM boundary conditions forbid the latter, as they deliver energy into the system. These unwanted waves vanish only for a discrete set of QNM frequencies \(\Omega_{n}\) where the Wronskian of two linearly independent solutions is zero, and decaying waves vanish against growing waves. In the language of scattering theory, this occurs due to the divergence of transmission and reflection amplitudes at these frequencies. These frequencies also appear as poles in the Green functions associated with Eq. (5). Therefore, our physically motivated condition that \(j\) be outgoing at infinity agrees with the standard definition of QNMs, reinforcing our approach to the boundary conditions [27; 28; 29]. Figure 1: Quasinormal mode spectra for optical solitons with \(Q=4|\beta_{s2}|/\beta_{a2}-1/4\) in the range \([0,\infty)\), plotted in the complex plane. The QNM frequencies of each soliton fall along a parabola (grey, dashed) and the QNM frequencies of each overtone index \(n\) comprise a semi-ellipse (red, solid) in the lower-half complex plane. The fundamental mode frequencies lie on a circle of radius 2 about the origin. 
The QNM frequencies are given by \[\Omega_{n} =\frac{\beta_{a2}}{2|\beta_{s2}|}\bigg{[}\bigg{(}n+\frac{1}{2}\bigg{)} ^{2}-\bigg{(}\frac{4|\beta_{s2}|}{\beta_{a2}}-\frac{1}{4}\bigg{)}\bigg{]}\] \[\quad-i\frac{\beta_{a2}}{|\beta_{s2}|}\bigg{(}n+\frac{1}{2}\bigg{)} \sqrt{\frac{4|\beta_{s2}|}{\beta_{a2}}-\frac{1}{4}}, \tag{8}\] where the overtone index \(n\) is a non-negative integer. The soliton QNM spectrum, plotted in Fig. 1, depends only on the group velocity dispersions via \(|\beta_{s2}|/\beta_{a2}\). This is tunable by varying the soliton central frequency, which determines the dispersion at the soliton and GVM frequencies. The available range of values is then a feature of the supporting fiber's dispersion relation. The full QNM solution is easily obtained with the general solution and QNM frequencies. The result is \[a_{n}(\zeta,\tau)=A\cosh^{n+\frac{1}{2}}(\tau)e^{\mathrm{Im}(\Omega_{n}) \zeta}e^{i\phi_{n}(\tau,\zeta)}f_{n}(\tau), \tag{9}\] with \[\phi_{n}(\tau,\zeta) \equiv-\mathrm{Im}(\lambda)\log\cosh(\tau)-\mathrm{Re}(\Omega_{n})\zeta,\] \[f_{n}(\tau) \equiv\,_{2}F_{1}\big{[}-n,1+2\lambda-n;1+\lambda-n;(e^{2\tau}+1) ^{-1}\big{]},\] \[\lambda \equiv-\frac{1}{2}+i\sqrt{\frac{4|\beta_{s2}|}{\beta_{a2}}-\frac {1}{4}}.\] The hyperbolic factor in Eq. (9) shapes the mode with exponential growth far from the soliton, while the exponential in \(\zeta\) gives an overall decay. The phase \(\phi_{n}\) enforces outgoing/ingoing comoving phase velocities at the boundaries, and the ordinary hypergeometric function \(f_{n}\) arranges that mode parity alternates with overtone index. For the fundamental QNM, \(f_{0}=1\). The first five QNMs are plotted in Fig. 2. The exponential growth of these solutions appears unphysical, but this is a typical feature of QNMs, which are necessarily decaying in advanced time. Physically, a field ringing down resembles a superposition of QNMs on only a finite region of the space [14]. _Simulations._--To demonstrate the emission of ringdown waves from the soliton, we numerically simulate the evolution of a near-GVM pulse colliding with the soliton. For various initial pulses, the collision produces ringdown waves in the perturbative pulse field. These waves visit each position in the comoving frame, in both transmission and reflection. At each position, the signature of ringdown is a decaying oscillation, with a period and decay rate given by the fundamental QNM. We simulate this process using a split-step Fourier method to solve the soliton perturbation equation obeyed by the pulse. The ringdown of an optical soliton occurs against an evolving background of non-QNM contributions to the perturbative field. These arise due to dispersion, which tends to broaden the transmitted and reflected pulses produced by the pulse-soliton collision. This effect is absent in the paradigmatic QNM systems [5; 6; 7; 8; 9; 10]. The ringdown signal is strongest around an observation point \(\tau=\tau_{0}\) near the transmitted pulse, and clearest when this pulse has least width. Therefore, we configure our initial pulse with a quadratic phase, known as a chirp, so the transmitted pulse is narrowest at a point \(\tau=\tau_{c}\) near the observation point. Our initial condition is \[a(0,\tau)=\exp\!\left[-\frac{(\tau-\tau_{p})^{2p}}{\sigma^{2p}}\right]\exp\! \left[i\frac{(\tau-\tau_{c})^{2}}{4\zeta_{c}}\right]\!, \tag{10}\] with a super-Gaussian pulse envelope about \(\tau=\tau_{p}\) and a minimum pulse width at a fiber length of \(\zeta=\zeta_{c}\). 
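The simulation just described is straightforward to reproduce in outline. The sketch below integrates Eq. (5) with a first-order split-step Fourier method on a periodic \(\tau\)-grid (an approximation to the open line) and launches the chirped super-Gaussian initial condition of Eq. (10); the grid size, domain, and step size are assumptions, not the authors' settings.

```python
import numpy as np

def split_step(a, tau, r, dz, nsteps):
    """Integrate Eq. (5), i a_z = (r/2) a_tt - 2 sech^2(tau) a,
    with r = beta_a2/|beta_s2|, by Lie splitting in zeta."""
    k = 2*np.pi*np.fft.fftfreq(tau.size, d=tau[1] - tau[0])
    lin = np.exp(1j*(r/2)*k**2*dz)          # exact dispersive sub-step
    pot = np.exp(2j*dz/np.cosh(tau)**2)     # potential phase rotation
    for _ in range(nsteps):
        a = np.fft.ifft(lin*np.fft.fft(a))*pot
    return a

# chirped super-Gaussian pulse of Eq. (10)
tau = np.linspace(-60, 60, 4096, endpoint=False)
tau_p, tau_c, z_c, p, sig = 20, -10, 2.5, 5, 15
a0 = np.exp(-((tau - tau_p)/sig)**(2*p)) * np.exp(1j*(tau - tau_c)**2/(4*z_c))
a = split_step(a0, tau, r=1/2.5, dz=1e-3, nsteps=5000)   # field at zeta = 5
# record a[np.argmin(np.abs(tau + 5))] each step to watch ringdown at tau_0 = -5
```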
The above initial condition produces a clear ringdown signal for a range of parameters. Importantly, the dominant ringdown period and decay rate consistently agree with the fundamental mode. Figure 2: Complex plots of soliton quasinormal modes, \(a_{n}(\zeta,\tau)\). The fundamental mode (**a**) and four overtones **(b-e)** are shown for \(\zeta\in[0,2],\tau\in[-6,6]\) with \(|\beta_{s2}|/\beta_{a2}=2\). They grow symmetrically about the soliton potential at \(\tau=0\), and their phase velocities reverse direction for sufficiently high overtones, visible from the contours of constant phase. This signature of dispersion represents a qualitative departure from the quasinormal modes of non-dispersive systems. Fig. 3 presents a quintessential example of optical soliton ringdown. Three phases are identifiable: i) an initial transient phase as the transmitted pulse passes the observation point, ii) a relaxation phase dominated by ringdown waves, and iii) a transition towards a late-time power law decay owing to dispersion. These three phases are analogous to those appearing in the relaxation of black holes or hydrodynamic vortex flows [3; 9; 15]. For non-dispersive systems, the presence of a late-time tail depends only on details of the potential. In contrast, we attribute ours to the action of dispersion, as the inverted Poschl-Teller potential does not otherwise exhibit a late-time power law decay [30]. Figure 3: Evolution of the perturbative field \(a(\zeta,\tau)\) at a fixed point \(\tau_{0}=-5\) away from the soliton. The blue curve is the simulated solution to Eq. (5) with the initial condition in Eq. (10), \(\tau_{c}=-10\), \(\tau_{p}=20\), \(\zeta_{c}=2.5\), \(p=5\), \(\sigma=15\) and \(|\beta_{s2}|/\beta_{a2}=2.5\). Three phases are separated by vertical dashed lines, corresponding to i) the initial transient of the perturbation, ii) the emission of ringdown waves, and iii) a transition to a late-time tail. The red dashed curve shows a fit with the fundamental QNM (\(n=0\)) in Eq. (8). _Black hole analogy._--In optics, ringdown is predominantly associated with leaky cavities and resonators. Yet the ringdown of an optical soliton, viewed as a repulsive potential, bears greater resemblance to that of black holes. Indeed, the link to black holes is beyond merely qualitative. Eq. (6) is a mathematical analogue of the low-overtone QNM equation for scalar and gravitational perturbations on the Schwarzschild black hole [3], \[-\frac{d^{2}R}{dr_{*}^{2}}+\Big{[}-\omega^{2}+V_{0}\operatorname{sech}^{2}(\alpha r_{*})\Big{]}R=0, \tag{11}\] with \(R\) the radial wavefunction and \(r_{*}\) the so-called tortoise coordinate. In this analogy, the soliton reproduces the effective radial gravitational potential outside of the black hole, with the centre of the soliton corresponding to the photon sphere. The analogy holds for low-lying overtones, whose mode shapes are identical to those of the soliton, with frequencies related by \(\omega\propto\sqrt{-\Omega}\). The same holds for the Nariai and near-extremal SdS black holes, but for all overtones [3]. Notably, the aforementioned black hole QNM frequencies lie along vertical lines in the complex plane, whereas those of the soliton form parabolic curves, shown in Fig. 1. This difference ultimately traces back to the quadratic relation between the spectra. The above black hole analogies are strengthened when the group velocity dispersion at the GVM frequency is much less than the finite dispersion supporting the soliton, i.e. \(\beta_{a2}\ll|\beta_{s2}|\). In this regime of weak dispersion, the low-lying QNM spectrum is \[\Omega_{n}=-2-i\sqrt{\frac{4\beta_{a2}}{|\beta_{s2}|}}\bigg{(}n+\frac{1}{2}\bigg{)}, \tag{12}\] where \(n\) is small compared with \(\sqrt{|\beta_{s2}|/\beta_{a2}}\) and we omit terms of order \(O(\beta_{a2}/|\beta_{s2}|)\). This can then be mapped onto the QNM spectra of the Nariai and near-extremal SdS black holes in the regime of high angular momentum perturbations, i.e. the eikonal regime. Thus, the soliton ringdown is identical to that of these black holes, and the soliton becomes a black hole simulator.
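As a quick numerical sanity check, the exact spectrum of Eq. (8) can be compared against the weak-dispersion approximation (12); the snippet below is illustrative only.

```python
import numpy as np

def qnm(n, ratio):
    """Omega_n from Eq. (8), with ratio = |beta_s2|/beta_a2."""
    q = 4*ratio - 0.25
    return ((n + 0.5)**2 - q)/(2*ratio) - 1j*(n + 0.5)*np.sqrt(q)/ratio

n = np.arange(3)
print(abs(qnm(0, 2.0)))                    # fundamental mode: |Omega_0| = 2, cf. Fig. 1
print(qnm(n, 100.0))                       # weak dispersion, beta_a2 << |beta_s2|
print(-2 - 1j*np.sqrt(4/100.0)*(n + 0.5))  # eikonal spectrum of Eq. (12)
```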
_Outlook._--We have shown for the first time that optical potentials can support quasinormal modes and demonstrated how they may be excited in the case of fiber solitons. We have demonstrated a selection of correspondences, contributing to a growing zoo of table-top systems in analogue gravity [4; 9; 17]. In particular, this work sounds a bell, heralding the introduction of sophisticated techniques of QNM analysis from black hole physics into optics. Conversely, fiber solitons are now a platform with which the tools of optics may be put to use in the study of otherwise inaccessible phenomena that arise in the study of black holes, such as the recently reported QNM spectral instability [31; 32; 33] and imprints of nonlinearity on the ringdown process [34; 35]. The above analysis employed several approximations that may be fruitful to relax, to probe the consequences for soliton QNMs. In this work, the soliton QNM spectrum was shown to be robust against weak third-order dispersion at the central frequency of the soliton. Future work can investigate the influence of further weak fiber-optical effects, e.g. Raman scattering and higher-order dispersion at the perturbation frequency, which become important in the regime of ultra-short pulse lengths. The soliton QNM spectrum was shown to be tunable by varying the central frequency of the soliton. In fibers supporting several group-velocity-matched points, there exists a further discrete freedom to choose the GVM frequency around which we consider perturbations. The same soliton may thus possess several distinct QNM spectra. Beyond this, we can relax our near-GVM condition and consider QNMs of more general linear differential operators in Eq. (4). This can be expected to produce a continuum of QNM spectra, fully characterising the ringdown processes available to the soliton. The above QNM analysis focused on optical solitons, but our findings can describe natural resonances in any realistic system supporting NLS-type solitons, providing immediate extensions beyond optics. Furthermore, this work lays the foundation for determining QNMs of other solitons (e.g. KdV, sine-Gordon), wherever they act as stationary effective potentials in their comoving frame. Finally, we note that optical soliton ringdown has yet to be experimentally observed and tested against the framework of QNMs. Soliton pulse length will be a critical parameter in this experiment, setting the relevant length scales and governing the importance of the Raman effect and higher-order dispersion. We anticipate that the identification of ringdown waves in propagating optical pulses will stimulate developments in communications and ultra-fast lasers, and motivate advances in the field of dispersion engineering.
For the purpose of open access, the authors have applied a creative commons attribution (CC BY) licence (where permitted by UKRI, 'open government licence' or 'creative commons attribution no-derivatives (CC BY-ND) licence' may be stated instead) to any author accepted manuscript version arising. The supporting data for this Letter are openly available from [36]. This work was supported in part by the Science and Technology Facilities Council through the UKRI Quantum Technologies for Fundamental Physics Programme [Grants ST/T005866/1 (FK) & ST/T005858/1 (RG, SP & TT)]. CB was supported by the UK Engineering and Physical Sciences Research Council [Grant EP/T518062/1]. The authors would also like to thank the Perimeter Institute for Theoretical Physics for hospitality. RG also acknowledges support from the Perimeter Institute. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science
2301.13509
Waves in a Stochastic Cell Motility Model
In Bhattacharya et al. (Science Advances, 2020), a set of chemical reactions involved in the dynamics of actin waves in cells was studied. Both at the microscopic level, where the individual chemical reactions are directly modelled using Gillespie-type algorithms, and on a macroscopic level where a deterministic reaction-diffusion equation arises as the large-scale limit of the underlying chemical reactions. In this work, we derive, and subsequently study, the related mesoscopic stochastic reaction-diffusion system, or Chemical Langevin Equation, that arises from the same set of chemical reactions. We explain how the stochastic patterns that arise from this equation can be used to understand the experimentally observed dynamics from Bhattacharya et al. In particular, we argue that the mesoscopic stochastic model better captures the microscopic behaviour than the deterministic reaction-diffusion equation, while being more amenable for mathematical analysis and numerical simulations than the microscopic model.
Christian Hamster, Peter van Heijster
2023-01-31T09:54:11Z
http://arxiv.org/abs/2301.13509v1
# Waves in a Stochastic Cell Motility Model ###### Abstract In Bhattacharya et al. (Science Advances, 2020), a set of chemical reactions involved in the dynamics of actin waves in cells was studied. Both at the microscopic level, where the individual chemical reactions are directly modelled using Gillespie-type algorithms, and on a macroscopic level where a deterministic reaction-diffusion equation arises as the large-scale limit of the underlying chemical reactions. In this work, we derive, and subsequently study, the related mesoscopic stochastic reaction-diffusion system, or Chemical Langevin Equation, that arises from the same set of chemical reactions. We explain how the stochastic patterns that arise from this equation can be used to understand the experimentally observed dynamics from Bhattacharya et al. In particular, we argue that the mesoscopic stochastic model better captures the microscopic behaviour than the deterministic reaction-diffusion equation, while being more amenable for mathematical analysis and numerical simulations than the microscopic model. keywords: Gillespie Algorithms, Cell Motility, Mesoscopic Patterns, SPDEs, Chemical Langevin Equation. ## 1 Introduction In order to move around, an amoeboid cell can change its shape by polymerising actin to curve the cell membrane. The actin polymerisation is controlled by signalling molecules, and experiments in _Dictyostelium discoideum_ have shown that activation of these signalling molecules happens at localised patches that can move along the membrane like a wave [1, 21]. In wild-type (WT) cells, these waves move fast and die out, creating familiar-shaped pseudopods, while in cancerous cells these waves stick to a point, creating elongated protrusions [1], see Figure 1.1. In the absence of a signal, the formation of pseudopods happens at random places on the cell membrane, resulting in random motion. In contrast, when a cell senses a chemical signal, it can concentrate the random protrusions at the side of the cell where the signal comes from, leading to movement in the direction of the signal [6]. As cells are small, the difference in signal strength between the front and the back of the cell (the gradient) is small as well. Furthermore, the cell can only use the discrete points on the membrane where its receptors sit to estimate the direction of the signal [6]. Therefore, one of the main questions is "How can a cell use a small gradient in the signal to concentrate the actin activity in the front?". This question has been studied intensively, but no complete description of all the microscopic chemical processes involved has been given yet; see [8] for a review. In [1], the choice is made to describe the highly complex actin dynamics with a conceptual activator \(u\) and inhibitor \(v\) that diffuse and react with each other as summarised in Table 1. The species \(u\) and \(v\) are an abstraction of the dozens of components that regulate the actual cell movement, but the activator \(u\) can be thought of as Ras activity [1], which plays an important role in cell growth and differentiation [28]. In particular, \(u\) is being activated by Reaction #3 and Reaction #4, while being inhibited by Reaction #1 and Reaction #2, with propensities as indicated in the table. In addition, \(v\) is inhibited by Reaction #5, while Reaction #6 activates the inhibitor. The information on the chemical reactions, in combination with the diffusion of both species, is generally used in one of two ways.
First, there is a Gillespie-type algorithm [15, 16] which can be used to simulate the involved chemical reactions on a microscopic level. For these simulations, \((u_{k}(t_{n}),v_{k}(t_{n}))\) (the solution at time \(t_{n}\) at grid cell \(k\)) is treated as the number of molecules of type \(u\) and \(v\) at time \(t_{n}\) in a grid cell with finite size. For all these individual molecules, the probabilities of diffusing to other grid cells or taking part in a chemical reaction are prescribed by Table 1. To be precise, Reaction #1 implies that the time to the next reaction that degrades a \(u\) molecule in grid cell \(k\) is exponentially distributed with rate parameter \(a_{1}u_{k}(t_{n})\), i.e. with mean \((a_{1}u_{k}(t_{n}))^{-1}\). See the panels on the left of Figure 1.1 for examples of these simulations. This Gillespie-type approach takes the stochastic nature of a single cell into account. However, it is computationally very expensive and difficult to analyse mathematically. Hence, it is hard to use this type of modelling approach to make valuable predictions. Figure 1.1: Stochastic simulations of the microscopic Gillespie-type model from [1]. The figures on the left show stochastic simulations of the Ras activity for parameter values applicable to (A) wild-type cells and to (B) genetically modified cells, where the phosphatase PTEN has been switched off. The figures on the right show typical cell shapes corresponding to the dynamics in the left figures. This shows that mutations in the gene that codes for PTEN lead to elongated protrusions typically associated with cancer. The dotted yellow line is an indicator of the wave speed, i.e. the actin waves in (B) are slower and live longer than in (A). Reproduced from [1] under creative commons license 4.0.
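A minimal Gillespie (SSA) sketch for the six reactions of Table 1, in a single well-mixed grid cell, is given below; diffusion events between grid cells and the parameter values of [1] are omitted, so this illustrates the scheme rather than reproducing the simulations above.

```python
import numpy as np

def gillespie(u, v, a, c, eps, t_end, rng=np.random.default_rng(0)):
    """Stochastic simulation of the six reactions of Table 1 in one grid cell.
    a = (a1, a2, a3, a4, a5), c = (c1, c2); u, v are molecule counts."""
    a1, a2, a3, a4, a5 = a
    c1, c2 = c
    t, traj = 0.0, [(0.0, u, v)]
    while t < t_end:
        props = np.array([a1*u, a2*u*v, a3*u**2/(a4 + u**2), a5,
                          eps*c1*v, eps*c2*u])
        total = props.sum()
        if total == 0:
            break
        t += rng.exponential(1.0/total)      # exponential waiting time
        r = rng.choice(6, p=props/total)     # which reaction fires
        u += (-1, -1, 1, 1, 0, 0)[r]         # stoichiometric update for u
        v += (0, 0, 0, 0, -1, 1)[r]          # stoichiometric update for v
        traj.append((t, u, v))
    return traj
```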
A second way to use the reactions in Table 1 is to derive an average large-scale limit macroscopic equation. Hence, we assume that \(u\) and \(v\) are densities on a continuous domain, described by a reaction-rate equation with diffusion, also known as a Reaction-Diffusion Equation (RDE). In particular, the RDE related to the chemical reactions in Table 1 is given by1 \[\begin{split}\partial_{t}u&=D_{u}\partial_{xx}u-a_{1}u-a_{2}uv+\frac{a_{3}u^{2}}{a_{4}+u^{2}}+a_{5}\,,\\ \partial_{t}v&=D_{v}\partial_{xx}v+\varepsilon(-c_{1}v+c_{2}u),\end{split} \tag{1.1}\] which is a specific version of the general RDE we will encounter in §2. Footnote 1: Note that (1.1) can also be obtained from a quasi-steady-state approximation from a more complex three-component model introduced in [2], commonly referred to as a Signal Transduction Excitation Network. This model is a variation on the classic FitzHugh-Nagumo model for neuron spiking [12, 30]. Protrusions are formed at places with high activator \(u\), and \(u\) is inhibited by the terms \(-a_{1}u\) and \(-a_{2}uv\), see Reaction #1 and Reaction #2 in Table 1. This implies that an increase in \(u\) or \(v\) leads to a decrease in \(u\), unless the increase is high enough such that activation from Reaction #3, modelled by a nonlinear Hill function \(a_{3}u^{2}/(a_{4}+u^{2})\), takes over and negates the inhibiting effects. Effectively, this means that a small increase in \(u\) can lead to a much larger increase in \(u\), that is, the system is locally activated. Once \(u\) is large and the Hill function levels off at a fixed value \(a_{3}\), the amount of inhibitor \(v\) increases via the term \(\varepsilon c_{2}u\) (related to Reaction #6), leading to a fast decay in \(u\) by the \(-a_{2}uv\) term (related to Reaction #2). The inhibitor \(v\) then decays via Reaction #5 to the rest state and activation can happen again. In addition, both species diffuse with diffusion coefficients \(D_{u}\) and \(D_{v}\), respectively, where it is assumed that \(D_{u}<D_{v}\). \begin{table} \begin{tabular}{c c c c c} \hline \hline No. & Reaction & Propensity & \(u\) & \(v\) \\ \hline 1 & \(u\to\emptyset\) & \(a_{1}u\) & \(-1\) & \(0\) \\ 2 & \(u\to\emptyset\) & \(a_{2}uv\) & \(-1\) & \(0\) \\ 3 & \(\emptyset\to u\) & \(a_{3}u^{2}/(a_{4}+u^{2})\) & \(1\) & \(0\) \\ 4 & \(\emptyset\to u\) & \(a_{5}\) & \(1\) & \(0\) \\ 5 & \(v\to\emptyset\) & \(\varepsilon c_{1}v\) & \(0\) & \(-1\) \\ 6 & \(\emptyset\to v\) & \(\varepsilon c_{2}u\) & \(0\) & \(1\) \\ \hline \hline \end{tabular} \end{table} Table 1: _The chemical reactions that determine the actin wave dynamics from [1]._ It is important to realise that, in both approaches, the modelled actin waves happen on the surface of the cell, and, as in [1], we only study a slice of this surface. Therefore, the spatial domain must be thought of as an (approximate) circle. For deterministic RDEs like (1.1), a plethora of analytical tools are available (see, for instance, Appendix B) and numerical simulations are relatively straightforward. However, being a deterministic equation, this RDE does not show the same stochastic dynamics as the Gillespie simulations and experiments. A crucial difference between the macroscopic RDE model (1.1) and the Gillespie simulations revolves around the duration of the patterns. In the RDE, an established pattern, e.g. a standing or travelling wave, will, if uninterrupted, remain there for a very long time, while these patterns are destroyed quickly both in stochastic simulations and in experiments. Furthermore, when the rest state of the RDE (1.1) is stable, activation cannot come from the RDE itself: it needs an external signal large enough to activate the nonlinear term \(a_{3}u^{2}/(a_{4}+u^{2})\) related to Reaction #3. We generally refer to the activation of these patterns as activation events. It is important to realise that the dynamics of the different chemical processes in the cell are inherently stochastic, and at the size of a single cell, chemical reactions are not well approximated by their large-scale limits, as Figures 1.1 and 1.2 show. In other words, treating the relevant enzymes and receptors like a continuous medium of infinitely many, infinitely small, particles is invalid, and the stochastic nature of reactions between individual molecules becomes important. This so-called _internal noise_ can serve as a signal to activate the dynamics if it is large enough at a certain point in space and time. As we noted before, the cell hence executes a random walk in the absence of a signal2. This implies that an external signal does not necessarily activate the dynamics at a certain point on the membrane, but rather changes the random walk of the cell into a biased random walk in the direction of the signal. Using a more extended model than presented here, it is shown in [2] that coupling an external signal to the stochastic dynamics of the cell can indeed lead to movement in the direction of that signal. Footnote 2: Describing the motion of free cells is a very subtle problem and random motion does not necessarily mean Brownian motion [27, 31]. Instead of studying the complex internal dynamics of the cell, it can be advantageous to perturb the deterministic RDE (1.1).
For instance, in [1], an external source of noise is applied to the RDE (1.1), turning it into a Stochastic RDE (or Stochastic Partial Differential Equation (SPDE)). While this approach can indeed activate the dynamics and make long-term deterministic waves collapse, it is inherently _ad hoc_ and not _a priori_ based on any of the involved biologically relevant processes. In between the macroscopic level of the RDE and the microscopic level of the chemical reactions, one can derive a mesoscopic SPDE, known as a Chemical Langevin Equation (CLE) [18], that also incorporates the internal noise of the cell. In §2, we will show that the SPDE associated with the chemical reactions as described in Table 1 plus diffusion is given by \[\begin{split} du&=\left(D_{u}\partial_{xx}u-(a_{1}+a_{2}v)u+\frac{a_{3}u^{2}}{a_{4}+u^{2}}+a_{5}\right)dt+\sigma\sqrt{(a_{1}+a_{2}v)u+\frac{a_{3}u^{2}}{a_{4}+u^{2}}+a_{5}}\,dW_{t}^{1}\\ &\qquad\qquad+\sigma\partial_{x}\sqrt{2D_{u}u}\,d\tilde{W}_{t}^{1},\\ dv&=\left(D_{v}\partial_{xx}v+\varepsilon(-c_{1}v+c_{2}u)\right)dt+\sigma\sqrt{\varepsilon(-c_{1}v+c_{2}u)}\,dW_{t}^{2}+\sigma\partial_{x}\sqrt{2D_{v}v}\,d\tilde{W}_{t}^{2}.\end{split} \tag{1.2}\] Here, \((dW_{t}^{1},dW_{t}^{2})\) and \((d\tilde{W}_{t}^{1},d\tilde{W}_{t}^{2})\) are two independent noise vectors with space-time white noise (each component is also independent of the other) and \(\sigma\) is a measure for the strength of the noise. Indeed, in the no-noise limit \(\sigma\to 0\) the mesoscopic SPDE (1.2) reduces to the macroscopic RDE (1.1). In that sense, \(\sigma\) serves as a scale parameter. The main advantage of the SPDE description is, on one hand, that the solutions still show the rich dynamics of the Gillespie models, i.e. the activation and destruction of waves, but are computationally significantly less expensive. On the other hand, since the SPDE in the no-noise limit reduces to the deterministic RDE model (1.1), we can use well-developed Partial Differential Equation (PDE) theory to gain insight into the dynamics of the RDE (1.1) and use this to study the closely related SPDE, see for instance [19, 25]. To give an idea of the differences between the deterministic and stochastic models, we plot two simulations in Figure 1.2 that will be discussed later in §3. It is clear that the simulation of the SPDE paints a much more dynamic picture than the deterministic one, which is more in line with the inherently noisy nature of the cell's chemical processes. Hence, SPDEs are an invaluable tool in unravelling the dynamics of a cell. Figure 1.2: _Comparison of the deterministic model (1.1) and its stochastic counterpart (1.2). In Figure (a) we show a simulation of (1.1), which is excited at \(t=0\), resulting in two counterpropagating travelling waves. In the stochastic simulation in Figure (b), the influence of the initial excitation quickly disappears and new pulses appear constantly. The same parameters are used as in the simulations shown in the second row of Figure 1.1. Observe the similarities in the shape of the pattern. In Figure (a), the waves travel around the cell where they cancel each other, while in Figure (b) the waves cancel each other at a much shorter scale. See §3.4 for more details._ This article is now organised as follows. In §2 we explain how to derive the SPDE (1.2) from Table 1.
Subsequently, in §3 we study both the SPDE (1.2) and the RDE (1.1) numerically in different parameter regimes and qualitatively compare the observed dynamics to the Gillespie simulations from [1]. In §4, we discuss the results and how they relate to the questions posed in this introduction. ## 2 Derivation of the SPDE Our starting point to derive (1.2) is the set of chemical reactions as laid out in Table 1. First, we introduce the column vector \(X(t)=(u(t),v(t))^{T}\), where \(T\) indicates that we transpose the row vector, and the column vector \(\mathcal{R}(X(t))\) with the propensities of the six reactions: \[\mathcal{R}(X(t))=\left(a_{1}u(t),a_{2}u(t)v(t),\frac{a_{3}u(t)^{2}}{a_{4}+u(t)^{2}},a_{5},\varepsilon c_{1}v(t),\varepsilon c_{2}u(t)\right)^{T}.\] The associated stoichiometric matrix \(\mathcal{S}\), which describes the change in \(X(t)\) for each reaction, is then given by \[\mathcal{S}=\begin{pmatrix}-1&-1&1&1&0&0\\ 0&0&0&0&-1&1\end{pmatrix}, \tag{2.1}\] see the last two columns of Table 1. On top of these reactions, we assume that both variables also diffuse, so for a well-mixed solution in a large container we find the classic PDE \[\partial_{t}X=D\partial_{xx}X+\mathcal{S}\mathcal{R}(X), \tag{2.2}\] where \(D\) is a diagonal diffusion matrix with coefficients \(D_{u}\) and \(D_{v}\) on the diagonal [3]. This PDE is identical to the RDE (1.1) and describes the dynamics of \(X(t)\), averaged over many individual reactions. When the number of reacting molecules is large enough, and when we zoom out far enough such that all individual molecules effectively become a density, the macroscopic PDE gives a good approximation of the microscopic behaviour. Statistically speaking, this means that the probability distribution of all possible states must be very sharply peaked around the average value described by the PDE, so the deviations from the mean can be ignored. ### Motivating Example The assumption that we can ignore deviations from the mean is not always valid. For example, in population dynamics, we can write down birth-death models for several hundred individuals, and with this number of individuals, random deviations from the mean are actually significant. To further exemplify, and to set the stage for the upcoming derivation, let us study such a simple discrete birth-death process: suppose a population is at time \(t\) in state \(X(t)\). In the next timestep \(dt\), there are three possible outcomes: (i) the population grows by one individual with probability \(b(X(t))dt\), (ii) the population decreases by one individual with probability \(d(X(t))dt\), or (iii) nothing happens to the population with probability \(1-b(X(t))dt-d(X(t))dt\). Now, assume we have a continuous Stochastic Differential Equation (SDE) \[dx(t)=f(x(t))dt+g(x(t))d\beta_{t}, \tag{2.3}\] where \(\beta_{t}\) is Brownian motion, i.e. we can think of \(d\beta_{t}\) as a random step with average zero and variance \(dt\). We now ask the question: "When is this continuous SDE a good approximation of the described discrete birth-death process?". Or, more precisely, "What should \(f(x)\) and \(g(x)\) be such that (2.3) is a good approximation of the described discrete process?".
Given a solution \(x\) of the SDE, we see that the expected value of \(x(t+dt)\) is approximated, at lowest order in \(dt\), by \[E[x(t+dt)]=x(t)+f(x(t))dt+\mathcal{O}(dt^{2}).\] For the described birth-death process, we have that the expectation is \[E[X(t+dt)]=X(t)+[b(X(t))-d(X(t))]dt.\] Hence, the average expected jump size in population is identical for the SDE (2.3) and the birth-death process if we take \(f(x):=b(x)-d(x)\). Next, we compute the deviation from the mean of the SDE (2.3), \[\mathrm{Var}[x(t+dt)]=\mathrm{Var}[g(x(t))d\beta_{t}]+\mathcal{O}(dt^{2})=g(x(t))^{2}dt+\mathcal{O}(dt^{2})\,,\] while this deviation for the birth-death process is \[\mathrm{Var}[X(t+dt)]=\left[b(X(t))+d(X(t))\right]dt+\mathcal{O}(dt^{2}).\] Therefore, to make these deviations coincide at first order in \(dt\), we must take \(g(x):=\sqrt{b(x)+d(x)}\). Hence, the process \(x(t)\) described in (2.3), which is continuous in population size and time, is a good approximation of the discrete process \(X(t)\) when \[dx(t)=(b(x(t))-d(x(t)))dt+\sqrt{b(x(t))+d(x(t))}d\beta_{t}. \tag{2.4}\] The stochastic process \(x(t)\) shares its average and variance with \(X(t)\) but differs in other points. Higher-order moments of \(x(t)\) and \(X(t)\) will not be identical, and \(x(t)\) can become negative, even when \(b\) and \(d\) are chosen such that this is not possible in the discrete model. In order to link the SDE above to chemical reactions, we make the following observation. The birth of an individual can be thought of as the chemical reaction \(\emptyset\to X\) with propensity \(b(X)\) and stoichiometric value \(1\), while the death of an individual can be seen as the chemical reaction \(X\to\emptyset\) with propensity \(d(X)\) and stoichiometric value \(-1\). Next, we make an assumption which is called the _leap condition_[3]. That is, we assume that, given a state \(X(t)\), enough reactions happen in the interval \([t,t+dt]\) to describe the average jump size in \([t,t+dt]\) by a Poisson process whose parameters depend on \(X(t)\). With this leap condition assumption, we implicitly also assume that \(X(t)\) is a good approximation of the solution in the whole time interval \([t,t+dt]\). We now turn the discrete process \(X(t)\) into a continuous process \(x(t)\) by approximating the discrete Poisson process by a continuous Gaussian, see [24] for details. This approach results in an SDE similar to the SDE (2.4): \[dx(t)=(b(x(t))-d(x(t)))dt+\sqrt{b(x(t))}d\beta_{t}^{1}-\sqrt{d(x(t))}d\beta_{t}^{2}, \tag{2.5}\] for two independent Brownian motions \(\beta_{t}^{1}\) and \(\beta_{t}^{2}\). Although visually different from (2.4), both SDEs have a noise term that is Gaussian with identical average and variance. Therefore, both SDEs describe the same stochastic process, and hence we can say that (2.4) and (2.5) are equivalent.
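A short Euler-Maruyama discretization of the birth-death CLE (2.4) illustrates the scheme; the example rates below (linear birth, quadratic death) are hypothetical choices for demonstration, not taken from the text.

```python
import numpy as np

def birth_death_cle(x0, b, d, dt, nsteps, rng=np.random.default_rng(1)):
    """Euler-Maruyama discretization of (2.4):
    dx = (b(x) - d(x)) dt + sqrt(b(x) + d(x)) dbeta_t."""
    x = np.empty(nsteps + 1)
    x[0] = x0
    for n in range(nsteps):
        drift = b(x[n]) - d(x[n])
        diff = np.sqrt(max(b(x[n]) + d(x[n]), 0.0))   # clip a negative radicand
        x[n+1] = x[n] + drift*dt + diff*np.sqrt(dt)*rng.standard_normal()
    return x

# example: birth b(x) = x, death d(x) = x**2/K with carrying capacity K = 100
# path = birth_death_cle(50, lambda x: x, lambda x: x**2/100, 1e-3, 10_000)
```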
Equation (2.6) consists of two parts, a local equation that describes the kinetics as in SDE (2.4) \[dX(t)=\mathcal{S}\mathcal{R}(X(t))dt+\frac{1}{\sqrt{\Omega}}\mathcal{S}\sqrt{\mathrm{diag}(\mathcal{R}(X(t)))}dW_{t} \tag{2.7}\] and a stochastic diffusion equation \[dX(t)=D\partial_{xx}X(t)\,dt+\frac{1}{\sqrt{\Omega}}\partial_{x}\sqrt{2DX(t)}d\tilde{W}_{t}, \tag{2.8}\] as derived in [10]. Here, \(dW_{t}\) and \(d\tilde{W}_{t}\) are two independent vectors with space-time white noise. The vector \(dW_{t}\) has \(N\) components coming from the \(N\) reactions, while \(d\tilde{W}_{t}\) has the dimension \(M\) of \(X(t)\). SPDE (2.6) is known as the Chemical Langevin Equation (CLE) [18]. The vector \(X(t)\) now describes the densities of the molecules involved, not the actual number of molecules. How well the discrete number of molecules is approximated by a density is determined by the scale parameter \(\Omega\), which is in that sense a measure for the _noisiness_ of the system. In the no-noise limit \(\Omega\to\infty\), we recover the classic RDE (2.2). In contrast, for small \(\Omega\) the dynamics of the discrete process is dominated by random events and the discrete process should be described in full detail by a _chemical master equation_ [17]. The CLE can be understood as the lowest order approximation of the chemical master equation for large \(\Omega\), see [3] for more details. For an overview of all different paths leading from molecular kinetics to (S)PDEs, see [26, Fig. 3.4]. It is important to realise that SPDE (2.6) does not necessarily inherit all the statistical properties of the chemical master equation, only averages and variances. Another potential issue is that it does not necessarily ensure positivity of the solutions. Just as (2.4) and (2.5) are equivalent, we can rewrite (2.6) in the following way: \[dX(t)=(D\partial_{xx}X(t)+\mathcal{S}\mathcal{R}(X(t)))dt+\frac{1}{\sqrt{\Omega}}\sqrt{\mathcal{S}\mathrm{diag}(\mathcal{R}(X(t)))\mathcal{S}^{T}}dW_{t}+\frac{1}{\sqrt{\Omega}}\partial_{x}\sqrt{2DX(t)}d\tilde{W}_{t}. \tag{2.9}\] This time, the noise vector \(dW_{t}\) has just \(M\) components, reducing the number of random vectors that must be generated (when \(M<N\)). The downside is that the computation of \(\sqrt{\mathcal{S}\mathrm{diag}(\mathcal{R}(X))\mathcal{S}^{T}}\) is in general numerically more expensive than the computation of \(\mathcal{S}\sqrt{\mathrm{diag}(\mathcal{R}(X))}\). However, in the present setting, there are no connections between the two variables in the stoichiometric matrix \(\mathcal{S}\) (2.1) and the matrix \(\mathcal{S}\mathrm{diag}(\mathcal{R}(X))\mathcal{S}^{T}\) is thus diagonal, making the computation of the square root trivial. Note that once we have the CLE (2.9), it can be applied to any set of chemical reactions and can therefore have widespread use. For example, we can now return to Table 1 and apply the CLE to these reactions, which results in \[du=\left(D_{u}\partial_{xx}u-(a_{1}+a_{2}v)u+\frac{a_{3}u^{2}}{a_{4}+u^{2}}+a_{5}\right)dt+\sigma\sqrt{(a_{1}+a_{2}v)u+\frac{a_{3}u^{2}}{a_{4}+u^{2}}+a_{5}}dW_{t}^{1}\] \[+\sigma\partial_{x}\sqrt{2D_{u}u}d\tilde{W}_{t}^{1}, \tag{2.10}\] \[dv=\left(D_{v}\partial_{xx}v+\varepsilon(-c_{1}v+c_{2}u)\right)dt+\sigma\sqrt{\varepsilon(c_{1}v+c_{2}u)}dW_{t}^{2}+\sigma\partial_{x}\sqrt{2D_{v}v}d\tilde{W}_{t}^{2}.\] Note that, as for the \(u\)-component, the noise coefficient is the square root of the _sum_ of the propensities of the reactions affecting \(v\). For notational convenience, we replaced \(1/\sqrt{\Omega}\) by a small parameter \(\sigma\), resulting in the SPDE (1.2) from the introduction.
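To illustrate how (2.10) can be put on a computer, the sketch below discretises the SPDE with central finite differences on a periodic grid and an explicit Euler-Maruyama step. This is only one possible scheme and not necessarily the one of Appendix A; the domain, grid size, time step and the clipping of negative arguments under the square roots are pragmatic choices of ours. The parameters and the pulse-shaped initial condition are those used for the standing-wave simulations in §3.1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters from the standing-wave regime (c1 = 0.1)
Du, Dv = 0.1, 1.0
a1, a2, a3, a4, a5 = 0.167, 16.67, 167.0, 1.44, 1.47
eps, c1, c2 = 0.52, 0.1, 3.9
sigma = 0.046

L, N, dt, T = 20.0, 256, 1e-4, 20.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

def lap(w):   # periodic finite-difference Laplacian
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

def dxc(w):   # periodic central first derivative
    return (np.roll(w, -1) - np.roll(w, 1)) / (2 * dx)

u = 0.0523 + np.exp(-x**2)          # crude pulse on the background state
v = 2.0394 + 2 / np.cosh(5 * x)**2

for _ in range(int(T / dt)):
    Ru = (a1 + a2 * v) * u + a3 * u**2 / (a4 + u**2) + a5   # summed propensities for u
    Rv = eps * (c1 * v + c2 * u)                            # summed propensities for v
    # four independent space-time white noises, one sample per cell and step
    W1, W2, W3, W4 = rng.normal(size=(4, N)) / np.sqrt(dt * dx)
    u = u + dt * (Du * lap(u) - (a1 + a2 * v) * u + a3 * u**2 / (a4 + u**2) + a5
                  + sigma * np.sqrt(np.maximum(Ru, 0)) * W1
                  + sigma * dxc(np.sqrt(np.maximum(2 * Du * u, 0)) * W2))
    v = v + dt * (Dv * lap(v) + eps * (-c1 * v + c2 * u)
                  + sigma * np.sqrt(np.maximum(Rv, 0)) * W3
                  + sigma * dxc(np.sqrt(np.maximum(2 * Dv * v, 0)) * W4))
```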
In the remainder of this work, we will study the SPDE above, mainly using numerical techniques. **Remark 1**.: _It is important to realise that the SPDE above does not have a function-valued solution in general. The term \(\partial_{x}\sqrt{2DX(t)}d\tilde{W}_{t}\) makes the equation ill-posed and solutions can only be understood in terms of distributions. Therefore, it is not a priori clear if the numerical solutions shown in the next section converge to a solution of the SPDE when the spatio-temporal discretisations \(dx\) and \(dt\) are sent to 0. In §3.4, we will discuss the implications of omitting this term on the wave dynamics._ ## 3 Simulations In this section, we will numerically investigate the PDE (1.1) and the SPDE (2.10). We investigate three of the main building blocks of the PDE dynamics: localised standing waves, localised travelling waves and time-periodic solutions, together with their counterparts in the SPDE. However, before we can investigate the dynamics, we must first establish some basic properties of the (S)PDE, like the existence, uniqueness and stability of the background state(s). Since we are interested in localised waves and expect the activator to be _at rest_ otherwise, these localised waves can only exist if the spatially homogeneous background state is stable. In contrast, for the time-periodic solutions, we expect the background state to be unstable, such that continuous excitations of the background state can happen. The possible background states \((u^{*},v^{*})\) of (1.1) are given by the positive real solutions of the \(u\)-nullcline and \(v\)-nullcline \[0=-a_{1}u-a_{2}uv+\frac{a_{3}u^{2}}{a_{4}+u^{2}}+a_{5}\,,\qquad 0=\varepsilon(-c_{1}v+c_{2}u)\,. \tag{3.1}\] See Figure 3.1a for a typical representation of the shape of the nullclines. Since the system parameters are all assumed to be positive, this is equivalent to finding the positive solutions \(u^{*}\) of \[-\frac{a_{2}c_{2}}{c_{1}}u^{4}-a_{1}u^{3}+\left(a_{3}+a_{5}-\frac{a_{2}a_{4}c_{2}}{c_{1}}\right)u^{2}-a_{1}a_{4}u+a_{5}a_{4}=0\,,\] with \(v^{*}=c_{2}u^{*}/c_{1}\). Due to the complexity of the general solution formula for quartic polynomials, it is not feasible to write down its solutions explicitly. However, by Descartes' rule of signs [7] we know that there is only one positive real root if \(c_{1}(a_{3}+a_{5})<a_{2}a_{4}c_{2}\) and one or three positive real roots otherwise3. The stability of a background state \((u^{*},v^{*})\) is then determined by the eigenvalues of the associated Jacobian matrix Footnote 3: Note that the origin \((0,0)\) is only a background state if \(a_{5}=0\). \[J(u^{*},v^{*})=\begin{pmatrix}-a_{1}-a_{2}v^{*}+\frac{2a_{3}a_{4}u^{*}}{(a_{4}+(u^{*})^{2})^{2}}&-a_{2}u^{*}\\ \varepsilon c_{2}&-\varepsilon c_{1}\end{pmatrix}. \tag{3.2}\] Since we do not have an explicit formula for \((u^{*},v^{*})\), we must compute these eigenvalues numerically. For example, when we allow one free parameter, e.g. \(c_{1}\), and fix the other values, then we can compute Figure 3.1: _(a) The green line is the \(v\)-nullcline for \(c_{1}=0.18\), while the red line is the nullcline for \(c_{1}=0.35\). The blue line is the \(u\)-nullcline, independent of \(c_{1}\). The \(u\)-axis is plotted logarithmically to better highlight the shape of the nullcline for small \(u\). Note how the background state moved around the fold.
(b) Visual representation of the evolution of the two (complex) eigenvalues of the Jacobian matrix (3.2) for \(c_{1}\) varying from \(0.18\) (dark blue) to \(0.35\) (yellow), following the black arrows. The other parameters are fixed at \(a_{1}=0.167\), \(a_{2}=16.67\), \(a_{3}=167\), \(a_{4}=1.44\), \(a_{5}=1.47\), \(\varepsilon=0.52\) and \(c_{2}=3.9\)._ the background states and the associated eigenvalues of the Jacobian matrix. Taking the parameter values \(a_{1}=0.167\), \(a_{2}=16.67\), \(a_{3}=167\), \(a_{4}=1.44\), \(a_{5}=1.47\), \(\varepsilon=0.52\) and \(c_{2}=3.9\) from [1] and letting \(c_{1}\) range from \(0.18\) to \(0.35\), such that \(c_{1}(a_{3}+a_{5})<a_{2}a_{4}c_{2}\), results in one admissible positive background state ranging from \((u^{*},v^{*})\approx(0.077,1.669)\) to \((u^{*},v^{*})\approx(0.142,1.586)\). Initially, for the lower values of \(c_{1}\), the eigenvalues are real and negative, resulting in a stable background state. Increasing the value of \(c_{1}\) to approximately \(0.25\) results in complex eigenvalues, still with negative real parts. When we further increase the value of \(c_{1}\) to approximately \(0.29\), both eigenvalues cross the imaginary axis, i.e. the background state undergoes a Hopf bifurcation and we expect to see time-periodic solutions. See Figure 3.1b for a visual representation of the evolution of the eigenvalues. In Figure 3.1a we show the nullclines for \(c_{1}=0.18\) and \(c_{1}=0.35\). The unique background state moves along the fold in the \(u\)-nullcline, and as long as the background state lies between the two folds, the fixed point is unstable. In the next sections, we will study localised standing and travelling waves for the same parameter set with \(c_{1}<0.25\) and time-periodic solutions with \(c_{1}>0.29\). The complex dynamics of pulse adding for \(c_{1}\)-values in the intermediate regime between these two boundary values, where the eigenvalues of the Jacobian are complex-valued with negative real parts, is outside the scope of this work, see for example [4] for more information. So far, we have only looked at background states, which are spatially homogeneous. However, we are interested in spatially nonhomogeneous patterns. By definition, a localised wave is a fixed profile \((\Phi_{u},\Phi_{v})\) that moves with a fixed speed \(c\) (possibly zero). Therefore, when we change the spatial coordinate \(x\) to \(\xi=x-ct\) using the chain rule, the profile \((\Phi_{u},\Phi_{v})\) is a stationary solution of the following shifted Ordinary Differential Equation (ODE): \[\begin{split} 0&=D_{u}\partial_{\xi\xi}\Phi_{u}+c\partial_{\xi}\Phi_{u}-(a_{1}+a_{2}\Phi_{v})\Phi_{u}+\frac{a_{3}\Phi_{u}^{2}}{a_{4}+\Phi_{u}^{2}}+a_{5},\\ 0&=D_{v}\partial_{\xi\xi}\Phi_{v}+c\partial_{\xi}\Phi_{v}+\varepsilon(-c_{1}\Phi_{v}+c_{2}\Phi_{u}).\end{split} \tag{3.3}\] This ODE problem can be solved using numerical fixed-point algorithms. For these algorithms, a crude starting point is needed for the profile and the value of \(c\), which can come from a PDE simulation. Note that this problem is translation invariant, meaning that we find a one-dimensional family of travelling waves, all shifted versions of each other. Hence, for the solver to converge, an extra condition to fix the location of the wave is necessary. ### Standing Waves In this section, we will study standing waves, which means we look for solutions of (3.3) with \(c=0\). A solution to this ODE is shown in Figure 3.2a.
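As a sketch of this fixed-point approach for the standing wave (\(c=0\)), one can discretise (3.3) on a periodic grid and hand the residual to a root finder; here scipy.optimize.fsolve serves as a stand-in for the Matlab fsolve used for Figure 3.2. The domain and grid size are illustrative, and no phase condition is imposed, so the solver relies on a reasonable, roughly centred initial guess.

```python
import numpy as np
from scipy.optimize import fsolve

# Parameters of the standing-wave regime (Figure 3.2)
Du, Dv = 0.1, 1.0
a1, a2, a3, a4, a5 = 0.167, 16.67, 167.0, 1.44, 1.47
eps, c1, c2 = 0.52, 0.1, 3.9

L, N = 20.0, 200
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

def lap(w):   # periodic Laplacian; with c = 0 the advection terms drop out
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

def residual(z):
    u, v = z[:N], z[N:]
    ru = Du * lap(u) - (a1 + a2 * v) * u + a3 * u**2 / (a4 + u**2) + a5
    rv = Dv * lap(v) + eps * (-c1 * v + c2 * u)
    return np.concatenate([ru, rv])

# Crude initial guess: the same pulse used for the PDE simulations below
u0 = 0.0523 + np.exp(-x**2)
v0 = 2.0394 + 2 / np.cosh(5 * x)**2
z = fsolve(residual, np.concatenate([u0, v0]))
u_wave, v_wave = z[:N], z[N:]
```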
We observe that both components \(u\) and \(v\) indeed start at and return to their background state \((u^{*},v^{*})\approx(0.0523,2.0394)\). We observe that the activator \(u\) changes rapidly in a small region in the spatial domain and we, therefore, call the activator \(u\) the _fast variable_. On the other hand, the inhibitor \(v\) is the _slow variable_ as it changes more gradually over a larger spatial distance. Figure 3.2b shows the corresponding phase plane. The majority of the spatial dynamics happens near the lower branch of the \(u\)-nullcline before it has a fast excursion from the lower branch to the upper branch of this nullcline and, by the symmetry \(x\mapsto-x\) of the ODE (3.3), it then returns to the lower branch in a similar fashion. The fact that both components of the standing pulse evolve on different spatial scales allows us to mathematically analyse this standing pulse, see Appendix B. For instance, the value \(\bar{v}\) at which the activator \(u\) makes a sharp transition (approximately \(3.8\) in Figure 3.2b) can be approximated by the algebraic relation (B.10). The analysis also explains why the solution trajectory in the phase plane closely follows the lower branch of the \(u\)-nullcline for most of the trajectory. By assumption, the standing wave in Figure 3.2a is a stationary solution of the PDE (1.1). This can be confirmed by using the wave from the ODE as the initial condition for a PDE simulation (not shown). However, we are not likely to find this single standing wave in a PDE simulation Figure 3.3: Simulation of the PDE (1.1), Figure (a) shows the activator \(u\) and Figure (b) the inhibitor \(v\) with an initial condition as described in the main text. The same parameters are used as in Figure 3.2. Note that the \(v\)-component does not return to its rest state in the region between the two pulses. Figure 3.2: Figure (a) shows a localised standing wave solution to ODE (3.3), found numerically with Matlab’s fsolve. The green curve is the \(u\)-component and the red curve is the \(v\)-component. In Figure (b), the \(u\)-nullcline (blue) and \(v\)-nullcline (green) of (1.1) are shown together with the \(v\)-\(u\) phase plane of the standing wave from Figure (a). The phase plane is plotted on a semi-log scale to better highlight the dynamics for small \(u\). We observe that the standing wave starts from the background state (indicated by an asterisk) and initially follows the lower branch of the \(u\)-nullcline before jumping to the upper branch of the \(u\)-nullcline and follows the same track back to the background state. The system parameters are taken from [1] and set to \(D_{u}=0.1\), \(D_{v}=1\), \(a_{1}=0.167\), \(a_{2}=16.67\), \(a_{3}=167\), \(a_{4}=1.44\), \(a_{5}=1.47\), \(\varepsilon=0.52\), \(c_{1}=0.1\), and \(c_{2}=3.9\). without a fine-tuned initial condition. As an example, we use for the simulation the initial condition \(u_{0}=u^{*}+e^{-x^{2}}\) and \(v_{0}=v^{*}+2/\cosh^{2}(5x)\) as a crude approximation of the wave. The resulting simulation is shown in Figure 3.3. This initial condition splits into what appear to be two well-separated localised standing waves4. However, the plot of the slow \(v\)-component makes clear that this is not the case, and that the two standing waves are connected through the slow component, i.e. the slow component is not in its rest state in between the two standing waves. For more details on the numerics of the (S)PDE simulations, see Appendix A.
Footnote 4: We also observe the evolution of the initial condition back to the stable background state \((u^{*},v^{*})\), especially for initial conditions with smaller amplitudes. Simulations are not shown. The interaction between the two standing waves in Figure 3.3 through the slow \(v\)-component causes the two standing waves to repel each other on a very slow timescale, as is made clear by taking long integration times, see Figure 3.4b. On an infinite domain, the two standing waves slowly drift apart forever, but on a periodic domain, we can expect them to stabilise once they are at an equal distance on both sides. On the timescales of biological processes, this slow continuous splitting is probably not relevant and on short timescales, the term 'standing waves' for the solution at later times in Figure 3.3 is biologically justifiable. Furthermore, note that for our understanding of the presented dynamics, it is essential to look at both components simultaneously. In other words, for our understanding of Figure 3.3a it is essential to also look at Figure 3.3b. We now take a closer look at the short-time dynamics presented in Figure 3.4a. In [1], this splitting of the initial condition is described as two counter-propagating travelling waves, sometimes called trigger waves [14]. By the formal mathematical definition, a travelling wave is a fixed profile moving with a fixed speed, i.e. a solution of (3.3). Therefore, mathematically speaking, these do not classify as travelling waves. Instead, what we observe here would be classified as transient dynamics and pulse splitting. However, it is clear that at \(t=0\), the activity of \(u\) is around \(x=0\), and after some time it has moved to two different places, justifying the term 'travelling'. If we adopt the terms 'standing' and 'travelling', it is clear from Figure 3.3a that around \(t=3\) a transition occurs from travelling to standing. Standing waves with noise. For the same parameter values as in the previous paragraph, we now study the full SPDE (2.10). In Figure 3.5, we plot realisations of the SPDE for different noise intensities. For low noise levels, we see two quasi-stationary waves appear, like in Figure 3.3, before they are destroyed at different points in time by the noise. Since the noise is low, no new activation events happen. When we increase the noise intensity, the noise is able to activate the stable background state, but the waves are also destroyed more quickly, resulting in a constant appearance and disappearance of waves. Note the similarity between Figure 3.5c and the figures in [2], where a Figure 3.4: _Same simulation as in Figure 3.3, but on different time scales. Figure (a) shows the \(u\)-component, zoomed in to highlight the short-time dynamics, while Figure (b) shows the long-time dynamics of \(u\) highlighting the pulse splitting phenomenon. Both simulations were done on a larger grid \([-60,60]\), so the waves would not affect each other on the other side of the domain on this large time scale._ similar model is studied using Gillespie algorithms. This activation of the background state is not possible in the deterministic PDE (1.1) without an external force. In Figures 3.5b and 3.5c, we see that in the first instances, many patterns are generated, causing the inhibitor to increase everywhere, which blocks new activation events. After this initial phase, new activation events appear, and significantly more for higher values of the noise as expected.
When we increase the noise even further, it becomes impossible to form patterns as every activation event is destroyed instantly. Therefore, pattern formation happens at intermediate values of the noise. The idea that there is some 'optimal' value of the noise resulting in complex dynamics has been observed before in, for instance, the context of nerve impulses [13]. In order to quantify this notion of optimality in the noise intensity, we must first measure the size and shape of the patterns in Figures 3.5b and 3.5c. Using Matlab's regionprops algorithm we can automatically detect the patches with a high value for the activator \(u\) (see Appendix A for details), giving us the possibility to compute the number of activation events and determine the width and duration of each event, see Figure 3.6a. In Figure 3.6b we show the statistics for a range of \(\sigma\) values. This figure shows that there is a clear cutoff for when activation events are likely to happen. For values of \(\sigma<0.035\), the average number of events is lower than 1, and the number of activation events increases sharply after this value. We observe that the width, the length and the maximum height of the events are all higher when the number of excitation events is low, but the variability in these values is also larger. In Figure 3.7, we look at the statistics of the events for the specific value \(\sigma=0.046\). The value of the maximum is sharply peaked. This is something we expect, as the maximum is mainly determined by the deterministic dynamics after the excitation. The width and length of the events are much more spread out. Especially for the width, we see a heavy tail towards zero. This is also expected because activation events come in two forms. Most events result in two waves, but a small part of the events has the shape of just a single wave, which has a width of 0.87 in the deterministic case. We checked whether or not these histograms are well approximated by a Figure 3.5: _The \(u\)-component of the SPDE (2.10) for four different values of the noise \(\sigma\). The other system parameters and initial conditions are the same as in the previous figures. In Figure (a), we only show the simulation up to \(T=20\) because the solution remains in the background state afterwards; the other three figures are shown up to \(T=100\)._ Gaussian distribution, but this was rejected using a Kolmogorov-Smirnov test (\(p\sim 10^{-14}\)). Using the statistics on the width, length, and maximum, we can compare the solutions of SPDE (2.10) to SPDEs with the same deterministic part but different noise terms. First, we can set the \(\partial_{x}\sqrt{2DX}d\tilde{W}_{t}\) term coming from the diffusion to zero. As noted in Remark 1, this term makes the mathematical analysis of the SPDE (2.10) significantly harder. Figures 3.7d-3.7f show that the statistics of the solutions do not change significantly when we delete this term. This indicates that the noise coming from the reaction terms plays a more influential role in determining the shape of the patterns. We are now also in a position to compare the CLE approach with the more ad hoc approach of adding additive white noise to the \(u\)-component to mimic the inherent noisiness of the system, see Figures 3.7g-3.7i. Indeed, the properties of the patterns are significantly different when we compare them to the full SPDE.
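The event statistics used in these comparisons can be gathered along the following lines; scipy.ndimage.label is used here as a stand-in for Matlab's regionprops mentioned above, and the threshold separating 'excited' cells from the background is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def event_statistics(U, dx, dt, threshold=1.0):
    """Detect connected regions of high activator in a space-time
    array U[t, x] and return (width, duration, maximum) per event."""
    labels, n_events = ndimage.label(U > threshold)
    stats = []
    for k in range(1, n_events + 1):
        t_idx, x_idx = np.nonzero(labels == k)
        width = (x_idx.max() - x_idx.min() + 1) * dx
        duration = (t_idx.max() - t_idx.min() + 1) * dt
        stats.append((width, duration, U[labels == k].max()))
    return stats
```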
In particular, with just white noise, the patterns are all short and narrow and do not reflect the complicated dynamics of the underlying chemical reactions and experiments (not shown). ### Travelling Waves In order to find a travelling wave solution of (1.1), understood as a solution of (3.3) with \(c\neq 0\), we must ensure that the dynamics starting from the initial condition does not reach the standing phase or return to the background state. This can be achieved by increasing the value of \(c_{1}\). Increasing \(c_{1}\) results in a faster exponential decay of \(v\) back to the background state after an excitation, see Table 1, preventing the inhibitor from gluing the two waves together like in Figure 3.3. Simulations for an increased value of \(c_{1}\), from \(0.1\) to \(0.2\)5, are shown in Figure 3.8. Note that the PDE (1.1) still only has one stable background state \((u^{*},v^{*})\approx(0.0833,1.625)\). The initial condition splits into two counter-propagating travelling waves, but in contrast to what happened with the standing wave before, they keep separating and move away from each other at a fixed speed until they collide and cancel each other out due to the periodicity of the domain, see Figure 3.8. Footnote 5: For values in between, say \(c_{1}=0.15\), the numerics becomes very sensitive to the chosen discretisation, see Appendix A. To find a single travelling wave, we again need to properly tune the initial condition. This can be done by selecting one of the two waves in Figure 3.8 and using it as the initial condition of the PDE simulation (not shown). In Figure 3.9 we show the travelling wave profile and its associated phase Figure 3.6: _In Figure (a) we show a simulation similar to those in Figure 3.5, but with \(\sigma=0.45\) and a homogeneous initial condition with \((u^{*},4v^{*})\). The red boxes are the result of the pattern finding algorithm regionprops in Matlab; it identifies all the regions of excitations which we would also find by eye, see Appendix A for details. In Figure (b), we used this algorithm to find the length, width and maximum of these pulses (left axis), as well as the total number of activation events (right axis). For each value of \(\sigma\), the number of events is averaged over 100 simulations, and the length, width and maximum are averaged over all events in the 100 simulations. We plot the average together with the standard deviation._ Figure 3.7: Figures (a)-(c) show the histograms for the width, length and maximum of the pulses for \(\sigma=0.046\), for the same data as in Figure 3.6b. For Figures (a) and (b), the bin width is fixed to \(0.25\), and for Figure (c) to \(0.1\). Figures (d)-(f) show the same histograms, but in the simulations, the noise coming from the diffusion (the last term in (2.10)) was set to 0. In Figures (g)-(i), we again show the same histograms, but with just white noise on the \(u\)-component. In order to compare the noise levels, we did not choose the same \(\sigma\) value for the three cases but chose \(\sigma\) values such that the average number of activation events per simulation is approximately 50. For Figures (d)-(f), this means \(\sigma=0.056\), and for (g)-(i) \(\sigma=0.3\). Figure 3.8: Simulation of the PDE (1.1), Figure (a) shows the activator \(u\) and Figure (b) the inhibitor \(v\). We observe the splitting of the initial condition into two counter-propagating travelling waves with a constant speed that exist until they cancel each other out due to the periodicity of the domain.
The slow inhibitor \(v\) decays back to its rest state in between the pulses. The red dotted line has a speed of \(-2.10\), which is close to the value of approximately \(-2.17\) found by solving (3.3) using a fixed point method. The parameters are \(D_{u}=0.1\), \(a_{1}=0.167\), \(a_{2}=16.67\), \(a_{3}=167\), \(a_{4}=1.44\), \(a_{5}=1.47\), \(D_{v}=1\), \(\varepsilon=0.52\), \(c_{1}=0.2\) and \(c_{2}=3.9\). plane. As with the standing pulse, the dynamics around the \(u\)-nullcline is essential. The solution trajectory starts from near the background state and follows the lower branch of the \(u\)-nullcline, jumps towards the upper branch of the nullcline and keeps following it until it falls off and returns to the lower branch to slowly evolve back towards the stable background state. In contrast to the standing pulse, see Figure 3.2, the travelling wave is no longer symmetric and it jumps back to the lower branch by falling off the edge of the upper branch. These travelling wave solutions could be analysed further using techniques similar to those in Appendix B. It is important to realise that we do not expect to see travelling waves in practice as the travelling wave gets destroyed when it collides with another wave. Therefore, in the stochastic simulations, it might not always be clear if we are looking at a travelling wave that collapses or at the transient dynamics towards a double pulse that subsequently gets destroyed by the noise. Travelling waves with noise. When we return to SPDE (2.10), there are now four regimes for the same parameters as in the previous section. For high values of the noise, we, as before, do not observe any patterns (not shown). For low values of the noise, we just find the travelling wave (if the simulation is initiated by an appropriate initial condition) since the noise is not strong enough to destroy the wave, nor to activate another pattern, on the timescales of the simulation (not shown). The interesting dynamics happens again at the intermediate levels of the noise. As Figure 3.10a shows, the noise activates the dynamics, resulting in many counter-propagating travelling waves. A travelling wave is subsequently annihilated when it collides with a travelling wave coming from the other direction. Hence, the collision dynamics of Figure 3.8 is repeated many times on smaller spatio-temporal scales. We see in Figure 3.10 that after the annihilation of the travelling waves, the slow inhibitor \(v\) initially remains high, preventing the activation of new counter-propagating travelling waves. Only after the inhibitor has sufficiently decayed do we see the activation of new counter-propagating travelling waves by the noise. The creation and annihilation of travelling waves happen at a shorter time scale than the decay of the inhibitor, which makes the dynamics look synchronised, or even periodic. In Figure A.2a we plot the approximate period versus the intensity of the noise. As expected, the period decreases with the intensity of the noise. It differs, however, significantly from the true time-periodic motion we will discuss in §3.3. When we increase the noise, the quasi-periodic pattern is broken up, as the counter-propagating travelling waves are destroyed before they collide and annihilate each other, so no synchronised patterns emerge, see Figures 3.10c and 3.10d. These patterns become relevant when we discuss the comparison between the CLE and the Gillespie simulations in Figure 1.1, see §3.4. Figure 3.9: Profile of a single travelling wave.
Figure (a) shows both components \(u\) (green) and \(v\) (red) and Figure (b) the related phase plane, plotted on a semi-log scale to highlight the dynamics for small \(u\), as well as the nullclines. The asterisk indicates the fixed point. This solution is obtained as the endpoint of a PDE simulation (not shown), i.e. similar to Figure 3.8, but with just one of the two waves as initial condition. ### Time Periodic Solutions In the previous sections, it was essential that the background state of the system was stable, because this allowed the dynamics to return to the rest state after an activation event. When we increase the value of \(c_{1}\), the background state becomes unstable through a Hopf bifurcation, see Figure 3.1b. In the phase plane, this transition is characterised by the fact that the background state is no longer located on the lower branch of the \(u\)-nullcline, as in Figures 3.2b and 3.9b; instead, it lies on the middle branch of the \(u\)-nullcline, see Figure 3.12b. Hence, after an excursion, the solution cannot return to the unstable background state and is excited again, resulting in time-periodic motion. When we start with a spatially homogeneous initial condition, the PDE simulation shows periodic oscillations in time, see Figures 3.11 and 3.12. Both components still display slow-fast behaviour, however, this time not in the spatial variable \(x\) but in the temporal variable \(t\). In the case of nonhomogeneous initial conditions, it takes several oscillations before they are synchronised spatially (not shown). The observed behaviour has the characteristics of a relaxation oscillation as studied intensively for the Van der Pol equation [32]. This is not a surprise as the Van der Pol equation formed the foundation for the classic FitzHugh-Nagumo model and PDE (1.1) can be seen as a variation on this classic model. Time periodic solutions with noise. For small values of the noise \(\sigma\), the observed period is close to the deterministic version, but when the value of \(\sigma\) increases, the period also decreases monotonically, as is expected. Note that after excitation, the inhibitor remains high, preventing activation events. When the noise is too high, no patterns are observed. We can investigate the relation between the reduction of the period and the intensity of the noise. In Figure A.2b, we plot the estimated period versus the noise intensity. We indeed see that the period decreases monotonically Figure 3.10: _Simulation of the SPDE (2.10) for \(\sigma=0.02\), Figures (a) and (b), and \(\sigma=0.05\), Figures (c) and (d). The red dashed line in (a) has a slope of 2.05, close to the deterministic wave speed, but given the short time interval the wave exists, precise estimates are difficult to obtain. We observe that there is a quasi-periodic behaviour with a period of roughly 20. In Figures (c) and (d), the quasi-periodic structure is destroyed. The same parameters and initial condition are used as in Figure 3.8._ Figure 3.11: _Simulation of the PDE (1.1), Figure (a) shows the activator \(u\) and Figure (b) the inhibitor \(v\). By measuring the distances between the maxima of the oscillations we find the estimate \(T=8.14\) for the period of the oscillation. Note that this is significantly smaller than the quasi-periodic oscillations in Figure 3.10.
The parameters are set to \(D_{u}=0.1\), \(a_{1}=0.167\), \(a_{2}=16.67\), \(a_{3}=167\), \(a_{4}=1.44\), \(a_{5}=1.47\), \(D_{v}=1\), \(\varepsilon=0.52\), \(c_{1}=0.4\) and \(c_{2}=3.9\)._ Figure 3.12: _Cross-section of Figure 3.11 at \(x=0\), together with the corresponding phase plane. It is clear that the solution leaves the background state (marked by an asterisk), but does not return to it._ with the noise. ### Wild-Type versus PTEN-null Cells. Now that we have studied several different fundamental patterns, we can focus on understanding the different cell shapes. In [1], two sets of parameters are compared, representing WT cells (i.e. healthy cells) and PTEN-null cells where the tumour-suppressing gene PTEN has been switched off [5]. First, we simulate the deterministic PDE (1.1) for both sets of parameters, see Figure 3.14. We observe that in both parameter regimes, there are two counter-propagating travelling waves, but the specific profiles and speeds are different. In particular, note that the wave in Figure 3.14b is significantly broader and higher than the wave in Figure 3.14a. When noise is applied, the statistics of the dynamics show a clear difference. In Figure 3.15, we compare the SPDE simulations of (2.10) to the Gillespie simulations from [1]. Focusing on the typical shape of the excitations, there is a clear qualitative correspondence between the two types of simulation. Furthermore, in both types of simulation, the average pulse duration is longer in the case of the PTEN-null cell simulations. Note that we show the SPDE simulations on a larger spatio-temporal scale to get a better idea of the distribution in shapes, and the zoom-boxes highlight the detailed structure of a typical single activation event. In the case of PTEN-null cells, the background state can be excited for much lower noise values (\(\sigma\approx 0.007\)), while for WT cells, the noise needs to be twice as large (\(\sigma\approx 0.014\)), as a result of the increased values of \(c_{2}\) and \(a_{3}\). Hence, in PTEN-null cells, an already existing pattern can more easily sustain itself, leading to the elongated shapes of Figure 3.15d. Figure 3.13: _Simulation of the SPDE (2.10). Figure (a) shows the activator \(u\) and Figure (b) the inhibitor \(v\). When we average over the \(x\)-direction and measure the distance between the maxima, we find \(T\approx 7.87\). Same parameters as in Figure 3.11 with \(\sigma=0.01\)._ Figure 3.14: _Two simulations of PDE (1.1) with parameters as in [1]; \(D_{u}=0.1\), \(a_{1}=0.167\), \(a_{2}=16.67\), \(a_{4}=1.44\), \(a_{5}=1.47\), \(D_{v}=1\), \(\varepsilon=0.4\), \(c_{1}=0.1\) and, for Figure (a), \(a_{3}=167\) and \(c_{2}=2.1\), while \(a_{3}=300.6\) and \(c_{2}=3\) for Figure (b). The initial condition is the same as in the previous figures._ ## 4 Discussion \(\&\) Outlook We set out to show how Stochastic Partial Differential Equations (SPDEs), or more specifically, Chemical Langevin Equations (CLEs), can be used to gain more insight into the dynamics of models for cell motility. We have shown for an exemplary set of chemical reactions (see Table 1) that the CLE approach, combined with a basic analysis of the corresponding deterministic PDE, allows us to study the different possible patterns with relative ease, both qualitatively and quantitatively, while remaining close to the underlying chemical processes.
To understand differences in cell behaviour, like the difference between wild-type and cancerous cells as in [1], the study of the statistical properties of the observed dynamics is essential. For instance, an essential characteristic differentiating wild-type cells from cancerous cells is how long a pattern can survive after activation. The simulations in the previous section show that the answer depends not only on the parameters of the system but crucially on the interplay between the parameters and the noise. The CLE can be used to study this interplay. A natural question to ask is whether all the stochastic terms introduced in the CLE (2.9) are really necessary. Could we, for example, ignore the noise term coming from the diffusion, or forget the derivation of the CLE altogether and just _naively_ add an additive white noise term to the equation for \(u\)? The histograms in Figure 3.7 indicate that the effects of the terms that come from the diffusion are minimal (for the parameter values studied here) and therefore that these terms do not contribute meaningfully to our understanding of the cell dynamics. Note that omitting them would solve the problem of the equation being ill-posed, see Remark 1, and would open up possibilities for more rigorous mathematical analysis based on the results in [19]. We also noted that adding just additive white noise changes the statistics significantly, which indicates that completely abandoning the CLE approach throws away too much detail. In this paper, we studied a basic activator-inhibitor system with only a limited number of chemical Figure 3.15: _Comparison of the Gillespie model, Figures (a) and (c) from [1], versus the CLE approximation (2.10), Figures (b) and (d). The same parameters as in Figure 3.14, with \(\sigma=0.06\). The initial condition is \((u^{*},2v^{*})\). This can lead to an immediate excitation of the background state in Figure (d), while in Figure (b), the excitation of the background is more spread out. The zoom-boxes highlight the details of a single excitation._ reactions. However, the derivation of the CLE (2.9) in §2 holds for any number of molecules and for any number of chemical reactions. As such, one can see this paper as a _proof of concept_, and its methodology can be directly applied to more complex regulating systems, such as the eight-component system designed in [2]. In subsequent work, we aim to work on these types of more complex models to better understand the stochastic dynamics that causes the cell to move robustly in one specific direction. Furthermore, as shown in detail in Appendix B, the underlying deterministic RDE (1.1) is amenable to rigorous mathematical analysis using Geometric Singular Perturbation Theory [e.g. 11, 20, 22, 23]. We derived a first-order approximation for the jump location where, under certain conditions, the standing wave has a sharp transition in its activator. This methodology could also be used to, for instance, further analyse the travelling waves to derive approximations for the speed of the waves. In other words, questions about the existence of localised solutions of (1.1) and bifurcations can thus be reduced to understanding relatively simple ODEs and the connections between them. The details of these computations are left as future work.
2309.03138
FMplex: Exploring a Bridge between Fourier-Motzkin and Simplex
In this paper we present a quantifier elimination method for conjunctions of linear real arithmetic constraints. Our algorithm is based on the Fourier-Motzkin variable elimination procedure, but by case splitting we are able to reduce the worst-case complexity from doubly to singly exponential. The adaption of the procedure for SMT solving has strong correspondence to the simplex algorithm, therefore we name it FMplex. Besides the theoretical foundations, we provide an experimental evaluation in the context of SMT solving. This is an extended version of the authors' work previously published at the fourteenth International Symposium on Games, Automata, Logics, and Formal Verification (GandALF 2023).
Jasper Nalbach, Valentin Promies, Erika Ábrahám, Paul Kobialka
2023-09-06T16:22:01Z
http://arxiv.org/abs/2309.03138v3
# FMplex: A Novel Method for Solving Linear Real Arithmetic Problems ###### Abstract In this paper we introduce a novel quantifier elimination method for conjunctions of _linear real arithmetic_ constraints. Our algorithm is based on the _Fourier-Motzkin variable elimination_ procedure, but by case splitting we are able to reduce the worst-case complexity from doubly to singly exponential. The adaptation of the procedure for SMT solving has a strong correspondence to the _simplex algorithm_, therefore we name it _FMplex_. Besides the theoretical foundations, we provide an experimental evaluation in the context of SMT solving. ## 1 Introduction _Linear real arithmetic (LRA)_ is a powerful first-order theory with strong practical relevance. We focus on checking the satisfiability of _conjunctions_ of LRA constraints, which is needed e.g. for solving quantifier-free LRA formulas using _satisfiability modulo theories (SMT) solvers_. The problem is known to be solvable in _polynomial_ worst-case complexity but, surprisingly, the _ellipsoid_ method [13] proposed in 1980 by Khachiyan is still the only available algorithm that implements this bound. However, this method is seldom used in practice due to its high average-case effort. Instead, most approaches employ the _simplex_ algorithm introduced by Dantzig in 1947, which has a _singly exponential_ worst case complexity, but which is quite efficient in practice. A third available solution is the _Fourier-Motzkin variable elimination (FM)_ method, proposed in 1827 by Fourier [9] and re-discovered in 1936 by Motzkin [23]. In contrast to the other two approaches, FM admits quantifier elimination, but it has a _doubly exponential_ worst case complexity, even though there have been various efforts to improve its efficiency by recognizing and avoiding redundant computations (e.g. [11, 12]). In this paper, we propose a novel method, which is derived from the FM method, but which turns out to have a striking resemblance to the simplex algorithm. This yields interesting theoretical insights into the relation of the two established methods and the nature of the problem itself. More precisely, our contributions include: * The presentation of _FMplex_, a new variable elimination method based on a divide-and-conquer approach. We show that it does not contain certain redundancies Fourier-Motzkin might generate and it lowers the overall complexity from _doubly_ to _singly_ exponential. * An adaptation of FMplex for SMT solving, including methods to prune the search tree based on structural observations. * A theorem formalizing connections between FMplex and the simplex algorithm. * An implementation of the SMT adaptation and its experimental evaluation. After recalling necessary preliminaries in Section 2, we introduce our novel FMplex method first for quantifier elimination in Section 3 and then for SMT solving in Section 4. We present related work and compare FMplex with other methods, first qualitatively in Section 5, and then experimentally in Section 6. We discuss future work and conclude the paper in Section 7. ## 2 Preliminaries Let \(\mathbb{R}\), \(\mathbb{Q}\) and \(\mathbb{N}\) denote the sets of real, rational and natural (\(0\notin\mathbb{N}\)) numbers, respectively. For \(k\in\mathbb{N}\) we define \([k]:=\{1,\ldots,k\}\). Throughout this paper, we fix \(n\in\mathbb{N}\), a set \(X=\{x_{1},\ldots,x_{n}\}\) and a corresponding vector \(\boldsymbol{x}=(x_{1},\ldots,x_{n})^{T}\) of \(\mathbb{R}\)-valued variables.
Matrices. For \(m\in\mathbb{N}\) let \(E^{(m)}\in\mathbb{Q}^{m\times m}\) be the identity matrix, and \(\boldsymbol{0}^{(m)}=(0\ \cdots\ 0)^{T}\in\mathbb{Q}^{m\times 1}\). The \(i\)th component of \(\boldsymbol{f}\in\mathbb{Q}^{m\times 1}\cup\mathbb{Q}^{1\times m}\) is denoted by \(f_{i}\) and the component-wise comparison to zero by \(\boldsymbol{f}\geq 0\). For \(A\in\mathbb{Q}^{m\times n}\), \(\boldsymbol{a}_{i,\cdot}\in\mathbb{Q}^{1\times n}\) and \(\boldsymbol{a}_{\cdot,i}\in\mathbb{Q}^{m\times 1}\) denote the \(i\)th row respectively column vector of \(A\). Furthermore, \(A[I]\) denotes the sub-matrix of \(A\) containing only the rows with indices from some \(I\subseteq[m]\). For \(\boldsymbol{f}\in\mathbb{Q}^{1\times m}\), \(\boldsymbol{f}A\) is a _linear combination_ of the rows \(i\in[m]\) of \(A\) with \(f_{i}\neq 0\). We call \(A\) _linearly independent_ if none of its rows is a linear combination of its other rows, and _linearly dependent_ otherwise. The _rank_ \(\text{rank}(A)\) of \(A\) is the size of a maximal \(I\subseteq[m]\) with \(A[I]\) linearly independent. Linear Constraints. Let \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in\mathbb{Q}^{1\times n}\), \(b\in\mathbb{Q}\) and \(\sim\in\{=,\leq,<,\neq\}\) a _relation symbol_. We call \(\boldsymbol{a}\boldsymbol{x}\) a _linear term_ and \(\boldsymbol{a}\boldsymbol{x}\sim b\) a _linear constraint_, which is _weak_ if \(\sim\in\{=,\leq\}\) and _strict_ otherwise. A _system of linear constraints_, or short a _system_, is a non-empty finite set of linear constraints. For most of this paper, we only consider constraints of the form \(\boldsymbol{a}\boldsymbol{x}\leq b\). We can write every system \(C=\{\boldsymbol{a}_{i,\cdot}\boldsymbol{x}\leq b_{i}\mid i\in[m]\}\) of such constraints in _matrix representation_ \(A\boldsymbol{x}\leq\boldsymbol{b}\) with suitable \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\). Conversely, every row \(\boldsymbol{a}_{i,\cdot}\boldsymbol{x}\leq b_{i},\ i\in[m]\) of \(A\boldsymbol{x}\leq\boldsymbol{b}\) is a linear constraint. Thus, the representations are mostly interchangeable; however, the matrix representation allows redundant rows in contrast to the set notation. As the latter will play a role later on, we will stick to the matrix representation. Variable Assignment. An _assignment_ is a function \(\alpha:Y\to\mathbb{R}\) with domain \(dom(\alpha)=Y\subseteq X\). The _extension_ \(\alpha[x_{i}\mapsto r]\) is the assignment with domain \(dom(\alpha)\cup\{x_{i}\}\) such that \(\alpha[x_{i}\mapsto r](x_{j})=\alpha(x_{j})\) for all \(x_{j}\in dom(\alpha)\setminus\{x_{i}\}\) and \(\alpha[x_{i}\mapsto r](x_{i})=r\). For \(Z\subseteq Y\), the _restriction_ \(\alpha|_{Z}\) is the assignment with domain \(Z\) such that \(\alpha|_{Z}(x_{i})=\alpha(x_{i})\) for all \(x_{i}\in Z\). We extend these notations to sets of assignments accordingly. The standard _evaluation_ of a linear term \(t\) under \(\alpha\) is written \(\alpha(t)\). We say that \(\alpha\) _satisfies_ (or is a solution of) a constraint \(c=(\boldsymbol{a}\boldsymbol{x}\sim b)\) if \(\alpha(a_{1}x_{1}+\ldots+a_{n}x_{n})\sim b\) holds, and denote this fact by \(\alpha\models c\). All solutions of \(c\) build its _solution set_ \(sol(c)\). Similarly, \(\alpha\models(A\boldsymbol{x}\leq\boldsymbol{b})\) denotes that \(\alpha\) is a common solution of all linear constraints in the system \(A\boldsymbol{x}\leq\boldsymbol{b}\). A system is _satisfiable_ if it has a common solution, and _unsatisfiable_ otherwise.
Note that each satisfiable system also has a rational-valued solution. We will also make use of the following two well-known results. **Theorem 1** (Farkas' Lemma [8]).: _Let \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\). Then the system \(A\boldsymbol{x}\leq\boldsymbol{b}\) is satisfiable if and only if for all \(\boldsymbol{f}\in\mathbb{Q}^{1\times m}\) with \(\boldsymbol{f}\geq 0\) and \(\boldsymbol{f}A=(0,\ldots,0)\in\mathbb{Q}^{1\times n}\) it holds \(\boldsymbol{f}\boldsymbol{b}\geq 0\)._ **Theorem 2** (Fundamental Theorem of Linear Programming, as in [21]).: _Let \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\). Then \(A\boldsymbol{x}\leq\boldsymbol{b}\) is satisfiable if and only if there exists a subset \(I\subseteq[m]\) such that \(A[I]\) is linearly independent, \(|I|=\text{rank}(A)\), and there exists an assignment \(\alpha:X\to\mathbb{R}\) with \(\alpha\models(A[I]\boldsymbol{x}=\boldsymbol{b}[I])\) and \(\alpha\models(A\boldsymbol{x}\leq\boldsymbol{b})\)._ ### Fourier-Motzkin Variable Elimination The _Fourier-Motzkin variable elimination_ (FM) [9, 23] method allows eliminating any \(x_{j}\in X\) from a system \(A\mathbf{x}\leq\mathbf{b}\) by computing \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\) with \(\mathbf{a}^{\prime}_{\cdot,j}=0\) and such that an assignment \(\alpha\) is a solution of \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\) if and only if there is \(r\in\mathbb{Q}\) so that \(\alpha[x_{j}\mapsto r]\) is a solution of \(A\mathbf{x}\leq\mathbf{b}\). Graphically, the solution set of \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\) is the projection of the solutions of \(A\mathbf{x}\leq\mathbf{b}\) onto \(X\setminus\{x_{j}\}\). The idea of the FM method is as follows. For each \(i\in[m]\) with \(a_{i,j}\neq 0\), the constraint \(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}\) can be rewritten as either a _lower bound_ or an _upper bound_ on \(x_{j}\), denoted in both cases as \(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i})\): \[\Big{(}\sum_{k\in[n]\setminus\{j\}}-\frac{a_{i,k}}{a_{i,j}}\cdot x_{k}\Big{)}+\frac{b_{i}}{a_{i,j}}\leq x_{j},\ \ \text{if}\ a_{i,j}<0,\ \ \ \ \ \ \ \text{resp.}\ \ \ \ \ \ \ x_{j}\leq\Big{(}\sum_{k\in[n]\setminus\{j\}}-\frac{a_{i,k}}{a_{i,j}}\cdot x_{k}\Big{)}+\frac{b_{i}}{a_{i,j}},\ \ \text{if}\ a_{i,j}>0.\] **Definition 1**.: _For \(A\in\mathbb{Q}^{m\times n}\), we define the index sets_ \[I_{j}^{-}(A):=\{i\in[m]\mid a_{i,j}<0\},\ \ \ I_{j}^{+}(A):=\{i\in[m]\mid a_{i,j}>0\},\ \ \ \text{and}\ \ \ I_{j}^{0}(A):=\{i\in[m]\mid a_{i,j}=0\}.\] \(I_{j}^{-}(A)\), \(I_{j}^{+}(A)\) and \(I_{j}^{0}(A)\) indicate the rows of \(A\mathbf{x}\leq\mathbf{b}\) which induce lower bounds, upper bounds and no bounds on \(x_{j}\), respectively. Due to the density of the reals, there exists a value for \(x_{j}\) that satisfies all bounds if and only if each lower bound is less than or equal to each upper bound. However, since in general the involved bounds are symbolic and thus their values depend on the values of other variables, we cannot directly check this condition.
To express this, we let \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\) be defined by the constraint set \[\{bnd_{j}(\mathbf{a}_{\ell,\cdot}\mathbf{x}\leq b_{\ell})\leq bnd_{j}(\mathbf{a}_{u,\cdot}\mathbf{x}\leq b_{u})\mid(\ell,u)\in I_{j}^{-}(A)\times I_{j}^{+}(A)\}\ \ \ \ \cup\ \ \ \ \{\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}\mid i\in I_{j}^{0}(A)\}.\] In matrix representation, the FM method applies the following transformation: **Definition 2** (Fourier-Motzkin Variable Elimination).: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), and \(j\in[n]\). Let further \(m^{\prime}=|I_{j}^{-}(A)|\cdot|I_{j}^{+}(A)|+|I_{j}^{0}(A)|\) and \(F\in\mathbb{Q}^{m^{\prime}\times m}\) be a matrix consisting of exactly the following rows:1_ Footnote 1: Remember that we use lower case letters for rows of matrices with the respective upper case letter as name. Thus, \(\boldsymbol{e}^{(m)}_{i,\cdot}\) denotes the \(i\)th row vector of the identity matrix \(E^{(m)}\). \[-\frac{1}{a_{\ell,j}}\cdot\boldsymbol{e}^{(m)}_{\ell,\cdot}+\frac{1}{a_{u,j}}\cdot\boldsymbol{e}^{(m)}_{u,\cdot}\ \ \text{for every pair}\ \ (\ell,u)\in I_{j}^{-}(A)\times I_{j}^{+}(A)\ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \boldsymbol{e}^{(m)}_{i,\cdot}\ \ \text{for every}\ \ i\in I_{j}^{0}(A).\] _Then the Fourier-Motzkin variable elimination \(\mathsf{FM}_{j}(A\mathbf{x}\leq\mathbf{b})\) of \(x_{j}\) from the system \(A\mathbf{x}\leq\mathbf{b}\) is defined as the system \(FA\mathbf{x}\leq F\mathbf{b}\)._ The consistency of \(A\mathbf{x}\leq\mathbf{b}\) can be checked by successively eliminating variables \(x_{n},\ldots,x_{1}\), obtaining intermediate systems \(A^{(n-1)}\mathbf{x}\leq\mathbf{b}^{(n-1)},\ldots,A^{(0)}\mathbf{x}\leq\mathbf{b}^{(0)}\). All entries of the transformation matrix \(F\) in the definition above are non-negative, and thus for any \(k\in\{0,\ldots,n-1\}\) and any row \(i^{\prime}\) in \(A^{(k)}\mathbf{x}\leq\mathbf{b}^{(k)}\), there exists \(0\leq\mathbf{f}\in\mathbb{Q}^{1\times m}\) s.t. \(\mathbf{f}A=\mathbf{a}^{(k)}_{i^{\prime},\cdot}\) and \(\mathbf{f}\mathbf{b}=b^{(k)}_{i^{\prime}}\), or in short: \(\sum_{i\in[m]}f_{i}\cdot(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i})=(\mathbf{a}^{(k)}_{i^{\prime},\cdot}\mathbf{x}\leq b^{(k)}_{i^{\prime}})\). We call this kind of linear combination a _conical combination_. By Farkas' Lemma (Theorem 1), if \(A^{(0)}\mathbf{x}\leq\mathbf{b}^{(0)}\) is unsatisfiable, then so is \(A\mathbf{x}\leq\mathbf{b}\). If it is satisfiable, then it is satisfied by the empty assignment, which can be extended successively to a model of \(A^{(1)}\mathbf{x}\leq\mathbf{b}^{(1)},\ldots,A^{(n-1)}\mathbf{x}\leq\mathbf{b}^{(n-1)}\) and \(A\mathbf{x}\leq\mathbf{b}\). A major drawback of the Fourier-Motzkin variable elimination is its doubly exponential complexity in time and space w.r.t. the number of eliminated variables. Moreover, many of the generated rows are redundant because they are linear combinations of the other rows, i.e. they could be omitted without changing the solution set of the system. Redundancies might already be contained in the input system, or they arise during the projection operation. While removing all redundancies is expensive, there are efficient methods for removing some redundancies of the latter type, for example Imbert's acceleration theorems [10, 11, 12]. **Lemma 1** (Redundancy by Construction).: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\) and \(F\in\mathbb{Q}^{m^{\prime}\times m}\)._
Let furthermore \(A^{\prime}=FA\), \(\boldsymbol{b}^{\prime}=F\boldsymbol{b}\) and \(i\in[m^{\prime}]\). If there exists \(\boldsymbol{r}\in\mathbb{Q}^{1\times m^{\prime}}\) with \(\boldsymbol{r}\geq 0\), \(r_{i}=0\) and \(\boldsymbol{r}F=\boldsymbol{f}_{i,\cdot}\) (i.e. the \(i\)th row of \(A^{\prime}\boldsymbol{x}\leq\boldsymbol{b}^{\prime}\) is a conical combination \(\boldsymbol{r}FA\boldsymbol{x}\leq\boldsymbol{r}F\boldsymbol{b}\) of the other rows), then that row is redundant in \(A^{\prime}\boldsymbol{x}\leq\boldsymbol{b}^{\prime}\), i.e. the solution set does not change when omitting it: \(\text{sol}(A^{\prime}\boldsymbol{x}\leq\boldsymbol{b}^{\prime})=\text{sol}(A^{\prime}[[m^{\prime}]\setminus\{i\}]\boldsymbol{x}\leq\boldsymbol{b}^{\prime}[[m^{\prime}]\setminus\{i\}])\)._ ## 3 FMplex as Variable Elimination Procedure The FM method encodes that none of the lower bounds on some variable \(x_{j}\) in a system \(A\boldsymbol{x}\leq\boldsymbol{b}\) is larger than any of its upper bounds. In our _FMplex_ method, instead of considering all lower-upper bound combinations at once, we _split the problem into a set of sub-problems_ by case distinction either on _which of the lower bounds is the largest_ or alternatively on _which of the upper bounds is the smallest_. For splitting on lower bounds, for each lower bound on \(x_{j}\) we consider solutions where this lower bound is maximal among all lower bounds, and at the same time not larger than any of the upper bounds. The upper bound case is analogous. Then \(A\boldsymbol{x}\leq\boldsymbol{b}\) is satisfiable if and only if there exists a solution in one of these sub-problems. Asymptotically, these sub-problems are significantly smaller than the systems produced by FM, so that in total our approach produces _at most exponentially_ many constraints after iterated application, in contrast to the doubly exponential effort of the FM method.
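For concreteness, a minimal sketch of a single FM elimination step as in Definition 2, against which the FMplex case splitting below can be contrasted. It builds the transformation matrix \(F\) row by row, performs no redundancy elimination (e.g. Imbert's theorems), and, unlike an exact implementation, uses floating-point instead of rational arithmetic.

```python
import numpy as np

def fm_eliminate(A, b, j):
    """One Fourier-Motzkin step: eliminate variable j from A x <= b
    by constructing F as in Definition 2 and returning (F A, F b)."""
    m = A.shape[0]
    lower = [i for i in range(m) if A[i, j] < 0]   # I_j^-(A)
    upper = [i for i in range(m) if A[i, j] > 0]   # I_j^+(A)
    zero = [i for i in range(m) if A[i, j] == 0]   # I_j^0(A)
    rows = []
    for l in lower:                 # one combined row per lower/upper pair
        for u in upper:
            f = np.zeros(m)
            f[l], f[u] = -1.0 / A[l, j], 1.0 / A[u, j]
            rows.append(f)
    for i in zero:                  # rows not mentioning x_j are kept as-is
        f = np.zeros(m)
        f[i] = 1.0
        rows.append(f)
    F = np.array(rows) if rows else np.zeros((0, m))
    return F @ A, F @ b

# Example: -x1 <= 0, x1 <= 2, x1 + x2 <= 3; eliminating x1 yields
# the constraints 0 <= 2 and x2 <= 3 on the remaining variable.
A = np.array([[-1.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
b = np.array([0.0, 2.0, 3.0])
print(fm_eliminate(A, b, 0))
```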
Formally, if there are no upper or no lower bounds on \(x_{j}\), then there is no need for case splitting and we follow FM using \(\exists x_{j}.\ A\boldsymbol{x}\leq\boldsymbol{b}\equiv A[I_{j}^{0}(A)]\boldsymbol{x}\leq\boldsymbol{b}[I_{j}^{0}(A)]\). Otherwise, we fix a row \(i\in I_{j}^{-}(A)\cup I_{j}^{+}(A)\) and encode that the bound it induces on \(x_{j}\) is the greatest lower bound (if \(i\in I_{j}^{-}(A)\)) respectively the smallest upper bound (if \(i\in I_{j}^{+}(A)\)), obtaining one sub-problem per choice of \(i\):

**Definition 3** (Restricted Projection).: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\) and \(j\in[n]\)._

* _If_ \(I_{j}^{-}(A)\neq\emptyset\) _and_ \(I_{j}^{+}(A)\neq\emptyset\)_, then for any_ \(i\in I_{j}^{-}(A)\cup I_{j}^{+}(A)\) _we fix_ \(F\in\mathbb{Q}^{(m-1)\times m}\) _arbitrarily but deterministically to consist of exactly the following rows:_ \[\frac{1}{a_{i,j}}\cdot\mathbf{e}_{i,\cdot}^{(m)}-\frac{1}{a_{i^{\prime},j}}\cdot\mathbf{e}_{i^{\prime},\cdot}^{(m)}\text{ for every }i^{\prime}\in I_{j}^{-}(A)\setminus\{i\},\] \[-\frac{1}{a_{i,j}}\cdot\mathbf{e}_{i,\cdot}^{(m)}+\frac{1}{a_{i^{\prime},j}}\cdot\mathbf{e}_{i^{\prime},\cdot}^{(m)}\text{ for every }i^{\prime}\in I_{j}^{+}(A)\setminus\{i\},\qquad\text{ and }\qquad\mathbf{e}_{i^{\prime},\cdot}^{(m)}\text{ for every }i^{\prime}\in I_{j}^{0}(A).\]
_Then the_ restricted projection \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) _of_ \(x_{j}\) _w.r.t. the row_ \(i\) _from the system_ \(A\mathbf{x}\leq\mathbf{b}\) _is defined as the system_ \(FA\mathbf{x}\leq F\mathbf{b}\)_. We call_ \(F\) _the_ projection matrix _corresponding to_ \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\)_._

* _If_ \(I_{j}^{-}(A)=\emptyset\) _or_ \(I_{j}^{+}(A)=\emptyset\)_, then we define the projection matrix_ \(F\in\mathbb{Q}^{|I_{j}^{0}(A)|\times m}\) _to have exactly one row_ \(\mathbf{e}_{i^{\prime},\cdot}^{(m)}\) _for each_ \(i^{\prime}\in I_{j}^{0}(A)\)_, and define_ \(P_{j,\perp}(A\mathbf{x}\leq\mathbf{b})\) _as_ \(FA\mathbf{x}\leq F\mathbf{b}\)_._

The following lemma states a crucial result for our method: the solutions of the restricted projections for all lower (or all upper) bounds of a variable exactly cover the projection of the entire solution set.

**Lemma 2**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), \(j\in[n]\) and \(I\in\{I_{j}^{-}(A),I_{j}^{+}(A)\}\). If \(I_{j}^{-}(A)\neq\emptyset\) and \(I_{j}^{+}(A)\neq\emptyset\), then_ \[\text{sol}(A\mathbf{x}\leq\mathbf{b})|_{X\setminus\{x_{j}\}}=\bigcup_{i\in I}\text{sol}(P_{j,i}(A\mathbf{x}\leq\mathbf{b})).\] _Otherwise (\(I_{j}^{-}(A)=\emptyset\) or \(I_{j}^{+}(A)=\emptyset\)), it holds that \(\text{sol}(A\mathbf{x}\leq\mathbf{b})|_{X\setminus\{x_{j}\}}=\text{sol}(P_{j,\perp}(A\mathbf{x}\leq\mathbf{b}))\)._

Proof.: The case \(I_{j}^{-}(A)=\emptyset\) or \(I_{j}^{+}(A)=\emptyset\) follows from the correctness of FM. Assume \(I=I_{j}^{-}(A)\); the case \(I=I_{j}^{+}(A)\) is analogous.

\(\supseteq\): Let \(i\in I_{j}^{-}(A)\) and \(\alpha\models P_{j,i}(A\mathbf{x}\leq\mathbf{b})\). Then for all \(\ell\in I_{j}^{-}(A)\) and \(u\in I_{j}^{+}(A)\) it holds that \(\alpha(bnd_{j}(\mathbf{a}_{\ell,\cdot}\mathbf{x}\leq b_{\ell}))\leq\alpha(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}))\leq\alpha(bnd_{j}(\mathbf{a}_{u,\cdot}\mathbf{x}\leq b_{u}))\). Thus, \(\alpha[x_{j}\mapsto\alpha(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}))]\models A\mathbf{x}\leq\mathbf{b}\).

\(\subseteq\): Let \(\alpha\models A\mathbf{x}\leq\mathbf{b}\) and \(i=\arg\max_{\ell\in I_{j}^{-}(A)}(\alpha(bnd_{j}(\mathbf{a}_{\ell,\cdot}\mathbf{x}\leq b_{\ell})))\). Then for all \(u\in I_{j}^{+}(A)\) it holds that \(\alpha(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}))\leq\alpha(bnd_{j}(\mathbf{a}_{u,\cdot}\mathbf{x}\leq b_{u}))\), and thus \(\alpha\models P_{j,i}(A\mathbf{x}\leq\mathbf{b})\).

**Definition 4** (FMplex Variable Elimination).: _For \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), \(j\in[n]\) and \(*\in\{-,+\}\), we define_ \[\mathbb{FMP}_{j}^{*}(A\mathbf{x}\leq\mathbf{b})=\begin{cases}\{P_{j,i}(A\mathbf{x}\leq\mathbf{b})\mid i\in I_{j}^{*}(A)\}&\text{ if }I_{j}^{-}(A)\neq\emptyset\text{ and }I_{j}^{+}(A)\neq\emptyset\\ \{P_{j,\perp}(A\mathbf{x}\leq\mathbf{b})\}&\text{ otherwise}.\end{cases}\]

The FMplex elimination defines a set of restricted projections which can be composed to the full projection according to Lemma 2. Lifting this from sets to logic naturally results in the following theorem, which demonstrates the usage of our method.

**Theorem 3**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), and \(j\in[n]\).
Then_ \[\exists x_{j}.\,A\mathbf{x}\leq\mathbf{b}\quad\equiv\quad\bigvee_{S\in\mathbb{FMP}_{j}^{+}(A\mathbf{x}\leq\mathbf{b})}S\quad\equiv\quad\bigvee_{S\in\mathbb{FMP}_{j}^{-}(A\mathbf{x}\leq\mathbf{b})}S.\]

For eliminating multiple variables, we iteratively apply \(\mathbb{FMP}^{-}\) or \(\mathbb{FMP}^{+}\) to each restricted projection resulting from the previous elimination step. Note that we can choose the next variable to be eliminated as well as the variant independently in every branch.

**Example 2**.: _We continue Example 1, from which we eliminated \(x_{2}\) and now want to eliminate \(x_{1}\):_ \[\exists x_{1}.\ \exists x_{2}.\ A\mathbf{x}\leq\mathbf{b} \equiv\ \exists x_{1}.\bigvee_{S\in\mathbb{FMP}^{-}_{2}(A\mathbf{x}\leq\mathbf{b})}S\] \[\equiv\ \exists x_{1}.\ (x_{1}\leq 3\wedge-3x_{1}\leq-3\wedge-x_{1}\leq 1)\ \vee\ \exists x_{1}.\ (-x_{1}\leq-3\wedge-2x_{1}\leq 0\wedge 0\leq 4)\] _We eliminate the two quantifiers for \(x_{1}\) separately, using_ \[\mathbb{FMP}^{-}_{1}(x_{1}\leq 3\wedge-3x_{1}\leq-3\wedge-x_{1}\leq 1)=\{(0\leq 2\wedge 0\leq 2),(0\leq-2\wedge 0\leq 4)\}\text{ and }\] \[\mathbb{FMP}^{-}_{1}(-x_{1}\leq-3\wedge-2x_{1}\leq 0\wedge 0\leq 4)=\{(0\leq 4)\}\] _giving us the final result \(\exists x_{1}.\ \exists x_{2}.\ A\mathbf{x}\leq\mathbf{b}\ \equiv\ ((0\leq 2\wedge 0\leq 2)\vee(0\leq-2\wedge 0\leq 4))\vee(0\leq 4)\)._

We analyze the complexity in terms of the number of new rows (or constraints) that are constructed during the elimination of all variables:

**Theorem 4** (Complexity of \(\mathbb{FMP}\)).: _Let \(A\in\mathbb{Q}^{m\times n}\), and \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\). When eliminating \(n\) variables from \(A\mathbf{x}\leq\mathbf{b}\), the \(\mathbb{FMP}^{-}\) method constructs \(\mathcal{O}(n\cdot m^{n+1})\) new rows._

Proof.: The number \(N(m,n)\) of constructed rows is maximal if the system consists only of lower bounds and one upper bound. Then, \(\mathbb{FMP}^{-}\) yields \(m-1\) new systems of size \(m-1\), from which \(n-1\) variables need to be eliminated; thus \(N(m,n)\leq(m-1)\cdot((m-1)+N(m-1,n-1))\). With \(k=\min(n,m)\), we obtain \(N(m,n)\leq\sum\limits_{i=1}^{k}(m-i)\cdot\prod\limits_{j=1}^{i}(m-j)\leq n\cdot m^{n+1}\).

While still exponential, this bound is considerably better than the doubly exponential worst-case complexity of the FM method. In short, FMplex avoids one exponential blow-up at the cost of the result being a decomposition into multiple partial projections. However, there are systems for which FMplex produces strictly more rows than the FM method: in the worst case from the above proof, FM obtains a single system of the same size as each of the sub-problems computed by \(\mathbb{FMP}^{-}\). Although in this case we could simply employ \(\mathbb{FMP}^{+}\) instead, it is unclear whether there exists a rule for employing \(\mathbb{FMP}^{-}\) or \(\mathbb{FMP}^{+}\) that never produces more constraints than FM.

Like FM, FMplex keeps redundancies from the input throughout the algorithm; thus there might be identical rows in the same or across different sub-problems. But in contrast to FM, FMplex does not introduce any redundancies by construction in the sense of Lemma 1.
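As a concrete illustration, the following sketch (our own code using exact rationals, not the SMT-RAT implementation evaluated in Section 6) implements the restricted projection of Definition 3 and reproduces the first branch of Example 2, where row 1 (the constraint \(-3x_{1}\leq-3\)) is designated as greatest lower bound on \(x_{1}\):

```python
from fractions import Fraction

def restricted_projection(A, b, j, i):
    """P_{j,i}(Ax <= b) from Definition 3: encode that row i induces the
    greatest lower (or smallest upper) bound on x_j.  Returns (A', b');
    the column of x_j in A' is zero by construction."""
    m, n = len(A), len(A[0])
    I_minus = [k for k in range(m) if A[k][j] < 0]
    I_plus = [k for k in range(m) if A[k][j] > 0]
    I_zero = [k for k in range(m) if A[k][j] == 0]
    combos = [{k: Fraction(1)} for k in I_zero]      # rows without x_j are kept
    if I_minus and I_plus:
        c = Fraction(1, A[i][j])
        combos += [{i: c, k: -Fraction(1, A[k][j])} for k in I_minus if k != i]
        combos += [{i: -c, k: Fraction(1, A[k][j])} for k in I_plus if k != i]
    A2 = [[sum(w * A[r][col] for r, w in cmb.items()) for col in range(n)]
          for cmb in combos]
    b2 = [sum(w * b[r] for r, w in cmb.items()) for cmb in combos]
    return A2, b2

# First sub-problem of Example 2: x1 <= 3, -3*x1 <= -3, -x1 <= 1
A2, b2 = restricted_projection([[1], [-3], [-1]], [3, -3, 1], j=0, i=1)
assert (A2, b2) == ([[0], [0]], [2, 2])              # i.e. (0 <= 2) and (0 <= 2)
```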
**Theorem 5**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\) and \(k\in[m]\). Assume \((A^{(0)}\mathbf{x}\leq\mathbf{b}^{(0)})=(A\mathbf{x}\leq\mathbf{b})\) and for all \(j\in[k]\), let \((A^{(j)}\mathbf{x}\leq\mathbf{b}^{(j)})\in\mathbb{FMP}^{-}_{j}(A^{(j-1)}\mathbf{x}\leq\mathbf{b}^{(j-1)})\cup\mathbb{FMP}^{+}_{j}(A^{(j-1)}\mathbf{x}\leq\mathbf{b}^{(j-1)})\). Let \(F^{(1)},\ldots,F^{(k)}\) be the respective projection matrices, and \(F=F^{(k)}\cdot\ldots\cdot F^{(1)}\). Then the rows of \(F\) are linearly independent._

Proof.: By definition, each projection matrix has linearly independent rows, and thus so does their product \(F\).

## 4 FMplex as Satisfiability Checking Procedure

A formula is satisfiable if and only if eliminating all variables (using any quantifier elimination method such as FM or FMplex) yields a tautology. However, FMplex computes smaller sub-problems whose satisfiability implies the satisfiability of the original problem. Therefore, we do not compute the whole projection at once, but explore the decomposition using a depth-first search. The resulting search tree has the original system as root, and each node has as children the systems resulting from restricted projections. The original system is satisfiable if and only if there exists a leaf without any trivially false constraints. An example is depicted in Figure 2. We start with a basic version of the algorithm and then examine how the search tree can be pruned, resulting in two variants; all versions are given in Algorithm 1. An important observation is that we can decide independently for each node of the search tree which variable to eliminate next and whether to branch on lower or on upper bounds.

**Definition 5** (Branch Choices).: _The set of branch choices for a system \(A\mathbf{x}\leq\mathbf{b}\) is_ \[\text{branch\_choices}(A\mathbf{x}\leq\mathbf{b})= \{\{(x_{j},i)\mid i\in I_{j}^{-}(A)\}\mid j\in[n]\wedge I_{j}^{-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},i)\mid i\in I_{j}^{+}(A)\}\mid j\in[n]\wedge I_{j}^{-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},\bot)\}\mid j\in[n]\wedge(I_{j}^{-}(A)=\emptyset\lor I_{j}^{+}(A)=\emptyset)\}.\]

For an initial input \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) with \(\widehat{m}\) rows, we define the depth-first search using the recursive method \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\) in Algorithm 1, where \(A\mathbf{x}\leq\mathbf{b}\) is the currently processed sub-problem in the recursion tree. We track the relation of \(A\mathbf{x}\leq\mathbf{b}\) to \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) in terms of linear combinations using the parameter \(F\). The initial call is defined as \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})=\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}},E^{(\widehat{m})})\). We allow that \(A\mathbf{x}\leq\mathbf{b}\) contains identical rows when they are obtained in different ways (which is reflected by \(F\)). We need to keep these duplicates for proving the results of this section.

**Solutions.** If a trivially satisfiable node is found, the algorithm constructs an assignment starting with the empty assignment and extending it in the reverse order of the variable elimination. For every variable \(x_{j}\), a value is picked above all lower and below all upper bounds on \(x_{j}\), evaluated at the underlying assignment.
By the semantics of the projection, the value of the designated (largest lower or smallest upper) bound on \(x_{j}\) is suitable.

**Conflicts.** We distinguish inconsistencies in \(A\mathbf{x}\leq\mathbf{b}\) by the following notions: we call a row \(i\) of \(A\mathbf{x}\leq\mathbf{b}\) a _conflict_ if it is of the form \(\mathbf{a}_{i,\cdot}=\mathbf{0}^{(n)}\) with \(b_{i}<0\). We call the conflict _global_ if \(\mathbf{f}_{i,\cdot}\geq 0\) and _local_ otherwise. In case of a global conflict, Farkas' Lemma allows us to deduce the unsatisfiability of \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\), thus stopping the search before the whole search tree is generated. Then a set of conflicting rows \(K\) of the input system corresponding to \(\mathbf{f}_{i,\cdot}\) is returned. In particular, the set \(\{\widehat{\mathbf{a}}_{j,\cdot}\ \mathbf{x}\leq\widehat{b}_{j}\mid f_{i,j}\neq 0\}\) is a minimal unsatisfiable subset of the constraints in \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\). In case of a local conflict, we simply continue to explore the search tree. The algorithm returns _PARTIAL-UNSAT_ to indicate that \(A\mathbf{x}\leq\mathbf{b}\) is unsatisfiable, but the unsatisfiability of \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) cannot be derived. This approach, formalized in Algorithm 1a, guarantees that the initial call will never return _PARTIAL-UNSAT_; we always find either a global conflict or a solution. The correctness and completeness of \(\mathtt{FMplex}\) follow from Theorem 3 and Theorem 6.

**Theorem 6**.: _Let \(\widehat{A}\in\mathbb{Q}^{\widehat{m}\times n}\) and \(\widehat{\mathbf{b}}\in\mathbb{Q}^{\widehat{m}\times 1}\). Then \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) is unsatisfiable if and only if the call \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\) to Algorithm 1a terminates with a global conflict._

Figure 2: The search tree corresponding to Example 2. The very first leaf (bottom left) is already satisfiable, meaning that the rest would not need to be computed.

Proof Idea for Theorem 6.: If \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) is unsatisfiable, then there exists a minimal unsatisfiable subset \(\widehat{K}\) of the corresponding constraints. We construct a path in the search tree induced by Algorithm 1a yielding a conflict that is a linear combination of \(\widehat{K}\). As \(\widehat{K}\) is minimal, the linear combination is positive, i.e. the conflict is global. The other direction of the equivalence follows immediately with Farkas' Lemma. Consult the appendix for a detailed proof.
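The following self-contained Python sketch renders the structure of Algorithm 1a as described above; it is our own reading of the description, with the variable and branch heuristics as well as the construction of satisfying assignments simplified. Each derived row is tracked as a linear combination of the input rows via \(F\), a global conflict yields a Farkas certificate \(K\), and _PARTIAL-UNSAT_ is only ever propagated for sub-problems:

```python
from fractions import Fraction

def fmplex_check(A, b):
    """Satisfiability check in the spirit of Algorithm 1a (a sketch)."""
    m = len(A)
    F = [[Fraction(int(r == c)) for c in range(m)] for r in range(m)]  # E^(m)
    return _search(A, [Fraction(v) for v in b], F)

def _search(A, b, F):
    m = len(A)
    n = len(A[0]) if A else 0
    trivially_false = [i for i in range(m)
                       if all(c == 0 for c in A[i]) and b[i] < 0]
    for i in trivially_false:
        if all(f >= 0 for f in F[i]):        # positive combination of input rows
            return "UNSAT", [k for k, f in enumerate(F[i]) if f != 0]
    if trivially_false:
        return "PARTIAL-UNSAT", None         # local conflict: only this branch dies
    cols = [j for j in range(n) if any(A[i][j] != 0 for i in range(m))]
    if not cols:
        return "SAT", None                   # every row is 0 <= b_i with b_i >= 0
    j = cols[0]                              # place for a variable-choice heuristic
    I_minus = [i for i in range(m) if A[i][j] < 0]
    I_plus = [i for i in range(m) if A[i][j] > 0]
    I_zero = [i for i in range(m) if A[i][j] == 0]
    branches = I_minus if (I_minus and I_plus) else [None]   # None = P_{j,bot}
    for i in branches:
        combos = [{k: Fraction(1)} for k in I_zero]
        if i is not None:
            c = Fraction(1, A[i][j])
            combos += [{i: c, k: -Fraction(1, A[k][j])} for k in I_minus if k != i]
            combos += [{i: -c, k: Fraction(1, A[k][j])} for k in I_plus if k != i]
        A2 = [[sum(w * A[r][col] for r, w in cmb.items()) for col in range(n)]
              for cmb in combos]
        b2 = [sum(w * b[r] for r, w in cmb.items()) for cmb in combos]
        F2 = [[sum(w * F[r][col] for r, w in cmb.items()) for col in range(len(F[0]))]
              for cmb in combos]
        result = _search(A2, b2, F2)
        if result[0] != "PARTIAL-UNSAT":
            return result                    # SAT, or UNSAT with conflicting rows K
    return "PARTIAL-UNSAT", None

# x <= 0 together with -x <= -1 (i.e. x >= 1) is unsatisfiable:
print(fmplex_check([[1], [-1]], [0, -1]))    # ('UNSAT', [0, 1])
```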
### Avoiding Redundant Checks

We observe that each row \(i\) in a sub-problem \(A\mathbf{x}\leq\mathbf{b}\) in the recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\) corresponds to a row \(\hat{t}\) in \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) in the sense that it is a linear combination of the rows \(\{\hat{t}\}\cup\mathcal{N}\) of \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\), where \(\mathcal{N}\subseteq[\widehat{m}]\) corresponds to the lower/upper bounds designated as largest/smallest ones to compute \(A\mathbf{x}\leq\mathbf{b}\):

**Theorem 7**.: _Let \(\widehat{A}\in\mathbb{Q}^{\widehat{m}\times n}\) and \(\widehat{\mathbf{b}}\in\mathbb{Q}^{\widehat{m}\times 1}\). Let \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\) be a call in the recursion tree of the call \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\) to Algorithm 1a, where \(A\in\mathbb{Q}^{m\times n}\) and \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\) (by construction, \(m\leq\widehat{m}\)). Then there exists a set \(\mathcal{N}\subseteq[\widehat{m}]\) such that_

1. \(A\mathbf{x}\leq\mathbf{b}\) _is satisfiable if and only if_ \((\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\wedge(\widehat{A}[\mathcal{N}]\mathbf{x}=\widehat{\mathbf{b}}[\mathcal{N}])\) _is satisfiable,_
2. _there exists an injective mapping_ \(\mathcal{B}_{\mathcal{N},F}:[m]\to[\widehat{m}],\ i\mapsto\hat{t}\)_, with_ \(\{\hat{t}\}=\{i^{\prime}\in[\widehat{m}]\mid f_{i,i^{\prime}}\neq 0\}\setminus\mathcal{N}\)_._

Proof Idea.: The statement follows by a straightforward induction over the elimination steps, where the original row corresponding to the chosen bound is added to \(\mathcal{N}\), and \(\mathcal{B}_{\mathcal{N},F}\) keeps track of which constraint corresponds to which original row. Consult the appendix for a detailed proof.

We call the above defined set \(\mathcal{N}\) the _non-basis_, inspired by the analogies to the simplex algorithm (discussed in Section 5.1). By the above theorem, the order in which a non-basis is constructed has no influence on the satisfiability of the induced sub-problem. In particular:

**Theorem 8**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), \(j\in[n]\), and let \(i,i^{\prime}\in[m]\) be row indices with \(a_{i,j}\neq 0\) and \(a_{i^{\prime},j}\neq 0\). If \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) is unsatisfiable, then \(P_{j,i^{\prime}}(A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\) is unsatisfiable._

Proof.: By Theorem 7, if \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) is unsatisfiable, then \((A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\) is unsatisfiable, and trivially \((A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\wedge(\mathbf{a}_{i^{\prime},\cdot}\mathbf{x}=b_{i^{\prime}})\) is unsatisfiable as well. Using Theorem 7 in the other direction yields that \(P_{j,i^{\prime}}(A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\) is unsatisfiable.

This suggests that if \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\) with non-basis \(\mathcal{N}\) has a child call for row \(i\) which does not return \(\mathit{SAT}\), then no other call in the recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\) where the corresponding non-basis contains \(\mathcal{B}_{\mathcal{N},F}(i)\) will return \(\mathit{SAT}\) either. Hence, we can ignore \(\mathcal{B}_{\mathcal{N},F}(i)\) as designated bound in the remaining recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\), as sketched below.
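In an implementation, this pruning requires only a little bookkeeping on top of the search sketched in Section 4: besides \(F\), each call also receives the current non-basis \(\mathcal{N}\) and a set of excluded original rows. A hypothetical helper in the style of the earlier sketches (the names and encoding are ours):

```python
def original_row(F, N, i):
    """B_{N,F}(i) from Theorem 7: the unique input row outside the non-basis N
    that occurs in the linear combination F[i] of derived row i."""
    candidates = [k for k, f in enumerate(F[i]) if f != 0 and k not in N]
    assert len(candidates) == 1
    return candidates[0]

def lower_bound_candidates(A, j, F, N, excluded):
    """Designated-lower-bound choices for x_j, skipping rows whose original
    row was already excluded after a failed sibling branch (Theorem 8)."""
    return [i for i in range(len(A))
            if A[i][j] < 0 and original_row(F, N, i) not in excluded]
```

After a child call for row \(i\) fails, adding \(\mathcal{B}_{\mathcal{N},F}(i)\) to the excluded set realizes the modified branch choices formalized in Definition 6 below.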
**Example 3**.: _Consider the system from Example 1, with an additional constraint \(c_{5}:(-x_{2}\leq 0)\). If \(c_{5}\) is tried first as greatest lower bound on \(x_{2}\), then the combination with \(c_{2}:(-2x_{2}\leq-2)\) yields the local conflict \(\frac{1}{2}c_{2}-c_{5}=(0\leq-1)\). Thus, this branch and, due to Theorem 8, any non-basis containing row \(5\) yields an unsatisfiable system._

_Next, we try \(c_{1}\) as greatest lower bound on \(x_{2}\), resulting in the combinations \(\frac{1}{2}c_{2}-c_{1}=(x_{1}\leq 3)\), \(c_{5}-c_{1}=(x_{1}\leq 4)\), \(c_{1}+c_{3}=(-3x_{1}\leq-3)\) and \(c_{1}+c_{4}=(-x_{1}\leq 1)\), and the corresponding non-basis \(\{1\}\)._

_If we now choose \((x_{1}\leq 4)\) as smallest upper bound on \(x_{1}\), leading to the non-basis \(\{1,5\}\), another local conflict occurs: \((x_{1}\leq 3)-(x_{1}\leq 4)=(0\leq-1)\). As \(5\) is contained in the non-basis, we could know beforehand that this would happen and thus avoid computing this branch._

We update the \(\mathtt{FMplex}\) algorithm as shown in Algorithm 1b using the following definition:

**Definition 6**.: _The set of branch choices for \(A\mathbf{x}\leq\mathbf{b}\) with \(m\) rows w.r.t. \(I\subseteq[m]\) is_ \[\text{branch\_choices}(A\mathbf{x}\leq\mathbf{b},I)= \{\{(x_{j},i)\mid i\in I_{j}^{-}(A)\setminus I\}\mid j\in[n]\wedge I_{j}^{-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},i)\mid i\in I_{j}^{+}(A)\setminus I\}\mid j\in[n]\wedge I_{j}^{-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},\bot)\}\mid j\in[n]\wedge(I_{j}^{-}(A)=\emptyset\lor I_{j}^{+}(A)=\emptyset)\}.\]

It is easy to see that this modification prevents visiting the same non-basis twice in the following sense:

**Theorem 9**.: _No two calls in the recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\) to Algorithm 1b have the same non-basis._

For the third variant (Algorithm 1c), local conflicts are additionally assigned a _backtrack level_ bt_lvl, allowing the search to jump back several levels at once when a conflict is found; its soundness rests on the following observation.

Proof.: By construction of bt_lvl', \(\mathbf{a}^{\prime}_{i,\cdot}\mathbf{x}\leq b^{\prime}_{i}\) is a positive sum of rows from \(A\mathbf{x}\leq\mathbf{b}\), i.e. there exists an \(\mathbf{f}\in\mathbb{Q}^{1\times m}\) such that \((\mathbf{f}A\mathbf{x}\leq\mathbf{f}\mathbf{b})=(\mathbf{a}^{\prime}_{i,\cdot}\mathbf{x}\leq b^{\prime}_{i})\). Then by Farkas' Lemma, \(A\mathbf{x}\leq\mathbf{b}\) is unsatisfiable.

While it is complete and correct, Algorithm 1c does not always terminate with a _global_ conflict (i.e. Theorem 6 does not hold any more), even if we do not ignore any rows (i.e. omit Line 17):

**Example 4**.: _We use Algorithm 1c to eliminate variables with the static order \(x_{3},x_{2},x_{1}\) from the system on the right, always branching on lower bounds. We first choose row \(1\) as greatest lower bound on \(x_{3}\). Rows \(3\) and \(4\) are retained as they do not contain \(x_{3}\), and the combination of row \(1\) with row \(5\) is positive, so these constraints have backtrack level \(0\). The combination with row \(2\) has backtrack level \(1\) because both rows are lower bounds. Using this constraint as greatest lower bound on \(x_{2}\) and combining it with row \(4\) leads to a local conflict with backtrack level \(1\). This means that the call at level \(1\) is unsatisfiable and thus we backjump to level \(0\)._

_The second branch is visited, leading to the non-basis \(\mathcal{N}=\{2,5,1\}\) after three steps, where a local conflict lets us backjump to level \(0\) again. As there are no more lower bounds on \(x_{3}\), the algorithm returns UNSAT without finding a global conflict._

## 5 Relation to Other Methods

### Simplex Algorithm

The simplex method [6, 18] is an algorithm for linear optimization over the reals and is able to solve _linear programs_. The _general simplex_ [7] is an adaptation for checking the satisfiability of systems of linear constraints. We illustrate its idea for the weak case.
Recall that, given a system \(A\mathbf{x}\leq\mathbf{b}\) with \(m\) rows, by the fundamental theorem of linear programming (Theorem 2), \(A\mathbf{x}\leq\mathbf{b}\) is satisfiable if and only if there exists some maximal subset \(\mathcal{N}\subseteq[m]\) such that \(A[\mathcal{N}]\) is linearly independent and \((A\mathbf{x}\leq\mathbf{b})\wedge(A[\mathcal{N}]\mathbf{x}=\mathbf{b}[\mathcal{N}])\) is satisfiable; the latter can be checked algorithmically using Gaussian elimination, resulting in a system where each variable is replaced by the bounds induced by the rows \(\mathcal{N}\). This system, along with the information which element in \(\mathcal{N}\) was used to eliminate which variable, is called a _tableau_. The idea of the simplex method is to do a local search on the set \(\mathcal{N}\) (called the _non-basis_): we replace some \(i\in\mathcal{N}\) (the _leaving variable_) by some \(i^{\prime}\in[m]\setminus\mathcal{N}\) (the _entering variable_), obtaining \(\mathcal{N}^{\prime}:=\mathcal{N}\cup\{i^{\prime}\}\setminus\{i\}\) such that \(A[\mathcal{N}^{\prime}]\) is still linearly independent. The key point is that the tableau representing \((A\mathbf{x}\leq\mathbf{b})\wedge(A[\mathcal{N}]\mathbf{x}=\mathbf{b}[\mathcal{N}])\) can be efficiently transformed into one representing \((A\mathbf{x}\leq\mathbf{b})\wedge(A[\mathcal{N}^{\prime}]\mathbf{x}=\mathbf{b}[\mathcal{N}^{\prime}])\) (the _pivot operation_), and progress of the local search can be achieved by the choice of \(i\) and \(i^{\prime}\). These local search steps are performed until either a satisfying solution or a conflict is found. Conflicts are detected using Farkas' Lemma (Theorem 1), i.e. a row in the tableau induces a trivially false constraint and is a positive linear combination of some input rows.

As suggested by Theorem 7, there is a strong correspondence between a tableau of the simplex algorithm and the intermediate systems constructed in FMplex. More precisely, if the non-basis of a simplex tableau is equal to the non-basis of a leaf system of Algorithm 1a, then the tableau is satisfiable if and only if the FMplex system is satisfiable. In fact, we could use the same data structure to represent the algorithmic states. Comparing the two algorithms structurally, FMplex explores the search space in a tree-like structure using backtracking, while simplex can jump between neighbouring leaves directly. The idea for Algorithm 1b, which excludes visiting the same non-basis twice, in fact arose from the analogies between the two methods. Further, we observe a potential advantage of FMplex: simplex has more non-bases reachable from a given initial state than the search tree of FMplex has leaves, as FMplex only needs to explore all lower or all upper bounds of a variable, while simplex does local improvements blindly. Heuristically, simplex cuts off large parts of its search space, and we expect that it often visits fewer non-bases than FMplex; however, as the pruning done by FMplex is by construction of the algorithm, we believe that there might be combinatorially hard instances on which it is more efficient than simplex.

### Virtual Substitution Method

_Virtual substitution_ [21, 27] admits quantifier elimination for real arithmetic formulas. Here, we consider its application to existentially quantified conjunctions of linear constraints. The underlying observation is that the satisfaction of a formula changes at the zeros of its constraints and is invariant between the zeros.
Thus, the idea is to collect all _symbolic zeros_ \(\mathrm{zeros}(\varphi)\) of all constraints in some input formula \(\varphi\). If all these constraints are weak, then a variable \(x_{j}\) is eliminated by plugging every zero and an arbitrarily small value \(-\infty\) into the formula, i.e. \(\exists x_{j}.\ \varphi\) is equivalent to \(\varphi[-\infty/x_{j}]\vee\bigvee_{\xi\in\mathrm{zeros}(\varphi)}\varphi[\xi/x_{j}]\). The formula \(\varphi[t/x_{j}]\) encodes the semantics of substituting the term \(t\) for \(x_{j}\) in the formula \(\varphi\) (which is a disjunction of conjunctions). As we can pull existential quantifiers into disjunctions, we can iteratively eliminate multiple variables by handling each case separately. The resulting algorithm for quantifier elimination is singly exponential; further optimizations ([26] even proposes to consider only lower or upper bounds for the test candidates) lead to a procedure very similar to the FMplex quantifier elimination: substituting a test candidate into the formula is equivalent to computing the restricted projection w.r.t. a variable bound. However, our presentation allows us to exploit the correspondence with the FM method.

Virtual substitution can also be adapted for SMT solving [4], resulting in a depth-first search similar to FMplex. A conflict-driven search for virtual substitution on conjunctions of weak linear constraints has been introduced in [16], which tracks intermediate constraints as linear combinations of the input constraints, similarly to FMplex. Their conflict analysis is a direct generalization of the global conflicts in FMplex and is thus slightly stronger than our notion of local conflicts. However, their method requires storing and maintaining a lemma database, while FMplex stores all the information for pruning the search tree locally. The approaches have strong similarities, although they originate from quite different methods. Further, our presentation shows the similarities to simplex, is easily adaptable for strict constraints, and is naturally extensible to work incrementally.

### Sample-Based Methods

There exist several depth-first search approaches, including McMillan et al. [23], Cotton [6] and Korovin et al. [17, 18], which maintain and adapt a concrete (partial) variable assignment. They share the advantage that combinations of constraints are only computed to guide the assignment away from an encountered conflict, thus avoiding many unnecessary combinations which FM would compute. Similar to FMplex, these methods perform a search with branching, backtracking and learning from conflicting choices. However, they branch on variable assignments, with infinitely many possible choices in each step. Interestingly, the bounds learned from encountered conflicts implicitly partition the search space into a finite number of regions to be tried, similar to what FMplex does explicitly. In fact, we deem it possible that [17] or [18] try and exclude assignments from exactly the same regions that FMplex would visit (even in the same order). However, the sample-based perspective offers different possibilities for heuristic improvements than FMplex: choosing the next assigned value vs. choosing the next lower bound; deriving constant variable bounds vs. structural exploits using Farkas' Lemma; the possibility of very quick solutions vs. more control and knowledge about the possible choices. Moreover, these methods offer no straightforward adaptation for quantifier elimination, while FMplex does.
However, [23] and [6] can handle not only conjunctions, but any quantifier-free LRA formula in conjunctive normal form.

## 6 Experimental Evaluation

We implemented several heuristic variants of the FMplex algorithm, as well as the generalized _simplex_ and the _FM_ methods, as non-incremental DPLL(T) theory backends in our SMT-RAT solver [5] and compared their performance in the context of satisfiability checking. Using the transformation given in [25] and case splitting as in [3], we extended the method to also handle strict and not-equal constraints. The base version of FMplex (Algorithm 1a) was tested with two different heuristics for the choice of the eliminated variable and for the order in which the branches are checked. These choices may strongly influence the size of the explored search tree; in the best case, the very first path leads to a satisfiable leaf or to a global conflict.

**Min-Fanout.** We greedily minimize the number of children: for any \(A\boldsymbol{x}\leq\boldsymbol{b}\) and \(I\), we choose \(V\in\text{branch\_choices}(A\boldsymbol{x}\leq\boldsymbol{b},I)\) such that \(|V|\) is minimal; in case this minimum is 1, we prefer choices \(V=\{(x_{j},\bot)\}\) for a \(j\in[n]\) over the other choices. We prefer rows with a lower (earlier) backtrack level, motivated by finding a global conflict through trying positive linear combinations first. Moreover, if backtracking is used, then we expect this heuristic to allow for backtracking further back on average.

**Min-Column-Length.** A state-of-the-art heuristic for simplex in the context of SMT solving is the _minimum column length_ [15]: we choose the variables for leaving and entering the non-basis such that the number of necessary row operations is minimized. We mimic this heuristic in FMplex as follows: we prefer choices \(\{(x_{j},\bot)\}\) and, if there is no such \(j\), we take the \(j\in[n]\) with minimal \(|I_{j}^{-}(A)|+|I_{j}^{+}(A)|\) and take the smaller choice between \(I_{j}^{-}(A)\) and \(I_{j}^{+}(A)\). We first choose the rows which have the fewest non-zero coefficients (i.e. contain the fewest variables) to prefer sparse sub-problems. This can be understood as _Min-Row-Length_.
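As an illustration, the core of the Min-Fanout choice can be sketched as follows (our own rendering; the backtrack-level tie-breaking described above is omitted, and the FM fallback \((x_{j},\bot)\) is encoded with None):

```python
def min_fanout(branch_choices):
    """Pick a branch choice of minimal size; singleton FM-fallback choices
    (x_j, bot) are preferred, since they eliminate x_j without splitting."""
    fallbacks = [V for V in branch_choices
                 if len(V) == 1 and next(iter(V))[1] is None]
    if fallbacks:
        return fallbacks[0]
    return min(branch_choices, key=len)
```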
We consider the following solver variants: FMplex-a-MFO and FMplex-a-MCL implement Algorithm 1a with the Min-Fanout and the Min-Column-Length heuristic, respectively. FMplex-a-Rand-1/2 denotes two variants of Algorithm 1a where all choices are taken pseudo-randomly, with different seeds. FMplex-b-MFO implements Algorithm 1b and FMplex-c-MFO implements Algorithm 1c, both using the Min-Fanout heuristic. Our approach is also compared to non-incremental implementations FM and Simplex. The FMplex variants and FM always first employ Gaussian elimination to handle equalities.

All solvers were tested on the SMT-LIB [2] benchmark set for QF_LRA containing 1753 formulas. As all evaluated solvers are non-incremental, we also generated conjunctions of constraints by solving each of these QF_LRA problems using a DPLL(T) SMT solver with an FMplex-c-MFO theory solver backend, and extracting all conjunctions passed to it. If the solver terminated within the time and memory limits, we sampled 10 satisfiable and 10 unsatisfiable conjunctions (or gathered all produced conjunctions if there were fewer than 10). This amounted to 3084 (777 sat, 2307 unsat) additional benchmarks. The experiments were conducted on identical machines with two Intel Xeon Platinum 8160 CPUs (2.1 GHz, 24 cores). For each formula, the time and memory were limited to 10 minutes and 5 GB.

\begin{table}
\begin{tabular}{l r r r r r|r r r r r}
 & \multicolumn{5}{c|}{SMT-LIB} & \multicolumn{5}{c}{Conjunctions} \\
 & solved & sat & unsat & TO & MO & solved & sat & unsat & TO & MO \\
\hline
Simplex & 958 & 527 & 431 & 714 & 81 & 3084 & 777 & 2307 & 0 & 0 \\
FM & 860 & 461 & 399 & 577 & 316 & 2934 & 747 & 2187 & 107 & 43 \\
FMplex-a-MFO & 814 & 432 & 382 & 840 & 99 & 2962 & 743 & 2219 & 122 & 0 \\
FMplex-a-MCL & 820 & 435 & 385 & 830 & 103 & 2965 & 742 & 2223 & 119 & 0 \\
FMplex-a-Rand-1 & 742 & 383 & 359 & 906 & 105 & 2806 & 668 & 2138 & 278 & 0 \\
FMplex-a-Rand-2 & 743 & 383 & 360 & 905 & 105 & 2823 & 671 & 2152 & 261 & 0 \\
FMplex-b-MFO & 822 & 434 & 388 & 830 & 101 & 2988 & 744 & 2244 & 96 & 0 \\
FMplex-c-MFO & 920 & 499 & 421 & 733 & 100 & 3084 & 777 & 2307 & 0 & 0 \\
Virtual-Best & 982 & 532 & 450 & 651 & 120 & 3084 & 777 & 2307 & 0 & 0 \\
\end{tabular}
\end{table}
Table 1: Number of solved instances, timeouts (TO) and memory-outs (MO).

Figure 3: Scatter plots: Each dot represents a single instance. In (a) and (b), instances at the very top or right exceeded the resource limit. Such instances are not considered in (c) and (d).

The results in Table 1 show that Simplex solved the most SMT-LIB instances, followed by our FMplex-c-MFO and then FM. Interestingly, FM solves fewer conjunctive instances than the base version of FMplex due to higher memory consumption (43 memory-outs for FM, while the others have none). We see that a reasonable variable heuristic makes a difference, as FMplex-a-Rand-* perform much worse than FMplex-a-MFO and FMplex-a-MCL; between the latter two, however, there is no significant difference. While our first optimization used in FMplex-b-MFO has no big impact, the backtracking implemented in FMplex-c-MFO allows for solving more instances within the given resource limits.

The running times for each individual SMT-LIB instance, depicted in Figures 3(a) and 3(b), reveal that FM and FMplex-c-MFO often behave similarly, but FM fails on a number of larger instances. We suspect that the smaller intermediate systems of FMplex are a main factor here. While Simplex is often faster than FMplex-c-MFO and solves 61 SMT-LIB instances not solved by FMplex-c-MFO, it fails to solve 23 instances on which FMplex-c-MFO succeeds (of these instances, FM solves 3 and 14, respectively). Accordingly, the Virtual-Best of the tested solvers performs significantly better than Simplex alone, indicating potential for a combination of Simplex and FMplex-c-MFO.

Figure 3(c) compares the number of constraints generated by FM and FMplex-c-MFO on the conjunctive inputs. Especially on larger instances, FMplex seems to be at an advantage. Motivated by Section 4.1, Figure 3(d) compares the number of Simplex pivots to the number of systems in FMplex-c-MFO. We see that neither is consistently lower than the other, though Simplex seems to be slightly superior. Due to the log-log scale, 1305 instances in which either measurement is 0 are not shown (920 instances for Simplex, 981 for FMplex-c-MFO). The implementation and collected data are available at [https://doi.org/10.5281/zenodo.7755862](https://doi.org/10.5281/zenodo.7755862).
## 7 Conclusion

We introduced _FMplex_, a novel method for quantifier elimination and satisfiability checking for conjunctions of linear real arithmetic constraints. Structural observations based on Farkas' Lemma and the fundamental theorem of linear programming allowed us to prune the elimination or the search tree. Although the new method is rooted in the FM method, it has strong similarities with both the virtual substitution method and the simplex method. The experimental results in the context of SMT solving show that FMplex is faster than Fourier-Motzkin and, although simplex is able to solve more instances than FMplex, there is a good number of instances which FMplex solves but simplex does not. In future work, we aim to combine the structural savings of FMplex with the efficient heuristics of simplex, i.e. to transfer ideas from FMplex to simplex and vice versa. Furthermore, we will investigate further tweaks and heuristics. For instance, we plan to adapt the perfect elimination ordering from [19] and to work on an incremental adaptation for SMT solving. Last but not least, we plan to increase the applicability of FMplex as a quantifier elimination procedure, including a handling of strict inequalities that is more similar to FM.
2309.07376
VCD: A Video Conferencing Dataset for Video Compression
Commonly used datasets for evaluating video codecs are all very high quality and not representative of video typically used in video conferencing scenarios. We present the Video Conferencing Dataset (VCD) for evaluating video codecs for real-time communication, the first such dataset focused on video conferencing. VCD includes a wide variety of camera qualities and spatial and temporal information. It includes both desktop and mobile scenarios and two types of video background processing. We report the compression efficiency of H.264, H.265, H.266, and AV1 in low-delay settings on VCD and compare it with the non-video conferencing datasets UVG, MCL-JCV, and HEVC. The results show the source quality and the scenarios have a significant effect on the compression efficiency of all the codecs. VCD enables the evaluation and tuning of codecs for this important scenario. The VCD is publicly available as an open-source dataset at https://github.com/microsoft/VCD.
Babak Naderi, Ross Cutler, Nabakumar Singh Khongbantabam, Yasaman Hosseinkashi, Henrik Turbell, Albert Sadovnikov, Quan Zhou
2023-09-14T01:39:40Z
http://arxiv.org/abs/2309.07376v2
# VCD: A Video Conferencing Dataset for Video Compression

###### Abstract

Commonly used datasets for evaluating video codecs are all very high quality and not representative of video typically used in video conferencing scenarios. We present the Video Conferencing Dataset (VCD) for evaluating video codecs for real-time communication, the first such dataset focused on video conferencing. VCD includes a wide variety of camera qualities and spatial and temporal information. It includes both desktop and mobile scenarios and two types of video background processing. We report the compression efficiency of H.264, H.265, H.266, and AV1 in low-delay settings on VCD and compare it with the non-video conferencing datasets UVG, MCL-JCV, and HEVC. The results show the source quality and the scenarios have a significant effect on the compression efficiency of all the codecs. VCD enables the evaluation and tuning of codecs for this important scenario. The VCD is publicly available as an open-source dataset at [https://github.com/microsoft/VCD](https://github.com/microsoft/VCD).

Babak Naderi, Ross Cutler, Nabakumar Singh Khongbantabam, Yasaman Hosseinkashi, Henrik Turbell, Albert Sadovnikov, Quan Zhou. Microsoft Corporation, Redmond, USA

**Index Terms**: Video Dataset, Video Quality, Video Compression, Low-delay, Real-Time Communication

## 1 Introduction

Video conferencing has become an essential means of communication, especially with the shift to remote work and learning over the past few years. High-quality video compression is critical for providing a smooth and effective video conferencing experience. Most video codec evaluation relies on benchmark datasets capturing broadcast or cinematic content, which represents video with high spatial resolution, temporal resolution, and visual complexity [1]. However, video conferencing presents a very different use case from entertainment video. Video conferencing content typically comes from webcams with a range of qualities below that of studio cameras, and it has lower spatial resolution, lower temporal resolution, and lower visual complexity than traditional video benchmarks. The content in video conferencing is typically a talking person or people in a conference room, usually captured from stationary cameras but sometimes from a mobile device.

Despite the key differences between video conferencing and entertainment video, current benchmarking datasets remain focused on the latter. Datasets such as HEVC Class A-E [1], UVG [2], and MCL-JCV [3] feature 4K and HD video sequences with high spatial and temporal complexity, which is useful for assessing codec performance in entertainment applications. However, they do not capture the specific characteristics of real-world video conferencing streams. It is crucial to test video codecs with video conferencing workloads to ensure they are not optimized solely for cinematic content.

We present the Video Conferencing Dataset (VCD), the first publicly accessible video codec benchmark for video conferencing applications. VCD comprises diverse video sequences originating from mobile and desktop conferencing scenarios, showcasing various spatial and temporal attributes. It encompasses content with and without visual background replacement, a prevalent feature in modern video conferencing setups. To establish the utility of VCD, we ran rigorous statistical tests that involved three mainstream datasets and four codecs (H.264/AVC [4], H.265/HEVC [5], H.266/VVC [6], AV1 [7]).
Statistical models reveal interaction effects between dataset choice and video codec performance, emphasizing that conclusions drawn from mainstream datasets may not apply to video conferencing scenarios.

In Section 2, we discuss related work; in Section 3, we describe the VCD design; in Section 4, we provide analysis on VCD; and in Section 5, we provide conclusions.

## 2 Related Work

The Joint Video Experts Team (JVET) developed the official test sequences to evaluate H.265/HEVC [5] encoder proposals [1]. It consists of 15 video clips in four classes B-E: Class B is five 1080p natural video sequences, Class C is four WVGA natural sequences, Class D is four WQVGA natural video sequences, and Class E is three 720p high-quality video sequences of people talking, plus 4 videos of screen content of various resolutions. JVET extended the HEVC test set to support H.266/VVC [6] by adding an additional 10 video clips in classes A1, A2, and F [8]. Classes A1 and A2 each contain four 4K natural video sequences. Class F is two natural sequences and two videos of screen content.

The Media Communications Lab JND-based Coded Video (MCL-JCV) dataset [3] consists of 30 1080p video sequences. The sequences include 3 genres (cartoon, sports, indoor), 3 semantic classes (people, water, salience), and 3 feature classes (fast motion, camera motion, dark scene). No analysis of spatial or temporal information was used in the design of MCL-JCV, and no description of the camera source or analysis of camera quality was provided.

The Ultra Video Group (UVG) dataset [2] is a set of 16 4K natural sequences captured at 50 or 120 FPS. It includes spatial and temporal information analysis, which shows the 16 sequences cover a wide range of spatial and temporal information. The dataset was captured with a studio-quality camera. The Tencent Video Dataset (TVD) [9] is a set of 86 4K natural sequences captured with studio-quality cameras. A variety of scenes with static or moving objects are included, but no analysis of spatial or temporal information is done. The Large-Scale Screen Content Dataset [10] provides 714 sequences of screen content, which complements VCD.

## 3 Dataset

VCD consists of 160 talking-head video sequences using mutually exclusive subjects and environments. It is organized in four scenarios with 40 sequences each. The first scenario is Talking Head (TH) videos and includes sequences as they were recorded by each participant's webcam without further processing (see Figure 1). Scenario two is Talking Head with Opaque Background filter (TH-OB), where the Microsoft Teams background filter pipeline is applied and the participants' backgrounds are replaced by two popular background images used in Microsoft Teams video calls. Scenario three is similarly processed by Teams' background segmentation pipeline, but with a blurred background filter (TH-BB). The last scenario consists of handheld mobile recordings (TH-M) and includes videos recorded both indoors and outdoors. At least 30% of the sequences in each scenario contain active speakers. The distribution of temporal and spatial information (TI and SI) [11]1 of the video sequences for each scenario is presented in Figure 2. In addition, Figure 3 shows the distribution of subjective quality ratings for all sequences in the dataset.

Footnote 1: Values are calculated using [https://github.com/VQEG/siti-tools](https://github.com/VQEG/siti-tools)

The dataset is released in YUV420p pixel format as 1080p 30 FPS 10-second clips.
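SI and TI follow ITU-T Rec. P.910: SI is the maximum over all frames of the spatial standard deviation of the Sobel-filtered luma plane, and TI is the maximum spatial standard deviation of successive frame differences. A minimal sketch of this computation (our own illustration; the released values were computed with the siti-tools package referenced in the footnote):

```python
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """Spatial information (SI) and temporal information (TI) per ITU-T P.910.
    frames: iterable of 2-D luma planes (e.g. the Y channel of a YUV420p clip);
    at least two frames are assumed for the TI computation."""
    frames = [f.astype(np.float64) for f in frames]
    si = max(float(np.hypot(ndimage.sobel(f, axis=0),
                            ndimage.sobel(f, axis=1)).std()) for f in frames)
    ti = max(float((cur - prev).std()) for prev, cur in zip(frames, frames[1:]))
    return si, ti
```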
Besides the sequences, we open-source the subjective Mean Opinion Score (MOS), spatial information (SI), and temporal information (TI) of each sequence.

### Recordings and selection procedures

In our data collection process, video sequences were obtained through a crowd-sourcing platform. Participants were instructed to perform specific behaviors in front of their cameras as they were recorded, in order to enact common video conferencing scenarios. We captured considerably more videos than are present in VCD. This surplus enabled us to curate the videos to approximately match the desired quality and complexity distribution within each scenario. We only used video sequences with a resolution of 1080p or higher, with a duration of at least 10 seconds, and with a person visible for the entire video duration.

To produce well-balanced coverage across diverse video qualities and complexities for this specific domain, we performed the following curation procedure for each scenario: Initially, we evaluated the subjective video quality of the captured video clips following the ITU-T Rec. P.910 ACR test [11] and its crowdsourcing implementation [12], and computed the average SI and TI of each sequence. Subsequently, we categorized the videos into four quality brackets ([1,2.5], [2.5,3.5], [3.5,4.5], [4.5,5)) and four SI/TI regions given the distribution of the entire set. Finally, we selectively pruned the number of videos in each bracket to achieve an approximate distribution ratio of 20%, 30%, 30%, and 20% across the quality brackets and 25% for each SI/TI region. During selection, the suitability of video sequences for subjective quality assessment (e.g., presence of edges, landmarks, and details) and the difficulty of a sequence due to factors like moving objects, lighting conditions, and contrast were considered. Lastly, video clips in TH-OB and TH-BB were processed with the corresponding video pipelines, and the video quality of the entire video set was evaluated in a separate subjective test. Figure 3 shows the resulting distribution of video quality of the sequences within each scenario in the final set. As shown in Figure 2(b)-2(c), background processing led to a different distribution of SI/TI than non-background processing.

Figure 1: Thumbnail images of 40 sequences belonging to the Talking Head (TH) scenario without background effects.

## 4 Analysis

In this section, we present an analysis of the effects of video quality and dataset composition on comparisons of the video encoding performance of different codecs. We measured the performance of the codecs using the Bjontegaard-delta rate (BD-Rate) metric [13], which represents the percentage of saved bitrate compared to a baseline codec when delivering the same video quality [14]. We report the average of the BD-Rates of the individual videos in each segment together with the 95% confidence interval.

We encoded the videos with four codecs, namely H.264 (baseline codec for calculating BD-Rate), H.265, H.266, and AV1. **H.264 and H.265**: We encoded the videos using the Intel Quick Sync Video [15] hardware encoder at five different quantization parameters, ranging from 20 to 44. We used a Surface laptop and applied a low-delay configuration (i.e., fast preset, no look-ahead, large GOP, only one I-frame). For **H.266**, we used VVenC [16], which provides a fast implementation of H.266. We encoded videos using the low-delay configuration and fast preset, with no look-ahead and a large GOP size, with five quantization parameters from 22 to 42. Similarly, for **AV1**, we used seven quantization parameters from 22 to 52.
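For reference, a common realization of the BD-Rate [13] fits a cubic polynomial to log-bitrate as a function of quality (PSNR or VMAF) and integrates the difference over the overlapping quality range; the sketch below is our own illustration and assumes at least four monotonically ordered rate-quality points per codec:

```python
import numpy as np

def bd_rate(rates_ref, quality_ref, rates_test, quality_test):
    """Bjontegaard-delta rate: average bitrate change (%) of the test codec
    relative to the reference at equal quality; negative means savings."""
    p_ref = np.polyfit(quality_ref, np.log(rates_ref), 3)   # quality -> log(rate)
    p_test = np.polyfit(quality_test, np.log(rates_test), 3)
    lo = max(min(quality_ref), min(quality_test))           # overlapping interval
    hi = min(max(quality_ref), max(quality_test))
    int_ref = np.polyval(np.polyint(p_ref), [lo, hi])
    int_test = np.polyval(np.polyint(p_test), [lo, hi])
    avg_diff = ((int_test[1] - int_test[0]) - (int_ref[1] - int_ref[0])) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100
```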
### Sequence quality

To illustrate the effect of video quality on codec performance, we divided VCD into three quality categories, namely high quality (\(MOS\geq 3.8\)), medium quality (\(2.7\leq MOS<3.8\)), and low quality (\(MOS<2.7\)), and evaluated the BD-Rates of H.265, H.266, and AV1 for each category. Table 1 presents the average BD-Rates achieved by the codecs relative to H.264 on each video quality category. The result of a two-way mixed ANOVA shows a significant main effect of the encoder (\(F(2,314)=1436\), \(p<0.001\), \(\eta^{2}=0.9\)) and a significant main effect of source sequence quality (\(F(2,157)=9.17\), \(p<0.001\), \(\eta^{2}=0.11\)) on the achieved BD-Rate over VMAF. A post hoc test using the Holm-Bonferroni adjustment reveals that the codecs' compression efficiency when applied to low- and medium-quality sequences is significantly lower than their performance on high-quality source sequences (\(p=0.011\) and \(p<0.001\), respectively). Similar results were observed when using PSNR instead of VMAF.
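As a concrete illustration of this procedure, the snippet below runs a two-way mixed ANOVA of the kind described above, with codec as the within-subject factor and quality category as the between-subject factor, followed by Holm-corrected post hoc tests. The pingouin library and the DataFrame layout (hypothetical columns 'sequence', 'codec', 'quality', 'bd_rate') are our assumptions for illustration; the paper does not state which statistics software was used.

```python
# Illustrative mixed ANOVA on per-sequence BD-Rates; the input file and
# column names are hypothetical placeholders.
import pandas as pd
import pingouin as pg

df = pd.read_csv("bd_rates.csv")  # long format: one row per sequence x codec

aov = pg.mixed_anova(data=df, dv="bd_rate", within="codec",
                     between="quality", subject="sequence")
print(aov[["Source", "F", "p-unc", "np2"]])  # main and interaction effects

posthoc = pg.pairwise_tests(data=df, dv="bd_rate", within="codec",
                            between="quality", subject="sequence",
                            padjust="holm")  # Holm-Bonferroni adjustment
print(posthoc)
```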
### Scenarios

Table 2 reports the compression efficiency of the codecs on the VCD dataset when segmented into the different scenarios (see Section 3). A two-way mixed ANOVA reveals that there is a significant interaction effect between the encoder and the dataset scenario on compression efficiency in terms of VMAF BD-Rate (\(F(6,312)=23.02\), \(p<0.001\), \(\eta^{2}=0.31\)). A similar interaction effect is also observed for PSNR BD-Rate. The presence of these interaction effects in the model indicates that the relative performance of encoders on a given scenario may not replicate for other scenarios. Results of the post hoc test show that the compression efficiency in the scenario with background replacement (TH-OB) is significantly different from the other scenarios when measured over VMAF (\(p<0.018\) for all combinations after Holm-Bonferroni correction). In addition, H.265's and H.266's efficiency are significantly different between the mobile (TH-M) and desktop (TH) scenarios. H.265 achieved much higher compression on mobile sequences than on desktop sequences (Hedges's \(g=0.827\), \(p=0.004\)). In contrast, H.266 performed considerably better on the desktop scenario (TH) than on the mobile sequences (Hedges's \(g=-0.856\), \(p=0.003\)). A post hoc test also shows a significantly different BD-Rate calculated over PSNR in both scenarios where backgrounds are processed (TH-OB and TH-BB).

### Comparison with other datasets

Table 3 reports the compression efficiency of the codecs on the entirety of the VCD, MCL-JCV, HEVC, and UVG datasets. One-way independent ANOVA models (one per codec), with dataset as the independent factor and BD-Rate as the dependent variable, show statistically significant differences between these datasets in terms of the codec performance measurement (\(\eta_{H.266}^{2}=0.04\), \(\eta_{H.265}^{2}=0.053\), \(\eta_{AV1}^{2}=0.044\)). This difference indicates the challenge of predicting the performance of codecs on a specific dataset when they are optimized on general datasets. For further insights, we used a two-way mixed ANOVA model with both codec and dataset as independent variables. This model shows a significant interaction effect between the codec and the dataset (\(F(6,422)=2.773\), \(p=0.012\), \(\eta^{2}=0.04\)). The interaction effect indicates the potential for different patterns of BD-Rate by codec for each dataset. For example, on the HEVC dataset, the order of H.265 and AV1 is the reverse of that of the same codecs on the VCD dataset. Finally, Figures 3(a)-3(b) show the rate-distortion curves of the video codecs on the VCD dataset for PSNR and VMAF. Figures 3(c)-3(d) illustrate the performance of H.266 on all datasets, which shows a large difference in performance between VCD and the general datasets.
2309.13303
C$^2$VAE: Gaussian Copula-based VAE Differing Disentangled from Coupled Representations with Contrastive Posterior
We present a self-supervised variational autoencoder (VAE) to jointly learn disentangled and dependent hidden factors and then enhance disentangled representation learning by a self-supervised classifier to eliminate coupled representations in a contrastive manner. To this end, a Contrastive Copula VAE (C$^2$VAE) is introduced without relying on prior knowledge about data in the probabilistic principle and involving strong modeling assumptions on the posterior in the neural architecture. C$^2$VAE simultaneously factorizes the posterior (evidence lower bound, ELBO) with total correlation (TC)-driven decomposition for learning factorized disentangled representations and extracts the dependencies between hidden features by a neural Gaussian copula for copula coupled representations. Then, a self-supervised contrastive classifier differentiates the disentangled representations from the coupled representations, where a contrastive loss regularizes this contrastive classification together with the TC loss for eliminating entangled factors and strengthening disentangled representations. C$^2$VAE demonstrates a strong effect in enhancing disentangled representation learning. C$^2$VAE further contributes to improved optimization addressing the TC-based VAE instability and the trade-off between reconstruction and representation.
Zhangkai Wu, Longbing Cao
2023-09-23T08:33:48Z
http://arxiv.org/abs/2309.13303v1
C\({}^{2}\)VAE: Gaussian Copula-based VAE Differing Disentangled from Coupled Representations with Contrastive Posterior

###### Abstract

We present a self-supervised variational autoencoder (VAE) to jointly learn disentangled and dependent hidden factors and then enhance disentangled representation learning by a self-supervised classifier to eliminate coupled representations in a contrastive manner. To this end, a Contrastive Copula VAE (C\({}^{2}\)VAE) is introduced without relying on prior knowledge about data in the probabilistic principle and involving strong modeling assumptions on the posterior in the neural architecture. C\({}^{2}\)VAE simultaneously factorizes the posterior (evidence lower bound, ELBO) with total correlation (TC)-driven decomposition for learning factorized disentangled representations and extracts the dependencies between hidden features by a neural Gaussian copula for copula coupled representations. Then, a self-supervised contrastive classifier differentiates the disentangled representations from the coupled representations, where a contrastive loss regularizes this contrastive classification together with the TC loss for eliminating entangled factors and strengthening disentangled representations. C\({}^{2}\)VAE demonstrates a strong effect in enhancing disentangled representation learning. C\({}^{2}\)VAE further contributes to improved optimization addressing the TC-based VAE instability and the trade-off between reconstruction and representation.

## 1 Introduction

In recent years, integrating stochastic variational inference into deep neural networks (DNNs) has formed a new paradigm: deep variational learning (DVL). DVL jointly characterizes dependencies between hidden neural features and between their distributions, going beyond deep neural principles and synergizing with analytical statistical principles. Variational autoencoders (VAEs) represent a typical milestone for DVL, transforming point-based autoencoders into process-oriented VAE learning. Various VAEs have been proposed in recent years to robustly fit the likelihoods of diverse data, such as tabular data [14; 48; 31; 3; 2], images [36; 39; 38; 45], and sequences [22; 16]. By estimating the likelihood over all data points, a VAE learns a smooth representation space under certain manifold hypotheses. It characterizes variational low-dimensional distributions corresponding to the input feature space and produces analytical results leveraging deep features and relations learned by DNNs. Consequently, VAEs further enhance representation learning for more challenging learning tasks such as out-of-domain detection [12; 27], time series anomaly detection [29; 11], multi-task learning [37], domain adaptation [21; 40], and continual learning [15; 49]. However, a significant gap remains in VAEs, i.e., exploring the distribution dependency between hidden features of DNNs, which has been shown to be beneficial for leveraging stochastic factor interactions and downstream tasks [41; 47].

On the other hand, to enable more explainable variational reconstruction, a recent interest and challenge in VAE studies is to enable unsupervised disentangled learning. Disentangled learning has been widely explored in supervised representation learning and classification [7] to learn single hidden units that are sensitive to changes of a single generative factor but invariant to the variances of other factors. However, unsupervised disentangled learning in VAEs is more challenging.
A common approach involves the total correlation (TC) to remedy the insufficiently expressive posterior in the surrogate loss of vanilla VAEs. TC is a variant of mutual information that quantifies the redundancy in multivariate dimensions [18]. For VAEs, TC is incorporated into their evidence lower bounds (ELBO) to induce factorized variational distributions, with a loss \(TC(\mathbf{Z})\) capturing the divergence between the estimated posterior \(q_{\theta}(\mathbf{Z})\) and the prior \(p(\mathbf{Z})\) over hidden features \(\mathbf{Z}\):

\[\begin{split} TC(\mathbf{Z})&=TC\left(z_{1},z_{2},\ldots,z_{d}\right)\\ &=\mathbb{E}_{q_{\theta}(z_{1},z_{2},\ldots,z_{d})}\left[\log\frac{q_{\theta}\left(z_{1},z_{2},\ldots,z_{d}\right)}{p\left(z_{1}\right)p\left(z_{2}\right)\ldots p\left(z_{d}\right)}\right]\\ &=KL(q_{\theta}(\mathbf{Z})||p(\mathbf{Z})).\end{split} \tag{1}\]

However, factorizing the prior, i.e., \(p(\mathbf{Z}):=\prod_{j=1}^{d}p\left(z_{j}\right)\), involves a strong IID assumption between the hidden features \(\{\mathbf{z}_{j}\}\). Further, enforcing TC does not guarantee that the posterior distribution captures dependent structures, no matter what the estimator is, whether mutual information estimators [25; 13; 17; 37; 6] or density ratio tricks [23; 50]. This is because the dependencies between hidden features may vary, where some are coupled more strongly than others, resulting in more explanatory (which we call explicit) vs. less explanatory (implicit) hidden features. For example, high cholesterol may be more affiliated with dietary habits and exercise than with age and gender. While the TC-based factorization ensures the independence between features, more explanatory (explicit) features may still be coupled with other less explanatory (implicit) ones in the hidden feature space. Hence, the TC factorization only guarantees the independence between those disentangled explicit features but ignores the dependencies in the entire hidden space. This forms another important gap in VAEs.

This work addresses both aforementioned gaps in modeling distribution dependency in the hidden neural space and further differentiates strongly coupled hidden features from weakly coupled features to improve unsupervised disentangled representations. To this end, we build a contrastive copula variational autoencoder (C\({}^{2}\)VAE). First, as copula functions have been demonstrated to be powerful in learning high-dimensional dependence [32], a neural Gaussian copula function learns the dependence between hidden features and identifies coupled representations. Then, a self-supervised contrastive classification mechanism contrasts the disentangled, factorized representations with these coupled representations sampled from the neural Gaussian copula function. Further, C\({}^{2}\)VAE filters those strongly dependent hidden features captured by the copula function and induces an optimal posterior distribution characterizing more factorizable hidden features for improved disentangled representations. The main contributions include:

* We disclose the existence of different degrees of dependencies between hidden features in the deep feature space, where some features are more strongly coupled than others. A neural copula function is incorporated into VAE to learn high-dimensional feature dependencies and differentiate strongly vs weakly coupled representations over features.
* We enhance disentangled representations in TC-based factorization by contrasting the weakly with strongly coupled representations.
A contrastive loss is incorporated into VAE, which differentiates strongly vs weakly dependent features and encourages more disentangled representations, thus filtering out the more dependent features with coupled representations.
* Our work thus learns a more expressive posterior with more explanatory features, where we extract more independent features for disentanglement but filter out the more coupled representations. C\({}^{2}\)VAE thus improves disentangled representations, the instability of TC-based VAEs, and the trade-off between reconstruction and representation.

We evaluate C\({}^{2}\)VAE on four synthetic and natural image datasets: two grayscale (dSprites, SmallNORB) and two colored (3D Shapes, 3D Cars). The results demonstrate the effect of the C\({}^{2}\)VAE design and its mechanisms in outperforming existing TC-based models in terms of four intervention-, prediction-, and information-based disentanglement performance measures.

## 2 Related Work

Here, we discuss three sets of work related to ours: factorized posterior estimation for disentangled representation, contrastive VAEs, and copulas for deep variational learning.

### Factorized Posterior Estimation for Disentangled Representation

Unsupervised disentangled learning in VAEs aims to learn hierarchical distribution dependencies between hidden features toward inducing hidden units independently discriminative to generative factor variance, thus capturing those explanatory features in the hidden space [7]. This requires meeting a factorizable and diagonal assumption on estimating posterior distributions in VAEs [8; 24] to generate decoupled features by stochastic variational inference. To eliminate the entanglement between hidden features, the TC and dual total correlation (DTC) are incorporated into the evidence lower bound (ELBO) under the factorization assumption. Specifically, penalizing the TC and DTC terms regularizes the posterior estimation toward discarding dependent feature pairs or clusters, respectively. Accordingly, recent research focuses on accurately estimating these TC terms. For example, \(\beta\)-TCVAE [13] derives a decomposed ELBO by Monte Carlo (MC) estimation iteratively over samples. HFVAE [17] constructs an MC-based estimator by partially stratified sampling. These methods suffer from the MC-based scalability issue and inductive bias (such as relating to batch size). Further, FactorVAE [23] involves an adversarial mechanism to train a density-ratio-based ELBO. GCAE [50] captures dependencies in feature groups by specifying discriminators on specific DTC terms. In contrast, C\({}^{2}\)VAE involves a new attempt for disentangled learning to differentiate disentangled from coupled features and representations.

### Contrastive VAEs

Contrastive learning enables self-supervision. One typical example is to contrast similar with dissimilar data points by a triplet loss to encode and discriminate semantic features in a hypothesis space for representation learning [19]. Another recent topic is to train conditional generative models in a contrastive manner to exploit the correlations between data samples, which can be of various types. cVAE [1] learns a foreground reconstruction by eliminating the background information among dependent feature pairs. C-VAE [14] learns a latent variable indicator by a minority/majority loss to address class imbalance in downstream tasks. ContrastVAE [43; 46] aggregates the posterior from two different views of comments for sequential recommendation.
NCP-VAE [4] trains an optimal prior for sampling with a contrastive loss in an adversarial way. These studies focus on reconstruction for specific learning tasks, and limited work contributes to inference accuracy in VAEs. C\({}^{2}\)VAE makes the first attempt to learn and differentiate strongly vs weakly coupled features for contrastive representation disentanglement.

### Copula in Deep Variational Networks

Copula functions are introduced into DVL neural networks, including VAEs and variational LSTMs (VLSTMs), where the copula learns the dependencies between hidden features. Copula-based VAEs and VLSTMs integrate copula dependence modeling into variational inference to improve autoencoders and LSTMs. CopulaVAE [41] replaces the collapsible ELBO with a Gaussian copula-based posterior to avoid KL vanishing in language modeling. Copula-based VLSTM [47] learns dependence degrees and structures between hidden features for leveraging LSTMs for sequential forecasting. [42] adopts a Gaussian copula to model the correlations between discrete latent variables for conditional generation from a Bernoulli posterior. [34; 44] integrate a copula function into LSTM to model dependence structures for forecasting. Instead, C\({}^{2}\)VAE integrates copula representations into contrastive classification to downplay those coupled features.

## 3 The C\({}^{2}\)VAE Model

We introduce factorized posterior estimation, copula-coupled representation learning, and contrastive disentangled learning. These form the key constituents of our C\({}^{2}\)VAE. As shown in Figure 1, the encoder output in C\({}^{2}\)VAE is converted to two sets of representations: (1) the neural factorized posterior distribution \(q_{\phi}(z|x)\) as a multivariate Gaussian with a diagonal covariance structure; and (2) a copula coupled representation produced by a new encoder branch, the covariance encoder, which shares the same framework as the posterior encoder. This auxiliary encoder, parameterized by \(\phi_{c}\), captures the dependence between hidden variables by learning the neural copula function \(c\). The copula learns the dependence coefficient matrix \(\Sigma\). These two sets of representations share the dimension of hidden variables and learn their respective representations parameterized by the mean \(\mu_{c}\) and the coefficient matrix \(\Sigma\), respectively.

### Factorized Posterior Estimation for Disentangled Representations

VAE [24] is a generative model with the generative process \(p(\mathbf{x})=\int p(\mathbf{x}|\mathbf{z})p(\mathbf{z})\mathrm{d}\mathbf{z}\) over data \(\mathbf{x}\) and hidden features \(\mathbf{z}\) learned in a deep manner. By sampling from the prior \(p(\mathbf{z})\) of hidden features, the generative distribution \(p(\mathbf{z}|\mathbf{x})\) can be approximated by a variational distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\). Further, to incorporate this generative learning into the autoencoder framework, a surrogate loss is derived below, which couples the reconstruction \(p_{\theta}(\mathbf{x}|\mathbf{z})\) of a decoder parameterized by \(\theta\) to the inference \(q_{\phi}(\mathbf{z}|\mathbf{x})\) of an encoder parameterized by \(\phi\).
The VAE learning process can be denoted as:

\[\begin{split}\log p(\mathbf{x})&\geq\mathbb{E}_{q(\mathbf{z})}\log\frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z})}\\ &\geq\mathbb{E}_{q(\mathbf{z})}\log p(\mathbf{x}\mid\mathbf{z})+\mathbb{E}_{q(\mathbf{z})}\log\frac{p(\mathbf{z})}{q(\mathbf{z})}\\ &\geq\mathbb{E}_{q(\mathbf{z})}\log p(\mathbf{x}\mid\mathbf{z})-D_{KL}(q(\mathbf{z})\|p(\mathbf{z})).\end{split} \tag{2}\]

When trained by a stochastic gradient variational Bayes (SGVB) estimator, VAE optimizes:

\[\mathcal{L}_{ELBO}\equiv\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\Big[\log p_{\theta}(\mathbf{x}\mid\mathbf{z})\Big]-D_{KL}\Big(q_{\phi}(\mathbf{z}\mid\mathbf{x})\|p(\mathbf{z})\Big). \tag{3}\]

VAE reconstructs samples by optimizing the likelihood function \(\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\Big[\log p_{\theta}(\mathbf{x}\mid\mathbf{z})\Big]\) and learns a low-dimensional representation under a manifold hypothesis by regularizing \(D_{KL}\Big(q_{\phi}(\mathbf{z}\mid\mathbf{x})\|p(\mathbf{z})\Big)\). To learn disentangled representations by VAEs for explanatory hidden generative factors, under the factorizable assumption, the posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is estimated by decomposing it into several independent and identically distributed (IID) conjugate distributions.

Figure 1: C\({}^{2}\)VAE: The architecture and contrastive learning of disentangled representations for VAE. \(\mathcal{L}_{TC}\) optimizes disentangled factorized representations, \(\mathcal{L}_{C}\) enhances the disentanglement by distinguishing factorized representations from coupled representations.

Then, we convert the ELBO in Eq. (3) to a TC-based ELBO as follows:

\[\begin{split}\mathcal{L}_{\text{TC}}:=&\,\mathbb{E}_{q(\mathbf{z}\mid\mathbf{x})}\Big[\log p(\mathbf{x}|\mathbf{z})-D_{KL}\big(q(\mathbf{z}\mid\mathbf{x})\|\bar{q}(\mathbf{z}\mid\mathbf{x})\big)-D_{KL}\big(q(\mathbf{z})\|\bar{q}(\mathbf{z})\big)\Big]\\ =&\,LL(\mathbf{x}\mid\mathbf{z})-I(\mathbf{x},\mathbf{z})-TC(\mathbf{z})\\ =&\,\mathcal{L}_{\text{ELBO}}-\gamma\mathbb{E}_{q(\mathbf{z})}\Big[\log\frac{\Psi(\mathbf{z})}{1-\Psi(\mathbf{z})}\Big],\end{split} \tag{4}\]

where \(\bar{q}(\mathbf{z}):=\prod_{j=1}^{d}q\left(\mathbf{z}_{j}\right)\), \(LL(\mathbf{x}|\mathbf{z})\) is the log-likelihood of the data samples, and \(I(\mathbf{x},\mathbf{z})\) is the mutual information between \(\mathbf{x}\) and \(\mathbf{z}\). The TC term is estimated by the density ratio trick:

\[D_{KL}(q(\mathbf{z})\|\bar{q}(\mathbf{z}))\approx\log\frac{q(\mathbf{z})}{\bar{q}(\mathbf{z})}=\log\frac{\mathcal{P}(y=1\mid\mathbf{z})}{\mathcal{P}(y=0\mid\mathbf{z})}=\log\frac{\mathcal{P}(y=1\mid\mathbf{z})}{1-\mathcal{P}(y=1\mid\mathbf{z})}=\log\frac{\Psi(\mathbf{z})}{1-\Psi(\mathbf{z})}, \tag{5}\]

where \(\Psi(\mathbf{z})\) is a classifier. The TC term quantifies the dependencies between the \(d\)-dimensional hidden variables. Figure 2 illustrates this TC-based decomposition of the vanilla ELBO in Eq. (3).

Figure 2: The element-wise decomposition of the ELBO under the factorizable assumption, based on information theory. From a comparison of the objectives in VAEs, we can conclude that TC-based factorization, e.g., [20; 37; 23], provides a tighter bound than other methods, e.g., [17].

The TC-based ELBO \(\mathcal{L}_{\text{TC}}\) is a loose bound that ensures the independence between the factors \(\mathbf{z}\) in the factorized posterior. It avoids a correlation structure between hidden variables toward disentangled representations.
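To make the density-ratio trick of Eq. (5) concrete, the sketch below shows one common way to realize it in PyTorch, in the spirit of FactorVAE [23]: samples from \(\bar{q}(\mathbf{z})\) are generated by permuting each latent dimension independently across the batch, and the discriminator's logit serves as the estimate of \(\log q(\mathbf{z})/\bar{q}(\mathbf{z})\). The network size and the name `latent_dim` are illustrative assumptions, not the exact C\({}^{2}\)VAE architecture.

```python
# Illustrative density-ratio TC estimator in PyTorch.
import torch
import torch.nn as nn

latent_dim = 10  # illustrative

def permute_dims(z):
    # Shuffle each latent dimension independently across the batch,
    # approximating samples from the product of marginals q_bar(z).
    batch, dim = z.shape
    return torch.stack([z[torch.randperm(batch), j] for j in range(dim)], dim=1)

discriminator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # logit of P(y=1 | z), i.e., the log-density ratio
)

def tc_estimate(z):
    # TC(z) ~= E_{q(z)}[ log Psi(z) / (1 - Psi(z)) ] = mean discriminator logit
    return discriminator(z).mean()
```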
### Learning Copula Coupled Representations

A copula couples random variables through their marginal CDFs \(u_{i}=F_{i}(\mathbf{z}_{i})\), which are distributed Uniform\((0,1)\). Under Sklar's theorem [32], there exists a joint copula function \(C(\cdot)\) which captures the dependencies between variables given the cumulative distribution functions of multiple variables \(\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{d}\). Their multivariate cumulative distribution \(F\) can be modeled by a copula over marginal distributions as:

\[F\left(\mathbf{z}_{1},\ldots,\mathbf{z}_{d}\right)=C\left(F_{1}\left(\mathbf{z}_{1}\right),\ldots,F_{d}\left(\mathbf{z}_{d}\right)\right). \tag{6}\]

The Gaussian copula is an elliptical copula, whose marginal distributions \(F(\mathbf{z})\) belong to an elliptical family. With \(u_{i}=F_{i}(\mathbf{z}_{i})\), we can obtain the copula function \(c\) by:

\[c(u_{1},\ldots,u_{d})=F\big(F^{-1}(u_{1}),F^{-1}(u_{2}),\ldots,F^{-1}(u_{d})\big), \tag{7}\]

where \(F^{-1}(u)\) is the inverse cumulative distribution function of the marginal Gaussian distribution \(F\), and the copula function \(c\) is a multivariate normal distribution parameterized with mean \(\mu_{c}\) and covariance matrix \(\Sigma\). When imposing a dependence assumption on latent representations, subject to a diagonal multivariate Gaussian distribution with mean \(\mu_{c}\) and variance \(\sigma_{c}\), a Gaussian copula joint distribution with covariance matrix \(\Sigma\) is sampled in a neural setting through a differentiable reparameterization. Here, we adopt the Cholesky-based parameterization of coefficient matrices to induce the latent samples. The Cholesky parameterization [41] applies to the joint distribution of the Gaussian copula; it factorizes a correlation matrix into a triangular matrix and its transposition for sampling the copula function directly in a high-dimensional space. To ensure numerical stability, i.e., the matrix needs to be positive definite with all diagonal elements equal to \(1\), we learn the components separately: \(\mathbf{\Sigma}=\mathbf{w}\cdot\mathbf{I}+\mathbf{v}\mathbf{v}^{\mathbf{T}}\), which is defined as:

\[\Sigma=\begin{bmatrix}1&\cdots&\mathrm{Softplus}(\Sigma;\phi_{G})\\ \vdots&\ddots&\vdots\\ \mathrm{Softplus}(\Sigma;\phi_{G})&\cdots&1\end{bmatrix}+\begin{bmatrix}1&\cdots&\mathrm{Tanh}(\Sigma;\phi_{G})\\ \vdots&\ddots&\vdots\\ \mathrm{Tanh}(\Sigma;\phi_{G})&\cdots&1\end{bmatrix}=\mathbf{w}\cdot\mathbf{I}+\mathbf{v}\mathbf{v}^{\mathbf{T}}. \tag{8}\]

The decomposition generates the positive definite covariance \(\Sigma=LL^{T}\) for reparameterization. By sampling from the uniform distribution, we acquire the coupled representations \(\mathbf{z}_{p}=\mathbf{\mu}_{c}+\mathbf{\sigma}_{c}\odot\mathbf{\epsilon}_{c}\), where \(\mathbf{\epsilon}_{c}\sim\mathcal{N}(0,\mathbf{I})\), maintaining the dependencies between individual dimensions. Algorithm 1 shows the process of representation sampling. Different from the low-rank representation in [41, 42], we generate the coefficient matrix directly and replace the ReLU function with the Softplus function to ensure the positive definiteness of the triangular matrix \(L\). Consequently, with the coupled representations learned, we can apply the contrastive learning in Section 3.3 to distinguish the discrepancy between the factorized representation \(\mathbf{z}_{q}\) and this coupled representation \(\mathbf{z}_{p}\), following the contrastive learning framework in Eq. (9). This will make the learned posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) more factorizable.
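The following sketch illustrates how coupled representations of the kind described in this subsection can be sampled in PyTorch: two encoder heads produce the Softplus- and Tanh-activated components of \(\Sigma=\mathbf{w}\cdot\mathbf{I}+\mathbf{v}\mathbf{v}^{T}\), and the Cholesky factor \(L\) (with \(\Sigma=LL^{T}\)) reparameterizes a correlated Gaussian sample. Shapes, the jitter term, and the helper name are illustrative assumptions rather than the authors' exact code.

```python
# Minimal sketch of Gaussian-copula coupled sampling with a Cholesky
# reparameterization, under the Sigma = w*I + v v^T parameterization.
import torch
import torch.nn.functional as F

def sample_coupled(mu_c, w_raw, v_raw):
    # mu_c, w_raw, v_raw: (batch, d) outputs of the covariance encoder heads.
    d = mu_c.shape[-1]
    w = F.softplus(w_raw)              # positive diagonal weights (cf. Eq. (8))
    v = torch.tanh(v_raw)              # bounded low-rank factor (cf. Eq. (8))
    sigma = torch.diag_embed(w) + v.unsqueeze(-1) @ v.unsqueeze(-2)
    sigma = sigma + 1e-5 * torch.eye(d)        # jitter for numerical stability
    L = torch.linalg.cholesky(sigma)           # Sigma = L L^T
    eps = torch.randn_like(mu_c)
    # Reparameterized correlated sample z_p = mu_c + L eps, which keeps the
    # dependencies between individual latent dimensions.
    return mu_c + (L @ eps.unsqueeze(-1)).squeeze(-1)
```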
### Contrastive Learning for Enhancing Disentangled Representations

Although different strategies are available to estimate the TC term in Eq. (5) with factorized factors in a DNN setting, there is no theoretical guarantee of acquiring the optimal posterior for disentangled learning. This is attributed to the difficulty of modeling a heterogeneous and hierarchical posterior distribution while the TC-based ELBO decomposition is IID. In contrast, statistically, it is easier to model the correlation structure in the low-dimensional factorized factors. Accordingly, to address the incorrect amortized inference and reconstruction error of the modified bound in Eq. (4) for disentanglement, the optimal posterior can be approximated in a contrastive way: we can learn an unsupervised classifier \(\Psi\) parameterized by \(\psi\) to distinguish the aforementioned factorized representation \(\mathbf{z}_{q}\) from the coupled representation \(\mathbf{z}_{p}\) learned from the entire hidden space, as discussed in Section 3.2.

First, with these two representations \(\mathbf{z}_{q}\) and \(\mathbf{z}_{p}\), we define their (1) strongly dependent (positive) pair \((\mathbf{z}_{q},q(\mathbf{z}|\mathbf{x}))\), where \(\mathbf{z}_{q}\) can be treated as drawn from a (similar) target distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\), denoted as \(H(q_{\phi}(\mathbf{z}|\mathbf{x}),1)\) with a pseudo label \(1\) indicating that the learned representation is favorably from the target distribution; and (2) strongly independent (negative) pair \((\mathbf{z}_{p},p(\mathbf{z}|\mathbf{x}))\), where \(\mathbf{z}_{p}\) is drawn from a dissimilar distribution \(p(\mathbf{z}|\mathbf{x})\), denoted by \(H(p(\mathbf{z}|\mathbf{x}),0)\) with a pseudo label \(0\). Then, we learn the classifier \(\Psi\) to determine whether the representation comes from the target or a dissimilar distribution with a contrastive (cross-entropy) loss \(\mathcal{L}_{\Psi}\):

\[\begin{split}\mathcal{L}_{\Psi}&=H(q_{\phi}(\mathbf{z}|\mathbf{x}),1)+H(p(\mathbf{z}|\mathbf{x}),0)\\ &=-\frac{1}{N}\sum_{n=1}^{N}\Big[\ln\big(\sigma(\Psi_{\psi}(\mathbf{z}_{q}^{n}))\big)+\ln\big(1-\sigma(\Psi_{\psi}(\mathbf{z}_{p}^{n}))\big)\Big],\end{split} \tag{9}\]

where \(N\) is the number of samples. We train \(\Psi\) with the pseudo labels for \(\Psi_{\psi}(\mathbf{z}_{q}^{n})\) over the factorized posterior \(\mathbf{z}_{q}\) and \(\Psi_{\psi}(\mathbf{z}_{p}^{n})\) over the coupled representations \(\mathbf{z}_{p}\). By minimizing \(\mathcal{L}_{\Psi}\), consequently, to enhance disentanglement, the contrastive loss and classifier \(\Psi\) ensure that the latent variables inferred by the encoder discard those features drawn from the similar distribution, i.e., retaining those independent features from the dissimilar distribution.

### The C\({}^{2}\)VAE Algorithm

We build C\({}^{2}\)VAE as follows, with its architecture and information flow shown in Figure 1. Given data \(\mathcal{D}=\left\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)}\right\}\), we first learn its posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) per the factorization assumption. By applying the reparameterization trick, we train the TC-based ELBO with a factorized posterior \(p_{\theta}(\mathbf{z}|\mathbf{x})\). Then, the optimal posterior \(q^{*}(\mathbf{z}|\mathbf{x})\) is trained in iterations that discard those dependent features.
Further, the classifier \(\Psi(\mathbf{z}_{p},\mathbf{z}_{q};\psi)\) is trained to distinguish the factorized representation \(\mathbf{z}_{q}\sim q_{\phi}(\mathbf{z}|\mathbf{x})\) from the coupled representation \(\mathbf{z}_{p}\sim p(\mathbf{z};\mu_{c},\sigma_{c})\), where \(\mu_{c},\sigma_{c}\) are the parameters of the neural copula function discussed in Section 3.2. Algorithm 2 shows the C\({}^{2}\)VAE training process, which involves two-phase optimization: the parameters \(\phi,\theta\) are fixed when optimizing Eq. (9), and likewise \(\psi\) is fixed when optimizing Eq. (4).

## 4 Experiments

### Data and Baselines

```
Input: Training data \(\mathcal{D}\), training batch \(B\)
Output: Parameters of encoder \(\phi\), decoder \(\theta\), and classifier \(\Psi\)
while unconverged do
    for \(B\) in \(\mathcal{D}\) do
        Generate the TC loss in terms of the discriminator
        Compute gradients of Eq. (4) w.r.t. \(\theta\) and \(\phi\)
        Update the parameters of encoder \(\phi\) and decoder \(\theta\)
    end for
    for \(B\) in \(\mathcal{D}\) do
        Generate coupled representations via Algorithm 1
        Compute gradients of Eq. (9) w.r.t. \(\psi\)
        Update parameters \(\psi\) of the classifier
    end for
end while
```
**Algorithm 2** The training process of C\({}^{2}\)VAE

**Datasets** We evaluate C\({}^{2}\)VAE on (1) two grayscale datasets: dSprites [20], a binary 2D shape dataset with 737,280 samples, and SmallNORB [26], a toy dataset with 48,600 synthetically rendered images; and (2) two color datasets: 3D Shapes [9], a 3D shape dataset with 480,000 RGB images, and 3D Cars [33], a 3D car dataset with 17,568 images generated from 24 rotation angles corresponding to 199 car models.

**Baselines** For a fair comparison, we compare C\({}^{2}\)VAE with three total correlation-based VAEs, which involve decompositions and approximations under a mild assumption and share the same deep frameworks. \(\beta\)-VAE [20] is a variant of the basic VAE, with a penalty on \(D_{KL}\) in the vanilla ELBO through an additional coefficient \(\beta\) to acquire disentangled representations. \(\beta\)-TCVAE [13] was the first work to split the TC term to obtain a more factorizable posterior with a Monte Carlo estimator. FactorVAE [23] shows another way to acquire the factorized posterior with a density ratio estimator. Table 3 in the supplementary material shows more details about their architectures and hyperparameter tuning.

### Effect of Disentangled Representations

**Disentanglement measures** For a comprehensive and fair quantitative evaluation, we use the following measures [10] to assess the effect of disentangled representations: (1) intervention-based: the FactorVAE score (FAC); (2) information-based: the Mutual Information Gap (MIG) [28]; and (3) prediction-based: Separated Attribute Predictability (SAP) [25; 23]. Further, to verify the effectiveness of a learned factorized prior, the Unsupervised Score [30] estimates the discrepancy between learned representations and optimal ones. Among these measures, the Mutual Information (MI) score verifies the correlations between latent variables, while a lower Total Correlation (TC) and Normalized Wasserstein Distance (WCN) indicate that the Gaussian posterior is closer to the product of its marginals. The settings of the baselines for disentangled representation learning are shown in Table 4.

**Disentangled learning results.** Table 2 depicts the quantitative evaluation results of each algorithm. The results of each entry are averaged over five random seeds.
We follow the experimental settings in the literature to set the coefficients as \(\beta=4\) for \(\beta\)-VAE [20], \(\beta=4\) for \(\beta\)-TCVAE [13], and \(\gamma=10\) for FactorVAE, as this affects the relation between the parts of the surrogate loss, which plays an important role in balancing reconstruction and representation. In addition, \(\gamma=6.4\) is another optimal hyperparameter in [23] to generate disentangled representations for latent traversals. On dSprites, C\({}^{2}\)VAE outperforms FactorVAE on all metrics except for the total correlation distance. In particular, C\({}^{2}\)VAE performs well on latent metrics such as SAP and FAC rather than on representation-based metrics like MIG, which are estimated by Monte Carlo sampling. Similar observations can be made on the other three datasets. The unsupervised score shows the effect of the learned factorized distribution. C\({}^{2}\)VAE fits the assumption with the lowest WCN, acquiring the most factorized posterior as the multiplication of marginal distributions. The disentanglement performance on dSprites over latent traversals is shown in Section 2 of the supplementary material. As shown by the animations, total correlation-based VAEs, including \(\beta\)-TCVAE, FactorVAE, and C\({}^{2}\)VAE, can disentangle more factors than \(\beta\)-VAE. The factors of shape, position \(y\), and scale are entangled in \(\beta\)-TCVAE, while the factors of shape and orientation are entangled in FactorVAE, in accordance with [13; 23]. In summary, C\({}^{2}\)VAE achieves better disentanglement than the other models, in particular in disentangling the orientation factor with less reconstruction error.

### The trade-off between Reconstruction and Representation

By bringing the total correlation-based estimation into VAE optimization, C\({}^{2}\)VAE optimizes a looser bound, Eq. (4), than the original ELBO in Eq. (2). This contributes to better disentanglement performance while also keeping the model from overfitting the data. To evaluate the trade-off between reconstruction and representation, we plot the training curves of the reconstruction loss over iterations. Figure 3 shows a comparison of the reconstruction error on dSprites with five random seeds for the two TC-based models. It shows that C\({}^{2}\)VAE retains a stable training curve with smaller variance over the five trials, acquiring a reasonable representation induced by a stable training stage, as shown in [35]. In addition, C\({}^{2}\)VAE induces more accurate amortized inference with the contrastive classifier, achieving a smaller reconstruction loss than the compared VAEs.

### Ablation Studies

We investigate the effect of different coupled representations captured by various copula functions in C\({}^{2}\)VAE. The following C\({}^{2}\)VAE variants are created to capture different dependencies between dimensions.

* C\({}^{2}\)VAE-I, where the contrastive posterior is estimated by permuting batch latent variables under the independence test assumption [5].
* C\({}^{2}\)VAE-G, where the contrastive representation is sampled by a Gaussian copula based on the learned neural posterior distribution.
* C\({}^{2}\)VAE-S, where the contrastive representation is sampled by a Student copula. The Student copula is a copula function that incorporates the Student's t-distribution. It is often used to model variables with heavy-tailed distributions or when extreme values are more likely.
It can be denoted as:

\[C(u_{1},u_{2},\ldots,u_{n};\rho,\nu)=T\left(T^{-1}(u_{1};\nu),T^{-1}(u_{2};\nu),\ldots,T^{-1}(u_{n};\nu);\rho\right), \tag{10}\]

where \(\rho\) refers to the correlation matrix, \(\nu\) is the degrees of freedom, and \(T\) refers to the cumulative distribution function of the t-distribution.

* C\({}^{2}\)VAE-M, where the contrastive representation is sampled by a Gaussian mixture copula. The Gaussian mixture copula is a copula function based on the Gaussian mixture model, used for modeling the dependence structure among multivariate random variables. It combines several Gaussian copula components:

\[C\left(u_{1},u_{2},\ldots,u_{n};\theta\right)=\sum_{i=1}^{k}w_{i}\cdot C_{i}\left(\Phi^{-1}\left(u_{1};\mu_{1i},\sigma_{1i}\right),\Phi^{-1}\left(u_{2};\mu_{2i},\sigma_{2i}\right),\ldots,\Phi^{-1}\left(u_{n};\mu_{ni},\sigma_{ni}\right)\right), \tag{11}\]

where \(\theta\) refers to the correlation matrix, and \(w_{i}\) is the weight of each copula component.

Figure 3: Learning curves on dSprites.

From Table 1, we can summarize that C\({}^{2}\)VAE with different coupled representations may converge at different stages. C\({}^{2}\)VAE with the Gaussian copula achieves better disentanglement performance w.r.t. the SAP metric.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{\textbf{Unsupervised Scores}} & \textbf{MIG} & \textbf{SAP} & \textbf{FAC} \\
\hline
\(\beta\)-VAE (\(\beta=4\)) & \(0.15\pm 0.06\) & \(10.7\pm 0.16\) & \(0.12\pm 0.41\) & \(0.19\pm 0.01\) & \(0.019\pm 0.009\) & \(0.78\pm 0.026\) \\
\(\beta\)-TCVAE & \(0.17\pm 0.15\) & \(11.2\pm 0.06\) & \(0.11\pm 0.007\) & \(0.17\pm 0.06\) & \(0.031\pm 0.006\) & \(0.70\pm 0.009\) \\
FactorVAE & \(0.11\pm 0.92\) & \(\mathbf{10.05}\pm 0.922\) & \(0.11\pm 0.009\) & \(0.20\pm 0.010\) & \(0.028\pm 0.015\) & \(0.81\pm 0.034\) \\
\hline
C\({}^{2}\)VAE (\(\gamma=10\)) & \(0.11\pm 0.33\) & \(11.8\pm 0.3\) & \(0.099\pm 0.026\) & \(0.20\pm 0.001\) & \(\mathbf{0.044}\pm 0.22\) & \(0.84\pm 0.001\) \\
C\({}^{2}\)VAE (\(\gamma=6.4\)) & \(\mathbf{0.11}\pm 0.57\) & \(12.4\pm 0.015\) & \(\mathbf{0.079}\pm 0.13\) & \(\mathbf{0.21}\pm 0.003\) & \(\mathbf{0.035}\pm 0.014\) & \(\mathbf{0.85}\pm 0.002\) \\
\hline \hline
\end{tabular}

\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{\textbf{Unsupervised Scores}} & \textbf{MIG} & \textbf{SAP} & \textbf{FAC} \\
\hline
\(\beta\)-VAE (\(\beta=4\)) & \(0.17\pm 0.022\) & \(12.38\pm 0.76\) & \(0.34\pm 0.14\) & \(0.10\pm 0.002\) & \(0.04\pm 0.008\) & \(0.59\pm 0.20\) \\
\(\beta\)-TCVAE & \(0.14\pm 0.012\) & \(2.1\pm 0.19\) & \(0.32\pm 0.001\) & \(0.13\pm 0.010\) & \(0.05\pm 0.003\) & \(0.60\pm 0.01\) \\
FactorVAE & \(0.21\pm 0.007\) & \(12.23\pm 0.560\) & \(0.38\pm 0.033\) & \(0.14\pm 0.019\) & \(0.061\pm 0.008\) & \(\mathbf{0.62}\pm 0.30\) \\
\hline
C\({}^{2}\)VAE (\(\gamma=10\)) & \(0.14\pm 0.016\) & \(11.55\pm 0.5\) & \(0.25\pm 0.14\) & \(0.15\pm 0.0001\) & \(0.066\pm 0.007\) & \(0.62\pm 0.0004\) \\
C\({}^{2}\)VAE (\(\gamma=6.4\)) & \(0.14\pm 0.017\) & \(11.96\pm 0.734\) & \(0.27\pm 0.011\) & \(\mathbf{0.15}\pm 0.017\) & \(\mathbf{0.066}\pm 0.006\) & \(0.61\pm 0.26\) \\
\hline \hline
\end{tabular}

\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{\textbf{Unsupervised Scores}} & \textbf{MIG} & \textbf{SAP} & \textbf{FAC} \\
\hline
\(\beta\)-VAE (\(\beta=4\)) & \(0.15\pm 0.21\) & \(2.3\pm 0.16\) & \(0.12\pm 0.52\) & \(0.24\pm 0.005\) & \(0.058\pm 0.0005\) & \(0.93\pm 0.005\) \\
\(\beta\)-TCVAE & \(0.11\pm 0.007\) & \(2.1\pm 0.31\) & \(0.007\pm 0.052\) & \(0.32\pm 0.004\) & \(0.050\pm 0.009\) & \(0.97\pm 0.36\) \\
FactorVAE & \(0.11\pm 0.014\) & \(\mathbf{1.5}\pm 0.14\) & \(0.06\pm 0.042\) & \(\mathbf{0.33}\pm 0.004\) & \(0.047\pm 0.0004\) & \(0.98\pm 0.21\) \\
\hline
C\({}^{2}\)VAE (\(\gamma=10\)) & \(\mathbf{0.08}\pm 0.015\) & \(4.1\pm 0.48\) & \(0.08\pm 0.016\) & \(0.17\pm 0.003\) & \(0.054\pm 0.0002\) & \(0.95\pm 0.003\) \\
C\({}^{2}\)VAE (\(\gamma=6.4\)) & \(0.09\pm 0.006\) & \(2.8\pm 0.18\) & \(\mathbf{0.06}\pm 0.024\) & \(0.23\pm 0.002\) & \(\mathbf{0.075}\pm 0.001\) & \(\mathbf{0.99}\pm 0.025\) \\
\hline \hline
\end{tabular}

\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{\textbf{Unsupervised Scores}} & \textbf{MIG} & \textbf{SAP} & \textbf{FAC} \\
\hline
\(\beta\)-VAE (\(\beta=4\)) & \(0.18\pm 0.006\) & \(14.7\pm 0.78\) & \(0.38\pm 0.03\) & \(0.04\pm 0.032\) & \(0.02\pm 0.098\) & \(0.82\pm 0.088\) \\
\(\beta\)-TCVAE & \(0.13\pm 0.012\) & \(11.6\pm 0.66\) & \(0.28\pm 0.03\) & \(\mathbf{0.07}\pm 0.024\) & \(0.02\pm 0.014\) & \(\mathbf{0.89}\pm 0.064\) \\
FactorVAE & \(0.16\pm 0.008\) & \(13.9\pm 0.98\) & \(0.37\pm 0.02\) & \(0.06\pm 0.029\) & \(0.02\pm 0.005\) & \(0.86\pm 0.036\) \\
\hline
C\({}^{2}\)VAE (\(\gamma=10\)) & \(0.13\pm 0.007\) & \(\mathbf{11.3}\pm 0.76\) & \(0.14\pm 0.04\) & \(0.06\pm 0.0001\) & \(0.02\pm 0.004\) & \(0.87\pm 0.0003\) \\
C\({}^{2}\)VAE (\(\gamma=6.4\)) & \(\mathbf{0.12}\pm 0.007\) & \(11.5\pm 0.80\) & \(\mathbf{0.14}\pm 0.04\) & \(0.05\pm 0.018\) & \(\mathbf{0.02}\pm 0.002\) & \(0.86\pm 0.024\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance (mean \(\pm\) std) on different datasets and by different models w.r.t. different evaluation metrics. We evaluate \(\beta\)-VAE, \(\beta\)-TCVAE, and FactorVAE on dSprites and 3D Shapes. Their settings include different random seeds and hyperparameters. Refer to Appendix G for details.

## 5 Conclusion

This paper presents a novel TC-based VAE, C\({}^{2}\)VAE, which is trained with contrastive disentangled learning by differentiating and removing coupled features and their representations. Consequently, C\({}^{2}\)VAE learns more factorizable representations for disentanglement while eliminating strongly coupled features and representations through copula-based dependency learning. Experiments show that C\({}^{2}\)VAE achieves better disentanglement performance compared with other TC-based VAEs.
2309.15014
Efficient adaptive Bayesian estimation of a slowly fluctuating Overhauser field gradient
Slow fluctuations of Overhauser fields are an important source for decoherence in spin qubits hosted in III-V semiconductor quantum dots. Focusing on the effect of the field gradient on double-dot singlet-triplet qubits, we present two adaptive Bayesian schemes to estimate the magnitude of the gradient by a series of free induction decay experiments. We concentrate on reducing the computational overhead, with a real-time implementation of the schemes in mind. We show how it is possible to achieve a significant improvement of estimation accuracy compared to more traditional estimation methods. We include an analysis of the effects of dephasing and the drift of the gradient itself.
Jacob Benestad, Jan A. Krzywda, Evert van Nieuwenburg, Jeroen Danon
2023-09-26T15:34:33Z
http://arxiv.org/abs/2309.15014v3
**Efficient adaptive Bayesian estimation of a slowly fluctuating Overhauser field gradient**

## Abstract

**Slow fluctuations of Overhauser fields are an important source for decoherence in spin qubits hosted in III-V semiconductor quantum dots. Focusing on the effect of the field gradient on double-dot singlet-triplet qubits, we present two adaptive Bayesian schemes to estimate the magnitude of the gradient by a series of free induction decay experiments. We concentrate on reducing the computational overhead, with a real-time implementation of the schemes in mind. We show how it is possible to achieve a significant improvement of estimation accuracy compared to more traditional estimation methods. We include an analysis of the effects of dephasing and the drift of the gradient itself.**

###### Contents

* 1 Introduction
* 2 System dynamics
* 3 Adaptive Bayesian estimation
  * 3.1 Method of moments fit
  * 3.2 KL-divergence fit using a neural network
* 4 Results
  * 4.1 Estimation of a static parameter
  * 4.2 Single estimation of a slowly drifting parameter
  * 4.3 Sequential estimation of a slowly drifting parameter
* 5 Summary and conclusions
* A Training the neural network

## 1 Introduction

Spin-based semiconductor devices offer several very useful properties for hosting qubits, including their small size, long relaxation times, fast gate-operation times, and a good potential for scalability based on their similarity to conventional electronic devices [1, 2, 3]. Initially, research focused on III-V semiconductors, and particularly GaAs, because there is no valley degeneracy in the conduction band and heterostructure engineering was more advanced than for other materials. However, it was soon realized that for III-V semiconductors the hyperfine coupling of the localized spins to the nuclear spin baths cannot be avoided, and the resulting randomly fluctuating Overhauser fields limit such devices to very short spin dephasing times \(T_{2}^{*}\thicksim 20\) ns [4, 5, 6]. Indeed, it was the eventual development of devices based on materials that can be isotopically purified to be almost nuclear-spin-free, such as Si and Ge [7, 8, 9, 10, 11, 12, 13], that propelled a recent leap in performance for spin qubits, providing the high fidelities and long coherence times that allowed for 4- and 6-qubit quantum logic with spin qubits [14, 15].
The harmful Overhauser field fluctuations are, however, very slow (typically on the scale of seconds), and an alternative approach could thus be to monitor these fluctuations in real time and adjust the qubit control accordingly: accurate knowledge about the Overhauser fields can be used to significantly extend the qubit coherence time [16, 17, 18]. Furthermore, while universal control of spin qubits in materials with weak spin-orbit coupling has typically relied on the use of micromagnets [19, 20, 21, 15] or microwave striplines [22, 5, 23], the Overhauser fields can also be used as control axes, as long as they are known within sufficient uncertainty [24, 25]. The development of fast and reliable protocols for real-time estimation of Overhauser fields could thus lift some of the main limitations of spin qubits realized in III-V semiconductor devices, but also allow the use of Si and Ge devices without the costly process of isotopic purification. Moreover, such protocols can also be used to estimate other slowly fluctuating Hamiltonian parameters, such as the low-frequency components of charge noise [26], and thus eliminate their contribution to qubit decoherence.

Although thus relevant in a much broader sense, we will focus here on Hamiltonian parameter estimation in the context of fluctuating Overhauser fields. More specifically, we will consider double-quantum-dot-based singlet-triplet qubits, where the Overhauser field gradient over the two dots \(\Delta B_{z}\) is the most important fluctuating parameter to be estimated. One powerful tool for quantum sensing and estimation is provided by Bayesian statistics [27], which can be used to optimize estimation procedures on-the-fly. Bayesian estimation schemes have already been used for estimating the Overhauser fields in GaAs-based spin qubits [17, 18, 25], although so far only in a non-adaptive way, where the whole estimation procedure, based on a series of single-shot free induction decay experiments, is predetermined. Inspired by the availability of field-programmable gate arrays (FPGAs) which perform real-time data processing and control feedback [28, 29, 30, 31, 32], we investigate the feasibility of implementing fast and efficient adaptive Bayesian estimation of \(\Delta B_{z}\) in a singlet-triplet spin qubit, keeping state-of-the-art experimental equipment in mind as a boundary condition, both in terms of the limits on calculation complexity and on information storage capacity.

Ideally, an adaptive Bayesian estimation scheme uses global optimization, in the sense of always considering all possible future measurements when deciding on the next set of parameters. Global optimization strategies are, however, hard to implement in an efficient way, and one thus usually reverts to a so-called greedy strategy, where only the optimization of the next single-shot experiment is considered. Although thus suboptimal, such greedy strategies have been shown to yield an exponential scaling of the estimation error as a function of the number of single-shot experiments [33, 34]. Exact implementation of the optimal greedy adaptive scheme is, however, still computationally too intensive for real-time feedback in most instances, and instead there are typically two options: (i) approximate the distribution of possible estimates such that simple parametric solutions are possible [34, 35], or (ii) use Monte-Carlo sampling and approximate the optimal experiment to perform using some heuristic [36, 37, 38, 39, 40, 41]. Otherwise, simple analytical solutions are only attainable for specific problems [42, 43] or when the space of experimental designs is sufficiently constrained [44].

In this paper, we focus on the first option, where \(\Delta B_{z}\) is estimated based on the approximation that its probability distribution remains Gaussian throughout the whole procedure [34], the advantage being that this only requires working with two parameters (the mean and variance of the distribution). The main challenge with this approach is to fit the posterior distribution after each measurement to a Gaussian in a computationally efficient way. We propose two methods that are simple enough to implement on a state-of-the-art FPGA and that improve on existing schemes in that they allow for any Gaussian prior, including priors with a small mean compared to their standard deviation. We first present a scheme where the fitting is based on the method of moments, for which we derive an efficient implementation that relies on only a few straightforward calculations, paying particular attention to the problem of how to handle distributions of \(\Delta B_{z}\) that are centered around zero [34].
Second, we explore the possibility of using a neural network (NN) to replace the parametric update equations for the Gaussian mean and variance, as it has been shown that NNs can be applied to tasks like finding the optimal design of experiments [45], updating the parameter distribution [46], and predicting the Hamiltonian at future times [47]. Finally, we show how the Gaussian approximation also makes it straightforward to account for the Overhauser-field dynamics in between estimations, thus adding a component of prediction to the schemes.

The structure of the rest of this paper is as follows. In Section 2 we introduce the basic physics of singlet-triplet qubits, focusing on the role of the Overhauser gradient, and Section 3 introduces the rationale behind Bayesian estimation and presents the specifics of the two schemes we propose. Then, in Section 4.1, we present numerical simulations of the two estimation schemes, benchmarking them against a more standard non-adaptive approach, both with and without a finite phenomenological dephasing time \(T\). In Section 4.2 we analyze how a slow drift of the parameter to be estimated limits the number of useful measurements that can be performed, and how this can be related to the dephasing time \(T\). Finally, in Section 4.3 we consider how the evolution of a Gaussian distribution in the Fokker-Planck formalism makes our schemes predictive, allowing for fewer measurements in future estimations.

## 2 System dynamics

Below we will discuss Bayesian estimation protocols for both static and slowly fluctuating Hamiltonian parameters, in relatively general terms. The specific system we will have in mind throughout is a two-electron singlet-triplet spin qubit hosted in a double quantum dot defined in a III-V-based semiconductor heterostructure. In this Section, we will highlight the relevant parts of the physics of this system.

Singlet-triplet qubits are usually hosted in double quantum dots tuned to a \((1,1)\) charge configuration. In that regime, the gate-tunable exchange interaction \(J(\epsilon)\) controls the qubit splitting, and a randomly fluctuating Overhauser field gradient drives rotations around the \(x\)-axis on the Bloch sphere. The two-level qubit Hamiltonian can be approximated as

\[H=\frac{\hbar\omega(t)}{2}\sigma_{x}+\frac{J(\epsilon)}{2}\sigma_{z}, \tag{1}\]

where \(\sigma_{x,z}\) are Pauli matrices in the qubit basis \(\{\ket{0},\ket{1}\}\) and \(\hbar\omega(t)=g\mu_{\mathrm{B}}[B_{z}^{(1)}(t)-B_{z}^{(2)}(t)]\) in terms of the fluctuating Overhauser fields \(\mathbf{B}^{(1,2)}(t)\) on the two dots, with \(g\) the effective electronic \(g\)-factor and \(\mu_{\mathrm{B}}\) the Bohr magneton (see Fig 1). In principle, the estimation scheme presented below can be used to find the qubit frequency \(\Omega(\epsilon,t)=\sqrt{\omega(t)^{2}+J(\epsilon)^{2}/\hbar^{2}}\) for any detuning \(\epsilon\). However, for simplicity we will concentrate on the case where all free qubit evolution takes place deep in the \((1,1)\) region, where \(J(\epsilon)\approx 0\) and we thus estimate \(\Omega(t)\approx|\omega(t)|\). The tunable exchange splitting is only made non-zero for initialization and readout purposes in this case. We assume that the two Overhauser fields, and thus the field gradient \(\omega(t)\), follow an Ornstein-Uhlenbeck (OU) process [48].
The slow fluctuations of the effective fields arise from averaging the nuclear spin polarization dynamics of the typically \(10^{5}\)-\(10^{6}\) nuclei that surround the electrons localized in the quantum dots, as illustrated in Fig 1. The dynamics of the OU process are compatible with the microscopic picture of a classical birth-death process, where random nuclear spin flips occur with a fixed rate (picturing, for simplicity, the nuclei as Ising spins). This results in a net diffusion of \(\omega(t)\) with an entropic drift towards zero. Working with a probability distribution for \(\omega\), the dynamics of such a drift-diffusion process can be approximated by a Fokker-Planck equation. For an initial probability distribution that is Gaussian, with average \(\mu(0)\) and variance \(\sigma(0)^{2}\), solving the Fokker-Planck equation yields the time-dependent distribution

\[p(\omega,t)=\frac{1}{\sqrt{2\pi\sigma(t)^{2}}}\exp\left\{-\frac{[\omega-\mu(t)]^{2}}{2\sigma(t)^{2}}\right\}, \tag{2}\]

which is a Gaussian distribution with a time-dependent mean \(\mu(t)=\mu(0)e^{-\Gamma t}\) and variance \(\sigma(t)^{2}=\sigma_{K}^{2}+[\sigma(0)^{2}-\sigma_{K}^{2}]e^{-2\Gamma t}\). The parameters \(\sigma_{K}\) and \(\Gamma\) follow from the drift and diffusion constants and can be interpreted as the steady-state r.m.s. value of the Overhauser field gradient (\(\sigma_{K}\sim 50\) MHz typically) and the slow relaxation rate of the nuclear spin polarization (\(\Gamma\sim 0.2\) Hz typically). We note that the inverse, \(T_{c}=\Gamma^{-1}\sim 5\) s, sets the correlation time scale of the fluctuations of \(\omega\), which defines the scale of the time window within which a single estimation of \(\omega(t)\) remains useful, since beyond it the value will have drifted enough that all potential information gain is lost.

Figure 1: Due to the randomly fluctuating nuclear spins of the host material, the Overhauser fields acting on the two electron spins are unequal and slowly fluctuating. We model the dynamics of these fields as a drift-diffusion process, driven by rapid randomly occurring nuclear spin-flip processes.
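Because the Fokker-Planck solution keeps the distribution Gaussian, propagating our knowledge of \(\omega\) over an idle period reduces to the two closed-form updates quoted above. A minimal sketch, using the typical parameter values from the text:

```python
# Closed-form drift of the Gaussian knowledge of omega between experiments,
# following Eq. (2): mu(t) = mu(0) exp(-Gamma t) and
# sigma(t)^2 = sigma_K^2 + [sigma(0)^2 - sigma_K^2] exp(-2 Gamma t).
import numpy as np

SIGMA_K = 50.0  # steady-state r.m.s. gradient (MHz), typical value
GAMMA = 0.2     # nuclear-spin relaxation rate (1/s), typical value

def propagate(mu, sigma, dt, sigma_k=SIGMA_K, gamma=GAMMA):
    """Evolve the Gaussian parameters (mu, sigma) over an idle time dt (s)."""
    mu_t = mu * np.exp(-gamma * dt)
    var_t = sigma_k**2 + (sigma**2 - sigma_k**2) * np.exp(-2.0 * gamma * dt)
    return mu_t, np.sqrt(var_t)
```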
\[p_{n}(\omega|d_{n},\tau_{n},T)\propto p_{n-1}(\omega)\,p(d_{n}|\omega,\tau_{n},T). \tag{3}\] Here, \(p_{n-1}(\omega)\) is the prior (old) distribution of \(\omega\) and \(p_{n}(\omega|d_{n},\tau_{n},T)\) the posterior (new) distribution, which is a compromise between our prior knowledge of \(\omega\) and the new data point \(d_{n}\) obtained, taking into account \(\tau_{n}\) as well as the model parameter \(T\) (see below) [49]. In our case, the data points are the binary measurement outcomes \(d_{n}\in\{0,1\}\) of the FID experiment, labeling the two qubit states. The so-called likelihood function--the probability to measure \(d\) for given \(\omega\), \(\tau\), and \(T\)--is given by the Born rule \[p(d|\omega,\tau,T)=\frac{1}{2}\Big{[}1+(-1)^{d}e^{-\tau^{2}/T^{2}}\cos{(\omega\tau)}\Big{]}, \tag{4}\] where we included a phenomenological "dephasing time" \(T\) that limits the coherence of a single-shot measurement and thereby sets the longest useful waiting time \(\tau_{n}\) for each FID experiment. Indeed, for \(\tau\gtrsim T\) the likelihood function quickly reduces to \(p=\frac{1}{2}\) for both \(d\), independent of the other parameters, meaning that no information can be gained from the experiment. In many cases, the appropriate value to insert for \(T\) can also be estimated from experiments in a Bayesian fashion [45, 50].

Before the first FID experiment is performed, i.e., when we have no information about \(\omega\) at all, we assume a Gaussian probability distribution, \[p_{0}(\omega)=\frac{1}{\sqrt{2\pi\sigma_{K}^{2}}}\exp{\left\{-\frac{\omega^{2}}{2\sigma_{K}^{2}}\right\}}, \tag{5}\] corresponding to the steady-state limit of Eq. (2). We then see from Eqs. (3,4) that, independently of the choice of \(\{\tau_{n}\}\) and \(T\) and of the measurement outcomes \(d_{n}\), every subsequent iteration of the probability distribution will be symmetric in \(\omega\), i.e., \(p_{n}(\omega)=p_{n}(-\omega)\). This is a consequence of the projective nature of the measurements; the direction of precession around the \(x\)-axis on the Bloch sphere is impossible to distinguish with FID experiments such as those performed here. In this sense, the best we can achieve is an accurate estimate of \(|\omega|\).

We now turn to the question of how to choose the best set of waiting times \(\{\tau_{n}\}\). An important feature of Bayesian estimation is that it allows for "on-the-fly" optimization of free parameters: for each new experiment an optimal time \(\tau_{n}\) can be computed, based on the current distribution function, in order to gain the maximum amount of information about \(\omega\) [51]. There are several ways to quantify such information gain, the change in information entropy being the canonical choice [52]; yet in order to keep the calculations feasible to implement on an FPGA in real time, it is easier to consider a simple quantity such as the variance of the distribution, using its degree of minimization during the estimation procedure as a measure of success. However, since our distribution function is always symmetric in \(\omega\), a straightforwardly calculated variance of \(p(\omega)\) will in general not be a good measure for the uncertainty in \(|\omega|\).
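As a reference point for the approximations introduced below, the exact Bayesian update of Eqs. (3)-(5) can be carried out numerically on a grid of \(\omega\) values; a minimal sketch (frequencies in units of \(\sigma_{K}\); the grid size and variable names are our own illustrative choices):

```python
import numpy as np

sigma_K = 1.0                          # work in units of sigma_K
omega = np.linspace(-4.0, 4.0, 4001)   # grid on which p(omega) is stored

# Initial prior p_0(omega), Eq. (5)
p = np.exp(-omega**2 / (2 * sigma_K**2))
p /= np.trapz(p, omega)

def bayes_update(p, d, tau, T):
    """Bayes' rule, Eq. (3): multiply the prior by the Born-rule
    likelihood of Eq. (4) and renormalize."""
    likelihood = 0.5 * (1 + (-1)**d * np.exp(-(tau / T)**2) * np.cos(omega * tau))
    posterior = p * likelihood
    return posterior / np.trapz(posterior, omega)

# Example: one FID experiment with waiting time tau = 1/sigma_K, outcome d = 0
p = bayes_update(p, d=0, tau=1.0, T=50.0)
```

Storing and updating such a grid is exactly the overhead that the Gaussian approximation introduced next avoids.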
We address this problem with the next simplification we make: throughout the whole estimation procedure, we will always approximate posterior distributions \(p(\omega|d,\tau,T)\) with a bimodal Gaussian, \[q(\omega,\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left\{-\frac{\omega^{2}+\mu^{2}}{2\sigma^{2}}\right\}\cosh\left(\frac{\omega\mu}{\sigma^{2}}\right), \tag{6}\] where \(\pm\mu\) and \(\sigma^{2}\) parameterize the mean and variance of the two Gaussian peaks, respectively, thus taking care of the indistinguishability of the sign of \(\omega\) [34]. In the context of real-time Overhauser gradient estimation, this approximation has several important advantages: (i) analytic expressions for the \(\tau_{n}\)-dependent posterior distributions are straightforward to evaluate, possibly allowing for efficient optimization of \(\tau_{n}\) on an FPGA, (ii) the distribution function is fully characterized by only two parameters, \(\mu\) and \(\sigma\), throughout the whole estimation procedure, which can save significant memory as compared to storing the actual distribution function \(p_{n}(\omega)\), and (iii) incorporating the (slow) dynamics of the Overhauser gradient is straightforward by implementing the time-dependence of \(\mu\) and \(\sigma\) as mentioned below Eq. (2).

Returning to the question of how to choose the \(\{\tau_{n}\}\), we see that a natural choice is to pick the waiting times in a way that minimizes \(\sigma\) in the Gaussian approximation of the distribution function. Ideally, one follows an adaptive strategy that aims at every iteration for a path that results in a globally optimal solution, i.e., a minimal expected \(\sigma\) for the final distribution after the last iteration. However, since this is difficult in practice [51], one must typically settle for a "greedy" approach where the optimization only considers the next experiment, which is what we will do here. We thus consider the estimation procedure sketched in Fig 2, which includes the following steps:

1. The prior \(p_{n-1}(\omega)\) is always taken to be a bimodal Gaussian distribution.
2. Use Eq. (3) to calculate \(p_{n}(\omega|d_{n},\tau,T)\) for all relevant \(\tau\) and for both potential outcomes \(d_{n}\in\{0,1\}\) (blue and red curves).
3. Fit both calculated posteriors at each candidate \(\tau\) to the bimodal Gaussian form (dashed curves).
4. Select the \(\tau\) whose fit yields the smallest expected \(\sigma^{2}\), considering a weighted average over the two possible measurement outcomes.
5. Perform experiment \(n\) with the selected \(\tau_{n}\); the outcome determines the prior for the next iteration.

Figure 2: The Bayesian estimation cycle adopted here: to a bimodal Gaussian prior we apply Bayes’ rule for many different \(\tau\) and both potential outcomes \(d\in\{0,1\}\). All resulting posteriors are fitted to a bimodal Gaussian again and the \(\tau\) yielding the lowest posterior expectation value for \(\sigma\) is selected. Finally, the experiment is performed, which determines the new prior.

The main remaining question is how to fit the posteriors calculated from Bayes' rule to a bimodal Gaussian distribution in a computationally efficient way. Below we present the two different fitting schemes we investigated.

### Method of moments fit

Fitting a more complicated distribution to a single Gaussian peak is a common approximation in statistics, and computationally simple fits can be made using, for instance, Laplace's approximation [53] or the method of moments (MM).
In our case, as explained above, we will always have a symmetric distribution function that we want to fit to a _bimodal_ Gaussian, which in general requires some caution in designing the fitting procedure. In this section, we will explore the use of the MM to fit our posterior distributions to Eq. (6); the oscillatory nature of the posteriors makes Laplace's approximation unsuitable in this case, since it relies on the curvature at the mode of the distribution.

The simplest application of the MM amounts to finding the bimodal Gaussian distribution whose two lowest moments are equal to the corresponding moments of the distribution one wants to fit. The fact that our distributions are always symmetric causes all odd moments to vanish, which means that the moments we focus on are the second and the fourth. The raw moments \(\langle\omega^{2}\rangle_{n}\) and \(\langle\omega^{4}\rangle_{n}\) of the posterior distribution for the two possible measurement outcomes \(d_{n}\in\{0,1\}\) follow from calculating \[\langle\omega^{r}\rangle_{n}=\int_{0}^{\infty}d\omega\,\omega^{r}p_{n}(\omega|d_{n},\tau,T), \tag{7}\] where \(p_{n}(\omega|d_{n},\tau,T)\) is found using Bayes' rule (3). When the prior distribution \(p_{n-1}(\omega)\) is a bimodal Gaussian distribution, with parameters \(\mu_{n-1}\) and \(\sigma_{n-1}\), analytic expressions can be found for \(\langle\omega^{2}\rangle_{n}\) and \(\langle\omega^{4}\rangle_{n}\), due to the simple form of the likelihood function [see Eq. (4)], \[\langle\omega^{2}\rangle_{n} =\frac{\Re\big{\{}f_{n-1}(0)+(-1)^{d_{n}}f_{n-1}(\tau)\big{\}}}{\Re\big{\{}n_{n-1}(0)+(-1)^{d_{n}}n_{n-1}(\tau)\big{\}}}, \tag{8}\] \[\langle\omega^{4}\rangle_{n} =\frac{\Re\big{\{}g_{n-1}(0)+(-1)^{d_{n}}g_{n-1}(\tau)\big{\}}}{\Re\big{\{}n_{n-1}(0)+(-1)^{d_{n}}n_{n-1}(\tau)\big{\}}}, \tag{9}\] with \[f_{n}(t) =e^{-\frac{1}{2}\alpha_{n}^{2}t^{2}}(\zeta_{n}^{2}+\sigma_{n}^{2})e^{i\mu_{n}t}, \tag{10}\] \[g_{n}(t) =e^{-\frac{1}{2}\alpha_{n}^{2}t^{2}}(\zeta_{n}^{4}+6\zeta_{n}^{2}\sigma_{n}^{2}+3\sigma_{n}^{4})e^{i\mu_{n}t},\] (11) \[n_{n}(t) =e^{-\frac{1}{2}\alpha_{n}^{2}t^{2}}e^{i\mu_{n}t}, \tag{12}\] where \(\zeta_{n}=\mu_{n}+i\sigma_{n}^{2}t\) and \(\alpha_{n}^{2}=\sigma_{n}^{2}+2T^{-2}\). We then pick the bimodal Gaussian that has its first two non-zero raw moments closest to \(\langle\omega^{2}\rangle_{n}\) and \(\langle\omega^{4}\rangle_{n}\), from which the fit parameters \(\hat{\sigma}^{2}\) and \(\hat{\mu}\) follow as \[[\hat{\sigma}_{n}^{(d_{n})}]^{2} =\langle\omega^{2}\rangle_{n}-\Re\left\{\sqrt{\frac{1}{2}\big{(}3\langle\omega^{2}\rangle_{n}^{2}-\langle\omega^{4}\rangle_{n}\big{)}}\right\}, \tag{13}\] \[\hat{\mu}_{n}^{(d_{n})} =\sqrt{\langle\omega^{2}\rangle_{n}-[\hat{\sigma}_{n}^{(d_{n})}]^{2}}. \tag{14}\] In these expressions, we discard imaginary contributions, which can arise when the computed raw moments of the posterior do not adhere to the constraints set on the moments of the bimodal Gaussian, which occurs when \(\mu\) is small compared to \(\sigma\). Formally speaking, we see that the calculated value of \(\hat{\sigma}_{n}^{2}\) becomes complex when the posterior distribution is leptokurtic, which indeed happens when the contributions at positive and negative \(\omega\) are no longer well separated. Discarding the imaginary part of \(\hat{\sigma}_{n}^{2}\) in that case corresponds to approximating the posterior distribution by a unimodal Gaussian centered at zero. This approach thus resembles a simplified version of the scheme presented in Ref.
[35], where the number of modes in a fitted multi-modal Gaussian needed to be continuously adjusted based on the weight distribution over the modes.

As explained above, the fit parameters \(\hat{\sigma}^{2}\) and \(\hat{\mu}\) should in principle be evaluated for each candidate \(\tau\), and the optimal waiting time \(\tau_{n}\) will be the one that minimizes the expected variance \[\mathbb{E}_{d_{n}}\big{[}\hat{\sigma}_{n}^{2}\big{]}=\frac{1}{2}\{\big{(}[\hat{\sigma}_{n}^{(0)}]^{2}-[\hat{\sigma}_{n}^{(1)}]^{2}\big{)}e^{-\frac{1}{2}\alpha_{n-1}^{2}\tau^{2}}\cos(\mu_{n-1}\tau)+[\hat{\sigma}_{n}^{(0)}]^{2}+[\hat{\sigma}_{n}^{(1)}]^{2}\}, \tag{15}\] i.e., \(\tau_{n}=\arg\min_{\tau}\mathbb{E}_{d_{n}}[\hat{\sigma}_{n}^{2}]\). The fact that we have, via Eqs. (8-15), an explicit expression for \(\mathbb{E}_{d_{n}}[\hat{\sigma}_{n}^{2}]\) allows in principle for minimization of the expected variance. However, since this expression is in general a complicated function of \(\tau\) with many local minima, analytic minimization is still challenging and most likely too complex to perform efficiently on an FPGA in real time. Therefore, we start by investigating the limits of small and large \(\mu_{n-1}/\sigma_{n-1}\).

For \(\mu_{n-1}/\sigma_{n-1}\gg 1\) we find \[\mathbb{E}_{d_{n}}\big{[}\hat{\sigma}_{n}^{2}\big{]}=\sigma_{n-1}^{2}-\frac{\sigma_{n-1}^{4}\tau^{2}\sin(\mu_{n-1}\tau)}{e^{\alpha_{n-1}^{2}\tau^{2}}-\cos(\mu_{n-1}\tau)}. \tag{16}\] This expression displays fast oscillations as a function of \(\tau\), its local minima occurring at times for which \(\mu_{n-1}\tau=(k+\frac{1}{2})\pi\), with \(k\) an integer. The oscillations have an envelope function \(\sigma_{n-1}^{2}(1-\sigma_{n-1}^{2}\tau^{2}e^{-\alpha_{n-1}^{2}\tau^{2}})\) that is minimal for \(\tau=1/\alpha_{n-1}\). In the limit \(\mu_{n-1}/\sigma_{n-1}\gg 1\), the optimal waiting time \(\tau_{n}\) can thus be taken to be \[\tau_{n}=\left(\left\lfloor\frac{\mu_{n-1}}{\pi\alpha_{n-1}}-\frac{1}{2}\right\rfloor+\frac{1}{2}\right)\frac{\pi}{\mu_{n-1}}, \tag{17}\] where \(\lfloor\dots\rfloor\) denotes rounding off to the nearest integer. This choice of waiting time leads to an expected variance \(\mathbb{E}_{d_{n}}\big{[}\hat{\sigma}_{n}^{2}\big{]}\approx\sigma_{n-1}^{2}[1-(\sigma_{n-1}^{2}/e\alpha_{n-1}^{2})]\), cf. Ref. [34].

For the case of \(\mu_{n-1}/\sigma_{n-1}\ll 1\) we find \[\mathbb{E}_{d_{n}}\big{[}\hat{\sigma}_{n}^{2}\big{]}=\sigma_{n-1}^{2}-\sigma_{n-1}^{4}\tau^{2}\Re\left\{\sum_{\eta=\pm 1}\frac{\sqrt{2+\eta e^{\frac{1}{2}\alpha_{n-1}^{2}\tau^{2}}}}{2\sqrt{2}e^{\frac{1}{2}\alpha_{n-1}^{2}\tau^{2}}}\right\}, \tag{18}\] which has its global minimum at \(\tau\approx 1.75/\alpha_{n-1}\), where \(\mathbb{E}_{d_{n}}\big{[}\hat{\sigma}_{n}^{2}\big{]}\approx\sigma_{n-1}^{2}[1-0.60(\sigma_{n-1}^{2}/\alpha_{n-1}^{2})]\). However, if we instead evaluate the expected variance at \(\tau=1/\alpha_{n-1}\), i.e., at the optimal time we found in the large-\(\mu_{n-1}\) limit, we find an expected variance of \(\mathbb{E}_{d_{n}}\big{[}\hat{\sigma}_{n}^{2}\big{]}\approx\sigma_{n-1}^{2}[1-0.54(\sigma_{n-1}^{2}/\alpha_{n-1}^{2})]\), the improvement in \(\hat{\sigma}^{2}\) being reduced by only \(10\%\). We take this as a motivation to consistently aim for \(\tau_{n}=1/\alpha_{n-1}\) for the next experiment, throughout the whole range of \(\mu_{n-1}/\sigma_{n-1}\). We will thus always use Eq. (17) for evaluating the new waiting time \(\tau_{n}\), picking the local minimum closest to \(\tau=1/\alpha_{n-1}\).
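Putting the above together, the complete per-experiment update of the MM scheme amounts to evaluating Eqs. (8)-(14) and (17); a minimal sketch (function and variable names are our own; the fallback to \(\tau=1/\alpha_{n-1}\) for small \(\mu_{n-1}\) anticipates the rule discussed in the next paragraph):

```python
import numpy as np

def mm_update(mu, sigma, d, tau, T):
    """Method-of-moments fit of the posterior after outcome d, Eqs. (8)-(14)."""
    alpha2 = sigma**2 + 2.0 / T**2                       # alpha_{n-1}^2
    sign = (-1.0)**d

    def fgn(t):
        zeta = mu + 1j * sigma**2 * t
        env = np.exp(-0.5 * alpha2 * t**2 + 1j * mu * t) # n_{n-1}(t), Eq. (12)
        f = env * (zeta**2 + sigma**2)                   # Eq. (10)
        g = env * (zeta**4 + 6 * zeta**2 * sigma**2 + 3 * sigma**4)  # Eq. (11)
        return f, g, env

    f0, g0, n0 = fgn(0.0)
    ft, gt, nt = fgn(tau)
    denom = np.real(n0 + sign * nt)
    m2 = np.real(f0 + sign * ft) / denom                 # <omega^2>, Eq. (8)
    m4 = np.real(g0 + sign * gt) / denom                 # <omega^4>, Eq. (9)

    # Eqs. (13)-(14); the imaginary part is discarded for leptokurtic posteriors
    var = m2 - np.real(np.sqrt((3 * m2**2 - m4) / 2 + 0j))
    return np.sqrt(m2 - var), np.sqrt(var)               # (mu_new, sigma_new)

def next_tau(mu, sigma, T):
    """Greedy waiting time of Eq. (17): the local minimum closest to
    tau = 1/alpha, with a fallback to 1/alpha itself for small mu."""
    alpha = np.sqrt(sigma**2 + 2.0 / T**2)
    if mu < 0.5 * np.pi * sigma:
        return 1.0 / alpha
    k = max(np.round(mu / (np.pi * alpha) - 0.5), 0.0)
    return (k + 0.5) * np.pi / mu
```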
However, when \(\mu_{n-1}<\frac{1}{2}\pi\sigma_{n-1}\), we take \(\tau_{n}=1/\alpha_{n-1}\); such a small \(\mu_{n-1}\) signals that the oscillations as a function of \(\mu_{n-1}\tau\) are slower than the \(\tau\)-dependence of the envelope function and thus start to become irrelevant, the expected variance ultimately converging to the \(\mu_{n-1}\)-independent expression given by Eq. (18).

Since the method presented in this section contains several approximations and is based on a relatively rough fitting technique, it will not always yield truly optimal fit parameters nor the most effective \(\tau_{n}\). However, as argued above, we expect the results to always be reasonably good, and the advantage of the method is that all calculations that need to be done after each FID experiment [i.e., evaluating Eqs. (13,14,17)] amount to evaluating straightforward analytic expressions, which can be done with minimal computational overhead.

### KL-divergence fit using a neural network

While MM estimators are much simpler to calculate than maximum likelihood estimators, they are in general biased and not robust with respect to the samples they are derived from. An example is the problem we encounter when getting complex-valued estimators for small \(\mu/\sigma\) in the procedure outlined in the previous Section, which is rooted in the fact that we try to estimate parameters of a bimodal Gaussian using samples from a distribution that has a higher kurtosis than if they actually were drawn from a bimodal Gaussian. This suggests that in the case of small \(\mu/\sigma\), more sophisticated estimators for \(\mu\) and \(\sigma\) should ideally be used (such as maximum likelihood estimators), typically requiring a numerical fitting procedure. In the context of our work, however, this might be too complex and time-consuming for an efficient real-time implementation.

A possible workaround to investigate is training a neural network to perform the fitting task [54]; indeed, modern FPGAs allow for the implementation of neural networks for on-the-fly processing of data. Ultimately, the problem boils down to mapping the old parameters \(\mu_{n-1}\) and \(\sigma_{n-1}\) to the optimal waiting time \(\tau_{n}\) and the resulting updated values of \(\mu_{n}\) and \(\sigma_{n}\) for both outcomes \(d_{n}=0,1\), i.e., we want to learn the map \[f:\{\mu_{n-1},\sigma_{n-1}\}\rightarrow\{\hat{\mu}_{n}^{(0)},\hat{\sigma}_{n}^{(0)},\hat{\mu}_{n}^{(1)},\hat{\sigma}_{n}^{(1)},\hat{\tau}_{n}\}. \tag{19}\] One could thus perform all (computationally costly) numerical fitting beforehand, for a relevant range of parameters \(\mu_{n-1}\) and \(\sigma_{n-1}\), and then interpolate the map \(f\) by teaching it to a neural network. Here, we investigate this possibility by performing the numerical fit through minimizing the KL-divergence between the true posterior \(p_{n}(\omega|d_{n},\tau_{n},T)\) and the bimodal Gaussian distribution (6). Explicitly, this is done by finding
\[\mu_{n}^{(d_{n})},\sigma_{n}^{(d_{n})}=\operatorname*{arg\,min}_{\mu,\sigma}\int d\omega\,p_{n}(\omega|d_{n},\tau_{n},T)\log\left[\frac{p_{n}(\omega|d_{n},\tau_{n},T)}{q(\omega,\mu,\sigma)}\right]. \tag{20}\] The reason we use the KL-divergence, rather than a least-squares fit, is twofold: (i) the KL-divergence is precisely meant as a metric for the similarity between two distributions, and (ii) the least-squares fit empirically results in distributions that are too narrow, with a near-zero probability density in regions where the true posterior actually has significant weight. In the KL-divergence fit, the second issue is counteracted by the argument of the logarithm, forcing the fitted distribution to cover the true posterior to a greater extent. We emphasize, to avoid confusion, that we are not using the KL-divergence as a loss function for the training of the neural network, but rather to calculate the map (19) to be taught.

The data set for training the neural network is generated on a grid of linearly spaced \(\mu_{n-1}\) and log-spaced \(\sigma_{n-1}\). We numerically calculate the full posterior distribution for each pair of parameters, for different measurement outcomes \(d\) and times \(\tau_{n}\). A numerical KL-divergence fit to a bimodal Gaussian as explained above is performed for each combination of inputs, and the target value for each feature \(\{\mu_{n-1},\sigma_{n-1}\}\) is chosen to be the set \(\{\mu_{n}^{(0)},\sigma_{n}^{(0)},\mu_{n}^{(1)},\sigma_{n}^{(1)},\tau_{n}\}\) that minimizes the expected variance \(\mathbb{E}_{d_{n}}[\sigma_{n}^{2}]\) as a function of \(\tau_{n}\).

For this task, we used a standard feed-forward NN, keeping the storage size of a typical FPGA in mind as a boundary condition. The network is trained by minimizing the mean square error (MSE) between predictions and the target values. A subtlety to address is that, because of their role in the estimation scheme, the tolerance for errors in the output of the NN varies across the map. Indeed: (i) the errors in \(\mu_{n}^{(d)}\) and \(\sigma_{n}^{(d)}\) must be contained so that the bimodal Gaussian based on the output parameters still has a significant overlap with the one based on the target values, i.e., both errors should not exceed the scale of \(\sigma_{n}^{(d)}\) itself, and (ii) the error in \(\tau_{n}\) must be contained so that the measurement performed using this time is consistent with the updates for \(\mu_{n}^{(d)}\) and \(\sigma_{n}^{(d)}\). In practice, the calculation of an MSE-based loss function therefore requires including a variable weight for the error, depending on the values of the inputs \(\mu_{n-1}\) and \(\sigma_{n-1}\). We implemented this by instead teaching the NN a map where all five output parameters are renormalized by \(\sigma_{n-1}\), using a plain MSE as the loss function, and finally retrieving the predictors of interest by applying the inverse transformation to the output as a post-processing step; see Appendix A for more details.

We found that a sufficiently large neural network is capable of learning an accurate fit over several orders of magnitude of \(\mu\), \(\sigma\), and \(\tau\), though its performance in different regions of the map was not consistent. However, for a truly useful fit, the size of the network needed makes it infeasible to implement straightforwardly on the current generation of FPGAs. Therefore we explored alternative approaches as well, where the NN is only used in the regime of small \(\mu/\sigma\), where the simple MM fit of Section 3.1 does not work optimally.
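For completeness, the numerical KL fit of Eq. (20) that generates the training targets can be sketched as follows, using a general-purpose minimizer (scipy's Nelder-Mead is our own choice here, as are the clipping constants):

```python
import numpy as np
from scipy.optimize import minimize

def bimodal_gaussian(omega, mu, sigma):
    """q(omega, mu, sigma) of Eq. (6), written as the average of two Gaussian
    peaks at +/- mu, which is numerically more stable than the cosh form."""
    norm = 1.0 / np.sqrt(2 * np.pi * sigma**2)
    return 0.5 * norm * (np.exp(-(omega - mu)**2 / (2 * sigma**2))
                         + np.exp(-(omega + mu)**2 / (2 * sigma**2)))

def kl_fit(omega, posterior, mu_init, sigma_init):
    """Fit a gridded posterior to Eq. (6) by minimizing the KL divergence."""
    p = np.clip(posterior, 1e-300, None)

    def kl(params):
        mu, sigma = params
        q = np.clip(bimodal_gaussian(omega, mu, sigma), 1e-300, None)
        return np.trapz(p * np.log(p / q), omega)

    result = minimize(kl, x0=[mu_init, sigma_init], method="Nelder-Mead")
    return result.x  # fitted (mu, sigma)
```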
We found that a significantly smaller neural network (three layers of 20 neurons each) manages to consistently learn the map where \(\mu<2\pi\sigma\), and we therefore investigated a hybrid approach as an alternative to the pure MM scheme: we use the MM for \(\mu_{n-1}\geq 2\pi\sigma_{n-1}\), while using the NN map when \(\mu_{n-1}<2\pi\sigma_{n-1}\), where a good fit is achievable (cf. Ref. [34]). Although we believe that the network can be made even more compact and still give acceptable results, we did not investigate this further.

## 4 Results

In this Section we analyze the performance of the two estimation schemes outlined above, and compare them to the more commonly used Bayesian protocol with linearly spaced \(\tau_{n}\) [17, 18, 25]. To do so, we first discuss the experimentally relevant time scales involved in the estimation. Firstly, the Hamiltonian parameter \(\omega\) to be estimated can be assumed constant during the estimation protocol only if the typical time scale associated with its variation, i.e., the correlation time \(T_{c}\thicksim\Gamma^{-1}\), is much longer than the total estimation time, \(T_{e}\ll T_{c}\). The estimation time \(T_{e}\) includes \(N\) repetitions of the FID experiment, each of which involves an initialization-evolution-readout sequence. During the \(n\)-th repetition, the qubit undergoes free evolution for a time \(\tau_{n}\), while the initialization and readout steps take an additional time \(T_{\text{exp}}\), typically of the order \(T_{\text{exp}}\thicksim 10\)\(\mu\)s. In total, the \(n\)-th repetition thus takes \(T_{n}=\tau_{n}+T_{\text{exp}}\), and the estimation time can be formally written as \(T_{e}=NT_{\text{exp}}+\sum_{n}\tau_{n}\). The constraint \(T_{e}\ll T_{c}\) thus implicitly sets a limit on the number of available repetitions \(N\) (in Section 4.2 we will investigate this constraint in more detail). Furthermore, the phenomenological dephasing time \(T\) sets the upper bound on the individual evolution times, \(\tau_{n}\lesssim T\), and thus limits the total estimation time to \(T_{e}\lesssim N(T+T_{\text{exp}})\) as a result.

In our simulations, we first set \(T_{c}\to\infty\), i.e., we treat \(\omega\) as a static parameter, and we compare the estimation methods both in the dephasing-free case and for finite \(T\). Next, we include the dynamics of \(\omega\) by using a finite \(T_{c}\) and we analyze their effect on the estimation procedure, focusing on the relevant example of low-frequency fluctuations of \(\omega\) (assuming it to be the Overhauser field gradient). Finally, we extend this analysis to include an arbitrary additional separation time \(T_{w}\) that elapses between consecutive runs of the estimation protocol and thus defines a time window in which the knowledge obtained about \(\omega\) can be employed for qubit control with improved coherence. Altogether, this presents a complete protocol for the tracking of a slowly varying Hamiltonian parameter in practice, with small enough overhead to be implemented on an FPGA in a typical experiment.

### Estimation of a static parameter

In order to benchmark the schemes, we simulate many estimations where the true parameter \(\omega\) to be estimated is assumed static and is drawn from the normal distribution \(p_{0}(\omega)\), truncated here to \(\omega\in[-2\sigma_{K},2\sigma_{K}]\) since the NN is only trained to be valid on this domain, and we use both estimation methods outlined in Section 3.
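A single simulated estimation run can then be sketched as follows, assuming the `mm_update` and `next_tau` helpers sketched in Section 3.1 (parameters in units of \(\sigma_{K}\); all numerical choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigma_K, T = 1.0, 50.0                   # dephasing time T in units of 1/sigma_K

# Draw a static "true" omega from p_0, truncated to [-2 sigma_K, 2 sigma_K]
omega_true = rng.normal(0.0, sigma_K)
while abs(omega_true) > 2 * sigma_K:
    omega_true = rng.normal(0.0, sigma_K)

mu, sigma = 0.0, sigma_K                 # initial prior p_0(omega), Eq. (5)
for n in range(50):
    tau = next_tau(mu, sigma, T)
    # Single-shot outcome drawn from the Born rule, Eq. (4)
    p_zero = 0.5 * (1 + np.exp(-(tau / T)**2) * np.cos(omega_true * tau))
    d = int(rng.random() > p_zero)
    mu, sigma = mm_update(mu, sigma, d, tau, T)

print(f"estimated |omega| = {mu:.4f}, true |omega| = {abs(omega_true):.4f}")
```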
The results following from the MM (red) and hybrid schemes (green) are shown in Fig 3, where we compare them to a standard non-adaptive scheme with uniform sampling in time, \(\tau_{n}=n\pi\sigma_{K}^{-1}/2\), and a uniform initial prior (blue), as used in [17, 18, 25]. The distributions of the deviations of the estimates \(\hat{\mu}_{n}\) from the true values are plotted as "violin plots" on a logarithmic scale, where the horizontal bars indicate the median of each distribution.1 Fig 3(a) shows the results as a function of the total number \(N\) of FID experiments per estimate, and Fig 3(b) presents the same results but now as a function of the total estimation time \(T_{e}\), where we included \(T_{\text{exp}}=750~{}\sigma_{K}^{-1}\). The inset in Fig 3(a) shows the total number of parameters that need to be stored on the FPGA during an estimation procedure, as a function of the number of experiments each estimate consists of.

Footnote 1: The reason for plotting the median rather than the mean is that the latter will be skewed to deceptively large values due to the occurrence of a few outliers where the estimation fails [55].

As can be seen from Fig 3(a), the median error of both the MM (red) and the hybrid scheme (green) decreases exponentially with the number of measurements \(N\) starting at around 20 measurements, and both schemes clearly outperform the uniform sampling (blue). This exponential improvement is similar to the one found in Ref. [34], where a two-step process was employed, consisting of a "warm-up" round of estimates with linearly spaced \(\tau_{n}\), followed by an adaptive procedure similar to the one outlined in Section 3.1, where \(\mu/\sigma\) was assumed to have become large enough that the positive part of the distribution could be fitted straightforwardly to a single Gaussian peak. Having such a warm-up round means that the scheme does not take advantage of the fact that the Gaussian approximation only requires storing two variables, since the warm-up needs to store the entire distribution. Also, for very small \(\omega\) one would presumably have to use many measurements in the warm-up period.2

Footnote 2: Asymptotically, all of the adaptive optimization schemes mentioned implement a quantum binary search algorithm where the binary choices relate to the measurement outcomes \(d=0\) and \(d=1\). Each outcome results in a different posterior, choosing which side of the prior’s mean it centres on, giving a factor \(1-e^{-1}\approx 0.63\) improvement in the variance at each step, cf. Eq. (16).

Since the two adaptive schemes are optimized in a greedy way, i.e., to yield the maximum gain per experiment, they show their exponential improvement most clearly as a function of \(N\). The improvement is less pronounced when considering the total experiment time \(T_{e}\) rather than the number of experiments, as illustrated in Fig 3(b). Indeed, greedy schemes typically tend to require exponentially spaced waiting times, and 50 experiments with such an adaptive scheme will thus take significantly longer than the same number of linearly spaced experiments. However, since the experimental overhead time \(T_{\text{exp}}\sim 10~{}\mu\)s needed for initialization and readout is typically orders of magnitude longer than \(\sigma_{K}^{-1}\sim 20~{}\)ns, which sets a typical scale for the waiting times, the number of experiments \(N\) and the total time \(T_{e}\) often scale similarly for moderate \(N\), typically up to tens of experiments, see Fig 3.
Interestingly, the scheme based on the MM does not fare very well for small numbers of measurements (up to \(N\sim 15\)), likely due to the approximation for evaluating the evolution time being suboptimal at small \(\mu/\sigma\). We also note that, although the NN interpolation of a KL-divergence fit gives an optimal choice for the waiting times also at small \(\mu/\sigma\), it does not seem to give a significant improvement over the uniform time spacing during the first few measurements.

The inset of Fig 3(a) gives an indication of the amount of memory needed to perform the different schemes, as a function of \(N\). The scheme based on the MM (red) only requires tracking of _three_ parameters (and evaluating very few equations). While the hybrid scheme (green) technically only needs to track seven variables, it does need to store the NN on the FPGA and to feed data through the network (which in this case consisted of 1005 trainable parameters). For the uniform time sampling (blue) the computational cost depends on how the procedure is implemented. Here, we used a Fourier-coefficient representation of the instantaneous distribution functions [33], so that the number of parameters increases quadratically with \(N\) and the distribution is represented accurately at all times.3 Comparing the three methods, the computational advantage of the simple MM approach is dramatic and, in our opinion, clearly outweighs the modest loss in accuracy for smaller \(N\).

Footnote 3: One could alternatively represent the distribution by discretizing \(\omega\) with a desired resolution or use Monte-Carlo sampling to get an arbitrarily good resolution without storing too many data points.

Figure 3: (a) Violin plots of the logarithm of the absolute estimation errors of \(10^{4}\) simulations after \(N\) single-shot measurements with linear \(\tau_{n}\) (blue) and adaptive \(\tau_{n}\) using the MM scheme (red) or the hybrid scheme (green), where the true frequency is sampled from the initial prior truncated to \(\omega\in[-2\sigma_{K},2\sigma_{K}]\). Horizontal lines indicate the median error over the simulations. The inset shows for each of the schemes how many parameters have to be stored during the estimation. (b) Same results versus total experiment time \(T_{e}\), assuming that initialization and readout take \(T_{\text{exp}}=750\ \sigma_{K}^{-1}\).

We now add a finite dephasing time \(T\), which limits the evolution time during each repetition, \(\tau_{n}\lesssim T\). In its simplest form, the phenomenological parameter \(T\) corresponds to the time scale over which the coherent oscillations that are probed in each FID experiment persist. As an example, for the case where coherence is limited by qubit relaxation or by leakage to non-computational states, the parameter would be directly related to the relaxation or leakage time, \(T\sim T_{1}\). Alternatively, the time scale \(T\) can be related to the fluctuations of the parameter to be estimated itself around its static value; if such fluctuations would consist of uncorrelated white noise, then the dephasing time is simply \(T\thicksim T_{2}\), where \(T_{2}\) is the commonly measured dephasing time.4

Footnote 4: We highlight that in all of the above cases the decay of the likelihood function [cf. Eq. (4)] would be exponential in \(\tau\), instead of Gaussian.
This would, however, only modify details in the estimators \(\hat{\sigma}_{n}^{(d_{n})}\) and \(\hat{\mu}_{n}^{(d_{n})}\) but keep the intuitive interpretation of \(T\) as the upper bound of \(\tau\) intact. In this work we chose Gaussian decay, to align with the case of spin qubits affected by low-frequency noise, in the form of the fluctuating Overhauser fields themselves or possibly as residual effects on \(\omega\) of \(1/f\) charge noise.

To show how the value of \(T\) affects the estimation procedure, we plot in Fig 4 the distributions of absolute errors after \(N=50\) FID experiments as a function of the normalized dephasing time \(T\sigma_{K}\) for the same three schemes as in Fig 3, using the same color coding. The distributions are again plotted as violin plots on a logarithmic scale, where the horizontal lines indicate the median. As the coherence time \(T\) becomes short, the advantage of the adaptive schemes diminishes and the accuracy eventually becomes similar to that of the uniform time sampling. Exponential improvement of the error is only possible with an exponential increase in waiting times \(\tau_{n}\), and since the FID times are capped at \(\tau_{n}\lesssim T\), the usefulness of adaptive schemes naturally becomes limited in the case of very short dephasing times.

Figure 4: Violin plots of the logarithm of the absolute estimation errors of \(10^{4}\) simulations after \(N=50\) single-shot measurements as a function of the phenomenological dephasing time \(T\). We show results with linearly spaced waiting times \(\tau_{n}\) (blue) and adaptive \(\tau_{n}\) using the MM scheme (red) or the hybrid scheme (green), where the true \(\omega\) is sampled from the initial prior truncated to \(\omega\in[-2\sigma_{K},2\sigma_{K}]\). Horizontal lines indicate the median error over the simulations.

### Single estimation of a slowly drifting parameter

For the case of \(\omega\) representing the slowly drifting Overhauser field gradient, with a typical correlation time \(T_{c}\thicksim 5\) s, the longest useful waiting time (related to the dephasing time \(T\) discussed above) becomes more intricately connected to the other time scales mentioned in the beginning of this Section. Assuming that the dynamics of \(\omega\) are fully driven by the fluctuations of the nuclear spin ensemble, which we model as an OU process, we investigate the effect of these drift-diffusion dynamics on the estimation procedure.

In the absence of any additional information about the Overhauser gradient \(\omega\), our knowledge about it is described by the probability distribution \(p_{0}(\omega)\). The uncertainty associated with this distribution then determines the "standard" dephasing time usually associated with the fluctuating Overhauser fields in GaAs-based spin qubits, \(T_{2}^{K}\thicksim\sigma_{K}^{-1}\thicksim 20\) ns. The role of our estimation procedure is to reduce the uncertainty in \(\omega\) to a final value \(\sigma_{f}\ll\sigma_{K}\), and hence significantly extend the dephasing time of the qubit for operations performed right after the estimation. One can clearly not reach a final uncertainty \(\sigma_{f}\) if \(\omega\) diffuses over more than \(\thicksim\sigma_{f}\) during a single FID experiment.5 To estimate the time \(\tau_{2\sigma_{f}}\) over which a Gaussian peak with width \(\sigma_{f}\) evolves into a peak with width \(2\sigma_{f}\) we can use Eq.
(2), yielding6 \[\tau_{2\sigma_{f}}\thicksim\frac{T_{c}}{2}\ln\left(\frac{\sigma_{K}^{2}-\sigma_{f}^{2}}{\sigma_{K}^{2}-4\sigma_{f}^{2}}\right). \tag{21}\] In the limit of \(\sigma_{f}\ll\sigma_{K}\), this reduces to \[\tau_{2\sigma_{f}}\thicksim\frac{3T_{c}}{2}\frac{\sigma_{f}^{2}}{\sigma_{K}^{2}}. \tag{22}\]

Footnote 5: If this constraint is violated, then the estimation procedure might produce so-called outliers, i.e., estimates that are much further off from the true value of \(\omega\) than typical ones.

Footnote 6: Formally we always work with the bimodal distribution \(q(\omega,\mu,\sigma)\), but since both this distribution and the dynamics of \(\mu\) and \(\sigma\) following from the FP equation are symmetric in \(\omega\), we can simply use Eq. (2) to predict the evolution of our parameters. Furthermore, the relaxation of \(\mu(t)\) toward zero also contributes to the drift of the distribution function describing \(\omega\). For small \(\sigma_{f}/\sigma_{K}\) and \(t/T_{c}\), however, we find that this relaxation goes \(\propto t/T_{c}\), whereas the change in \(\sigma\) is \(\propto(t/T_{c})^{1/2}\).

Since both adaptive estimation schemes investigated in this work are greedy and converge to a roughly exponential increase of \(\tau_{n}\) and exponential decrease of \(\sigma_{n}\) as a function of the experiment number \(n\), the relation (22) also yields a maximal achievable accuracy \(\sigma_{\mathrm{min}}/\sigma_{K}\) and a corresponding maximum number of useful single-shot experiments \(N_{\mathrm{max}}\) as a function of \(T_{c}\). In principle, these quantities are defined through \(\tau_{N_{\mathrm{max}}}+T_{\mathrm{exp}}=\tau_{2\sigma_{\mathrm{min}}}\), but throughout this section we will set \(T_{\mathrm{exp}}=0\), in order to find the intrinsic limits on the estimation accuracy set purely by the dynamics of \(\omega\) interfering with the FID experiments. The expectation is that as soon as the two Gaussian peaks in the bimodal distribution are well separated, the schemes will converge to a sequence where \(\tau_{n}\thicksim a_{\tau}(1-e^{-1})^{-n/2}\) and \(\sigma_{n}\thicksim a_{\sigma}(1-e^{-1})^{n/2}\) [35], where the prefactors \(a_{\tau}\) and \(a_{\sigma}\) depend on how quickly this exponential regime is reached. Using Eq. (22) we find \[N_{\mathrm{max}}\thicksim\frac{2\ln\left(\frac{3}{2}\frac{T_{c}}{a_{\tau}}\frac{a_{\sigma}^{2}}{\sigma_{K}^{2}}\right)}{3[1-\ln(e-1)]}. \tag{23}\] This sets the minimal achievable variance as \(\sigma_{\mathrm{min}}^{2}\thicksim a_{\sigma}^{2}(1-e^{-1})^{N_{\mathrm{max}}}\), for which the longest single-shot waiting time needed is \(\tau_{N_{\mathrm{max}}}=a_{\tau}(1-e^{-1})^{-N_{\mathrm{max}}/2}\). Since this time \(\tau_{N_{\mathrm{max}}}\) is in any case the longest useful waiting time, one can use \(T=\tau_{N_{\mathrm{max}}}\) to limit the choice of \(\tau_{n}\) accordingly. The presence of a finite \(T_{\mathrm{exp}}\) can be incorporated straightforwardly into the theory, leading to results that are only slightly modified quantitatively.

To illustrate this intrinsic limitation on the estimation accuracy, we simulated \(10^{4}\) estimations up to \(N=60\) using the scheme based on the MM, while letting \(\omega\) continuously fluctuate following an OU process with \(\sigma_{K}=50\) MHz and \(T_{\mathrm{exp}}=0\). In Fig 5 we plot the median accuracy of the resulting estimates as a function of the total number of FID experiments,7 for different \(T_{c}\) ranging from \(100\)\(\mu\)s to \(10^{5}\) s (different traces).
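The bound of Eq. (23) and the corresponding minimal variance are cheap to evaluate once the prefactors \(a_{\tau}\) and \(a_{\sigma}\) are known (they are obtained by fitting, as described next); a minimal sketch:

```python
import numpy as np

def n_max(T_c, a_tau, a_sigma, sigma_K):
    """Maximum useful number of single-shot experiments, Eq. (23)."""
    gain = 1.5 * (T_c / a_tau) * (a_sigma / sigma_K)**2
    return 2 * np.log(gain) / (3 * (1 - np.log(np.e - 1)))

def sigma_min(a_sigma, N_max):
    """Corresponding minimal achievable r.m.s. estimation error."""
    return a_sigma * (1 - np.exp(-1))**(N_max / 2)
```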
We found that for our simulations, Eq. (23) predicts that \(N_{\mathrm{max}}\thicksim 1.45\ln(494\,\sigma_{K}T_{c})\), where the factors \(a_{\tau}\) and \(a_{\sigma}\) were determined from fitting \(a_{\tau}(1-e^{-1})^{-n/2}\) and \(a_{\sigma}(1-e^{-1})^{n/2}\) to the average of the actual \(\tau_{n}\) and \(\sigma_{n}\) used by the scheme, in the limit \(T_{c}\to\infty\). The resulting values for \(N_{\mathrm{max}}\) are indicated in Fig 5 with vertical dashed lines, confirming that Eq. (23) gives a good estimate for the maximum useful number of FID experiments. This means that for a typical spin-qubit system (where \(T_{c}\thicksim 5\) s) and using the greedy adaptive estimation schemes presented in this paper, there is generally no point in performing more than \(N\sim 35\) single-shot experiments in a single estimation, and the dephasing time can be set to \(T\sim\tau_{35}\approx 3\)\(\mu\)s. This also means that the intrinsic limitation on the estimation accuracy caused by the diffusion of \(\omega\) is roughly \(\sigma_{\min}/\sigma_{K}\approx 10^{-3}\).

### Sequential estimation of a slowly drifting parameter

Another consequence of having an \(\omega\) that slowly drifts is that in practice the estimation procedure has to be repeated over time to update our knowledge about \(\omega\), in order to keep the effective qubit dephasing time suppressed. Denoting the time in between two estimations by \(T_{w}\), we understand that if \(T_{w}\gtrsim T_{c}\) all previously gained knowledge about \(\omega\) has become obsolete at the start of each estimation procedure and the correct initial prior is always \(p_{0}(\omega)\). However, if one sets \(T_{w}\lesssim T_{c}\) then we can expect the initial prior to be still somewhat narrowed, potentially allowing for an accurate estimation of \(\omega\) using fewer FID experiments.

An important advantage of using the Gaussian approximation throughout the whole estimation scheme is that it is very straightforward to include such an "idle time" into the model. Indeed, if the final posterior distribution of the estimation can be mapped to the Gaussian parameters \(\mu_{f}\) and \(\sigma_{f}\), then we can use Eq. (2) to obtain the parameters that characterize the distribution at time \(T_{w}\) after the estimation procedure, \[\mu(T_{w}) =\mu_{f}e^{-T_{w}/T_{c}}, \tag{24}\] \[\sigma(T_{w}) =\sqrt{\sigma_{K}^{2}+(\sigma_{f}^{2}-\sigma_{K}^{2})e^{-2T_{w}/T_{c}}}. \tag{25}\] We see that when \(T_{w}\lesssim T_{c}\) the final \(\sigma(T_{w})\) is indeed significantly smaller than \(\sigma_{K}\). The values for \(\mu(T_{w})\) and \(\sigma(T_{w})\) as given by Eqs. (24,25) can easily be evaluated on an FPGA, making it possible to start the subsequent estimation from the initial prior \(q[\omega,\mu(T_{w}),\sigma(T_{w})]\) instead of \(p_{0}(\omega)\). In a way, we thus keep track of the optimal prior to use, which presumably reduces the number of FID experiments needed in each estimation to achieve good accuracy.

In Fig 6 we illustrate this approach, using the hybrid estimation scheme presented in Section 3.2 and setting \(T_{c}=5\) s and \(\sigma_{K}=50\) MHz. Fig 6(a) shows in blue a simulated \(\omega(t)\) following from an OU stochastic process. We then simulated six subsequent estimation procedures, spaced by \(T_{w}=0.2\) s, where we stopped whenever we reached \(\sigma_{f}\leq 2\) MHz.
Figure 5: Median estimation error as a function of the number of single-shot experiments used, for \(10^{4}\) simulations of the scheme based on the MM. Here, we included the drift–diffusion dynamics of \(\omega\) throughout the whole simulation, using \(\sigma_{K}=50\) MHz and different correlation times \(T_{c}\) (different traces). The vertical dashed lines show the maximum useful number of single shots as given by Eq. (23).

The resulting final values \(\mu_{f}\) and \(\sigma_{f}\) are indicated by the red points with error bars (the error bars show \(2\sigma_{f}\)). The initial prior at \(t=0\) is given by \(p_{0}(\omega)\), depicted as the black point at \(|\omega|=0\) with an error bar of \(2\sigma_{K}\). The evolution of \(\mu\) and \(2\sigma\) as given by Eqs. (24,25) in between estimations is illustrated by the red shaded areas, still resulting in initial priors (shown in black) for all estimations after the first one that are significantly narrower than \(p_{0}(\omega)\). We find that for the values used here, all subsequent estimations require \(N\approx 9\) to obtain an accuracy of \(\sigma_{f}=2\) MHz, whereas one typically needs \(N\approx 13\) to reach the same accuracy starting from \(p_{0}(\omega)\).

In Fig 6(b) we explore the dependence of the average number of FID experiments needed, \(\langle N\rangle\), to reach \(\sigma=2\) MHz on the idle time \(T_{w}\) in between estimations. We see that \(\langle N\rangle\) increases roughly logarithmically until \(T_{w}\thicksim T_{c}\), where it saturates at \(\langle N\rangle\approx 12.5\). Depending on the time window needed for coherent qubit operations, one could thus efficiently reduce the estimation overhead by adjusting \(T_{w}\) in the experiment and using Eqs. (24,25) for an adaptive adjustment of the initial prior for each estimate. Conversely, one can use the results presented in Section 4.2 to estimate the maximal time \(T_{w}\) in between estimations for which the uncertainty in \(\omega\) stays below a given threshold. Suppose that all qubit operations require \(\sigma<\sigma_{\text{max}}\) and that one is able to efficiently estimate \(\omega\) to an accuracy \(\sigma_{f}\); then the time window available for coherent qubit operations is given by \[T_{w}\thicksim\frac{T_{c}}{2}\ln\left(\frac{\sigma_{K}^{2}-\sigma_{f}^{2}}{\sigma_{K}^{2}-\sigma_{\text{max}}^{2}}\right), \tag{26}\] which for \(\sigma_{\text{max}},\sigma_{f}\ll\sigma_{K}\) reduces to \(T_{w}\thicksim T_{c}(\sigma_{\text{max}}^{2}-\sigma_{f}^{2})/2\sigma_{K}^{2}\).

Figure 6: (a) Example of tracking a drifting Overhauser gradient (shown in blue) with the hybrid estimation scheme. A series of FID experiments is performed every 200 ms until the final Gaussian posterior variance becomes \(\sigma_{f}<2\) MHz (assuming that \(\omega\) remains constant within these few measurements). Red dots mark the final posterior estimates \(\mu_{f}\), with error bars of \(2\sigma_{f}\). Equations (24,25) give a recipe for how to adjust the new prior as the time between estimations increases (the shaded red regions indicate the evolution of \(\mu\) and \(2\sigma\)). The black dots show the resulting prior estimates \(\mu(T_{w})\), again with error bars of \(2\sigma(T_{w})\). (b) The average number of single shots needed to regain a final posterior variance with \(\sigma_{f}<2\) MHz as a function of the idle time \(T_{w}\) between measurements.
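The bookkeeping for this sequential protocol is minimal; a sketch (our own function names) of the prior propagation of Eqs. (24,25) and the operation window of Eq. (26), with frequencies in MHz and times in seconds:

```python
import numpy as np

def propagate_prior(mu_f, sigma_f, T_w, sigma_K=50.0, T_c=5.0):
    """Prior parameters for the next estimation after an idle time T_w,
    Eqs. (24, 25)."""
    mu = mu_f * np.exp(-T_w / T_c)
    sigma = np.sqrt(sigma_K**2 + (sigma_f**2 - sigma_K**2) * np.exp(-2 * T_w / T_c))
    return mu, sigma

def operation_window(sigma_f, sigma_max, sigma_K=50.0, T_c=5.0):
    """Time window during which sigma stays below sigma_max, Eq. (26)."""
    return 0.5 * T_c * np.log((sigma_K**2 - sigma_f**2) / (sigma_K**2 - sigma_max**2))

# Example: after reaching sigma_f = 2 MHz, sigma stays below 5 MHz for ~21 ms
print(operation_window(sigma_f=2.0, sigma_max=5.0))
```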
## 5 Summary and conclusions

We have investigated two different efficient adaptive Bayesian estimation schemes that can track a slowly fluctuating Overhauser field gradient with a zero-mean Gaussian distribution in time, using a series of free induction decay experiments. Both schemes perform a greedy optimization of the single-shot estimation parameters based on the current knowledge of the field gradient, in order to obtain an exponential scaling of the estimation error with the number of experiments. The small number of variables needed by these schemes to track the gradient makes them well suited for real-time estimation performed on an FPGA, and the robustness of Bayesian methods combined with the ability of the schemes to handle zero-value estimates makes them applicable to real-world quantum estimation problems. We also show how our simple Gaussian approach lends itself excellently to _predictive_ estimation, by using the Fokker-Planck equation to anticipate how our knowledge of the field gradient evolves after an estimation has been performed. We included a discussion of the effects of a finite dephasing time on the estimation schemes and we analyzed the effect of the fluctuations of the field gradient itself on the robustness of the schemes, yielding useful insight into the intricate interplay of all experimental time scales involved.

## Acknowledgements

We gratefully acknowledge useful discussions with Fabrizio Berrita, Torbjorn Rasmussen, Anasua Chatterjee, and Ferdinand Kuemmeth.

Funding information: This project was funded within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733. This work is part of INTFELLES-Project No. 333990, which is funded by the Research Council of Norway (RCN), and it received funding from the Dutch National Growth Fund (NGF) as part of the Quantum Delta NL programme. Parts of the computations were performed on resources provided by the NTNU IDUN/EPIC computing cluster [56].

Code availability: Examples of the code used for the numerical calculations can be found at [https://github.com/jacobdben/efficient-bayesian-estimation](https://github.com/jacobdben/efficient-bayesian-estimation).

## Appendix A Training the neural network

As mentioned in the main text, it is necessary to teach the NN a renormalized map in order to obtain acceptably bounded errors, such that the NN yields outputs that perform the correct update of the information about \(\omega\) across a larger range of parameters. Specifically, the renormalized map that we used in teaching the NN reads \[\tilde{f}:\{\mu_{n-1},\sigma_{n-1}\}\,\rightarrow\,\left\{\frac{\hat{\mu}_{n}^{(0)}-\mu_{n-1}}{\sigma_{n-1}},\frac{\hat{\sigma}_{n}^{(0)}}{\sigma_{n-1}},\frac{\hat{\mu}_{n}^{(1)}-\mu_{n-1}}{\sigma_{n-1}},\frac{\hat{\sigma}_{n}^{(1)}}{\sigma_{n-1}},\frac{\sigma_{n-1}\hat{\tau}_{n}}{2}\right\}. \tag{27}\] Data is generated over a grid of inputs \(\mu_{n-1}\) and \(\sigma_{n-1}\), with values linearly spaced in the range \([0,1]\) and logarithmically spaced in the range \([0.001,0.5]\), respectively. For illustration, the desired output of \(\tau_{n}\) and its renormalized counterpart \(\sigma_{n-1}\tau_{n}/2\) are shown in Fig 7(a,b), where we set \(T\rightarrow\infty\) for simplicity.
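A sketch of this renormalization and of the inverse transform applied as post-processing (our own function names, following Eq. (27)):

```python
def renormalize_targets(mu_prev, sigma_prev, mu0, s0, mu1, s1, tau):
    """Forward transform of Eq. (27): the map actually taught to the NN."""
    return ((mu0 - mu_prev) / sigma_prev, s0 / sigma_prev,
            (mu1 - mu_prev) / sigma_prev, s1 / sigma_prev,
            sigma_prev * tau / 2)

def denormalize_outputs(mu_prev, sigma_prev, nn_out):
    """Inverse transform, recovering the predictors of interest from the
    renormalized NN output."""
    r_mu0, r_s0, r_mu1, r_s1, r_tau = nn_out
    return (mu_prev + sigma_prev * r_mu0, sigma_prev * r_s0,
            mu_prev + sigma_prev * r_mu1, sigma_prev * r_s1,
            2 * r_tau / sigma_prev)
```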
It is important to accurately capture the fine features that are visible in the renormalized output [Fig 7(b)] but not in the original output [Fig 7(a)], since one would otherwise perform a measurement that is inconsistent with the updated distribution of \(\omega\). In Fig 7(a) these features are obscured by the large range of values the output can take (a difference of two orders of magnitude in this case).

In order to obtain some insight into how the NN is learning the map, we present snapshots of the learning process in Fig 8, using the same feed-forward network with three layers, each with 20 neurons, and hyperbolic tangent activation functions as used in Section 4. The five small plots show a line trace taken at \(\mu_{n-1}=0.5\) of the output \(\sigma_{n-1}\tau_{n}/2\) to be learned (faint red line) together with the learned output (red dashed line) at different stages of the training. The bottom right plot shows the validation loss curve (blue dots) and a quantification of the number of visible peaks in the output of the NN (black line), this number being an indication of the fineness of the features in the map that the network has learned.

The NN first learns the region of small \(\mu_{n-1}/\sigma_{n-1}\) in the map [bottom right corner in Fig 7(b)], and then gradually learns features for increasing values of \(\mu_{n-1}/\sigma_{n-1}\). This is also reflected in the loss curve, as drops in the loss seem to be correlated with the NN's discovery of a new peak. Of course, the decision for what counts as a visible peak is somewhat subjective; we used the function scipy.signal.find_peaks (with prominence=0.05) to count the number of peaks. While the NN finds the first few peaks at small \(\mu_{n-1}/\sigma_{n-1}\) relatively quickly, the subsequent peaks at larger \(\mu_{n-1}/\sigma_{n-1}\) take increasingly longer to learn. It is thus hard to quantify how many peaks the NN with 3 layers of 20 nodes can eventually learn, since the number of training epochs needed quickly becomes impractically large. The fact that the region with large \(\mu_{n-1}/\sigma_{n-1}\) is difficult to teach the NN was the motivation for the hybrid scheme that only uses the NN fitting where it was able to learn well (small \(\mu_{n-1}/\sigma_{n-1}\)), while switching to the method of moments fitting where the NN struggles (large \(\mu_{n-1}/\sigma_{n-1}\)). The decision boundary for the hybrid method was set to \(\mu_{n-1}=2\pi\sigma_{n-1}\), as illustrated in Fig 7(c).

Figure 7: (a) The original target values for \(\tau_{n}\) and (b) the renormalized values \(\sigma_{n-1}\tau_{n}/2\), for \(T\to\infty\). The details that become visible after renormalization are important for consistency between updated parameters. (c) The hybrid estimation scheme partitions the domain of inputs at \(\mu_{n-1}=2\pi\sigma_{n-1}\), where the NN is used for small \(\mu_{n-1}/\sigma_{n-1}\) (blue region) and the MM for large \(\mu_{n-1}/\sigma_{n-1}\) (red region).

Figure 8: (small plots) Snapshots of the learning process using the renormalized \(\tau_{n}\). The faint red lines show a line cut of the map to be learned at \(\mu_{n-1}=0.5\), the dashed lines the output of the NN at different stages of the training process. (bottom right) Validation loss as a function of training epoch number (blue dots, left) and number of peaks found in the output at \(\mu_{n-1}=0.5\) (black line, right).
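The paper does not specify a deep-learning framework; purely as an illustration, the network described above (three hidden layers of 20 tanh neurons, 1005 trainable parameters, trained with a plain MSE loss) and the peak counting used for Fig 8 could be set up as follows, assuming PyTorch and with an optimizer and learning rate of our own choosing:

```python
import torch
import torch.nn as nn
from scipy.signal import find_peaks

# (mu, sigma) -> the five renormalized targets of Eq. (27)
net = nn.Sequential(
    nn.Linear(2, 20), nn.Tanh(),
    nn.Linear(20, 20), nn.Tanh(),
    nn.Linear(20, 20), nn.Tanh(),
    nn.Linear(20, 5),
)   # 1005 trainable parameters in total
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def count_visible_peaks(line_cut):
    """Count the visible peaks in a line cut of the learned map, as in Fig 8."""
    peaks, _ = find_peaks(line_cut, prominence=0.05)
    return len(peaks)
```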
2304.13499
A Cooperative NOMA User Pairing in UAV-Based Wireless Networks
NOMA (non-orthogonal multiple access) will be seen as a promising technology for enhancing spectrum efficiency in future cellular networks. The use of unmanned aerial vehicle (UAV) Amplify-and-Forward (AF) relaying as a moving access point or BS, on the other hand, has emerged as a potential solution to wireless networks' high traffic demands. In this paper, we study joint user pair and resource allocation-based distance to optimize fair throughput in a downlink scenario, which concentrates on UAV-aided communication from different wireless-powered nodes. Several transmission methods are proposed, including NOMA as well as UAV cooperative relaying and two representative node-pairing strategies. The simulation results illustrate that the proposed user pairing strategies for cooperative NOMA and UAV-based cellular networks enhance downlink transmission performance and ensure optimum use of power and bandwidth resources.
Ghania Khraimech, Fatiha Merazka
2022-12-24T20:35:16Z
http://arxiv.org/abs/2304.13499v1
# A Cooperative NOMA User Pairing in UAV-Based Wireless Networks

###### Abstract

NOMA (non-orthogonal multiple access) is seen as a promising technology for enhancing spectrum efficiency in future cellular networks. The use of unmanned aerial vehicle (UAV) Amplify-and-Forward (AF) relaying as a moving access point or BS, on the other hand, has emerged as a potential solution to wireless networks' high traffic demands. In this paper, we study joint user pairing and distance-based resource allocation to optimize fair throughput in a downlink scenario, concentrating on UAV-aided communication from different wireless-powered nodes. Several transmission methods are proposed, including NOMA as well as UAV cooperative relaying and two representative node-pairing strategies. The simulation results illustrate that the proposed user pairing strategies for cooperative NOMA and UAV-based cellular networks enhance downlink transmission performance and ensure optimum use of power and bandwidth resources.

NOMA, UAV, User pairing, Cooperative communication, Amplify-and-forward

## I Introduction

Due to considerable technological developments across several drone-related domains ranging from embedded systems to autonomy, control, security, and communications, the domain of unmanned aerial vehicles (UAVs), sometimes known as drones, has undergone a significant revolution in recent years. UAV communications are also a useful strategy to provide communication links during temporary events and after catastrophes in remote regions lacking cellular infrastructure [1]-[5]. One encouraging use case of UAVs is the design of flexible, adaptable wireless multi-antenna platforms in the sky [6]. Because of the high elevation of UAVs, line-of-sight (LoS) propagation in UAV communications can provide milder small-scale fading between UAVs and ground users than traditional terrestrial links, which presents both opportunities and challenges in the development of UAV cellular networks [7]. Due to the limited power budget of UAVs, achieving enhanced spectrum and energy efficiency is essential to getting the most out of UAV-based network technologies [8]-[10].

NOMA is a promising technique for achieving both spectrum efficiency and energy efficiency in next-generation wireless technologies and beyond, especially in UAV communication networks [11]. The potential advantage of NOMA over OMA in a cellular communication system where a base station (BS) is equipped with large antenna arrays was investigated in [12]. In comparison to conventional OMA techniques, NOMA can utilize the available resources more efficiently by opportunistically exploiting users' specific fading channel conditions in both single-cell and multi-cell systems [13, 14], and it can accommodate multiple users with different quality-of-service (QoS) requirements in the same resource block [15, 16, 17]. In NOMA, the superposition of signals within the same frequency, time, and code block delivers a composite signal to multiple users concurrently. NOMA is based on the use of superposition coding (SC) at the transmitter and successive interference cancellation (SIC) techniques at the receiver [4, 12]. Power-domain multiple access is implemented in this scenario by assigning different power levels to the users sharing the same resource block.
As a result, by employing NOMA techniques to increase spectrum and energy efficiency, UAV networks may serve several users at the same time [18]. The downlink of a UAV-enabled wireless network is investigated in this article using power-domain NOMA and cooperative relaying, with the ground nodes being wireless-powered devices. Air-to-ground (A2G) communication channels [19] are used by these devices, and they are characterized by an altitude-dependent path loss exponent and fading. We focus on a user pairing scheme for NOMA with cooperative relaying in cellular networks, in which the access devices are separated into two groups and a pair is formed from each group. The available bandwidth is then divided among the pairs, with the users of each pair sharing the same sub-channel to deliver their data. The primary issues that we address in this work are how to pair NOMA users within each available orthogonal resource, which user-pairing strategy to adopt for NOMA with cooperative relaying in our UAV model, and which strategy yields better performance.

The remainder of this paper is organized as follows. In Section II, we present the system and channel models considered throughout this article. Section III details user pairing in cooperative NOMA. Performance analysis of the cooperative NOMA-based transmission and the pairing strategies evaluated in this work is described in Section IV. The simulation results are provided and discussed in Section V. Finally, Section VI presents general conclusions regarding the results obtained in this paper as well as prospects for our future work.

## II System Model

As shown in Fig. 1, we consider a link between a transmitter BS and four receiving users, each separated from the BS by a distance \(R_{i}\). We assume that direct communication between the transmitter and the receivers is possible, where there is one source node BS, \(K\) half-duplex amplify-and-forward (AF) UAV relay nodes denoted as \(R_{k},k=1,...,K\), and four users \(U_{1},U_{2},U_{3}\), and \(U_{4}\), respectively. We use UAVs, each equipped with a single antenna, to enable this link and maximize its capacity. Each UAV acts as an amplify-and-forward relay: it simply receives the signal, amplifies it, and re-transmits it in a synchronized manner. \([H]_{1}\in C^{N_{T}\times N_{R}}\) denotes the channel between the transmitter and the UAVs, whereas \([H]_{2}\in C^{N_{R}\times N_{U}}\) denotes the channel between the UAVs and the receivers. We assume that both channels are strong LoS channels with each element defined as \([H]_{i,j}=\frac{\lambda}{4\pi d_{i,j}}e^{\frac{j2\pi d_{i,j}}{\lambda}}\), where \(\lambda\) is the carrier wavelength and \(d_{i,j}\) is the distance between antenna elements \(i\) and \(j\) at the transmitter, relay, or receiver.

## III User Pairing in Cooperative NOMA

### _Cooperative NOMA_

We know that NOMA exploits successive interference cancellation (SIC) [20], in which one user detects the other user's message from the superposition-coded incoming signal before decoding his own. Thus, while conducting SIC, the near user decodes the information of the far user. This is a step that cannot be skipped: in every case, the data of the far user must be decoded by the near user. Now that the near user has access to the far user's data, they can help him by relaying it to him. Because the distant user's channel with the broadcasting BS is poor, the close user's retransmission of his data will supply him with diversity.
## III User Pairing in Cooperative NOMA

### _Cooperative NOMA_

We know that NOMA exploits successive interference cancellation (SIC) [20], in which one user detects the other user's message from the superposition-coded incoming signal before decoding its own. Thus, while conducting SIC, the near user decodes the information of the far user; this step cannot be skipped, since the far user's data must be decoded by the near user in every case. Now that the near user has access to the far user's data, it can help the far user by relaying that data. Because the far user's channel to the broadcasting BS is poor, the near user's retransmission of the far user's data supplies the far user with diversity. In other words, the far user receives two copies of the same message: one comes from the BS, and the other comes from the nearby user who is serving as a relay. As a result, we can expect the far user's outage probability to decrease. Cooperative communication, or cooperative relaying [21], is the name for this concept. We can see that NOMA naturally encourages cooperative communication because the near user has access to the data of the far user, which it must decode anyway. The benefit of cooperative communication is link redundancy, which allows a message to be transmitted twice. In other words, if one link is down, the other is very likely still up and running; the scenario in which both links fail at the same time is less likely than one link going down at a time. We obtain a lower outage probability and, as a result, more diversity benefits without the need for extra antennas in this situation (i.e., MIMO). Another benefit of relaying is that it can essentially extend the BS's coverage area.

Now that we have seen what cooperative communication is and how it can benefit our network, let us create a cooperative NOMA network. Under a line-of-sight channel, we consider a downlink transmission with a BS and two NOMA users relayed by a UAV acting as an amplify-and-forward relay [22]. With the UAV, we have a near user with a stronger channel and a far user with weaker channel conditions. The transmission is split into two time slots. Let us refer to the first time slot as "direct transmission" and the second as "relaying."

#### III-A1 Direct transmission slot

In the direct transmission slot, the BS uses NOMA to transmit data intended for the near user, \(x_{n}\), and the far user, \(x_{f}\). The near user uses SIC to decode the far user's data first and then proceeds to decode its own data. The far user only performs direct decoding. At the end of the direct transmission slot, the achievable data rate at the near user is

\[R_{n}=\frac{1}{2}log_{2}(1+\alpha_{n}\rho|h_{n}|^{2}), \tag{1}\]

while the far user, which treats the near user's signal as interference during direct decoding, achieves

\[R_{f,1}=\frac{1}{2}log_{2}(1+\frac{\alpha_{f}\rho|h_{f}|^{2}}{\alpha_{n}\rho|h_{f}|^{2}+1}). \tag{2}\]

Notation:

* \(\alpha_{n}\): power allocation coefficient for the near user.
* \(\alpha_{f}\): power allocation coefficient for the far user.
* \(h_{n}\): the channel between the BS and the near user.
* \(h_{f}\): the channel between the BS and the far user.
* \(\rho\): transmit \(SNR=\frac{P}{\sigma^{2}}\), where \(P\) is the transmit power and \(\sigma^{2}\) is the noise variance.
* As usual, \(\alpha_{f}>\alpha_{n}\) and \(\alpha_{n}+\alpha_{f}=1\).

We have the factor of \(\frac{1}{2}\) in front of the achievable rates because we have two time slots of equal duration, and \(R_{n}\), \(R_{f,1}\) are the achievable rates during the first time slot alone.

#### III-A2 Relaying slot

The second half of the transmission is called the relaying slot. As we saw, the near user already has the far user's data because it decoded it in the previous time slot. In the relaying time slot, the near user simply transmits this data to the far user. The achievable rate of the far user at the end of the relaying slot is

\[R_{f,2}=\frac{1}{2}log_{2}(1+\rho|h_{nf}|^{2}), \tag{3}\]

where \(h_{nf}\) is the channel between the near user and the far user.
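A minimal sketch of these first-slot and relaying-slot rate expressions, Eqs. (1)-(3); the channel gains and transmit SNR below are assumptions for illustration only.

```
import numpy as np

rho = 100.0                        # assumed transmit SNR (20 dB)
a_n, a_f = 0.2, 0.8                # power allocation with alpha_f > alpha_n
h_n2, h_f2, h_nf2 = 1.0, 0.1, 0.5  # assumed |h|^2: BS-near, BS-far, near-far

# Eq. (1): near-user rate after SIC removes the far user's signal.
R_n = 0.5 * np.log2(1 + a_n * rho * h_n2)
# Eq. (2): far-user rate under direct decoding (near user's signal interferes).
R_f1 = 0.5 * np.log2(1 + a_f * rho * h_f2 / (a_n * rho * h_f2 + 1))
# Eq. (3): far-user rate in the relaying slot (full power, no interference).
R_f2 = 0.5 * np.log2(1 + rho * h_nf2)

print(f"R_n = {R_n:.2f}, R_f,1 = {R_f1:.2f}, R_f,2 = {R_f2:.2f} bit/s/Hz")
```

With these numbers, \(R_{f,2}>R_{f,1}\), matching the two reasons given next.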
We can already see that \(R_{f,2}>R_{f,1}\) for two reasons:

* There is no interference from other transmissions.
* There is no fractional power allocation; the whole transmit power is given to the far user.

#### III-A3 Diversity combining

Now, at the end of the two time slots, the far user has two copies of the same information, received through two different channels. The far user can now use a diversity combining technique. For example, it can use selection combining to choose the copy that was received with the higher SNR. After selection combining, the achievable rate of the far user is

\[R_{f}=\frac{1}{2}log_{2}(1+\max(\frac{\alpha_{f}\rho|h_{f}|^{2}}{\alpha_{n}\rho|h_{f}|^{2}+1},\rho|h_{nf}|^{2})). \tag{4}\]

If we do not use cooperative relaying, the achievable rate of the far user is

\[R_{f,noncoop}=log_{2}(1+\frac{\alpha_{f}\rho|h_{f}|^{2}}{\alpha_{n}\rho|h_{f}|^{2}+1}). \tag{5}\]

The factor of \(\frac{1}{2}\) does not appear here because the entire time slot is used for transmission in non-cooperative communication. If we did not use NOMA and instead used, for example, time-division multiple access (TDMA), we would allocate half of the time slot to the transmission of the far user's data. Hence, the achievable rate of the far user would be

\[R_{f,OMA}=\frac{1}{2}log_{2}(1+\rho|h_{f}|^{2}). \tag{6}\]

Simulating this cooperative NOMA network, we find that the performance of the different schemes is ordered as cooperative NOMA \(>\) non-cooperative NOMA \(>\) Orthogonal Multiple Access (OMA). So, we can see from Fig. 2 that cooperative communication is beneficial.

Fig. 2: Outage probabilities of the considered transmission schemes: cooperative NOMA, non-cooperative NOMA, Orthogonal Multiple Access (OMA).
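To reproduce the flavor of this comparison, the Monte Carlo sketch below estimates the far user's outage probability under Eqs. (4)-(6); note that the Rayleigh-faded gains and the target rate are simplifying assumptions (the paper's model uses LoS-dominated UAV channels).

```
import numpy as np

rng = np.random.default_rng(1)
N, rho, a_n, a_f, R_target = 100_000, 100.0, 0.2, 0.8, 1.0

# Assumed Rayleigh fading; the far link is weaker than the near-to-far link.
g_f = rng.exponential(0.1, N)   # |h_f|^2,  BS -> far user
g_nf = rng.exponential(0.5, N)  # |h_nf|^2, near user -> far user

sinr_direct = a_f * rho * g_f / (a_n * rho * g_f + 1)
R_coop = 0.5 * np.log2(1 + np.maximum(sinr_direct, rho * g_nf))  # Eq. (4)
R_noncoop = np.log2(1 + sinr_direct)                             # Eq. (5)
R_oma = 0.5 * np.log2(1 + rho * g_f)                             # Eq. (6)

for name, R in [("cooperative NOMA", R_coop),
                ("non-cooperative NOMA", R_noncoop), ("OMA", R_oma)]:
    print(f"{name:22s} outage = {np.mean(R < R_target):.4f}")
```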
### _User Pairing in NOMA_

We have previously shown that NOMA can support numerous users in the same band at the same time. The next logical step is to figure out how many users can be supported per frequency carrier. In practice, if the number of users is increased beyond a certain point, the network's total throughput begins to decline. As a result, the number of users per carrier cannot be increased indefinitely. A feasible option to support all users is to use hybrid NOMA. Hybrid NOMA is a technique that combines NOMA with any of the OMA approaches. Consider the combination of TDMA and NOMA, as indicated in Fig. 3. Assume we have a time slot with a duration of 4 ms. Within this time frame, we must support four users. TDMA would divide the 4 ms slot into four 1 ms slots and assign each user one slot. NOMA would give all four users the entire 4 ms slot; this, as we know, increases SIC complexity and processing time. In contrast, hybrid NOMA divides the 4 ms slot into two 2 ms slots and assigns two NOMA users to each slot. As can be seen, hybrid NOMA can provide service to all users while reducing complexity. As we progress through this article, we will talk more about hybrid NOMA methods. Now we must figure out how to match users within each orthogonal resource that is offered. Should we group users as (1,2),(3,4), as (1,3),(2,4), or as (1,4),(2,3)?

Fig. 3: Hybrid NOMA.

## IV Performance Analysis

Let us consider a downlink cooperative communication-based UAV scenario with four users \(U_{1},U_{2},U_{3}\), and \(U_{4}\). Let \(d_{1},d_{2},d_{3}\), and \(d_{4}\) denote the distances of those users, respectively, from the UAV. \(U_{1}\) is the nearest user and \(U_{4}\) is the farthest user. Therefore, their channel conditions are ordered as \(|h_{1}|^{2}>|h_{2}|^{2}>|h_{3}|^{2}>|h_{4}|^{2}\). We have two orthogonal resource blocks (time/frequency/subcarriers), to each of which two users must be allocated. We will pair users based on their distances. There are two basic ways to accomplish this:

* Near-far pairing (N-F)
* Near-near, far-far pairing (N-N, F-F)

#### IV-1 Near-far pairing (N-F)

In this method, the nearest user to the UAV is paired with the farthest user from the UAV. The next nearest user is paired with the next farthest user, and so on. In our example, \(U_{1}\) is the nearest user and \(U_{4}\) is the farthest user. So, N-F pairing will pair \(U_{1}\) with \(U_{4}\) in one resource block. \(U_{2}\) will be paired with \(U_{3}\) in the next resource block. In the first pair of users, \(U_{1}\) is the near user and \(U_{4}\) is the far user. Therefore, we have to choose the power allocation coefficients as \(\alpha_{1}<\alpha_{4}\). So, \(U_{1}\) should perform SIC, while \(U_{4}\) will perform direct decoding. Similarly, in the second pair of users, \(U_{2}\) is the near user and \(U_{3}\) is the far user. Therefore, we have to choose \(\alpha_{2}<\alpha_{3}\). Here, \(U_{2}\) should perform SIC while \(U_{3}\) will perform direct decoding. The achievable rates for the users in the first pair are

\[R_{1,nf}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{1}|h_{1}|^{2}}{\sigma^{2}})\quad(after\ SIC), \tag{7}\]

\[R_{4,nf}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{4}|h_{4}|^{2}}{P\alpha_{1}|h_{4}|^{2}+\sigma^{2}}). \tag{8}\]

Similarly, for the second pair,

\[R_{2,nf}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{2}|h_{2}|^{2}}{\sigma^{2}})\quad(after\ SIC), \tag{9}\]

\[R_{3,nf}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{3}|h_{3}|^{2}}{P\alpha_{2}|h_{3}|^{2}+\sigma^{2}}). \tag{10}\]

The sum rate of the N-F scheme will be

\[R_{nf}=R_{1,nf}+R_{2,nf}+R_{3,nf}+R_{4,nf}. \tag{11}\]

#### IV-2 Near-near, far-far pairing (N-N, F-F)

Another way to perform user pairing is to group the nearest user with the next nearest user. The farthest user is grouped with the next farthest user. If we follow this strategy, in our example, \(U_{1}\) will be paired with \(U_{2}\) in one resource block. \(U_{3}\) will be paired with \(U_{4}\) in the next resource block. Now, in the first pair of users, \(U_{1}\) is closer to the UAV than \(U_{2}\). Therefore, we have to choose \(\alpha_{1}<\alpha_{2}\). \(U_{1}\) should perform SIC, while \(U_{2}\) will perform direct decoding. Similarly, \(U_{3}\) is closer to the UAV than \(U_{4}\). So, we have to choose \(\alpha_{3}<\alpha_{4}\). \(U_{3}\) should perform SIC, while \(U_{4}\) will perform direct decoding. The achievable rates for the users in the first pair are

\[R_{1,nn}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{1}|h_{1}|^{2}}{\sigma^{2}})\quad(after\ SIC), \tag{12}\]

\[R_{2,nn}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{2}|h_{2}|^{2}}{P\alpha_{1}|h_{2}|^{2}+\sigma^{2}}). \tag{13}\]

Similarly, for the second pair,

\[R_{3,nn}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{3}|h_{3}|^{2}}{\sigma^{2}})\quad(after\ SIC), \tag{14}\]

\[R_{4,nn}=\frac{1}{2}log_{2}(1+\frac{P\alpha_{4}|h_{4}|^{2}}{P\alpha_{3}|h_{4}|^{2}+\sigma^{2}}). \tag{15}\]

The sum rate of the N-N, F-F scheme will be

\[R_{nn}=R_{1,nn}+R_{2,nn}+R_{3,nn}+R_{4,nn}. \tag{16}\]
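Before turning to the full simulation, the sketch below evaluates the two pairing strategies directly from Eqs. (7)-(16); the channel gains and the fixed power-allocation coefficients are assumptions for illustration.

```
import numpy as np

rho = 100.0                           # assumed transmit SNR P / sigma^2
g = {1: 2.0, 2: 1.0, 3: 0.4, 4: 0.1}  # assumed |h_i|^2 with g1 > g2 > g3 > g4

def pair_sum_rate(near, far, a_near=0.2, a_far=0.8):
    """Two-user NOMA sum rate in one sub-channel, per Eqs. (7)-(10)/(12)-(15)."""
    R_near = 0.5 * np.log2(1 + a_near * rho * g[near])  # after SIC
    R_far = 0.5 * np.log2(1 + a_far * rho * g[far] / (a_near * rho * g[far] + 1))
    return R_near + R_far

R_nf = pair_sum_rate(1, 4) + pair_sum_rate(2, 3)  # Eq. (11): (U1,U4), (U2,U3)
R_nn = pair_sum_rate(1, 2) + pair_sum_rate(3, 4)  # Eq. (16): (U1,U2), (U3,U4)
print(f"sum rate: N-F = {R_nf:.2f}, N-N/F-F = {R_nn:.2f} bit/s/Hz")
```

With these assumed gains, N-F pairing yields the higher sum rate, anticipating the ordering observed in the simulations of Section V.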
## V Simulation Results

We have examined two different AF-relaying UAV cooperative NOMA approaches. The question is which one is better, and what happens if we simply multiplex all of the users on the same carrier (SC-NOMA), with no user pairing? Given the additional work required for user pairing, why do we need NOMA at all? Why not use TDMA instead? Let us find out by putting everything together in MATLAB. In this simulation, \(N\) users with \(N=4\) are considered. We plot the network's sum rate for each of the user pairing strategies with the UAV that we examined, and we also examine the network's sum-rate performance with plain SC-NOMA and with TDMA with a UAV.

As seen in Fig. 4, a higher sum rate is produced when the near user is paired with the far user. This supports the well-known notion that cooperative NOMA operates better when the channel conditions, or the LoS probability model in UAV communication, differ between the two users. When near-near, far-far pairing is employed, the NOMA-based UAV still outperforms the TDMA-based UAV in terms of sum rate, but the difference is not significant. When compared to TDMA, the performance of the SC-NOMA-based UAV is poor, since all users are crammed onto the same carrier, generating interference. This also validates our assumption that the number of users sharing the same carrier cannot be increased without paying a penalty. Another benefit of the N-F pairing UAV relay over the N-N, F-F pairing UAV relay is the careful selection of users who may be impacted by interference. In contrast to the N-N, F-F scheme, which gives interference-free reception to one near user and one far user (assuming perfect SIC), the N-F strategy provides interference-free reception to both near users (assuming perfect SIC). As a result, N-F has a higher sum rate. When noise dominates the interference power, however, the choice of users who experience interference makes little difference, and both schemes are expected to perform similarly.

Fig. 4: Sum-rate performance of the network with SC-NOMA, TDMA, hybrid NOMA N-F pairing, and hybrid NOMA N-N, F-F pairing, with/without UAV.

## VI Conclusion and Future Work

In this paper, cooperative NOMA user pairing under an LoS channel is proposed, with cooperation provided by a UAV serving as an AF relay. We investigated two user pairing strategies to improve downlink transmission performance. According to our results, implementing a user pairing strategy offers the most appropriate use of power and bandwidth resources; if we pair users at random, we may not be able to realize the network's full potential. Extending our single-antenna UAV relay model to a multiple-antenna UAV relay should be interesting. These issues will be addressed in future work.
2309.04119
Penetrating Shields: A Systematic Analysis of Memory Corruption Mitigations in the Spectre Era
This paper provides the first systematic analysis of a synergistic threat model encompassing memory corruption vulnerabilities and microarchitectural side-channel vulnerabilities. We study speculative shield bypass attacks that leverage speculative execution attacks to leak secrets that are critical to the security of memory corruption mitigations (i.e., the shields), and then use the leaked secrets to bypass the mitigation mechanisms and successfully conduct memory corruption exploits, such as control-flow hijacking. We start by systematizing a taxonomy of the state-of-the-art memory corruption mitigations focusing on hardware-software co-design solutions. The taxonomy helps us to identify 10 likely vulnerable defense schemes out of 20 schemes that we analyze. Next, we develop a graph-based model to analyze the 10 likely vulnerable defenses and reason about possible countermeasures. Finally, we present three proof-of-concept attacks targeting an already-deployed mitigation mechanism and two state-of-the-art academic proposals.
Weon Taek Na, Joel S. Emer, Mengjia Yan
2023-09-08T04:43:33Z
http://arxiv.org/abs/2309.04119v1
# Penetrating Shields: A Systematic Analysis of Memory Corruption Mitigations in the Spectre Era

###### Abstract

This paper provides the first systematic analysis of a synergistic threat model encompassing memory corruption vulnerabilities and microarchitectural side-channel vulnerabilities. We study speculative shield bypass attacks that leverage speculative execution attacks to leak secrets that are critical to the security of memory corruption mitigations (i.e., the shields), and then use the leaked secrets to bypass the mitigation mechanisms and successfully conduct memory corruption exploits, such as control-flow hijacking. We start by systematizing a taxonomy of the state-of-the-art memory corruption mitigations focusing on hardware-software co-design solutions. The taxonomy helps us to identify 10 likely vulnerable defense schemes out of 20 schemes that we analyze. Next, we develop a graph-based model to analyze the 10 likely vulnerable defenses and reason about possible countermeasures. Finally, we present three proof-of-concept attacks targeting an already-deployed mitigation mechanism and two state-of-the-art academic proposals.

## 1 Introduction

Memory corruption bugs [19, 25, 82] are one of the oldest security problems in computer systems. According to the MITRE 2021 rankings [62], the top 10 most dangerous software weaknesses encompass multiple memory corruption bugs, including buffer overflows and use-after-frees. Designing effective and low-overhead memory safety mechanisms has become a rich research area [82], and extensive progress has been made in both academia and industry. For example, in academia, there have been a vast number of works proposing to leverage architectural innovations or insights to reduce the performance and storage overheads of software-only defenses, including CHERI [89], Hardbound [28], WatchDog [64], WatchdogLite [65], CHEx86 [78], No-FAT [94], ZERO [95], REST [79], Califorms [76], AOS [49], C3 [56], and Morpheus [32]. Moreover, industry has been actively adopting mitigation mechanisms. For example, major processor vendors have announced and shipped products with many features supporting increased memory safety, such as Intel MPX [68], Intel CET [11], ARM Pointer Authentication [71], ARM MTE [77], and SPARC ADI [10].

Despite the vast number of solutions that have been proposed and deployed, memory corruption attacks continue to pose a serious security threat. More concerningly, we observe a growing trend of leveraging _synergistic_ attack strategies to bypass memory corruption mitigations. Specifically, instead of only exploiting memory corruption vulnerabilities, adversaries have started to build advanced attacks by _combining_ attack strategies which fall into traditionally disjoint threat models, targeting different layers of the system stack. Notably, multiple recent works have demonstrated that microarchitectural side channels, which exploit _hardware_ vulnerabilities, can be used to break Address Space Layout Randomization (ASLR) [33, 35, 43] and ARM Pointer Authentication (PA) [72], two primary defenses in modern systems for protecting against _software_ vulnerabilities.

**Motivation: Speculative Shield Bypass Attacks.** We call an attack a _speculative shield bypass_ attack (SSB attack for short) if it exploits speculative execution to bypass memory corruption mitigations (i.e., the shields), such as ASLR and ARM PA. Specifically, a speculative shield bypass attack exploits side channels to leak some secret that is critical to the security of a memory corruption mitigation.
Once a speculative shield bypass attack acquires the secret knowledge necessary to bypass the mitigation mechanisms, it exploits this knowledge to conduct a memory corruption exploit, such as control-flow hijacking.

It is critically important to study speculative shield bypass attacks. On one hand, SSB attacks outperform traditional memory corruption attacks, violating both confidentiality and integrity of the system in ways that were not traditionally possible. On the other hand, SSB attacks are easy to miss and difficult to mitigate. Modern systems have a large and complex attack surface. To handle such non-trivial complexity, security researchers today often partition the problem space into disjoint threat models, exploring each threat model separately. However, to mitigate SSB attacks, researchers must first acquire domain expertise that spans both side-channel vulnerabilities and memory corruption vulnerabilities.

**This Paper.** We strive to provide the _first_ systematic analysis of a threat model that encompasses both memory corruption and microarchitectural side-channel threat models, focusing on speculative shield bypass attacks. In this quest, we aim to answer the following questions.

* Are any memory corruption mitigations vulnerable to speculative shield bypass attacks? How do we identify and classify the insecure mitigations?
* What new insights can we learn from the analysis? Can these insights be used to guide the design of future memory corruption mitigation mechanisms to be resilient against speculative shield bypass attacks?

To answer the above questions, we first systematize a taxonomy which allows one to reason about the security of memory corruption mitigations against speculative shield bypass attacks. Through this process, we identify two classes of defenses that are likely vulnerable to speculative shield bypass attacks, namely ones that utilize tamperable metadata or randomize the address layout. We find that 10 mitigations fall in these two classes of defenses. Next, we develop a graph-based model to systematically analyze the 10 likely vulnerable defenses and to reason about possible countermeasures. We call our graphs "speculative information flow graphs" (SIF graphs for short). SIF graphs help us precisely visualize the information flow between the security checks imposed by the memory corruption defenses and observable microarchitectural events, identifying sources of microarchitectural side-channel leakage. To further support our analysis results, we demonstrate proof-of-concept attacks breaking three state-of-the-art defenses: Stack Smashing Protection [24], Always On Heap Memory Safety (AOS) [49], and Cryptographic Capability Computing (C3) [56].

**Contributions.** In summary, we make the following contributions:

* A taxonomy identifying the critical characteristics that lead to vulnerabilities in memory corruption mitigations against speculative shield bypass attacks.
* A graph-based model to analyze and reason about side-channel vulnerabilities in memory corruption mitigations and possible countermeasures.
* Three proof-of-concept demonstrations of speculative shield bypass attacks.

## 2 Background

Speculative shield bypass attacks exploit the synergies that arise at the convergence of side-channel vulnerabilities and memory corruption vulnerabilities. In this section, we give a brief background on these vulnerabilities and their exploits.
### Micro-architectural Side Channel Attacks

Micro-architectural side channels are a class of vulnerabilities that enable an attacker to _leak information_ and steal some _secret_ from a victim application running on the same machine. To do so, the attacker monitors the side effects of the victim's actions on various micro-architectural structures. As formalized in [50], a micro-architectural side-channel attack comprises either pre-existing or attacker-generated code run in the victim's _security domain_ that 1) accesses secret information and 2) transmits that information over a communication channel that 3) is received by an attacker. Despite existing protection and isolation mechanisms, the signal transmitted over the channel leaks a secret that should be confined to its security domain. From a telecommunications perspective, the _transmitter_ is in the victim's code, the _receiver_ is in the attacker's code, and the medium of the _communication channel_ is the micro-architectural state that can be modulated by the activity of the transmitter [18]. In general, the receiver measures the microarchitectural resource usage of the victim, using explicit timer instructions [59, 91] or a custom timer constructed through shared memory [30]. Side-channel attacks have been demonstrated on an ever-growing list of microarchitectural structures [81], such as branch predictors [12, 31], caches [17, 27, 29, 36, 57, 59, 66, 91], translation lookaside buffers [34], on-chip networks [87], and memory controllers [86]. In this paper, our proof-of-concept attacks exploit cache-based side channels.

The most popular variants of cache-based side-channel attacks are Flush+Reload [91] and Prime+Probe [59]. In Flush+Reload, the attacker and the victim share memory. The attacker first flushes one of the victim's cache lines out of the cache, and then waits for the victim to access the cache line, causing modulation of the cache line (i.e., changing the state of the cache line). The attacker then detects the modulation by measuring the time to access the same cache line. If the access time is short (i.e., a cache hit), the attacker can infer that the cache line was accessed by the victim. In Prime+Probe, the attacker first constructs eviction sets. An eviction set is a group of memory addresses that map to the same cache set, with at least as many lines as the associativity of the cache. The attacker then repeats the following steps. First, the attacker populates the cache sets that he or she wants to monitor with the eviction sets. Next, the attacker waits for the victim to access some cache lines. Finally, the attacker detects the modulation of the cache by measuring the time to re-access the eviction sets. If the re-access time of an eviction set is high (i.e., a cache miss), the attacker can infer that the corresponding cache set was accessed by the victim.

### Speculative Execution Attacks

Speculative execution attacks are a subset of microarchitectural side-channel attacks that exploit the speculative nature of modern processors. Modern processors execute instructions ahead of schedule, by predicting the outcome of control-flow decisions and _speculatively executing_ instructions based on those predictions. If a prediction turns out to be correct, the speculatively executed instructions are retired. Otherwise, the instructions are squashed and the processor's state is rolled back to the architectural state before these instructions. These mis-speculated instructions which are deemed to be squashed are called _transient instructions_ [20]. The micro-architectural state that is modified by these transient instructions causes modulations of the side channels, ultimately leaking secrets. By exploiting mis-speculated execution, an attacker can circumvent software invariants, exercising code paths that should be unreachable, and leak _any_ secret in the victim's addressable virtual memory. Today, a plethora of speculative execution attacks have been demonstrated, including Spectre [52], Meltdown [58], and their many variants [41, 53, 80, 83, 88].
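The sketch below is a toy, self-contained model of the Flush+Reload protocol described above: a Python set stands in for the cache and fake latencies stand in for the timer, so it illustrates only the attack logic, not a real exploit.

```
# Toy Flush+Reload model: 'cached' stands in for cache state, and timing an
# access means checking presence (hit = fast, miss = slow).
cached = set()
HIT, MISS = 50, 300  # illustrative access latencies in cycles

def access(line): cached.add(line)     # touching a line brings it into the cache
def flush(line): cached.discard(line)  # clflush-style eviction

def time_access(line):
    t = HIT if line in cached else MISS
    cached.add(line)
    return t

def victim(secret):
    access(f"probe_array[{secret * 64}]")  # secret-dependent memory access

SECRET = 42
lines = [f"probe_array[{v * 64}]" for v in range(256)]
for l in lines: flush(l)                      # 1. Flush
victim(SECRET)                                # 2. Wait for the victim
timings = [time_access(l) for l in lines]     # 3. Reload and time
print("recovered byte:", timings.index(HIT))  # only the victim's line hits
```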
### Memory Corruption Attacks

Memory corruption bugs are an orthogonal class of vulnerabilities to microarchitectural side channels. In addition to leaking secrets, memory corruption bugs can also be exploited to violate integrity, compromise authority, and perform arbitrary code execution. Software written in low-level languages like C or C++ is prone to memory corruption bugs. Data from Google and Microsoft [3, 8] indicate that nearly 70% of the security bugs found today are due to memory corruption vulnerabilities in C/C++ software.

The most prevalent memory corruption vulnerabilities violate either spatial or temporal safety. Spatial vulnerabilities include buffer overflow and underflow vulnerabilities, while temporal vulnerabilities are largely dominated by use-after-free vulnerabilities. Spatial safety violations can be subdivided further into adjacent and non-adjacent overflows [4]. We provide examples of three types of memory corruption vulnerabilities in Listing 1. An attacker can exploit the adjacent buffer overflow vulnerability in line 4 by passing in a string longer than the size of buf, and thereby overwrite locations adjacent to buf. In line 7, an attacker can exploit the non-adjacent buffer overflow vulnerability by controlling the arguments x and y to perform an arbitrary write. In line 10, an attacker can exploit the use-after-free vulnerability to overwrite a freed location.

```
1  void
2  vulnerable(int x, int y) { // x, y are attacker controlled
3    char buf[128];
4    gets(buf); // 1. adjacent overflow vulnerability
5
6    int *alloc1 = (int *)malloc(N * sizeof(int));
7    alloc1[x] = y; // 2. non-adjacent overflow vulnerability
8
9    free(alloc1);
10   alloc1[0] = y; // 3. use-after-free vulnerability
11 }
```
Listing 1: Examples of memory corruption vulnerabilities.

Once attackers have access to such memory corruption vulnerabilities, they can target different program properties to achieve various end goals.

**Control-Flow Hijacking Attacks.** Corrupting a code pointer causes a control-flow transfer to anywhere in executable memory. Code pointers include return addresses on the stack and function pointers anywhere in memory. Today, control-flow hijacking attacks are executed using attack techniques called return-oriented programming (ROP) [73], which corrupts return addresses, and jump-oriented programming (JOP) [16], which corrupts indirect code addresses (typically function pointers). Collectively, ROP and JOP are called code reuse attacks (CRAs). To mount a CRA, an attacker first analyzes the victim's code to identify sequences of instructions that end with a return or jump instruction (called _gadgets_). Next, the attacker uses a memory corruption vulnerability to inject a sequence of target addresses corresponding to a sequence of gadgets. Later, when a code pointer is de-referenced or the function returns, control moves to the location of the first gadget.
At the termination of the first gadget, the control flow is transferred to the second gadget (and so on) by a control-flow instruction such as a return or jump.

**Other Attacks.** _Data-oriented programming_ attacks [42, 45] cause malicious end results without changing the control flow of the program, by only manipulating data pointers. _Data corruption attacks_ [22] target non-pointer data stored in memory. For example, an attacker can bypass authorization checks by manipulating program flags.

## 3 Threat Model

The threat model of speculative shield bypass attacks considers two popular scenarios: (1) A sandbox scenario where an attacker is confined to a sandboxed environment with limited code execution on the target machine, such as a JavaScript sandbox inside a browser, Google Native Client [92], and Linux eBPF [47]. In the sandbox scenario, the host (sandbox creator) is the victim. (2) A cloud scenario where an attacker runs on a remote device and interacts with another program, such as remote web servers and SGX enclaves [23]. In the cloud scenario, the remote web server and the remote SGX enclave are the victim.

We make the following assumptions. First, there exists an exploitable memory corruption vulnerability in the victim, which allows the attacker to write to some memory locations in the victim program. The attacker's goal is to exploit this vulnerability to perform a memory corruption exploit, such as control-flow hijacking. Second, to defend against software vulnerabilities, the victim incorporates one of the respective memory corruption mitigations considered in this paper. Hence, the attacker's intermediate goal is to bypass this defense (i.e., the shield). Third, the attacker is able to perform a micro-architectural side-channel attack on the machine running the victim program. Thus, if the defense mechanism that the victim incorporates relies on the confidentiality of some secret information, it may be vulnerable to a speculative shield bypass attack.

## 4 Speculative Shield Bypass Attacks

A speculative shield bypass attack (SSB attack for short) consists of two critical steps: 1) performing an _information disclosure attack_ using a side-channel vulnerability to gain some secret that is critical to a memory corruption defense, and 2) using the leaked secret knowledge to _spoof the security check_ of the defense while performing a memory corruption attack.

In this section, we study an example of a speculative shield bypass attack called Speculative Probing [33]. Speculative Probing is a state-of-the-art SSB attack penetrating ASLR. ASLR adds random offsets to memory segments, such as the code segment, the stack, and the heap, to obfuscate the locations of code-reuse gadgets which are necessary to construct code-reuse attacks [14, 1]. As a result, attackers need to correctly guess the secret offsets to calculate the addresses of the gadgets. In Speculative Probing, the attacker first exploits a side-channel vulnerability to leak the secret offset, by targeting the following code snippet in the victim: if (cond) { call f_ptr(); }. Specifically, the attacker exploits a memory corruption vulnerability (Section 2.3) to overwrite f_ptr using a guessed offset. To check whether the offset is guessed correctly, the attacker triggers mis-speculation on the branch to execute the indirect call. During the indirect call, the memory management unit (MMU) performs a security check on the address combined with the guessed offset.
If the offset is correct, the guessed address is mapped, and hence the security check passes, modulating the memory system. Otherwise, the security check triggers a speculative exception (which is squashed) instead. Thus, the attacker can monitor whether the guessed address is mapped or not using microarchitectural side channels. The attacker repeats the above process, abusing the security check performed by the MMU as a side channel, ultimately brute-forcing the secret offset. Once the secret offset is leaked, the attacker proceeds to the second step of conducting a memory corruption exploit, by setting the condition cond to true. This time, the indirect call will commit, rather than being squashed, resulting in a traditional control-flow hijacking attack to the disclosed gadget address.

To this date, there already exist two case studies demonstrating speculative shield bypass attacks, namely Speculative Probing [33] breaking ASLR, and PACMAN [72] breaking ARM Pointer Authentication. Yet, most memory corruption mitigation proposals from both industry and academia continue to consider side channels out of scope of their threat model. We reason that this trend may lead to a critical security crisis if memory corruption mitigations continue to be deployed with such a narrow threat model in consideration, despite being vulnerable to SSB attacks in the wild. To better anticipate this crisis, we develop a taxonomy (Section 5) to identify the critical characteristics that lead to speculative shield bypass vulnerabilities and a graph-based model (Section 6) to analyze and reason about side-channel vulnerabilities in these memory corruption mitigations.

## 5 Classifying Mitigations for Speculative Shield Bypass Analysis

In this section, we aim to develop a taxonomy which allows one to reason about the security of memory corruption mitigations against speculative shield bypass attacks. Our ultimate goal is to identify and analyze the critical characteristics, shared among a subset of memory corruption mitigations, that may be exploited using side-channel attacks.

We start by assessing the fundamental roots of current taxonomies of memory corruption defenses and why they are insufficient for analyzing these defenses against speculative shield bypass attacks. Fundamentally, memory corruption vulnerabilities occur due to the lack of sufficient security checks in the vulnerable programs. Memory corruption defenses aim to protect these possibly vulnerable programs by augmenting them with automated security checks during their runtime. As such, past taxonomies [19, 25, 67, 74, 82] have tended to categorize the defenses inside memory corruption attack models, in relation to the steps of the attack at which the augmented security checks occur. While this approach is helpful in understanding which types of memory corruption attacks can be mitigated by the varying security checks imposed by the respective defenses, it is not as useful for understanding the resiliency of the defenses against speculative shield bypass attacks. Recall from Section 4 that the second step in a speculative shield bypass attack is to make use of some leaked secret to spoof the security check of a defense. In other words, from an SSB attack perspective, it is not a question of which step of a memory corruption attack a security check occurs in. Rather, the question is _whether an attacker can forge rogue information that can spoof a security check_. Our taxonomy primarily focuses on answering this question.
We show our taxonomy in Figure 1 with examples of state-of-the-art mitigation mechanisms (focusing on hardware-software co-designs) from both academia [14, 28, 32, 49, 56, 64, 78, 79, 89, 94, 95] and industry [11, 68, 71, 77]. Mitigations in red are likely vulnerable to speculative shield bypass attacks.

We first divide mitigations depending on whether they perform _spoofable_ security checks (i.e., given the ability to corrupt a pointer, an attacker can also corrupt the additional input that goes into the security checks along with the pointers) or _unspoofable_ security checks (i.e., while a pointer may be corruptible, the additional input to the security check cannot be controlled by an attacker).

For spoofable security check based defenses, we further subdivide them depending on what kind of information is used in the security checks, which is also the information that the attacker aims to leak. _Address layout randomization_ based defenses add random "_offsets_" to the addresses of various (sub-)objects, entangling these offsets with the originally predictable (sub-)object addresses. On the other hand, _tamperable metadata augmentation_ based defenses augment addresses or pointers with additional information called "_metadata_" that is maintained separately from the pointers. Since the security checks of these defenses are spoofable for an attacker who is equipped with the correct offset or metadata, these defenses maintain this offset or metadata as a _secret_ with high entropy so that it is difficult for the attacker to guess it correctly.

For unspoofable security check based defenses, we found that all defenses in this category can also be categorized as _tamperproof metadata augmentation_ based defenses, which similarly augment addresses or pointers with metadata. However, they differ greatly in that they maintain unspoofable security checks by enforcing the integrity of their metadata. In fact, as the security checks are unspoofable, these defenses do not share the incentive of maintaining the privacy or high entropy of their metadata. Thus, the metadata in these defenses are generally public information (i.e., not secret), nullifying any merits of information disclosure.

We now provide examples for each category to show how spoofable and unspoofable security checks work.

**Address layout randomization.** Address layout randomization based defenses randomize the position of (sub-)objects in virtual memory. Such defenses include ASLR [14, 1], Morpheus [32], and Califorms [76]. Specifically, these defenses add random offsets to the addresses of various (sub-)objects, entangling the offset with the originally predictable addresses. Because a memory corruption exploit which overwrites a pointer allows an attacker to control the address (and hence the offset), address layout randomization schemes can be categorized as performing spoofable security checks. Specifically, we formulate the security check below, with the attacker-controlled variable in blue and the secret in red.

```
if (corrupted_ptr == addr + rand_offset): pass; else: fail;
```

If the secret offset is leaked, an attacker can adjust the value of corrupted_ptr appropriately to bypass the security check.

**Tamperable Metadata Augmentation.** Tamperable metadata augmentation based defenses augment addresses or pointers with separately maintained metadata. These metadata are used to perform varying security checks against the addresses or pointers. However, the integrity of the metadata is not enforced in these schemes.
For example, stack smashing protection [24] maintains its metadata on the stack. ARM Pointer Authentication [70], ARM MTE [77], AOS [49], No-FAT (temporal safety) [94], and SPARC ADI [10] maintain their metadata inside unused upper bits of pointers. C3 [56] re-formats pointers to entangle the metadata with the pointers. Since these locations can be overwritten by a memory corruption exploit, the security checks can be spoofed. As an example, we formulate the security check of ARM MTE below. ARM MTE maintains a tag inside unused upper bits of a protected pointer, and a memory tag that is associated with the location addressed by the pointer. A security check succeeds if the tag inside the pointer matches the memory tag.

```
if (corrupted_tag == memory_tags[corrupted_ptr]): pass; else: fail;
```

If the secret memory tag is leaked, the attacker can adjust the value of the tag inside the pointer appropriately to bypass the security check.

**Tamperproof Metadata Augmentation.** Tamperproof metadata augmentation based defenses similarly augment addresses or pointers with metadata that is used to perform varying security checks. The difference lies in the fact that instead of relying on the privacy and high entropy of the metadata, these defenses directly enforce the integrity of their metadata. Today, there exist three approaches for enforcing metadata integrity. First, mitigations that perform out-of-bound security checks for all memory accesses can place the metadata in an area outside the bounds of any other object in the address space, called _shadow memory_ [28, 65, 68, 78]. Alternatively, mitigations can transparently tag the memory addresses that store the metadata by leveraging additional specialized hardware called _tagged memory_ [79, 89, 95]. Lastly, mitigations can leverage _page attributes_ to enforce the integrity of the pages that hold the metadata, as in [11, 7]. Since the metadata which is used to perform security checks cannot be modified by an attacker, the security checks that are performed by these defenses cannot be spoofed. As such, these defenses are unspoofable and thus are not vulnerable to speculative shield bypass attacks.

Figure 1: Taxonomy of proposed or deployed memory corruption mitigations when focusing on speculative shield bypass attacks. Mitigations in red are likely vulnerable to speculative shield bypass attacks.

**Taxonomy summary.** In this section, we have systematized a taxonomy of state-of-the-art memory corruption defenses focusing on their security against speculative shield bypass attacks. In this taxonomy, two categories of defenses (in red in Figure 1) may be vulnerable to speculative shield bypass attacks. Specifically, mitigations that augment tamperable metadata or add random offsets to addresses or pointers inevitably perform spoofable security checks; these spoofable security checks make these defenses sensitive to information disclosure attacks, since the leakage of the secret metadata or offset empowers an adversary with the ability to disarm a security check and ultimately bypass the respective defenses. On the other hand, the taxonomy has also led to the finding of a class of defenses that are resilient to speculative shield bypass attacks; such defenses enforce the integrity of their metadata by techniques such as out-of-bound shadow memory, tagged memory, or page attributes. Furthermore, the metadata of these defenses are generally public information and indifferent to information disclosure.
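As a toy illustration of why spoofable checks hinge on secrecy, the sketch below models an ARM MTE-style tag check in the shape formulated above; the 4-bit tag width follows MTE, while everything else (addresses, API) is a stand-in rather than real MTE behavior.

```
import secrets

TAG_BITS = 4  # MTE-style small tag
memory_tags = {0x1000: secrets.randbelow(1 << TAG_BITS)}  # secret per-region tag

def security_check(ptr_addr, ptr_tag):
    """Spoofable check: passes whenever the supplied tag matches the stored tag."""
    return ptr_tag == memory_tags[ptr_addr]

# Blind forgery succeeds only with probability 2^-4 per attempt ...
print("blind guess passes:", security_check(0x1000, 0))
# ... but once the boolean outcome is observable, the tag can be brute-forced,
# after which every forged access passes.
leaked_tag = next(t for t in range(1 << TAG_BITS) if security_check(0x1000, t))
print("spoofed access passes:", security_check(0x1000, leaked_tag))
```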
## 6 Modeling Speculative Shield Bypass Attacks

In Section 5, we found two categories of defenses that may be vulnerable to speculative shield bypass attacks. In this section, we use a graph-based model to systematically analyze the 10 defenses in these two categories. The goal is to capture the common characteristics of information leakage of the different mitigations and ultimately better understand how to defend against speculative shield bypass attacks.

### Speculative Information Flow Graph

Recall that in a speculative execution attack, an attacker triggers secret-dependent speculative execution in a victim program to produce distinguishable microarchitectural side effects. Therefore, our graph-based modeling approach focuses on tracking and visualizing secret-dependent information flows, including data-flow and control-flow dependencies at the microarchitectural level. We call our graph a speculative information flow graph (SIF graph for short), because it captures the information flow during speculative execution.

In a speculative information flow graph, a node represents a microarchitectural operation, which may or may not be observable via microarchitectural side channels. For example, a node could be a memory access operation, whose address can be distinguished via cache-based side channels, or a security check operation which is introduced by a mitigation mechanism. There exist two types of relationships between a pair of nodes. First, if there exists a directed path from one node to the other, there exists a control-flow or data-flow dependency between the two operations, and the two operations are forced to happen sequentially in hardware. Second, if there does not exist a directed path between the two nodes, there does not exist information flow between them, and the two operations can happen in any order.

To clarify these relationships, we show the corresponding SIF graph for each of the mitigation schemes in Figure 2. The graphs describe the execution of the Spectre gadget if (cond) {y = ld x; z = ld y;}. We highlight the secrets in each mitigation scheme using red text, and the attacker-controlled variables using blue text. The secrets in the mitigation mechanisms are either some tamperable metadata or the randomized offset. The attacker-controlled variables include the variable \(x\) and any metadata embedded inside it.

### Baseline Microarchitecture

We start by analyzing the baseline microarchitecture. The mis-speculation window is shaded in gray, and we use yellow nodes to represent microarchitecturally observable events. We use stack smashing protection [24] as an example. Stack smashing protection detects overflows of local stack-based buffers which overwrite the saved return address. By placing a random value (called a canary) between the return address and the local buffers at function entry, the integrity of the canary can be checked before the return of the function and thus the overflow can be detected. Hence, the secret in stack smashing protection is the canary.

In Figure 2(a), within the speculation window, there are two operations. The first operation performs a load using an attacker-controlled address \(x\) and retrieves \(y\). If the attacker controls the value of \(x\) to equal the address of the location that stores the canary, the value of \(y\) will be the secret canary. Next, the second load operation acts as a transmission operation and leaks the value of \(y\) (the canary) via side channels.
Once the canary value is leaked, the attacker can spoof security checks and bypass stack smashing protection. We demonstrate a proof-of-concept attack against stack smashing protection on a real machine in Section 7.1.

**Takeaway 1:** If a mitigation is vulnerable to Spectre or its variants, and if the mitigation's metadata is stored in addressable virtual memory, then the secret metadata can be leaked via Spectre or its variants.

Memory corruption defenses introduce security checks to guard protected pointer de-reference operations. Interestingly, we found that defense mechanisms that leverage hardware-software co-design vary widely in the relationships between their security checks and their protected operations. We summarize these relationships into three categories: 1) the security check sequentially guards the protected operation; 2) the security check modifies the behavior of the protected operation; 3) the security check happens in parallel with the protected operation.

### Sequentially Guarded Security Checks

We show the SIF graphs for mitigations using sequentially guarded security checks in Figure 2(b)-(e). In these schemes, when executing the Spectre gadget, each of the two load operations is guarded with a security check node, meaning that the load operation will only be performed after the security check succeeds. For example, in ASLR (Figure 2(b)), the security check operation examines whether the address \(x\) is mapped; that is, whether the address is within a valid region defined by the start and the end of the code or data segment and a randomized offset. If \(x\) is mapped, the load operation will be performed. Otherwise, the load will be stalled and marked as illegal. The processor then waits for the branch to be resolved to decide whether to commit or squash these two instructions.

We make the following observations from the SIF graphs. First, there exists an information flow from the security check node to the protected operation node, where the latter is an observable microarchitectural event. Hence, even if a mitigation's metadata resides in the virtual address space, it cannot be leaked via the attack vector described in Section 6.2.

**Takeaway 2:** If there exists an information flow between a preceding security check and a following observable microarchitectural event, then a memory corruption mitigation is able to mitigate Spectre and its variants.

Second, the security check results in a boolean outcome where only one case results in an observable microarchitectural event. Thus, the security check outcome can be leaked by observing whether the protected operation happens or not via microarchitectural side channels. Note that leaking the boolean security check outcome is sufficient for the attacker to spoof security checks. This is because, with such a primitive, an attacker can abuse the security check to try all possible values of the attacker-controlled variable (variable \(x\)) via brute force, while suppressing exceptions via speculative execution, until the security check succeeds. Once this value is leaked, the attacker can use this value to spoof the security checks.

SIF graphs can be used to reason about existing attacks as well as to derive new attacks. For example, Speculative Probing [33] and PACMAN [72] both exploit the information-flow dependency discussed above to define appropriate "transient-crash" primitives and bypass ASLR and ARM Pointer Authentication.
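The brute-force strategy enabled by such a boolean leak can be captured in a few lines; the `transient_oracle` below is a stand-in for one mis-speculated probe plus its side-channel measurement, not real attack code.

```
import secrets

SECRET_BITS = 16  # e.g., a PAC-sized secret held in upper pointer bits
secret = secrets.randbelow(1 << SECRET_BITS)

def transient_oracle(guess):
    """Stand-in for one transient probe: the attacker learns only whether the
    guess passed the security check (seen as a cache modulation), never a fault."""
    return guess == secret

def brute_force(bits):
    for guess in range(1 << bits):  # at most 2^n probes, 2^(n-1) on average
        if transient_oracle(guess):
            return guess

print("recovered secret:", brute_force(SECRET_BITS) == secret)
```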
We demonstrate a PoC attack targeting C3 in Section 7.2, where we additionally identify a design flaw of C3 due to its use of a symmetric encryption algorithm. To the best of our knowledge, we are the first to find vulnerabilities and demonstrate PoC attacks on C3.

### Califorms

Califorms [76] differs to some degree from the five mitigations discussed so far. We show the SIF graph in Figure 2(f). Instead of using security checks to guard protected operations, Califorms always speculatively executes the protected operations. However, to also mitigate Spectre, Califorms uses the security check outcomes to influence the return values of the protected operations during speculation. Specifically, if the security check succeeds, the first load returns the correct data of \(y\) and the second load will proceed as normal. Otherwise, the value of \(y\) will be set to \(0\).

Figure 2: Modeling speculative execution behaviors of memory corruption mitigations when executing a Spectre gadget. Red text indicates secrets in the mitigation mechanisms, and blue text indicates attacker-controlled variables.

Despite the minor difference, Califorms shares an important common feature with the aforementioned schemes; that is, there exists an information flow from the security checks to the protected operations, and the speculative execution of protected operations can cause distinguishable microarchitectural side effects. Therefore, a speculative shield bypass attack can leak the security check outcome in Califorms by monitoring the second load operation. To find the attack value that can be used to spoof the security check, the attacker can try all possible values for \(x\) by brute force until the returned value \(y\) is non-zero. We summarize the attack vector discussed in Section 6.3 and Section 6.4 below.

**Takeaway 3:** As long as there exists an information flow from a security check to distinguishable microarchitectural events during speculative execution, an attacker can leak the security check outcome via microarchitectural side channels. This phenomenon arms an attacker with the ability to brute-force the attacker-controlled variable and find the secret value that can be used to spoof the security check.

Note that, unlike the attack vector in Section 6.2, this attack vector does not require the secret metadata/offset to be encoded inside the virtual address space.

**Entropy and Attack Bandwidth.** The speculative shield bypass attack discussed above leaks a boolean value of the security check outcome, i.e., whether the check fails or not. Thus, if a secret is n-bit, one must brute-force up to \(2^{n}\) iterations. As such a primitive requires the attacker to brute-force the secret, the entropy of these defense mechanisms largely determines the feasibility of such attacks. Unfortunately, as AOS, ARM PA, and No-FAT leverage unused upper bits of a pointer, the entropy of these mechanisms varies between 11 and 16 bits, which is not enough to defend against brute-force attacks. In Califorms, the size of each redzone takes only 8 possible values (i.e., 3 bits of entropy). The entropy of ASLR on a 64-bit system is between 19 bits (Windows) and 28 bits (Linux heap) [46]. Similar to ASLR, Morpheus randomizes the offset added to segments of the program. However, the entropy of Morpheus is 60 bits, and the secret offset is re-randomized every 50 ms. Thus, to brute-force the secret offset of Morpheus, one must be able to perform on average \(2^{59}\) attempts in under 50 ms, which is quite challenging. For example, according to PACMAN [72], testing one boolean outcome takes around 2.69 ms when the branch predictor is trained 64 times. Since this implies that one can try at most 18 brute-force attempts before re-randomization of the secret offset, we conclude that Morpheus is likely secure from brute-force attacks.
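Plugging these numbers into the expected brute-force cost makes the contrast concrete; the 2.69 ms per-probe figure is the PACMAN measurement quoted above, and applying it uniformly across schemes is a simplifying assumption.

```
probe_ms = 2.69  # per-probe cost reported by PACMAN [72]

for name, bits in [("ARM PA / AOS PAC", 16), ("Califorms redzone", 3),
                   ("ASLR (Linux heap)", 28), ("Morpheus", 60)]:
    expected_probes = 2 ** (bits - 1)  # mean attempts for a uniform secret
    days = expected_probes * probe_ms / 1000 / 86400
    print(f"{name:18s} ~2^{bits - 1} probes, ~{days:.3g} days")

# Morpheus re-randomizes every 50 ms, so only 50 / 2.69 ~= 18 probes fit in one
# epoch -- far short of the ~2^59 needed, matching the conclusion above.
print("probes per Morpheus epoch:", int(50 / probe_ms))
```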
### Parallel Security Checks

The third category of mitigations performs security checks in parallel with the protected operations during speculative execution. We show the SIF graph for AOS [49] in Figure 2(g). In AOS, the security check outcome does not interact with the speculative execution of the protected operations. Specifically, if the security check fails, the protected operation is still speculatively issued and only marked as causing an exception to be handled later. The exception is handled when the instruction reaches the head of the ROB, but is ignored if the speculation window is squashed.

**Takeaway 4:** Parallel security checks lack information flow from a security check node to distinguishable microarchitectural events. Hence, they are resilient to the attack vector described in Section 6.3 and Section 6.4.

Despite the resiliency of AOS to the attack vector described in Section 6.3 and Section 6.4, AOS is vulnerable to speculative shield bypass attacks. To understand why, observe that in Figure 2(g), the two load operations are always executed speculatively, independent of the outcomes of the security checks. Hence, parallel security check based defenses are vulnerable to Spectre and its variants. Since AOS is vulnerable to Spectre, arbitrary data in the virtual address space can be leaked speculatively. Furthermore, AOS persistently embeds the secret metadata, which is a hash (also called a PAC), inside unused upper bits of the pointer. We demonstrate a PoC attack targeting AOS in Section 7.3. To the best of our knowledge, we are the first to find vulnerabilities and demonstrate PoC attacks on AOS.

**Takeaway 5:** Parallel security checks lack information flow from a security check node to the following protected node. Hence, they are vulnerable to Spectre and its variants. Thus, if the metadata is encoded inside the virtual address space, they are vulnerable to the attack vector described in Section 6.2.

### Other Mitigation Mechanisms

There are two mitigation mechanisms that are listed under spoofable security checks but are not discussed above, namely ARM MTE [77] and SPARC ADI [10]. According to public documentation, the secrets in these two schemes are memory tags, which are stored in specialized hardware structures. However, there is no public documentation revealing enough microarchitectural details for us to derive their SIF graphs. In the case that they allow security check outcomes to affect observable microarchitectural events (similar to the first two categories of mitigations), they will be vulnerable to speculative shield bypass attacks. On the other hand, if they use parallel security checks, considering that their secrets do not exist in addressable virtual memory, they will be secure from speculative shield bypass attacks.

### Generality of SIF graphs

Speculative information flow graphs (SIF graphs) are an effective tool to reason about information flows between security checks and observable microarchitectural events. In this section, we have used SIF graphs to analyze mitigation mechanisms using a Spectre gadget.
It should be noted that SIF graphs can also be used to analyze other gadget variants, such as ones which use the pointers (\(x\) and \(y\)) as branch targets for jump or return instructions. Furthermore, even though we only summarize three categories of mitigations using SIF graphs, the graph itself is, in fact, general enough to capture different types of control-flow and data-flow dependencies. In other words, SIF graphs can be used to analyze new mitigations that yield new information flow patterns yet to be discovered. Moreover, in Figure 2, we used a single node to represent the security check operation for each mitigation mechanism because these check operations are simple. In the case that a security check operation becomes complex (e.g., requiring accesses to complex data structures), we should consider decomposing the security check node into multiple nodes and investigating whether the security check itself introduces microarchitectural side effects that can leak a secret. Finally, SIF graphs can be used to guide the design of countermeasures, which we discuss in Section 8.

## 7 Proof-of-Concept Demonstrations

In this section, we demonstrate proof-of-concept (PoC) attacks against three memory corruption defenses that we have analyzed. The first attack bypasses stack smashing protection [24] using a Spectre primitive to leak the value of a _stack canary_. The second attack bypasses C3 [56] by leveraging the information flow between the security check nodes and the observable micro-architectural event nodes in its SIF graph to brute-force a synonym cryptographic address of a target object on the heap. Once we leak a synonym cryptographic address, we leverage Spectre to break data encryption. The third attack bypasses AOS [49] using a Spectre primitive to leak the PACs of objects on the heap.

In all three PoC demonstrations, we set up a synthetic victim application, which contains 1) a memory corruption vulnerability that allows either an adjacent or a non-adjacent buffer overflow; 2) a Spectre gadget where the branch condition can be controlled by the attacker; and 3) a win function which resides in the victim's address space but has no legitimate control-flow path from the program to it. Our goal in the PoC demonstrations is to bypass each mitigation mechanism and demonstrate a successful control-flow hijack by overwriting a victim return address or a function pointer with the address of the win function. Despite the simplifications in our victim applications, given that these three components are common in real-world applications, we believe our PoC attacks readily demonstrate the consequences of synergistic attacks in the wild.

### Breaking Stack Smashing Protection

In our first PoC, we bypass stack smashing protection (explained in Section 6.2) using a Spectre primitive and an adjacent buffer overflow vulnerability. We conduct our experiment on an Intel Xeon Gold 5220R CPU @ 2.20GHz, running Ubuntu 18.04.6 LTS. In our PoC, we undertake two steps.

**Step 1: Using a Spectre primitive to leak the canary.** According to the SIF graph in Figure 2(a), the canary can be leaked via Spectre attacks. Thus, we leverage a standard Spectre primitive to leak the canary byte-by-byte using a Flush+Reload attack. We repeat the attack 8 times to leak all 8 bytes of the canary, which can then be used to spoof the security check.
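A schematic of this byte-by-byte leak is sketched below; `spectre_leak_byte` is a stand-in for the actual transient out-of-bounds read plus the Flush+Reload measurement over a 256-entry probe array, not the PoC's real code.

```
import secrets

canary = secrets.token_bytes(8)  # stands in for the 8-byte secret canary

def spectre_leak_byte(offset):
    """Stand-in for one transient read whose value is recovered by timing a
    256-entry Flush+Reload probe array."""
    return canary[offset]

# Step 1: invoke the primitive once per byte to reconstruct the whole canary.
leaked = bytes(spectre_leak_byte(i) for i in range(8))
assert leaked == canary
print("leaked canary:", leaked.hex())
```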
**Step 2: Overwriting the return address on the stack with the address of the win function.** An adjacent buffer overflow vulnerability (the gets example in Listing 1) forces us to overwrite the canary before reaching the return address. Thus, we embed the correct canary value leaked during step 1 into our payload, so that the canary holds the same value after being overwritten, while the return address is modified to point to win. At the function epilogue, the modified (but identical in value) canary passes validation, and the program then executes a ret instruction that uses the modified return address. This triggers the win function, declaring our control-flow hijack attack successful.

### Breaking C3

In our second PoC, we bypass C3 [56]. C3 is built using two key techniques, _address encryption_ and _data encryption_, which we break in turn. To break address encryption, we abuse the control-flow dependence between the security check and the protected memory operation (Section 6.3). To break data encryption, we additionally identify another design flaw of C3, namely its symmetric encryption of data. Lacking a real-world C3 implementation, we implement C3 using an augmented allocator and an architectural simulator. We evaluate our attack in gem5 [15].

**Address Encryption.** For each heap allocation, C3 takes a virtual address (VA) and generates a cryptographic address (CA for short). The CA acts as a unique signature for that allocation. Figure 3 shows the CA computation process. A virtual address (VA) is divided into three parts: the offset, the lower address, and the upper address. The lower address and the upper address, as well as a 4-bit version number and a secret key, are used as inputs to a block cipher (K-cipher [54]) to generate an encrypted address. 1 The decryption process is reversed.

Footnote 1: We omit several technical details regarding C3 that are less relevant to our proof-of-concept, such as the power.

Figure 3: Address Encryption in C3.

**Breaking Address Encryption.** According to the SIF graph in Figure 2(e), an attacker can exploit the information flow between the security check and protected memory operations to leak the boolean security check outcome. Specifically, in our PoC, we aim to use the security check outcome to distinguish between CAs that decrypt to a specified VA and CAs that result in garbled VAs, which we call a _cryptographic address oracle_. With such an oracle, an attacker is able to brute-force all possible CAs that decrypt to a specified VA. Note that, due to the 4-bit version number, there exist 16 different CAs that map to the same VA after decryption. We use a gadget snippet like the one below to construct our cryptographic address oracle. Specifically, by transiently brute-forcing CA_guess, the attacker tries to forge an address called CA_attack that decrypts to the same VA as the victim's address CA_victim. We refer to CA_attack as a _synonym CA_ of CA_victim.

```
struct victim { void *fp = &correct_func; };
CA_victim = new victim;
if (cond) { dummy = *CA_guess; }
```

We first train the branch to be taken, and then trigger a speculative de-reference of CA_guess. We then use Prime+Probe to monitor the cache set that the virtual address of CA_victim maps to and count the number of cache misses. In our experiment, we are able to precisely distinguish incorrect CAs from synonym CAs. Specifically, when the guess is a synonym CA, our Prime+Probe implementation in gem5 observes 3 cache misses. Otherwise, our implementation observes 2 cache misses.
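Concretely, the oracle can be wrapped in a brute-force loop along the following lines; this is a sketch, where `make_ca_candidate`, `train_branch`, `transient_deref`, `prime`, and `probe_misses` are assumed helper routines, not gem5 or C3 APIs:

```
#include <stdint.h>

/* Assumed helpers: make_ca_candidate() forges a CA guess, train_branch()
 * steers the Spectre gadget into the taken path, transient_deref() triggers
 * the speculative load, and prime()/probe_misses() implement Prime+Probe
 * on the cache set of the victim's virtual address. */
uint64_t make_ca_candidate(uint64_t g);
void     train_branch(void);
void     transient_deref(uint64_t ca);
void     prime(void);
int      probe_misses(void);

uint64_t find_synonym_ca(void) {
    for (uint64_t g = 0; ; g++) {        /* enumerate candidate encrypted addresses */
        uint64_t ca_guess = make_ca_candidate(g);
        train_branch();                  /* predictor now predicts the guarded branch taken */
        prime();                         /* fill the cache set of the victim VA */
        transient_deref(ca_guess);       /* speculatively dereference the guess */
        if (probe_misses() == 3)         /* 3 misses: guess decrypts to the victim VA */
            return ca_guess;             /* (2 misses: garbled VA, keep searching) */
    }
}
```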
**Data Encryption.** In addition to address encryption, C3 uses data encryption to further enhance its protection coverage. The data encryption module performs an XOR operation that can be summarized as below, where PRP is a keystream generated using a pseudo-random permutation of the cryptographic address (CA). The decryption process is reversed. With such an encryption scheme, a CA becomes a unique key for the data of each allocation.

```
Data_plaintext \(\oplus\) PRP(CA) \(\Rightarrow\) Data_encrypted
```

For example, assume CA1 and CA2 are synonym CAs that decrypt to the same VA. If plaintext data is encrypted using CA1, but then decrypted using CA2, the result is garbled data, as shown below.

```
Data_plaintext \(\oplus\) PRP(CA1) \(\Rightarrow\) Data_encrypted
Data_encrypted \(\oplus\) PRP(CA2) \(\Rightarrow\) Data_garbled
```

**Breaking Data Encryption.** To bypass data encryption, we exploit the weakness of symmetric encryption via XOR. 2 Our insight is that, according to the algorithm above, if the attacker is able to leak one pair of plaintext data and the corresponding garbled data, they can derive a mask (like the snippet below) that can be used to directly translate between any possible pair of plaintext data and garbled data, without any knowledge of the keystream.

Footnote 2: We note that our attack is specific to the weakness of XOR encryption. If C3 were to upgrade to a more advanced encryption algorithm, it may require us to acquire the same CA, not just a synonym CA, during our attack on address encryption, increasing the attack difficulty.

```
mask = Data_plaintext \(\oplus\) Data_garbled = PRP(CA1) \(\oplus\) PRP(CA2)
```

Our PoC attack works as follows, shown in Listing 2. The victim allocates a struct victim (similar to Listing 3) on the heap and obtains CA_victim, which points to a function pointer fp that we will try to overwrite (line 1). First, following the approach described above to break address encryption, we obtain CA_attack, a synonym CA of CA_victim. Second, we use a Spectre primitive to leak the garbled data inside the function pointer fp (lines 2-5). Third, after we leak the garbled data, which we denote fp_garbled, we compute the mask that can be used to translate between garbled data and plaintext data. Next, the mask is XORed with the address of the win function to derive the intermediate data, and we overwrite fp using this intermediate data and CA_attack (line 8). Finally, as fp is successfully overwritten, when it is decrypted during the function call using CA_victim, the control flow is successfully directed to our win function (line 9).

```
1 CA_victim = new victim;          // CA_victim points to fp
2 if (cond) {                      // Spectre gadget
3   garbled_byte = *CA_attack;
4   dummy = array[garbled_byte * 64];
5 }
6 // compute the data to overwrite fp
7 mask = fp_garbled ^ fp;
8 *CA_attack = addr_win ^ mask;
9 call CA_victim.fp();             // call-to-win
```

Listing 2: Attacking data encryption of C3.

### Breaking AOS

In our third PoC, we bypass AOS using a Spectre primitive and a non-adjacent buffer overflow vulnerability. We conduct our experiment using an architectural simulator and an augmented compiler. Specifically, the hardware implementations [5] are in gem5 [15] and the compiler implementations [6] are in LLVM [55].

**AOS for Heap Memory Safety.** In AOS, for every heap allocation, the compiler generates a pointer authentication code (PAC), which is a cryptographic hash, and stores the PAC in the upper unused bits of the pointer. Additionally, the bound information of the pointer is stored in a Hashed Bounds Table (HBT) indexed using the generated PAC. Upon de-referencing a pointer, the AOS hardware extracts the PAC from the pointer, uses the PAC to index into the HBT to retrieve the bound information, and performs a bound-check operation. To understand how AOS works, consider the following example in Listing 3 and the corresponding heap content and HBT status in Figure 4. In this example, the victim struct contains a function pointer that the attacker wants to overwrite. The victim program allocates an array at address 0x1000 and a victim struct at address 0x1040 on the heap. The obtained ptr_a and ptr_b will have PAC values embedded in their high bits, denoted PAC_a and PAC_b, respectively.
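Listing 3 is sketched below; the struct layout, the 0x1000/0x1040 addresses, and the out-of-bounds write on line 4 follow the description above, while the variable names and allocation sizes are illustrative assumptions:

```
1 struct victim { void (*fp)(void); };                   // fp is the attacker's target
2 char *ptr_a = malloc(64);                              // array at 0x1000; PAC_a in high bits of ptr_a
3 struct victim *ptr_b = malloc(sizeof(struct victim));  // struct at 0x1040; PAC_b in high bits of ptr_b
4 ptr_a[x] = val;                                        // buffer overflow: index x is attacker-controlled
```

Listing 3: Victim code for the AOS PoC (reconstructed sketch).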
Figure 4: AOS protection example.

When the attacker uses the vulnerability on line 4 to perform an out-of-bound access (i.e., setting x larger than the array size), the AOS hardware will catch it, because the PAC value of ptr_a[x] will be used to do an HBT lookup, and the bound check will detect the illegal bound-bypassing access.

**AOS Vulnerability.** As discussed in Section 5, AOS uses tamperable metadata and its security check can be spoofed. Attackers can bypass AOS's protection if they can modify a pointer together with its PAC to match another valid pointer and its associated PAC. Using the same example from Listing 3, an out-of-bound write on line 4 could succeed if the index x sets the PAC bits of ptr_a to match PAC_b. As a result, the bound information will be retrieved from the HBT using PAC_b as the index, and the bound check will pass. To do so, the attacker first needs to leak the PAC values.

**Attacking AOS.** In our third PoC, we set up our synthetic victim program similar to the code in Listing 3. We undertake two steps to break AOS.

**Step 1: Using a Spectre primitive to leak PAC values.** According to the SIF graph in Figure 2(g) and the discussion in Section 6.5, AOS is vulnerable to Spectre and its variants due to parallel security checks, which fail to protect load operations during speculative execution. Our experiments show that a Spectre primitive works effectively on an architectural simulator of AOS, and that a Prime+Probe attack on the cache can precisely leak the two PACs, PAC_a and PAC_b.

**Step 2: Overwriting a function pointer on the heap with the address of the win function.** We then modify the fp inside the victim struct by exploiting the non-adjacent buffer overflow vulnerability on line 4 in Listing 3. To bypass AOS protection, we calculate the index x by subtracting PAC_a from PAC_b and then adding the offset that makes the resulting pointer point to the victim struct; a sketch of this computation is given below. As the PAC of ptr_a[x] becomes PAC_b, the bound check is spoofed into passing, and ptr_b.fp is modified to the address of the win function. Later, when the tampered function pointer is called, the win function is successfully triggered, declaring our control-flow hijack attack successful.
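For illustration, the index computation can be sketched as follows; PAC_SHIFT, PAC_MASK, and the leaked pac_a/pac_b values are assumptions about the pointer layout, not constants taken from the AOS implementation:

```
#include <stdint.h>
#include <stddef.h>

/* Assumed pointer layout: the PAC occupies unused upper bits starting at
 * PAC_SHIFT; the 0x1000/0x1040 addresses follow Listing 3. */
#define PAC_SHIFT 48
#define PAC_MASK  0xFFFFULL

size_t forge_index(uint64_t pac_a, uint64_t pac_b, size_t fp_offset) {
    uint64_t pac_delta = (pac_b - pac_a) & PAC_MASK;  /* turns PAC_a into PAC_b */
    size_t heap_delta  = 0x1040 - 0x1000;             /* array base to victim struct */
    /* ptr_a + x then carries PAC_b in its upper bits and points at ptr_b->fp */
    return (size_t)(pac_delta << PAC_SHIFT) + heap_delta + fp_offset;
}
```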
## 8 Countermeasures

We present potential countermeasures, derived from the insights in Sections 5 and 6. First, as discussed in Section 5, schemes based on tamperproof metadata augmentation are not vulnerable to speculative shield bypass attacks, as their security checks cannot be spoofed. As such, one can incorporate the techniques used by these mechanisms to enforce metadata integrity. Second, one can leverage the SIF graphs from Section 6 to reason about possible countermeasures for spoofable security checks.

For example, for the schemes that are insecure due to the leakage of security check outcomes (Figure 2(b)-(f)), one can try to remove the offending edges from the graph, that is, to remove the information flow paths from security check nodes to observable microarchitectural event nodes. In other words, one could alter the SIF graphs to perform parallel checks, similar to AOS (Figure 2(g)). This would prevent an attacker from leaking the security check outcome for Figure 2(b)-(f). In fact, shortly after the publication of PACMAN [72], ARM suggested such a patch [44]. However, such an approach may not be straightforward for many defense mechanisms. For instance, for ASLR, Morpheus, and C3, a failing security check means that the address is unmapped, and it is unclear what data should be returned. In this case, one could consider returning data from another mapped address instead. For example, to protect code pointers against SSBs in ASLR, one can compare each fetched instruction address with the base and bound of the code segment of the program. If the address is within the base and bound of the code segment, the address is mapped and the original address can be fetched. If it is not, one can instead fetch from the base plus the requested address modulo the code-segment size, as sketched below. When no attack is happening, this scheme only adds one comparison, and it would thus incur at most one extra cycle of latency when fetching an instruction.
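A minimal sketch of this wrap-around fetch translation, assuming a single contiguous code segment (the function and variable names are illustrative):

```
#include <stdint.h>

/* Returns the address actually fetched: in-bounds requests pass through
 * unchanged; out-of-bounds requests wrap into the code segment instead of
 * revealing whether the guessed address was mapped. */
uint64_t translate_fetch(uint64_t req, uint64_t base, uint64_t size) {
    if (req >= base && req < base + size)   /* the single comparison on the hot path */
        return req;
    return base + (req % size);             /* modulo of the request, added to the base */
}
```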
However, such an approach would lead to Spectre vulnerabilities (Section 6.5), and thus any secrets related to the ASLR scheme should not be stored in the virtual address space. To achieve this, one can either generate the metadata only during critical sections of the program [70] or leverage specialized hardware to maintain the metadata [94, 77, 10].

A more sophisticated approach would be to explore integrating prior works that mitigate information leakage from speculative microarchitectural events. Mitigation mechanisms that hide cache modulations caused by speculative memory accesses, such as [13, 48, 75, 90], could serve this purpose. In terms of SIF graphs, such a direction would be equivalent to transforming all microarchitectural events (the yellow nodes in Figure 2) into unobservable events (grey nodes). In other words, such a transformation would not only block Spectre from leaking secrets that are stored in virtual memory (Figure 2(a), (g)), but also block leaking security check outcomes (Figure 2(b)-(f)). STT [93] would also mitigate leakage of metadata from virtual memory (Figure 2(a), (g)), but cannot block leakage of security check outcomes (Figure 2(b)-(f)), as it allows the first load to propagate during speculation. However, we expect integrating such mechanisms with memory corruption defenses to be non-trivial.

## 9 Discussion: Threat Models

Traditional speculative-execution attacks, memory corruption attacks, and speculative shield bypass attacks differ in the underlying assumptions of their respective threat models and in the attacker's end goals. Traditional speculative-execution attacks assume secure software, but hardware that is vulnerable to side channels. Hence, as noted in Section 2.1, in a traditional speculative-execution attack the attacker ultimately aims to violate confidentiality by leaking program data in the victim program's security domain. Conversely, memory corruption attacks assume secure hardware, but buggy software. Thus, as noted in Section 2.3, memory corruption attacks, in addition to leaking program data, aim to violate _integrity_ by modifying the program data, ultimately compromising authority and achieving arbitrary code execution. Speculative shield bypass attacks assume both side-channel-vulnerable hardware and buggy software. Under this assumption, speculative shield bypass attacks not only aim to violate confidentiality by leaking program data but, in line with memory corruption attacks, also aim to violate integrity and modify program data, ultimately gaining arbitrary code execution with higher authority.

Thus, under a speculative-shield-bypass threat model, the attacker's end goal is to perform a successful memory corruption attack. However, the presence of a memory corruption mitigation mechanism necessitates an intermediate step, namely first gaining the ability to bypass the defense. In this context, the primary question for a speculative shield bypass attack is whether the security check imposed by a defense is spoofable. If so, the speculative component of a speculative shield bypass attack does not aim to read arbitrary program data, but rather uniquely targets the defense metadata to facilitate the more powerful memory corruption attack. Note that while some defense schemes maintain their metadata inside the virtual address space [1, 24, 32, 49], making the distinction between program data and metadata blurry, in many defense schemes this is not the case [94, 10, 56, 71].

There is a complex interaction between the properties of a memory corruption defense and its vulnerabilities under varying threat models. As noted in Section 6.3, while most memory defenses were not designed to protect against traditional speculative-execution attacks, the extra security checks sometimes break the standard transmitters used by speculative-execution attacks. For example, in Figure 2(b)-(e), Spectre transmitters are broken because any load would be blocked by the defense scheme during speculation. In the same section, we also showed that while the security checks imposed by the memory-corruption defenses in Figure 2(b)-(e) block standard transmitters, they introduce new, limited transmitters. These cannot transmit arbitrary program data, but they can transmit the metadata of memory-corruption defenses, which leads to SSB vulnerabilities. Reciprocally, some memory-corruption mitigations, while not vulnerable to SSB because their metadata are tamperproof, are unable to block standard speculative-execution attacks. For instance, consider CHERI [89]. Because the integrity of its metadata is enforced, CHERI's security checks are unspoofable, and hence CHERI is resilient to speculative shield bypass attacks. However, CHERI performs parallel security checks similar to AOS in Figure 2(g). Thus, as explained in Section 6.5, CHERI is unable to protect its program data from Spectre and its variants. In fact, most tamperproof metadata defenses opt to perform parallel security checks (for performance reasons), making them vulnerable to Spectre and its variants, including [68, 78, 79, 28, 64, 65] and [76]. We summarize the security trade-offs of memory-corruption-mitigation schemes between the traditional-speculative-execution threat model and the speculative-shield-bypass threat model in Table 1. Columns 3 and 4 indicate whether program data or the metadata/offset can be leaked via traditional speculative-execution attacks.
Column 5 indicates whether an attacker with a memory write vulnerability is able to corrupt the metadata/offset. Column 6 indicates whether the metadata/offset is brute-forceable by abusing security checks. Column 7 indicates whether the respective mitigations are vulnerable to SSB attacks; this requires the metadata to be corruptable and, in addition, either leakable via traditional speculative-execution attacks or brute-forceable.

In addition to the complicated trade-offs introduced by varying assumptions about the security of the software, the hardware, and the attacker's end goals, memory-safety-mechanism designs need to consider the trade-offs in the extent of memory safety provided, the performance overhead, as well as the intrusiveness of the designs in hardware and software. Memory-safety schemes can broadly be divided into two categories, depending on whether they are _preactive_ or _reactive_ with respect to a memory safety violation. Preactive-memory-safety schemes enforce memory safety rules such as spatial safety and/or temporal safety directly. For example, to enforce spatial safety, these mechanisms check whether each memory access is within the bounds of the respective (sub-)object. To enforce such strict security rules, these mechanisms maintain and operate on a large amount of additional metadata for each pointer, such as the base and bound of the pointer for spatial safety, or an allocation identifier for temporal safety. Thus, preactive-memory-safety schemes tend to have high performance and storage overheads.

Instead of enforcing memory safety rules directly, reactive-memory-safety mechanisms aim at enforcing relaxed security rules in order to keep the associated overheads low. Such relaxed security rules allow memory safety violations, such as spatial or temporal violations, but aim to protect the confidentiality or integrity of a selected set of assets in the program. For example, stack smashing protection allows a buffer overflow violating spatial safety, but enforces the integrity of the return address by later checking the value of the canary. We summarize the trade-offs among preactive-memory-safety schemes and reactive-memory-safety schemes in Table 2. Due to the subjective nature of hardware and software intrusiveness, and the difficulty of comparing them directly, we instead provide a column on the maturity of the designs.

## 10 Related Work

In this work, we conducted a systematic analysis of memory corruption mitigations, focusing specifically on vulnerabilities that arise at the convergence of the memory corruption and side-channel threat models. Prior works have also conducted extensive systematic analyses, albeit focusing on only one of the two traditional threat models.

**SoK on Memory Corruption Attacks and Defenses.** Szekeres et al. [82] systematically analyze memory corruption attacks and defenses, and Burow et al.
[19] systematically compare various proposed CFI (control-flow integrity) mechanisms. Cowan et al. [25] survey the various types of buffer overflow vulnerabilities, attacks, and defenses, and discuss different combinations of prevention techniques. Saito et al. [74] survey and classify memory corruption mitigation technologies that are pervasive in operating systems and compilers, and Novkovic [67] analyzes the root causes of memory corruption vulnerabilities and categorizes existing defensive mechanisms based on the attack techniques they focus on preventing.

Table 1: **Memory corruption mitigations under traditional-speculative-execution and speculative-shield-bypass threat models** ("✓" = yes, "x" = no, "?" = not determinable from public documentation)

| Classification | Proposal | Vulnerable to Spectre+ (program data) | Vulnerable to Spectre+ (metadata/offset) | Metadata/offset corruptable | Metadata/offset brute-forceable | Vulnerable to SSB |
| --- | --- | --- | --- | --- | --- | --- |
| Tamperproof metadata | CHERI / Intel MPX / Hardbound / Watchdog / Watchdoglite / ARM BTI / Intel CET / REST | ✓ | Yes, but public | x | x | x |
| Tamperproof metadata | ZERO / No-FAT (spatial) | x | x | x | x | x |
| Address layout randomization | ASLR / Califorms (non-adjacent) | x | x | ✓ | ✓ | ✓ |
| Address layout randomization | Morpheus | x | x | ✓ | x | x |
| Tamperable metadata | Stack Canary / AOS / No-FAT (temporal) | ✓ | ✓ | ✓ | x | ✓ |
| Tamperable metadata | ARM MTE / SPARC ADI | ? | ? | ✓ | ? | ? |
| Tamperable metadata | C3 / ARM PA | x | x | ✓ | ✓ | ✓ |

Table 2: **Tradeoffs of Memory Safety Schemes. First table for preactive schemes, second table for reactive schemes.**

| Classification | Proposal | Inter-object spatial | Intra-object spatial | Temporal | Maturity | Metadata | Performance overhead |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Tamperproof | CHERI | ✓ | ✓ | x | Prototyped | 256 bits/pointer | ∝ # of pointer ops |
| Tamperproof | Intel MPX | ✓ | ✓ | x | Discontinued | 2 words/pointer | ∝ # of pointer derefs |
| Tamperproof | Hardbound / Watchdog | ✓ | ✓ | x | Simulation | 0-2 words/pointer, 4 bits/word | ∝ # of pointer derefs/ops |
| Tamperproof | Watchdoglite | ✓ | ✓ | x | Simulation | 4 words/pointer | ∝ # of pointer derefs/ops |
| Tamperproof | CHEx86 | ✓ | x | ✓ | Simulation | 2 words/pointer | ∝ # of pointer ops |
| Tamperproof | REST | adjacent only | x | ✓ | Simulation | 8-64B token | ∝ # of mem accesses |
| Tamperproof | Califorms | ✓ | ✓ | ✓ | Simulation | 1-7B/pointer | ∝ # of mem accesses |
| Tamperproof | No-FAT | ✓ | ✓ | ✓ | Simulation | 1KB/process table | ∝ # of pointer derefs |
| Tamperable | ARM MTE / SPARC ADI | ✓ | x | ✓ | Commercial | Embed inside pointer and memory tags | ∝ # of pointer derefs |
| Tamperable | AOS | ✓ | x | ✓ | Simulation | 8B/pointer | ∝ # of pointer derefs |
| Tamperable | C3 | ✓ | x | ✓ | Simulation | Entangled with pointer | ∝ # of pointer derefs |

| Classification | Proposal | RET | Code ptr | Data ptr | Data | Maturity | Metadata | Performance overhead |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tamperproof | ARM BTI | x | x | x | x | Commercial | 1 instruction/branch target | ∝ # of branches |
| Tamperproof | Intel CET | ✓ | x | x | x | Commercial | 1 instruction/branch target, shadow stack | ∝ # of branches and returns |
| Tamperproof | ZERO | ✓ | ✓ | ✓ | ✓ | Simulation | 2 bits/word | ∝ # of mem accesses |
| Address layout randomization | ASLR | ✓ | ✓ | ✓ | ✓ | Commercial | none | negligible |
| Address layout randomization | Morpheus | ✓ | ✓ | ✓ | ✓ | Simulation | 2 bits/word | ∝ 1/re-randomization interval |
| Tamperable | Stack Canary | ✓ | x | x | x | Commercial | 8B/return address | ∝ # of returns |
| Tamperable | ARM PA | ✓ | ✓ | ✓ | x | Commercial | Embed inside pointer | ∝ # of pointer derefs |

**SoK on Microarchitectural Side Channels.** DAWG [50] presents a general attack schema of microarchitectural covert and side channels, and CaSA [18] presents a communication-based model to describe side channels. Pandora's box [85] systematically studies side-channel vulnerabilities caused by various microarchitectural optimizations, and Transient Fail [21] conducts a systematic analysis of transient execution attacks and defenses. Lastly, Deng et al. [26] propose a three-step modeling approach to exhaustively enumerate all possible cache timing-based vulnerabilities.

**Graph-based Models.** Graph-based models are commonly used to represent side-channel attacks that exploit speculative execution [37, 39, 40, 69, 84]. At a high level, SIF graphs differ from prior works in that they analyze the leakage of the metadata of a defense, while prior works aim to analyze the leakage of program data. The closest related work is He et al. [39], with two key differences. First, [39] focuses on information leakage from a protected node (often a load); in comparison, SIF graphs focus on information leakage from a security check node (and hence its possible abuse by an attacker). This key difference ultimately leads to two contradicting outcomes: [39] shows that leakage happens due to the _absence_ of security dependencies, whereas SIF graphs show that leakage happens due to the _existence_ of security dependencies (Section 6.3).

**Speculative control-flow hijacking.** It is important to distinguish speculative shield bypass attacks from speculative control-flow hijacking attacks [61, 51]. Speculative control-flow hijacking attacks also combine memory corruption attack techniques to _speculatively_ execute attacker-chosen code paths and leak information via microarchitectural side channels. For example, SpecROP [61] showed that by poisoning the BTB, the attacker can speculatively chain small code gadgets to construct more powerful information leakage attacks. Moreover, Mambretti et al. [51] demonstrated that speculative control-flow hijacking can speculatively bypass stack smashing protection [24], GCC's vtable verification [9], and Go's runtime memory safety checks [2]. There are two key differences between speculative shield bypass attacks and speculative control-flow hijacking attacks. First, speculative control-flow hijacking attacks are, by nature, still ultimately speculative: the speculative gadgets will eventually be squashed.
Conversely, speculative shield bypass attacks use the leaked secrets to bypass memory corruption defenses non-speculatively, ultimately conducting memory corruption attacks that have architectural consequences. Second, speculative control-flow hijacking attacks leak program data; by contrast, speculative shield bypass attacks leak the metadata necessary to mount memory corruption attacks.

## 11 Conclusion

This paper performed a systematic analysis of state-of-the-art memory corruption defense proposals from both industry and academia against speculative shield bypass attacks. By building a systematic taxonomy, we identified the key source of vulnerability to speculative shield bypass attacks, namely _spoofable security checks_. The taxonomy also helped us identify two classes of memory corruption mitigations that perform spoofable security checks, i.e., those that use tamperable metadata or randomize the address layout. Next, we developed a graph-based model that precisely visualizes the information flow between security checks and observable microarchitectural events. The model helps us identify patterns leading to side-channel leakage, as well as reason about countermeasures. In short, our key contribution is a systematic analysis of memory corruption defenses, focusing on speculative shield bypass attacks.
2309.03676
LRCs: Duality, LP Bounds, and Field Size
We develop a duality theory of locally recoverable codes (LRCs) and apply it to establish a series of new bounds on their parameters. We introduce and study a refined notion of weight distribution that captures the code's locality. Using a duality result analogous to a MacWilliams identity, we then derive an LP-type bound that improves on the best known bounds in several instances. Using a dual distance bound and the theory of generalized weights, we obtain non-existence results for optimal LRCs over small fields. In particular, we show that an optimal LRC must have both minimum distance and block length relatively small compared to the field size.
Anina Gruica, Benjamin Jany, Alberto Ravagnani
2023-09-07T12:28:02Z
http://arxiv.org/abs/2309.03676v1
# LRCs: Duality, LP Bounds, and Field Size

###### Abstract

We develop a duality theory of locally recoverable codes (LRCs) and apply it to establish a series of new bounds on their parameters. We introduce and study a refined notion of weight distribution that captures the code's locality. Using a duality result analogous to a MacWilliams identity, we then derive an LP-type bound that improves on the best known bounds in several instances. Using a dual distance bound and the theory of generalized weights, we obtain non-existence results for optimal LRCs over small fields. In particular, we show that an optimal LRC must have both minimum distance and block length relatively small compared to the field size.

## 1 Introduction

In the last decade, _locally recoverable codes_ have been a central topic in communication and distributed storage [4, 6, 8, 13, 22]. A code has good locality properties when each entry of each codeword can be recovered from a small set of other entries of the same codeword. This property is captured by a parameter of the underlying code called _locality_. Small locality allows, for instance, a fast recovery process when the code is used for distributed storage. Ideally, a code has both small locality and large minimum distance, as the latter offers protection from errors and erasures. Most of the research on locally recoverable codes focuses on bounds [1, 4, 8, 11, 13], constructions and decoding [2, 3, 17, 22]. This paper pertains to the former line of research. As one expects, a code cannot have all the desirable properties at the same time. In particular, it cannot have arbitrarily small locality while having large minimum distance for the given dimension and block length. The trade-offs among all these parameters are captured by various bounds, the best known of which is probably the _Generalized Singleton Bound_ [8]. Codes attaining this bound with equality are known to exist for some parameter sets over large fields. Over small fields, it is still a wide open question which parameters locally recoverable codes can have.

In this paper, we investigate two aspects of the theory of locally recoverable codes that have overall been neglected so far, namely their _duality theory_ and _field size_. We first demonstrate that duality results, besides being mathematically interesting, represent a very powerful tool for investigating the parameters of locally recoverable codes and establishing new bounds. In the second part of the paper we derive various results that link the size of the underlying field to the other parameters of a locally recoverable code. In particular, we prove that codes attaining the Generalized Singleton Bound of [8] cannot exist for some parameter sets.

The rest of the introduction briefly illustrates the contributions made by this paper, pointing the reader to the relevant sections. After recalling the basics of locally recoverable codes in Section 2, we introduce a refined notion of weight distribution of a code, which is able to capture its locality as well as the weights of the codewords; see Section 3. In the same section, we then establish a duality result for the refined weight distribution, which is similar to, and yet different from, a MacWilliams identity. The identity combined with linear programming produces a bound on the parameters of a locally recoverable code that improves the best bounds currently available for several parameter sets; see Section 4. We illustrate this with comparison tables.
The last part of the paper, namely Section 5, is devoted to the role played by the field size in the theory of locally recoverable codes. We propose various arguments based on the notions of dual distance and generalized weights. This leads us to establishing new bounds for the parameters of a locally recoverable code, which in turn gives us non-existence results for codes meeting the Generalized Singleton Bound over small fields.

Acknowledgements. We are grateful to Markus Grassl for discussing with us the connection between the results of this paper and the split weight enumerator of a code. A. G. is supported by the Dutch Research Council through grant OCENW.KLEIN.539. B. J. is supported by the Dutch Research Council through grant VI.Vidi.203.045. A. R. is supported by the Dutch Research Council through grants VI.Vidi.203.045, OCENW.KLEIN.539, and by the Royal Academy of Arts and Sciences of the Netherlands.

## 2 Locally Recoverable Codes

We start by recalling concepts of classical coding theory and locally recoverable codes, and by establishing the notation. Throughout the paper, \(n\geq 2\) is an integer, \(q\) denotes a prime power, and \(\mathbb{F}_{q}\) is the finite field of \(q\) elements. We will denote the set \(\{1,\ldots,n\}\) by \([n]\).

**Definition 2.1**.: A **(linear) code** is an \(\mathbb{F}_{q}\)-linear subspace \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) endowed with the Hamming metric. The **dual** of \(\mathscr{C}\) is \(\mathscr{C}^{\perp}:=\{x\in\mathbb{F}_{q}^{n}:x\cdot y^{\top}=0\text{ for all }y\in\mathscr{C}\}.\) Moreover, we say that \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) is **non-degenerate** if there is no \(i\in[n]\) for which each \(x\in\mathscr{C}\) has \(x_{i}=0\). The codes \(\{0\}\) and \(\mathbb{F}_{q}^{n}\) are called **trivial**. In this document, by "code" we always mean "non-trivial code", unless otherwise specified.

**Definition 2.2**.: The **minimum distance** of a code \(\mathscr{C}\) is defined as

\[d(\mathscr{C}):=\min\{\omega^{\mathrm{H}}(x):x\in\mathscr{C}\setminus\{0\}\},\]

where \(\omega^{\mathrm{H}}(x):=|\{i\,:\,x_{i}\neq 0\}|\) is the (**Hamming**) **weight** of \(x\).

A large minimum distance guarantees the error detection and correction capabilities of a code, so naturally one wants this parameter to be as large as possible. However, as is well known, there is a trade-off between the minimum distance and the dimension of a code with given length. This trade-off is expressed by the famous _Singleton Bound_, which says that for a code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) we always have \(d(\mathscr{C})\leq n-\dim_{\mathbb{F}_{q}}(\mathscr{C})+1\); see [21]. Codes that meet the bound with equality are called **MDS**. It remains a wide open problem in coding theory to determine when codes achieving this bound (or other bounds) exist. Apart from the minimum distance and the dimension, in this paper we consider an additional parameter for linear codes, called _locality_.

**Definition 2.3**.: A code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) has **locality** \(r\) if for every \(i\in\{1,\ldots,n\}\) there exists a set \(S_{i}\subseteq[n]\), called a **recovery set** for (the coordinate) \(i\), with the following properties:

1. \(i\notin S_{i}\),
2. \(|S_{i}|\leq r\),
3. if \(x,y\in\mathscr{C}\) and \(x_{j}=y_{j}\) for all \(j\in S_{i}\), then \(x_{i}=y_{i}\).

We call **LRC** (**locally recoverable code**) a code for which the locality parameter is considered.
**Example 2.4**.: Let \(\mathscr{C}\leq\mathbb{F}_{2}^{7}\) be the binary simplex code of dimension \(3\), which consists of codewords of the form

\[(u_{1},u_{2},u_{3})\begin{pmatrix}1&0&0&1&0&1&1\\ 0&1&0&1&1&0&1\\ 0&0&1&0&1&1&1\end{pmatrix}=(u_{1},u_{2},u_{3},u_{1}+u_{2},u_{2}+u_{3},u_{1}+u_{3},u_{1}+u_{2}+u_{3}),\]

where \(u_{1},u_{2},u_{3}\in\mathbb{F}_{2}\). Without loss of generality, we focus on recovering coordinate \(i=1\). It is not hard to check that recovery sets for \(1\) are

\[\{2,4\},\,\{3,6\},\,\{5,7\},\,\{2,3,7\},\,\{4,6,7\},\,\{3,4,5\},\,\{2,5,6\}\]

as well as any subset of \(\{2,\ldots,7\}\) containing one of the above. By symmetry of the coordinates, this code has (minimum) locality \(2\).

As the name suggests, a recovery set \(S_{i}\) for \(i\) allows one to recover the coordinate \(x_{i}\) of any codeword \(x\in\mathscr{C}\) using only the coordinates of \(x\) indexed by \(S_{i}\). This is done via a **recovery function** \(f_{i}:\pi_{S_{i}}(\mathscr{C})\to\mathbb{F}_{q}\) that satisfies \(f_{i}(\pi_{S_{i}}(x))=x_{i}\) for all \(x\in\mathscr{C}\), where \(\pi_{S_{i}}\) denotes the projection map onto the coordinates indexed by the elements of \(S_{i}\). Interestingly, for linear codes the recovery functions must be linear; see [1, Lemma 10].

**Proposition 2.5**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code and let \(S_{i}\) be a recovery set for the coordinate \(i\) with recovery function \(f_{i}\). Then \(f_{i}\) is an \(\mathbb{F}_{q}\)-linear map.

Similarly to classical codes, it is important to understand the trade-offs between the locality parameter and other parameters of the code, such as its minimum distance and dimension. In [8], the authors showed how locality impacts the dimension of an LRC by establishing a generalization of the Singleton Bound.

**Theorem 2.6** (Generalized Singleton Bound).: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code with locality \(r\), dimension \(k\), and minimum distance \(d\). Then

\[k+\left\lceil\frac{k}{r}\right\rceil\leq n-d+2. \tag{2.1}\]

Note that the bound of Theorem 2.6 coincides with the classical Singleton Bound if \(k=r\). Codes whose parameters meet the bound (2.1) with equality are called **optimal LRC**. Note that the simplex code of Example 2.4 is optimal: it has \(n=7\), \(k=3\), \(d=4\), and \(r=2\), and indeed \(3+\lceil 3/2\rceil=5=7-4+2\). In [22], a construction of optimal LRC was given for \(q\geq n\), \(r\mid k\), and \(r+1\mid n\). However, if these divisibility constraints are not satisfied or if \(q<n\), optimal LRC do not always exist or have not been found yet. In order to establish or exclude the existence of optimal LRCs for small \(q\), it is natural to establish bounds linking the locality to the size of the underlying field. The following is a shortening bound established in [4], which improves the Singleton-type bound in Theorem 2.6. In the minimum we include the (trivial) case \(t=0\), even though the original statement does not. Note that for some parameters the minimum is indeed attained by \(t=0\).

**Theorem 2.7** (Shortening Bound).: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code with locality \(r\), dimension \(k\) and minimum distance \(d\). We have

\[k\leq\min_{t\in\mathbb{Z}_{\geq 0}}\left\{rt+k_{\mathrm{opt}}^{(q)}(n-t(r+1),d)\right\}, \tag{2.2}\]

where \(k_{\mathrm{opt}}^{(q)}(n,d)\) is the largest possible dimension of a code of length \(n\) and minimum distance \(d\) over \(\mathbb{F}_{q}\).

**Remark 2.8**.: 1.
It was observed in [13] that (2.2) yields a series of bounds on codes with locality, by applying known bounds on \(k_{\mathrm{opt}}^{(q)}(n-t(r+1),d)\). For example, by letting \(t=\lceil k/r\rceil\) and using the classical Singleton Bound on \(k_{\mathrm{opt}}^{(q)}(n-t(r+1),d)\), we recover Theorem 2.6.

2. Even though the bound of Theorem 2.7 is a refinement of Theorem 2.6, there is a "computational" drawback in evaluating it. Indeed, determining the value of \(k_{\mathrm{opt}}^{(q)}(n,d)\) for given \(d,n,q\) is a wide open problem in classical coding theory, which means that no closed formula is known for the RHS of (2.2). Establishing bounds whose evaluation is computationally feasible has therefore become a crucial research problem in the study of LRCs. This is one of the problems this paper addresses.

The concept of locally recoverable codes was further generalized in [18], where the authors introduce the notion of \((r,\delta)\)-LRC. The additional parameter \(\delta\) provides extra information regarding the number of recovery sets for coordinate \(i\) of size less than or equal to \(r\). In the applications, \((r,\delta)\)-LRCs facilitate the local recovery of a failed node in the event that other nodes of the network fail as well.

**Definition 2.9**.: A non-degenerate code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) has **locality** \((r,\delta)\) (or is \((r,\delta)\)**-LRC**) if for all \(i\in[n]\) there exists a set \(S_{i}\subseteq[n]\) such that:

1. \(i\notin S_{i}\),
2. \(|S_{i}|\leq r+\delta-2\),
3. \(d(\pi_{S_{i}\cup\{i\}}(\mathscr{C}))\geq\delta\).

We then call \(S_{i}\) an \((r,\delta)\)**-recovery set** for \(i\).

Note that for an \((r,\delta)\)-LRC \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\), an \((r,\delta)\)-recovery set \(S_{i}\) for \(i\in[n]\), and \(T\subseteq S_{i}\cup\{i\}\) with \(|T|=\delta-1\), the coordinates of any \(x\in\mathscr{C}\) indexed by \(T\) can be recovered from the coordinates of \(x\) indexed by \((S_{i}\cup\{i\})\backslash T\). This follows from Definition 2.9, part 3). Moreover, it can easily be seen that the notions of \((r,\delta)\)-LRC and LRC with locality \(r\) coincide when \(\delta=2\).

A refinement of Theorem 2.6 taking the parameter \(\delta\) into account was established in [18] and it reads as follows:

\[d\leq n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1). \tag{2.3}\]

Furthermore, the following is a generalization of Theorem 2.7 which was established in [9, 19]:

\[k\leq\min_{t\in\mathbb{Z}_{\geq 0}}\{tr+k_{\mathrm{opt}}^{(q)}(n-t(r+\delta-1),d)\}. \tag{2.4}\]

Similarly to Theorem 2.7, the bound of (2.4) has a computational drawback, as already explained in Remark 2.8. Note that both (2.1) and (2.2) can be recovered from the two previous bounds when \(\delta=2\). In this paper, we focus mostly on classical LRCs (i.e. \(\delta=2\)). However, in some instances we consider the broader case of \((r,\delta)\)-LRCs for any \(\delta\), when the techniques we develop are applicable.
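For concreteness, the following evaluation of (2.3) uses an illustrative parameter set of our own choosing:

\[n=12,\quad k=6,\quad r=2,\quad \delta=3\colon\qquad d\,\leq\,12-6+1-\left(\left\lceil\frac{6}{2}\right\rceil-1\right)(3-1)\,=\,7-2\cdot 2\,=\,3,\]

whereas the classical Singleton Bound alone only gives \(d\leq n-k+1=7\), and (2.1) with \(\delta=2\) would give \(d\leq 5\).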
## 3 A Finer Weight Distribution and Duality

In this section, we introduce and work with a refinement of the classical _weight distribution_ of a code. Recall that the **weight distribution** of \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) is the tuple \((W_{0}(\mathscr{C}),\ldots,W_{n}(\mathscr{C}))\), where

\[W_{i}(\mathscr{C}):=|\{x\in\mathscr{C}:\omega^{\mathrm{H}}(x)=i\}|.\]

Before we introduce the aforementioned refinement of the weight distribution, we show how the locality of a code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) can be captured by looking at the support of codewords in the dual code \(\mathscr{C}^{\perp}\). Recall that for a vector \(x\in\mathbb{F}_{q}^{n}\), its **support** is defined to be the set \(\sigma(x):=\{i\mid x_{i}\neq 0\}\). We now have the following equivalent characterization of the locality parameter, which will be used extensively in this paper.

**Lemma 3.1**.: Let \(r\geq 1\) be an integer. A linear code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) has locality \(r\) if and only if for any \(i\in[n]\) there exists \(x\in\mathscr{C}^{\perp}\) with \(i\in\sigma(x)\) and \(\omega^{\mathrm{H}}(x)\leq r+1\).

Lemma 3.1 was first established in [11, Lemma 5] and heavily relies on the fact that recovery functions for linear codes are always linear; see Proposition 2.5. The characterization of Lemma 3.1 inspires the following definition, where instead of caring only about how many codewords of a certain weight there are (which is captured in the classical weight distribution), we want to know the number of codewords of a certain weight having a certain subset of \([n]\) in their support.

**Definition 3.2**.: For a code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\), a set \(S\subseteq[n]\), and \(0\leq i\leq n\), we let

\[W_{i}^{S}(\mathscr{C}):=|\{x\in\mathscr{C}:\omega^{\mathrm{H}}(x)=i,\,S\subseteq\sigma(x)\}|.\]

Note that if we set \(S=\emptyset\), then the refined weight distribution introduced in Definition 3.2 recovers the classical weight distribution. The main result of this section is a MacWilliams-type identity for the refined weight distribution. In more detail, we will show that the refined weight distribution of the dual code fully determines the refined weight distribution of the original code. We start with the following result, which relates values of the refined weight distributions to one another.

**Lemma 3.3**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) and \(A\subseteq[n]\). Then for all \(1\leq i\leq n\) and \(|A|\leq t\leq i\) we have

\[W_{i}^{A}(\mathscr{C})=\frac{1}{\binom{i-|A|}{t-|A|}}\sum_{\begin{subarray}{c}A\subseteq S\subseteq[n]\\ |S|=t\end{subarray}}W_{i}^{S}(\mathscr{C}).\]

Proof.: Let \(\mathscr{V}=\{(x,S):x\in\mathscr{C},\ \omega^{\mathrm{H}}(x)=i,\ A\subseteq S\subseteq\sigma(x),\ |S|=t\}\). We count the elements of \(\mathscr{V}\) in two ways. On one hand, we have

\[|\mathscr{V}|=\sum_{\begin{subarray}{c}A\subseteq S\subseteq[n]\\ |S|=t\end{subarray}}W_{i}^{S}(\mathscr{C}). \tag{3.1}\]

On the other hand,

\[|\mathscr{V}|=\sum_{\begin{subarray}{c}x\in\mathscr{C}\\ \omega^{\mathrm{H}}(x)=i\\ A\subseteq\sigma(x)\end{subarray}}|\{S\,:\,A\subseteq S\subseteq\sigma(x)\text{ and }|S|=t\}|=\binom{i-|A|}{t-|A|}W_{i}^{A}(\mathscr{C}). \tag{3.2}\]

Combining (3.1) and (3.2) concludes the proof.

In order to obtain the MacWilliams-type identity for the refined weight distribution from Definition 3.2, we need to introduce auxiliary definitions and results.
**Definition 3.4**.: For a code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) and subsets \(S\subseteq T\subseteq[n]\), let

\[\mathscr{C}(S,T):=\{x\in\mathscr{C}:S\subseteq\sigma(x),\,x\in\mathscr{C}(T)\},\]

where \(\mathscr{C}(T)\) denotes the **shortening** of \(\mathscr{C}\) by the set \(T\), i.e., \(\mathscr{C}(T)=\{x\in\mathscr{C}:\sigma(x)\subseteq T\}\).

We can now express the cardinality of a subcode introduced in Definition 3.4 in terms of the cardinality of subcodes of the dual code.

**Proposition 3.5**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\). For all \(S\subseteq T\subseteq[n]\) we have

\[|\mathscr{C}(S,T)|=|\mathscr{C}|\sum_{A\subseteq S}(-1)^{|A|}\frac{|\mathscr{C}^{\perp}(T^{c}\cup A)|}{q^{n-|T|+|A|}}.\]

Proof.: We have

\[|\mathscr{C}(S,T)| =|\mathscr{C}(T)|-|\{x\in\mathscr{C}:\sigma(x)\subseteq T,\,\sigma(x)\subseteq[n]\setminus A\text{ for some }\emptyset\neq A\subseteq S\}|\]
\[=|\mathscr{C}(T)|-|\{x\in\mathscr{C}:\sigma(x)\subseteq T\cap([n]\setminus A)\text{ for some }\emptyset\neq A\subseteq S\}|. \tag{3.3}\]

Since \(A\subseteq S\subseteq T\), we can rewrite (3.3) as

\[|\mathscr{C}(T)|-|\{x\in\mathscr{C}:\sigma(x)\subseteq T\setminus A\text{ for some }\emptyset\neq A\subseteq S\}| =|\mathscr{C}(T)|-\left|\bigcup_{\emptyset\neq A\subseteq S}\mathscr{C}(T\setminus A)\right|\]
\[=\sum_{A\subseteq S}(-1)^{|A|}\left|\mathscr{C}(T\setminus A)\right|,\]

where the latter equality follows from the Inclusion-Exclusion principle. Finally, we have

\[|\mathscr{C}(T\setminus A)|=\frac{|\mathscr{C}|}{q^{n-|T|+|A|}}|\mathscr{C}^{\perp}(T^{c}\cup A)|,\]

from which the statement of the proposition follows.

With the aid of Proposition 3.5 we can prove the following result, which can be seen as an analogue of a _MacWilliams binomial moment_ identity.

**Proposition 3.6**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\), \(S\subseteq[n]\) and \(|S|\leq t\leq n\). We have

\[\sum_{i=0}^{n}\binom{n-i}{t-i}W_{i}^{S}(\mathscr{C})=\\ q^{k-n+t-|S|}(q-1)^{|S|}\sum_{i=0}^{n}\sum_{D\subseteq S}\sum_{B\subseteq D}(-1)^{|D|-|B|}(1-q)^{-|B|}\binom{n-|S|-i+|B|}{t-|S|}W_{i}^{D}(\mathscr{C}^{\perp}).\]

Proof.: We prove the statement by first fixing a subset \(S\subseteq[n]\) and summing both sides of the expression in Proposition 3.5 over all \(T\subseteq[n]\) with \(S\subseteq T\) and \(|T|=t\). The LHS of the equality in Proposition 3.5 gives

\[\sum_{\begin{subarray}{c}S\subseteq T\subseteq[n]\\ |T|=t\end{subarray}}|\mathscr{C}(S,T)|= \sum_{\begin{subarray}{c}x\in\mathscr{C}\\ \sigma(x)\supseteq S\end{subarray}}|\{T\subseteq[n]:|T|=t,\,\sigma(x)\subseteq T,\,S\subseteq T\}|\]
\[= \sum_{i=0}^{n}\sum_{\begin{subarray}{c}x\in\mathscr{C}\\ \sigma(x)\supseteq S\\ \omega^{\mathrm{H}}(x)=i\end{subarray}}\binom{n-i}{t-i}=\sum_{i=0}^{n}\binom{n-i}{t-i}W_{i}^{S}(\mathscr{C}),\]

which is the LHS of the claimed identity. For the RHS we compute

\[\sum_{\begin{subarray}{c}S\subseteq T\subseteq[n]\\ |T|=t\end{subarray}}|\mathscr{C}|\sum_{A\subseteq S}(-1)^{|A|}\frac{|\mathscr{C}^{\perp}(T^{c}\cup A)|}{q^{n-t+|A|}}=|\mathscr{C}|\sum_{A\subseteq S}\frac{(-1)^{|A|}}{q^{n-t+|A|}}\sum_{\begin{subarray}{c}S\subseteq T\subseteq[n]\\ |T|=t\end{subarray}}|\mathscr{C}^{\perp}(T^{c}\cup A)|. \tag{3.4}\]

We now write the last sum in (3.4) differently.
For a fixed \(A\subseteq S\) we have

\[\sum_{\begin{subarray}{c}S\subseteq T\subseteq[n]\\ |T|=t\end{subarray}}|\mathscr{C}^{\perp}(T^{c}\cup A)|= \sum_{i=0}^{n}\sum_{\begin{subarray}{c}x\in\mathscr{C}^{\perp}\\ \omega^{\mathrm{H}}(x)=i\end{subarray}}|\{T\subseteq[n]:|T|=t,\sigma(x)\subseteq T^{c}\cup A,S\subseteq T\}|\]
\[= \sum_{i=0}^{n}\sum_{\begin{subarray}{c}x\in\mathscr{C}^{\perp}\\ \omega^{\mathrm{H}}(x)=i\end{subarray}}|\{T\subseteq[n]:|T|=n-t,\sigma(x)\subseteq T\cup A,T\subseteq S^{c}\}|\]
\[= \sum_{i=0}^{n}\sum_{\begin{subarray}{c}x\in\mathscr{C}^{\perp}\\ \omega^{\mathrm{H}}(x)=i\\ \sigma(x)\subseteq S^{c}\cup A\end{subarray}}|\{T\subseteq S^{c}:|T|=n-t,\sigma(x)\subseteq T\cup A\}|\]
\[= \sum_{i=0}^{n}\sum_{j=0}^{|A|}\sum_{\begin{subarray}{c}x\in\mathscr{C}^{\perp}\\ \omega^{\mathrm{H}}(x)=i\\ |\sigma(x)\cap A|=j\\ |\sigma(x)\cap S^{c}|=i-j\end{subarray}}\binom{n-|S|-i+j}{n-t-i+j}\]
\[= \sum_{i=0}^{n}\sum_{j=0}^{|A|}\sum_{\begin{subarray}{c}B\subseteq A\\ |B|=j\end{subarray}}\sum_{\begin{subarray}{c}D\subseteq S^{c},\\ |D|=i-j\end{subarray}}\sum_{\begin{subarray}{c}x\in\mathscr{C}^{\perp}\\ \sigma(x)=D\cup B\end{subarray}}\binom{n-|S|-i+j}{n-t-i+j}\]
\[=\sum_{i=0}^{n}\sum_{B\subseteq A}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-|D|}{n-t-|D|}W_{i}^{D\cup B}(\mathscr{C}^{\perp}),\]

where the last step follows from the fact that \(|\{x\in\mathscr{C}^{\perp}:\sigma(x)=B\cup D\}|=W_{i}^{D\cup B}(\mathscr{C}^{\perp})\) for \(B\subseteq A\) with \(|B|=j\) and \(D\subseteq S^{c}\) with \(|D|=i-j\). After substituting this expression into (3.4), we simplify further by reordering the summands and applying the Binomial Theorem as follows:

\[|\mathscr{C}|\sum_{A\subseteq S}\frac{(-1)^{|A|}}{q^{n-t+|A|}}\sum_{i=0}^{n}\sum_{B\subseteq A}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-|D|}{n-t-|D|}W_{i}^{D\cup B}(\mathscr{C}^{\perp})\]
\[=q^{k-n+t}\sum_{i=0}^{n}\sum_{B\subseteq S}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-|D|}{n-t-|D|}W_{i}^{D\cup B}(\mathscr{C}^{\perp})\sum_{B\subseteq A\subseteq S}\frac{(-1)^{|A|}}{q^{|A|}}\]
\[=q^{k-n+t}\sum_{i=0}^{n}\sum_{B\subseteq S}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-|D|}{n-t-|D|}W_{i}^{D\cup B}(\mathscr{C}^{\perp})\sum_{j=|B|}^{|S|}(-1)^{j}\binom{|S|-|B|}{j-|B|}q^{-j}\]
\[=q^{k-n+t}\sum_{i=0}^{n}\sum_{B\subseteq S}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-|D|}{n-t-|D|}W_{i}^{D\cup B}(\mathscr{C}^{\perp})(-1)^{|S|}q^{-|B|}\]
\[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\cdot\sum_{j=0}^{|S|-|B|}(-1)^{|S|-|B|-j}\binom{|S|-|B|}{j}(q^{-1})^{j}\]
\[=q^{k-n+t}\sum_{i=0}^{n}\sum_{B\subseteq S}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-|D|}{n-t-|D|}W_{i}^{D\cup B}(\mathscr{C}^{\perp})(-1)^{|S|}q^{-|B|}(q^{-1}-1)^{|S|-|B|}\]
\[=q^{k-n+t-|S|}(q-1)^{|S|}\sum_{i=0}^{n}\sum_{B\subseteq S}(1-q)^{-|B|}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-i+|B|}{t-|S|}W_{i}^{D\cup B}(\mathscr{C}^{\perp}).\]

To conclude the proof we will need the following claim.

**Claim A.** Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\), \(S\subseteq[n]\) with \(|S|=t\), \(B\subseteq S\), and \(1\leq j\leq n\).
Then

\[\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=j-|B|\end{subarray}}W_{j}^{D\cup B}(\mathscr{C}^{\perp})=\sum_{B\subseteq D\subseteq S}(-1)^{|D|-|B|}W_{j}^{D}(\mathscr{C}^{\perp}).\]

_Proof of the claim._ For all \(D\subseteq S^{c}\) with \(|D|=j-|B|\) we have

\[W_{j}^{D\cup B}(\mathscr{C}^{\perp})=\left|\{v\in\mathscr{C}^{\perp}\,:\,\sigma(v)=D\cup B\}\right|.\]

Hence

\[\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=j-|B|\end{subarray}}\ W_{j}^{D\cup B}(\mathscr{C}^{\perp}) =\left|\{v\in\mathscr{C}^{\perp}\,:\,D\subseteq S^{c},\,|D|=j-|B|,\,\sigma(v)=D\cup B\}\right|\]
\[=\left|\{v\in\mathscr{C}^{\perp}\,:\,B\subseteq D\subseteq[n]\setminus(S\setminus B),\,|D|=j,\,\sigma(v)=D\}\right|\]
\[=\Big{|}\{v\in\mathscr{C}^{\perp}\,:\,B\subseteq D\subseteq[n],\,|D|=j,\,\sigma(v)=D\}\Big{|}-\]
\[\left|\bigcup_{\emptyset\subset A\subseteq(S\setminus B)}\{v\in\mathscr{C}^{\perp}\,:\,B\cup A\subseteq D\subseteq[n],\,|D|=j,\,\sigma(v)=D\}\right|.\]

Using the Inclusion-Exclusion principle followed by Lemma 3.3, we get

\[\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=j-|B|\end{subarray}}W_{j}^{D\cup B}(\mathscr{C}^{\perp}) =\sum_{\emptyset\subseteq A\subseteq S\setminus B}(-1)^{|A|}\sum_{\begin{subarray}{c}A\cup B\subseteq D\subseteq[n]\\ |D|=j\end{subarray}}W_{j}^{D}(\mathscr{C}^{\perp})\]
\[=\sum_{\emptyset\subseteq A\subseteq S\setminus B}(-1)^{|A|}W_{j}^{A\cup B}(\mathscr{C}^{\perp})\]
\[=\sum_{B\subseteq D\subseteq S}(-1)^{|D|-|B|}W_{j}^{D}(\mathscr{C}^{\perp}),\]

proving the desired claim.

Finally, from Claim A it follows that

\[q^{k-n+t-|S|}(q-1)^{|S|}\sum_{i=0}^{n}\sum_{B\subseteq S}(1-q)^{-|B|}\sum_{\begin{subarray}{c}D\subseteq S^{c}\\ |D|=i-|B|\end{subarray}}\binom{n-|S|-i+|B|}{t-|S|}W_{i}^{D\cup B}(\mathscr{C}^{\perp})=\]
\[q^{k-n+t-|S|}(q-1)^{|S|}\sum_{i=0}^{n}\sum_{D\subseteq S}\sum_{B\subseteq D}(-1)^{|D|-|B|}(1-q)^{-|B|}\binom{n-|S|-i+|B|}{t-|S|}W_{i}^{D}(\mathscr{C}^{\perp}),\]

concluding the proof.

In order to "isolate" the refined weight distribution of a code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) in the sum of Proposition 3.6, we will use the following lemma.

**Lemma 3.7**.: Let

\[\alpha_{t}:=\sum_{i=0}^{n}\binom{n-i}{t-i}\beta_{i}\quad\text{for }0\leq t\leq n.\]

Then

\[\beta_{i}=\sum_{t=0}^{n}(-1)^{(i-t)}\binom{n-t}{i-t}\alpha_{t}\quad\text{for all }0\leq i\leq n.\]

Proof.: Let \(0\leq i\leq n\) be fixed. We have

\[\sum_{t=0}^{n}(-1)^{(i-t)}\binom{n-t}{i-t}\alpha_{t} =\sum_{t=0}^{n}(-1)^{(i-t)}\binom{n-t}{i-t}\sum_{j=0}^{n}\binom{n-j}{t-j}\beta_{j}\]
\[=\sum_{t=0}^{n}(-1)^{(i-t)}\sum_{j=0}^{n}\binom{n-t}{n-i}\binom{n-j}{n-t}\beta_{j}\]
\[=\sum_{t=0}^{n}(-1)^{(i-t)}\sum_{j=0}^{n}\binom{i-j}{i-t}\binom{n-j}{n-i}\beta_{j}\]
\[=\sum_{j=0}^{n}\binom{n-j}{n-i}\beta_{j}\sum_{t=0}^{i}(-1)^{(i-t)}\binom{i-j}{i-t}\]
\[=\sum_{j=0}^{n}\binom{n-j}{n-i}\beta_{j}\sum_{s=0}^{i-j}(-1)^{s}\binom{i-j}{s}\]
\[=\beta_{i},\]

because

\[\sum_{s=0}^{i-j}(-1)^{s}\binom{i-j}{s}=0\]

unless \(i=j\), by the Binomial Theorem.

We can finally establish the MacWilliams-type identity for the refined weight distribution of a code.

**Theorem 3.8**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) and fix \(S\subseteq[n]\). For all \(0\leq i\leq n\) we have

\[W_{i}^{S}(\mathscr{C})=q^{k-n-|S|}(q-1)^{|S|}\sum_{t=|S|}^{n}\sum_{j=0}^{n}\sum_{D\subseteq S}\sum_{B\subseteq D}(-1)^{i-t+|D|}q^{t}\\ (q-1)^{-|B|}\binom{n-t}{i-t}\binom{n-|S|-j+|B|}{t-|S|}W_{j}^{D}(\mathscr{C}^{\perp}).
\tag{3.5}\]

Proof.: The statement follows directly from Proposition 3.6 and Lemma 3.7, by setting \(\beta_{i}=W_{i}^{S}(\mathscr{C})\) and

\[\alpha_{t}=q^{k-n+t-|S|}(q-1)^{|S|}\sum_{j=0}^{n}\sum_{D\subseteq S}\sum_{B\subseteq D}(-1)^{|D|-|B|}(1-q)^{-|B|}\binom{n-|S|-j+|B|}{t-|S|}W_{j}^{D}(\mathscr{C}^{\perp}).\qed\]

**Remark 3.9**.: Although we call the above a MacWilliams-type identity, the enumerators we consider do not naturally fit in the framework of the MacWilliams identities, as they do not represent the cardinalities of the blocks of a partition of the underlying code; see [7]. However, one can see the above as a generalization of the classical MacWilliams identity. In fact, with tedious but straightforward computations one can show that (3.5) reduces to the famous MacWilliams identity [16] when \(S=\emptyset\).

When applying MacWilliams-type identities, it is crucial to understand the dependencies between variables and which groups of these determine each other, possibly under extra assumptions. The next result shows that \(W_{i}^{S}(\mathscr{C})\) can be rewritten in terms of \(W_{j}^{T}(\mathscr{C}^{\perp})\) for \(1\leq j\leq n\) and all \(T\subseteq[n]\) with \(|T|=|S|\), provided that the cardinality of \(S\) does not exceed the minimum distance of \(\mathscr{C}^{\perp}\). This result will become especially useful for the linear programming bounds presented in Section 4.

**Corollary 3.10**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code, \(d^{\perp}\) the minimum distance of \(\mathscr{C}^{\perp}\), and \(S\subseteq[n]\). If \(|S|\leq d^{\perp}\), then

\[W_{i}^{S}(\mathscr{C})=\binom{n-|S|}{i-|S|}q^{k-n}(q-1)^{i}+q^{k-n-|S|}(q-1)^{|S|}\sum_{t=|S|}^{n}\sum_{j=d^{\perp}}^{n}\sum_{\begin{subarray}{c}T\subseteq[n]\\ |T|=|S|\end{subarray}}\sum_{D\subseteq S\cap T}\sum_{B\subseteq D}(-1)^{i-t+|D|}\\ q^{t}(q-1)^{-|B|}\binom{n-t}{i-t}\binom{n-|S|-j+|B|}{t-|S|}\binom{j-|D|}{|S|-|D|}^{-1}W_{j}^{T}(\mathscr{C}^{\perp}).\]

Proof.: Let \(s:=|S|\) and consider the identity from Theorem 3.8. We first isolate the case \(j=0\). Note that \(W_{0}^{D}(\mathscr{C}^{\perp})=1\) if \(D=\emptyset\) and \(W_{0}^{D}(\mathscr{C}^{\perp})=0\) otherwise. Hence the summand of Theorem 3.8 corresponding to \(j=0\) is:

\[\sum_{t=s}^{n}(-1)^{i-t}q^{t}\binom{n-t}{i-t}\binom{n-s}{n-t}= \sum_{t=s}^{n}(-1)^{i-t}q^{t}\binom{n-s}{n-i}\binom{i-s}{i-t}\]
\[= \binom{n-s}{n-i}\sum_{t=s}^{i}(-1)^{i-t}q^{t}\binom{i-s}{i-t}\]
\[= \binom{n-s}{n-i}q^{s}\sum_{t=0}^{i-s}(-1)^{i-s-t}q^{t}\binom{i-s}{t}\]
\[= \binom{n-s}{n-i}q^{s}(q-1)^{i-s},\]

where the former equality follows from the identity

\[\binom{a}{b+c}\binom{b+c}{b}=\binom{a}{c}\binom{a-c}{b},\]

and the latter one follows from the Binomial Theorem. Now assume \(j\geq 1\). If \(j<d^{\perp}\) then \(W_{j}^{D}(\mathscr{C}^{\perp})=0\). Thus it suffices to consider the summands for \(d^{\perp}\leq j\leq n\). By assumption \(s\leq d^{\perp}\).
Hence for all \(D\subseteq S\) we can apply Lemma 3.3 and get \[W_{j}^{D}(\mathscr{C}^{\perp})=\binom{j-|D|}{s-|D|}^{-1}\sum_{\begin{subarray}{c}D\subseteq T\subseteq[n]\\ |T|=s\end{subarray}}W_{j}^{T}(\mathscr{C}^{\perp}).\] Therefore for \(j\geq d^{\perp}\), (3.5) can be rewritten as \[\sum_{j=d^{\perp}}^{n}\sum_{t=s}^{n}\sum_{D\subseteq S}\sum_{B\subseteq D}(-1)^{i-t+|D|}q^{t}(q-1)^{-|B|}\binom{n-t}{i-t}\binom{n-s-j+|B|}{t-s}\binom{j-|D|}{s-|D|}^{-1}\sum_{\begin{subarray}{c}D\subseteq T\subseteq[n]\\ |T|=s\end{subarray}}W_{j}^{T}(\mathscr{C}^{\perp})\] \[=\sum_{j=d^{\perp}}^{n}\sum_{t=s}^{n}\sum_{\begin{subarray}{c}T\subseteq[n]\\ |T|=s\end{subarray}}\sum_{D\subseteq S\cap T}\sum_{B\subseteq D}(-1)^{i-t+|D|}q^{t}(q-1)^{-|B|}\binom{n-t}{i-t}\binom{n-s-j+|B|}{t-s}\binom{j-|D|}{s-|D|}^{-1}W_{j}^{T}(\mathscr{C}^{\perp}).\] Putting everything together proves the desired statement. Note that in the above proof the assumption \(|S|\leq d^{\perp}\) was needed to apply Lemma 3.3. A natural question is whether or not \(W_{i}^{S}(\mathscr{C})\) can be fully expressed in terms of \(W_{j}^{T}(\mathscr{C}^{\perp})\), for \(0\leq j\leq n\) and \(|T|=|S|\), when \(|S|>d^{\perp}\). At the time of writing this paper we are unable to answer this question.

## 4 Applications and Bounds

Delsarte's linear programming (LP) bound [5] is a powerful tool to estimate the size of a code with a certain length and minimum distance. It combines the classical MacWilliams identities with, as the name suggests, linear programming. In this section we study a new LP bound for codes with locality. We use our main duality result, Theorem 3.8, to build a linear program that gives a bound on the size of \((r,\delta)\)-LRCs. **Notation 4.1**.: In the sequel, for a code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\), \(0\leq i\leq n\) and \(1\leq j\leq n\), we will write \(W_{i}^{j}(\mathscr{C})\) instead of \(W_{i}^{\{j\}}(\mathscr{C})\). The next result shows that, in any \((r,\delta)\)-LRC, it is possible to bound from below the number of codewords of \(\mathscr{C}^{\perp}\) with coordinate \(j\) in their support and weight at most \(r+\delta-1\). **Proposition 4.2**.: Let \(\delta\geq 2\) and let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an \((r,\delta)\)-LRC, with dimension \(k\). Then for all \(j\in[n]\) we have \[\sum_{i=0}^{r+\delta-1}W_{i}^{j}(\mathscr{C}^{\perp})\geq q^{\delta-1}-q^{\delta-2}.\] Proof.: Let \(j\in[n]\) and \(S_{j}\subseteq[n]\) be minimal with respect to inclusion such that \(|S_{j}|\leq r+\delta-1\), \(j\in S_{j}\), and \(d(\pi_{S_{j}}(\mathscr{C}))\geq\delta\). Let \(k_{j}:=\dim(\pi_{S_{j}}(\mathscr{C}))\), \(d_{j}:=d(\pi_{S_{j}}(\mathscr{C}))\), and \(k_{j}^{\perp}=|S_{j}|-k_{j}\). From the Singleton Bound we have \(|S_{j}|-k_{j}+1\geq\delta\), which means \(k_{j}^{\perp}\geq\delta-1\). Furthermore, there exists \(v\in\mathscr{C}^{\perp}(S_{j})\) such that \(j\in\sigma(v)\), since otherwise we arrive at the contradiction that \(\mathscr{C}^{\perp}(S_{j})\) would be degenerate and \(d(\pi_{S_{j}}(\mathscr{C}))=1<\delta\). These two facts together imply that there are at least \(q^{\delta-1}-q^{\delta-2}\) codewords in \(\mathscr{C}^{\perp}(S_{j})\) containing \(j\) in their support and of weight at most \(|S_{j}|\leq r+\delta-1\). Therefore \(\sum_{i=0}^{r+\delta-1}W_{i}^{j}(\mathscr{C}^{\perp})\geq q^{\delta-1}-q^{\delta-2}\). Unfortunately, the converse of this statement is not true, unless \(\delta=2\) (leading to the statement of Lemma 4.4 below). Consider the following example.
**Example 4.3**.: Let \(G\) and \(H\) be matrices over \(\mathbb{F}_{2}\) defined as follows: \[G:=\begin{pmatrix}1&1&0&1&0\\ 0&1&1&0&0\\ 0&0&0&1&1\end{pmatrix},\quad H:=\begin{pmatrix}1&1&1&0&0\\ 1&0&0&1&1\end{pmatrix}.\] Let \(\mathscr{C}=\{xG:x\in\mathbb{F}_{2}^{3}\}\), which then implies \(\mathscr{C}^{\perp}=\{xH:x\in\mathbb{F}_{2}^{2}\}\). If we fix \(r:=2\) and \(\delta:=3\), then \(r+\delta-1=4\) and \(q^{\delta-1}-q^{\delta-2}=2\). One can easily check that for all \(j\in[5]\) we have \[\sum_{i=0}^{r+\delta-1}W_{i}^{j}(\mathscr{C}^{\perp})=\sum_{i=0}^{4}W_{i}^{j}(\mathscr{C}^{\perp})=2.\] However, for the coordinate \(\{1\}\) there exists no set \(S_{1}\subseteq[5]\) such that \(1\in S_{1}\), \(|S_{1}|\leq 4\), and \(d(\pi_{S_{1}}(\mathscr{C}))\geq 3\). In fact, any projection of \(\mathscr{C}\) onto a set of coordinates of size at most \(4\) will have minimum distance \(1\) or \(2\). Hence the code \(\mathscr{C}\) is not a \((2,3)\)-LRC. As already mentioned, for \(\delta=2\) the converse of Proposition 4.2 is also true. As a byproduct, the result also gives a characterization of codes with locality. **Lemma 4.4**.: A non-degenerate code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) has locality \(r\) if and only if \(\sum_{i=2}^{r+1}W_{i}^{j}(\mathscr{C}^{\perp})\geq q-1\) for all \(1\leq j\leq n\). Proposition 4.2 and Lemma 4.4 indicate that the MacWilliams-type identity of Corollary 3.10 becomes especially useful in the context of LRCs when considering the weight distribution \(W_{i}^{j}(\mathscr{C}^{\perp})\), where \(1\leq j\leq n\) and \(0\leq i\leq n\). By applying Corollary 3.10, we get the following corollary. **Corollary 4.5**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code and \(l\in[n]\). Then \[W_{i}^{l}(\mathscr{C})=\binom{n-1}{i-1}q^{k-n}(q-1)^{i}+q^{k-n-1}(q-1)\sum_{j=d^{\perp}}^{n}\sum_{s=1}^{n}\sum_{t=1}^{i}(-1)^{i-t}q^{t}\binom{n-t}{i-t}\] \[\left(\frac{1}{j}\binom{n-1-j}{t-1}\right)^{1-\delta(l,s)}\left(\frac{1-j}{j}\binom{n-1-j}{t-1}-(q-1)^{-1}\binom{n-j}{t-1}\right)^{\delta(l,s)}W_{j}^{s}(\mathscr{C}^{\perp}),\] where \(\delta(l,s)=1\) if \(l=s\) and \(0\) otherwise. Proof.: Using Corollary 3.10 for \(S=\{l\}\) we immediately get \[W_{i}^{l}(\mathscr{C})=\binom{n-1}{i-1}q^{k-n}(q-1)^{i}+q^{k-n-1}(q-1)\sum_{t=1}^{n}\sum_{j=d^{\perp}}^{n}\sum_{\begin{subarray}{c}T\subseteq[n]\\ |T|=1\end{subarray}}\sum_{D\subseteq\{l\}\cap T}\sum_{B\subseteq D}(-1)^{i-t+|D|}q^{t}\\ (q-1)^{-|B|}\binom{n-t}{i-t}\binom{n-1-j+|B|}{t-1}\binom{j-|D|}{1-|D|}^{-1}W_{j}^{T}(\mathscr{C}^{\perp}).\] We now compute \[\sum_{D\subseteq\{l\}\cap T}\sum_{B\subseteq D}(-1)^{|D|}(q-1)^{-|B|}\binom{n-1-j+|B|}{t-1}\binom{j-|D|}{1-|D|}^{-1}, \tag{4.1}\] for both \(T=\{l\}\) and \(T=\{s\}\) where \(s\neq l\). In the former case, (4.1) is equal to \[\frac{1}{j}\binom{n-1-j}{t-1}-\left[(q-1)^{-1}\binom{n-j}{t-1}+\binom{n-1-j}{t-1}\right].\] In the case \(T=\{s\}\) where \(s\neq l\), (4.1) is equal to \(\frac{1}{j}\binom{n-1-j}{t-1}\). The result then follows. **Remark 4.6**.: There exist other methods to derive the MacWilliams-type identity of Corollary 4.5. A first approach was established in [10], where the parameter \(W_{i}^{l}(\mathscr{C})\) is rewritten in terms of the \(W_{j}(\mathscr{C})\)'s and \(W_{j}(\mathscr{C}([n]\setminus\{l\}))\)'s for \(1\leq j,l\leq n\). The classical MacWilliams identity is then applied to the latter terms. In addition, it was brought to our attention by M.
Grassl that our refined weight distribution seems related to the notion of _split weight enumerator_; see [20] for more details. We were able to show that this is in fact the case and are able to derive Corollary 4.5 also as a corollary of [20, Proposition 1]. The last result we need in order to apply a linear program is the following special case of Lemma 3.3, for \(A=\emptyset\) and \(|S|=1\). **Lemma 4.7**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be non-degenerate. For all \(1\leq i\leq n\) we have \[W_{i}(\mathscr{C})=\sum_{j=1}^{n}W_{i}^{j}(\mathscr{C})/i.\] For ease of exposition, we introduce the following notation. **Notation 4.8**.: Let \(A=\{a_{il}\ |\ 1\leq i\leq n+1,\,1\leq l\leq n\}\subseteq\mathbb{R}_{\geq 0}\). For \(1\leq i,l\leq n\) we denote by \(a_{il}^{\perp}\) the following linear combination of elements of \(A\): \[a_{il}^{\perp}=\binom{n-1}{i-1}(q-1)^{i}+q^{-1}(q-1)\sum_{j=d^{\perp}}^{n}\sum_{s=1}^{n}\sum_{t=1}^{i}(-1)^{i-t}q^{t}\binom{n-t}{i-t}\cdot\\ \left(\frac{1}{j}\binom{n-1-j}{t-1}\right)^{1-\delta(l,s)}\left(\frac{1-j}{j}\binom{n-1-j}{t-1}-(q-1)^{-1}\binom{n-j}{t-1}\right)^{\delta(l,s)}a_{js}.\] The following result gives a linear program that establishes bounds for codes with \((r,\delta)\)-locality. **Theorem 4.9** (LP bound for \((r,\delta)\)-LRCs).: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a non-degenerate code of minimum distance at least \(d\), dimension \(k\), and \((r,\delta)\)-locality. Let \(\mu^{*}\) denote the minimum value of \[\sum_{i=1}^{n}\left(\sum_{j=1}^{n}a_{ij}/i\right),\] where \(a_{ij}\in\mathbb{R}\), for \(1\leq i\leq n+1\) and \(1\leq j\leq n\), satisfy the following constraints: 1. \(a_{ij}\geq 0\) for \(1\leq i,j\leq n\), 2. \(a_{ij}^{\perp}\geq 0\) for \(1\leq i,j\leq n\), 3. \(a_{ij}^{\perp}=0\) for \(1\leq i\leq d-1\) and \(1\leq j\leq n\), 4. \(\sum_{i=1}^{r+\delta-1}a_{ij}\geq q^{\delta-1}-q^{\delta-2}\) for \(1\leq j\leq n\), 5. \(a_{1j}=0\) for \(1\leq j\leq n\). Then \[k\leq n-\lceil\log_{q}(1+\mu^{*})\rceil.\] Proof.: We claim that for any non-degenerate linear code \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) of minimum distance \(d\) and \((r,\delta)\)-locality, the assignment \(a_{ij}=W_{i}^{j}(\mathscr{C}^{\perp})\) is a feasible solution of the linear program. Indeed, (i) is trivially satisfied, and (ii) holds since by Corollary 4.5 we have \(a_{ij}^{\perp}=q^{n-k}W_{i}^{j}(\mathscr{C})\geq 0\). Constraint (iii) guarantees that the minimum distance of the code is at least \(d\), and constraint (iv) needs to be fulfilled in order for the code to be an \((r,\delta)\)-LRC, by Proposition 4.2. Finally, constraint (v) makes sure that \(\mathscr{C}\) is non-degenerate. By Lemma 4.7, on this feasible solution the objective function evaluates to \(\sum_{i=1}^{n}W_{i}(\mathscr{C}^{\perp})=|\mathscr{C}^{\perp}|-1=q^{n-k}-1\). Therefore \(q^{n-k}-1\geq\mu^{*}\), and the bound on \(k\) follows. We conclude this section with some results obtained with the LP bound of Theorem 4.9 in Tables 1-5. For fixed parameters \((q,n,d,r,\delta)\), we give an upper bound on the dimension \(k\) of an \((r,\delta)\)-LRC in \(\mathbb{F}_{q}^{n}\) and minimum distance at least \(d\). The computations were performed using SageMath; a sketch of the setup is given below. In the case where \(\delta=2\) (i.e. Tables 1-4) we compare our results to (2.1) (**gen. Singl**) and (2.2) (**SH**). The latter is computed in two different ways: first, the value \(k_{\mathrm{opt}}^{(q)}\) is estimated using a classical LP bound (**SH with LP**); second, the exact value of \(k_{\mathrm{opt}}^{(q)}\) is computed (**SH exact**), since it can be determined for the parameters considered in the tables by looking at databases of codes.
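For reproducibility, the following is a minimal sketch of how the linear program of Theorem 4.9 can be assembled. It is illustrative only: it uses Python with scipy rather than SageMath, the helper names are ours, the unused variables \(a_{n+1,j}\) are dropped, and we do not claim it reproduces Tables 1-5 digit-for-digit. Since \(d^{\perp}\) is not known inside the LP, the inner sum of Notation 4.8 is taken over all \(1\leq j\leq n\); this is consistent on feasible points coming from actual codes, because there \(a_{js}=W_{j}^{s}(\mathscr{C}^{\perp})\) vanishes below the dual distance.

```python
# Sketch of the LP bound of Theorem 4.9; variable a[i][j] models W_i^j(C^perp),
# 1 <= i, j <= n, flattened row-major.  Helper names are ours.
from math import comb, log, ceil
from scipy.optimize import linprog

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def dual_row(n, q, i, l):
    """Constant term and linear coefficients of a_perp_{il} (Notation 4.8).
    The inner sum over j runs over all 1 <= j <= n, since the dual distance
    is unknown inside the LP (see the remark in the text above)."""
    const = binom(n - 1, i - 1) * (q - 1) ** i
    coef = {}
    for j in range(1, n + 1):
        for s in range(1, n + 1):
            c = 0.0
            for t in range(1, i + 1):
                base = (-1) ** (i - t) * q ** t * binom(n - t, i - t)
                if s != l:
                    c += base * binom(n - 1 - j, t - 1) / j
                else:
                    c += base * ((1 - j) / j * binom(n - 1 - j, t - 1)
                                 - binom(n - j, t - 1) / (q - 1))
            coef[(j, s)] = (q - 1) / q * c
    return const, coef

def lp_bound_lrc(q, n, d, r, delta):
    idx = lambda i, j: (i - 1) * n + (j - 1)
    nv = n * n
    c = [1.0 / i for i in range(1, n + 1) for _ in range(n)]   # objective
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i in range(1, n + 1):
        for l in range(1, n + 1):
            const, coef = dual_row(n, q, i, l)
            row = [0.0] * nv
            for (j, s), v in coef.items():
                row[idx(j, s)] = v
            if i <= d - 1:                        # (iii): a_perp_{il} = 0
                A_eq.append(row); b_eq.append(-const)
            else:                                 # (ii): a_perp_{il} >= 0
                A_ub.append([-v for v in row]); b_ub.append(const)
    for j in range(1, n + 1):                     # (iv): locality constraint
        row = [0.0] * nv
        for i in range(1, min(r + delta - 1, n) + 1):
            row[idx(i, j)] = -1.0
        A_ub.append(row); b_ub.append(-(q ** (delta - 1) - q ** (delta - 2)))
    bounds = [(0, 0) if i == 1 else (0, None)     # (i) and (v)
              for i in range(1, n + 1) for _ in range(n)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq or None,
                  b_eq=b_eq or None, bounds=bounds, method="highs")
    return n - ceil(log(1 + res.fun) / log(q))

print(lp_bound_lrc(q=5, n=9, d=3, r=3, delta=2))  # parameters of a row of Table 4
```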
The fairest comparison between our result and (2.2) is given by SH with LP, since the value of SH exact is only known for very small parameters. Our results show that Theorem 4.9 is often tighter than (2.2) (this is indicated in red on the tables). A similar comparison is done in Table 5, but using (2.3) and (2.4) instead. **Remark 4.10**.: As Table 5 illustrates, when \(\delta>2\) our LP bound does not seem to beat the shortening bound (both the exact and the LP one) as frequently as for \(\delta=2\). A possible explanation for why this happens is that, for \(\delta>2\), the \((r,\delta)\)-LRCs are not fully characterized by the constraint (iv) in Theorem 4.9. Hence a code that achieves the minimum of the objective function may not be the dual code of an \((r,\delta)\)-LRC.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(n\) & \(d\) & \(r\) & LP & SH with LP & SH exact & gen. Singl. \\ \hline \hline 9 & 3 & 3 & \(k\leq 5\) & \(k\leq 6\) & \(k\leq 6\) & \(k\leq 6\) \\ \hline 11 & 3 & 4 & \(k\leq 7\) & \(k\leq 8\) & \(k\leq 8\) & \(k\leq 8\) \\ \hline 14 & 2 & 6 & \(k\leq 11\) & \(k\leq 12\) & \(k\leq 12\) & \(k\leq 12\) \\ \hline 16 & 2 & 7 & \(k\leq 13\) & \(k\leq 14\) & \(k\leq 14\) & \(k\leq 14\) \\ \hline 18 & 2 & 8 & \(k\leq 15\) & \(k\leq 16\) & \(k\leq 16\) & \(k\leq 16\) \\ \hline 22 & 2 & 10 & \(k\leq 19\) & \(k\leq 20\) & \(k\leq 20\) & \(k\leq 20\) \\ \hline 24 & 2 & 11 & \(k\leq 21\) & \(k\leq 22\) & \(k\leq 22\) & \(k\leq 22\) \\ \hline \end{tabular} \end{table} Table 4: \(q=5\), \(\delta=2\)

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(n\) & \(d\) & \(r\) & LP & SH with LP & SH exact & gen. Singl \\ \hline 15 & 7 & 5 & \(k\leq 5\) & \(k\leq 5\) & \(k\leq 5\) & \(k\leq 7\) \\ \hline 16 & 5 & 5 & \(k\leq 8\) & \(k\leq 7\) & \(k\leq 7\) & \(k\leq 10\) \\ \hline 16 & 8 & 5 & \(k\leq 4\) & \(k\leq 5\) & \(k\leq 5\) & \(k\leq 7\) \\ \hline 17 & 9 & 5 & \(k\leq 3\) & \(k\leq 3\) & \(k\leq 3\) & \(k\leq 7\) \\ \hline 17 & 10 & 5 & \(k\leq 2\) & \(k\leq 2\) & \(k\leq 2\) & \(k\leq 6\) \\ \hline 18 & 7 & 5 & \(k\leq 7\) & \(k\leq 7\) & \(k\leq 7\) & \(k\leq 10\) \\ \hline 19 & 8 & 5 & \(k\leq 7\) & \(k\leq 7\) & \(k\leq 7\) & \(k\leq 10\) \\ \hline 20 & 5 & 7 & \(k\leq 13\) & \(k\leq 11\) & \(k\leq 11\) & \(k\leq 14\) \\ \hline \end{tabular} \end{table} Table 5: \(q=2\), \(\delta=3\)

## 5 The Role of the Field Size

In this section we establish various bounds for LRCs that involve the underlying field size \(q\). The goal is to understand the role played by this parameter for the code's locality. We first use the refined weight distribution introduced in Section 3 to derive bounds connecting the dual distance with the locality and the field size. We then apply the theory of generalized weights to derive bounds involving the length, dimension, minimum distance, locality, and field size. To derive our first bound for LRCs, recall from Definition 3.4 that \[\mathscr{C}(S,[n])=\{x\in\mathscr{C}:S\subseteq\sigma(x)\}.\] The following result holds for any linear code and does not depend on the locality parameter. **Proposition 5.1**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code of dimension \(k\) and minimum distance \(d\). Let \(S\subseteq[n]\) with \(|S|\leq k-1\). If \(|\mathscr{C}(S,[n])|>0\), then there exists \(x\in\mathscr{C}(S,[n])\) such that \[\omega^{\mathrm{H}}(x)\leq n-k+|S|-(d-q)/q. \tag{5.1}\] Proof.: Let \(s=|S|\) and let \(x\in\mathscr{C}(S,[n])\) be of minimum weight, with \(x\neq 0\). Let \(t=\omega^{\mathrm{H}}(x)\). Without loss of generality, we may assume \(S=\{1,\ldots,s\}\), \(T=\{1,\ldots,t\}\), and \(x=\sum_{i=1}^{t}e_{i}\), where \(e_{i}\) is the \(i\)-th standard basis element. Since \(s\leq k-1\) we have \(t\leq n-k+s\). Furthermore, since \(\dim(\mathscr{C})=k\), there must be an element \(y\in\mathscr{C}\) such that \(y_{i}=0\) for all \(i\in S\cup\{n-k+s+2,\ldots,n\}\). Then \(\omega^{\mathrm{H}}(\pi_{T}(y))\geq d-(n-t)+k-s-1\). Moreover, there exists \(\alpha\in\mathbb{F}_{q}^{*}\) such that \[\left|\{j\,:\,(\pi_{T}(y))_{j}=\alpha\}\right|\geq(d-n+t+k-s-1)/(q-1).\] Let \(x^{\prime}=\alpha x-y\). Then \(x^{\prime}_{i}\neq 0\) for all \(i\in S\) and \[\omega^{\mathrm{H}}(x^{\prime})\leq t-\frac{d-n+t+k-s-1}{q-1}+(n-t)-(k-s-1)\] \[=n-k+s+1-\frac{d-n+t+k-s-1}{q-1}.\] By choice of \(t\) we therefore have \[t\leq n-k+s+1-\frac{d-n+t+k-s-1}{q-1},\] which is equivalent to \(t\leq n-k+s-(d-q)/q\). By applying the previous proposition to a set \(S\) of cardinality \(1\), we obtain the following corollary, which was presented also in [10]. **Corollary 5.2**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code of dimension \(k\) and minimum distance \(d\). Then \[d\leq n-k+1-(d-q)/q.\] Note that the above result improves on the Singleton bound for \(d>q\). Furthermore, it shows that for linear MDS codes one must have \(d\leq q\). In the context of Corollary 5.2, one may wonder if the bound of Proposition 5.1, when applied to a set \(S\) with \(2\leq|S|\leq k-1\), can lead to an upper bound for the \(|S|\)-th generalized weight (see Definition 5.5). In other words: Is it true that \(d_{|S|}\leq n-k+|S|-(d-q)/q\)? Although there exist examples of codes for which the inequality holds, this is in general not the case, as illustrated by the following example. **Example 5.3**.: Let \(\mathscr{C}\leq\mathbb{F}_{2}^{7}\) be the simplex code of dimension \(3\), i.e., the code generated by \[G=\begin{pmatrix}1&0&0&1&0&1&1\\ 0&1&0&1&1&0&1\\ 0&0&1&0&1&1&1\end{pmatrix}.\] It is well known that \(\omega^{\mathrm{H}}(x)=4\) for all non-zero \(x\in\mathscr{C}\). Therefore for all \(S\subseteq[7]\) such that \(|S|=2\), the smallest weight of a codeword \(x\in\mathscr{C}(S,[n])\) is \(4\). One can easily show that \(d_{2}(\mathscr{C})=6\), which is greater than the right hand side of (5.1) when the parameters are those of the simplex code. By applying Proposition 5.1 to the dual code \(\mathscr{C}^{\perp}\) of an LRC \(\mathscr{C}\), one can easily derive the following bound. **Corollary 5.4**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code with dimension \(k\geq 2\), minimum distance \(d\), and locality \(r\). Let \(d^{\perp}=d(\mathscr{C}^{\perp})\).
We have \[d^{\perp}-1+\frac{d^{\perp}-q}{q}\leq n-(d-2)-\left\lceil\frac{k}{r}\right\rceil.\] Proof.: Note that if a code \(\mathscr{C}\) has locality \(r\), then by Lemma 4.4 there exists a codeword \(x\in\mathscr{C}^{\perp}\) with \(\omega^{\mathrm{H}}(x)\leq r+1\). In particular, \(d^{\perp}\leq r+1\). Applying Corollary 5.2 we obtain \[d^{\perp}\leq k+1-(d^{\perp}-q)/q,\] which, combined with Theorem 2.6, yields \[d^{\perp}\leq n-(d-2)-\left\lceil\frac{k}{r}\right\rceil+1-(d^{\perp}-q)/q.\] This concludes the proof. We now use the _generalized weights_ of a code to derive new bounds which we later apply to LRCs. We start by recalling the needed definitions. **Definition 5.5**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code. The \(i\)**-th generalized weight** of \(\mathscr{C}\) is \[d_{i}(\mathscr{C}):=\min\{|\sigma(\mathscr{D})|\,:\,\mathscr{D}\leq\mathscr{C}\text{ and }\dim(\mathscr{D})=i\}\] for \(1\leq i\leq k\), where \(\sigma(\mathscr{D}):=\bigcup_{x\in\mathscr{D}}\sigma(x)\). It is easy to see that \(d_{1}=d\). Moreover, the generalized weights of a code are strictly increasing, i.e., \(d_{1}(\mathscr{C})<d_{2}(\mathscr{C})<\ldots<d_{k}(\mathscr{C})\); see e.g. [24]. The following are well-known bounds involving the generalized weights of codes, established respectively in [24] and [14, Theorem 1]. **Theorem 5.6**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code of dimension \(k\) and let \(d_{1},\ldots,d_{k}\) be its generalized weights. Then: 1. \(d_{i}\leq n-k+i\), 2. \((q^{i}-1)d_{i-1}\leq(q^{i}-q)d_{i}\). Part (i) of Theorem 5.6 is often referred to as the _generalized Singleton-type bound_. The first two generalized weights of a code provide extremely useful information when trying to determine the smallest underlying field size over which some classes of codes exist. This is shown in the next result. It is, to the best of our knowledge, the first time such a bound is derived. **Theorem 5.7**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code of minimum distance \(d\). If \(d_{2}=d+s\), then \[d\leq sq. \tag{5.2}\] Proof.: By Theorem 5.6 (ii) we know that \((q^{2}-1)d\leq(q^{2}-q)d_{2}=(q^{2}-q)(d+s)\). Rewriting this inequality gives us \(d\leq sq\), as desired. As a simple consequence of the previous theorem, we get the following bound. **Corollary 5.8**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) have dimension \(k\) and minimum distance \(d\). Then \[d\leq\frac{q}{q+1}(n-k+2). \tag{5.3}\] Proof.: Let \(d_{2}\) be the second generalized weight of \(\mathscr{C}\). Then by Theorem 5.6 (i) we have \(d_{2}-d\leq n-k+2-d\). Moreover, by applying Theorem 5.7 we get \[d\leq q(n-k+2-d).\] The inequality above can then be rewritten as \(d\leq\frac{q}{q+1}(n-k+2)\), which proves the desired result. In the remaining part of this section, we study the generalized weights of an LRC. While these parameters were already considered in [12], we propose a new approach based on a new class of code parameters. The latter turn out to be closely related to the locality, and allow us to provide concise proofs of known results and to extend them as well. **Notation 5.9**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be of dimension \(k\), and let \(\mathscr{C}^{\perp}\) be its dual code, of dimension \(k^{\perp}=n-k\) and with \(i\)-th generalized weight \(d_{i}^{\perp}\).
Define \[\mu_{i}(\mathscr{C}):=\min\{t\,:\,d_{t}^{\perp}\geq n-k^{\perp}-(i-1)+t\}.\] The parameter \(\mu_{i}(\mathscr{C})\) captures the smallest dimension \(t\) such that all subcodes of \(\mathscr{C}^{\perp}\) of dimension \(t\) have support of size at least \(n-k^{\perp}-(i-1)+t\). In other words, \(\mu_{i}(\mathscr{C})\) is the smallest dimension \(t\) for which the \(t\)-th generalized weight of the dual code has defect at most \(i-1\) from the generalized Singleton-type bound. **Remark 5.10**.: The parameters \(\mu_{i}\) can be seen as a generalization of the parameter \(\mu\) introduced in [23]. In the latter, the parameter \(\mu\) is defined in matroid-theoretic terminology, but it gives Notation 5.9 when translated into coding theory language. Furthermore, the parameters \(\mu_{i}\) can also be defined for matroids; however, for readability purposes we restrict to defining those parameters in coding theory terminology. Before presenting the next result, we recall the following well-known facts; see e.g. [15]. **Lemma 5.11**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) have dimension \(k\) and let \(S\subseteq[n]\). Then 1. \(\dim(\mathscr{C}(S))+\dim(\pi_{S^{c}}(\mathscr{C}))=k\), 2. \(\pi_{S}(\mathscr{C})^{\perp}=\pi_{S}(\mathscr{C}^{\perp}(S))\). As the next result shows, the parameters \(\mu_{i}\) are closely related to the generalized weights of the code \(\mathscr{C}\). The following proof is inspired by the work of [23], in which similar methods were used to prove the same result for \(i=1\). **Theorem 5.12**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code of dimension \(k\) and let \(d_{i}(\mathscr{C})\), for \(1\leq i\leq k\), be its generalized weights. Then \[d_{i}(\mathscr{C})=n-k-\mu_{i}(\mathscr{C})+i+1.\] Proof.: For notational convenience, throughout the proof we let \(\mu_{i}=\mu_{i}(\mathscr{C})\), \(d_{i}=d_{i}(\mathscr{C})\), and \(d_{i}^{\perp}=d_{i}(\mathscr{C}^{\perp})\). For ease of exposition, we divide the proof into two claims. **Claim B**.: We have \(\mu_{i}\geq n-k-d_{i}+i+1\). Proof of the claim.: We show that \(d_{k^{\perp}-d_{i}+i}^{\perp}<n-d_{i}+1\) by constructing a subspace of \(\mathscr{C}^{\perp}\) that has the desired dimension and support. Let \(\mathscr{D}\leq\mathscr{C}\) and \(S:=\sigma(\mathscr{D})\) with the property that \(\dim(\mathscr{D})=i\) and \(|S|=d_{i}\). Since \(\mathscr{D}\) achieves the generalized weight, it must be that \(\mathscr{D}=\mathscr{C}(S)\), and by Lemma 5.11(1) we know that \(\dim(\pi_{S^{c}}(\mathscr{C}))=k-i\). Furthermore, using Lemma 5.11(2) we obtain \(\dim(\mathscr{C}^{\perp}(S^{c}))=\dim(\pi_{S^{c}}(\mathscr{C})^{\perp})=n-d_{i}-k+i\). Define \(\mathscr{D}^{\prime}=\mathscr{C}^{\perp}(S^{c})\) and note that \(|\sigma(\mathscr{D}^{\prime})|\leq n-d_{i}<n-d_{i}+1\). Therefore we have \(d_{k^{\perp}-d_{i}+i}^{\perp}<n-d_{i}+1\), which establishes the statement. **Claim C**.: We have \(\mu_{i}\leq n-k-d_{i}+i+1\). Proof of the claim.: First assume \(\mu_{i}=1\). From Theorem 5.6(i), we have \(d_{i}\leq n-k+i\) and therefore \(\mu_{i}=1\leq n-k-d_{i}+i+1\), as desired. Now assume \(\mu_{i}\geq 2\). Let \(\mathscr{D}\leq\mathscr{C}^{\perp}\) such that \(\dim(\mathscr{D})=\mu_{i}-1\) and \(\sigma(\mathscr{D})=:S\) satisfies \(|S|=d_{\mu_{i}-1}^{\perp}\leq k-i+\mu_{i}-1\). Such a subspace exists by definition of the parameter \(\mu_{i}\).
Furthermore, since \(\dim(\mathscr{D})=\mu_{i}-1\), \(\sigma(\mathscr{D})=S\) and \(d_{\mu_{i}-1}^{\perp}=|S|\), we must have \(\mathscr{D}=\mathscr{C}^{\perp}(S)\). Therefore using Lemma 5.11(2) we obtain: \[\dim(\pi_{S}(\mathscr{C}))=\dim(\pi_{S}(\mathscr{C}^{\perp}(S))^{\perp})=|S|-\dim(\mathscr{C}^{\perp}(S))\leq k-i+\mu_{i}-1-(\mu_{i}-1)=k-i.\] The above also gives that \(|S|=\dim(\pi_{S}(\mathscr{C}))+\dim(\mathscr{D})=\dim(\pi_{S}(\mathscr{C}))+\mu_{i}-1\), which will be used later on. Using Lemma 5.11(1), we moreover get \(s:=\dim(\mathscr{C}(S^{c}))=k-\dim(\pi_{S}(\mathscr{C}))\geq i\). Let \(G\in\mathbb{F}_{q}^{s\times n}\) be a generator matrix of \(\mathscr{C}(S^{c})\) in reduced row-echelon form (RREF). Let \(\mathscr{W}=\langle g_{1},\dots,g_{i}\rangle\), where \(g_{l}\) denotes the \(l\)-th row of \(G\). Clearly, \(\dim(\mathscr{W})=i\), \(D:=\sigma(\mathscr{W})\subseteq S^{c}\), and since \(G\) is in RREF, \(\mathscr{C}(D)=\mathscr{W}\). This shows that \(d_{i}\leq|D|\). Now consider \(A\subseteq S^{c}\) such that \(A\cup D=S^{c}\) and \(A\cap D=\emptyset\). Note that \(D^{c}=S\cup A\). By Lemma 5.11(1), we have \(\dim(\pi_{A\cup S}(\mathscr{C}))=k-\dim(\mathscr{C}(D))=k-i\). This leads to the following chain of inequalities: \[k-i=\dim(\pi_{A\cup S}(\mathscr{C}))\leq\dim(\pi_{S}(\mathscr{C}))+\dim(\pi_{A}(\mathscr{C}))\leq\dim(\pi_{S}(\mathscr{C}))+|A|.\] Finally, we use the facts that \(d_{i}\leq|D|\) and \(|S|=\dim(\pi_{S}(\mathscr{C}))+\mu_{i}-1\) to get the following: \[n-d_{i}\geq n-|D|=|S\cup A|=|S|+|A|\geq\dim(\pi_{S}(\mathscr{C}))+\mu_{i}-1+k-i-\dim(\pi_{S}(\mathscr{C}))=\mu_{i}+k-i-1.\] This shows that \(\mu_{i}\leq n-d_{i}-k+i+1\), as desired. Combining Claim B with Claim C concludes the proof. Our next move is to show how the dimension and locality of a code directly impact the parameters \(\mu_{i}\). The special case of \(\mu_{1}\) was already shown in [23]. **Lemma 5.13**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an LRC of dimension \(k\), locality \(r\), generalized weights \(d_{i}\), and \(\mu_{i}\) as in Notation 5.9. Then for all \(1\leq i\leq k\) we have \[\mu_{i}\geq\left\lceil\frac{k-(i-1)}{r}\right\rceil.\] Proof.: Since \(\mathscr{C}\) has locality \(r\), there exist at least \(\lceil k/r\rceil-1\) linearly independent codewords \(x_{l}\in\mathscr{C}^{\perp}\), for \(1\leq l\leq\lceil k/r\rceil-1\), of weight at most \(r+1\). Let \(s:=\lceil(k-(i-1))/r\rceil-1\leq\lceil k/r\rceil-1\). Consider \(\mathscr{D}=\langle x_{1},\dots,x_{s}\rangle\). Then \[|\sigma(\mathscr{D})|\leq(r+1)\left(\left\lceil\frac{k-i+1}{r}\right\rceil-1\right)\leq k-i+1+r-r+\left\lceil\frac{k-i+1}{r}\right\rceil-1<k-(i-1)+\left\lceil\frac{k-i+1}{r}\right\rceil.\] Hence \(\mu_{i}>\lceil(k-(i-1))/r\rceil-1\), proving the lemma. As a simple consequence we get the following characterization of optimal LRCs and a bound on the generalized weights of an LRC. **Corollary 5.14**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be a code with dimension \(k\), minimum distance \(d\), and locality \(r\). Furthermore, let \(\mathscr{C}^{\perp}\) be its dual code with generalized weights \(\{d_{1}^{\perp},\dots,d_{n-k}^{\perp}\}\). The code \(\mathscr{C}\) is an optimal LRC if and only if \(d_{\lceil k/r\rceil}^{\perp}=n-k^{\perp}+\lceil k/r\rceil\). Proof.: By Theorem 5.12, a code \(\mathscr{C}\) is an optimal LRC if and only if \(\mu_{1}=\lceil k/r\rceil\).
By definition, the latter is true if and only if \(d_{\lceil k/r\rceil}^{\perp}\geq n-k^{\perp}+\lceil k/r\rceil\). However, by Theorem 5.6(i) we also know that \(d_{\lceil k/r\rceil}^{\perp}\leq n-k^{\perp}+\lceil k/r\rceil\). Hence equality must hold. **Corollary 5.15**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an LRC of dimension \(k\), locality \(r\), and let \(d_{i}\) denote its generalized weights, where \(1\leq i\leq k\). Then \[d_{i}\leq n-k+i-\left(\left\lceil\frac{k-(i-1)}{r}\right\rceil-1\right).\] Proof.: Combining Theorem 5.12 and Lemma 5.13 we obtain \[d_{i}=n-k-\mu_{i}+i+1\leq n-k-\left\lceil\frac{k-(i-1)}{r}\right\rceil+i+1.\qed\] **Remark 5.16**.: Corollary 5.15 is a special case of the bound established in [12, Theorem 1] for \((r,\delta)\)-LRCs. However, the result therein is proved using the gap numbers of a code, which differs from our approach. Using the generalized Singleton-type bound (see Theorem 5.6 (i)), it is possible to determine, or obtain a bound on, the second generalized weight of an optimal LRC. Note that in [12, Theorem 5], the authors establish the full generalized weight hierarchy of optimal LRCs under the assumption that \(r|k\). The proof of [12, Theorem 5] is based on the following result, which we prove for completeness using the theory developed in this paper. **Proposition 5.17**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an optimal LRC of dimension \(k\), locality \(r\), and let \(d_{i}\) denote its \(i\)-th generalized weight, \(1\leq i\leq k\). * If \(k\not\equiv 1\pmod{r}\), then \(d_{2}=d_{1}+1\). * If \(k\equiv 1\pmod{r}\), then \(d_{2}\leq d_{1}+2\). Proof.: Assume \(k\not\equiv 1\pmod{r}\). First note that \(\lceil k/r\rceil=\lceil(k-1)/r\rceil\). Since \(\mathscr{C}\) is optimal, we have \(d=n-k-\lceil k/r\rceil+2\). Using Corollary 5.15 we get the following bound on \(d_{2}\): \[d_{2}\leq n-k+2-\lceil(k-1)/r\rceil+1\leq n-k-\lceil k/r\rceil+3=d_{1}+1.\] However, since \(d_{1}<d_{2}\) it must be that \(d_{2}=d_{1}+1\). Now assume \(k\equiv 1\pmod{r}\). Then \(\lceil(k-1)/r\rceil=\lceil k/r\rceil-1\). Using Corollary 5.15 in a similar way as above, we get \(d_{2}\leq d_{1}+2\). Combining Proposition 5.17 with Theorem 5.7, we get as an immediate corollary an upper bound on the minimum distance of optimal LRCs. A similar result was shown in [13, Theorem 2] by considering a suitable construction. We, on the other hand, show that the following bound can be seen as a direct consequence of the change in generalized weights of an optimal LRC. **Corollary 5.18**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an optimal LRC of dimension \(k\), minimum distance \(d\), and locality \(r\). * If \(k\not\equiv 1\pmod{r}\), then \(d\leq q\). * If \(k\equiv 1\pmod{r}\), then \(d\leq 2q\). **Remark 5.19**.: For \(q=2\) there exist codes for which the bound in Corollary 5.18 for \(k\equiv 1\pmod{r}\) is attained. For example, the simplex code of Example 5.3 is an optimal LRC where \(d=4=2q\). We can now derive a Singleton-type bound for LRCs that depends on the field size. **Proposition 5.20**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an LRC of dimension \(k\) and locality \(r\). Then \[d\leq\frac{q}{q+1}\left(n-k-\left\lceil\frac{k-1}{r}\right\rceil+3\right). \tag{5.4}\] Proof.: By Corollary 5.15 we have \(d_{2}\leq n-k-\left\lceil\frac{k-1}{r}\right\rceil+3\).
Hence \(d_{2}-d\leq n-k-\left\lceil\frac{k-1}{r}\right\rceil+3-d\) and, applying Theorem 5.7, we get that \[d\leq q\left(n-k-\left\lceil\frac{k-1}{r}\right\rceil+3-d\right).\] By rewriting the inequality above we get the desired result. The previous result leads to interesting restrictions on the parameters of optimal LRCs, including the field size \(q\). **Corollary 5.21**.: Let \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) be an optimal LRC with dimension \(k\) and locality \(r\). * If \(k\leq q\) and \(k\not\equiv 1\pmod{r}\), then \(n\leq 3q\), * If \(k\leq q\) and \(k\equiv 1\pmod{r}\), then \(n\leq 4q+1\). Proof.: By rewriting Proposition 5.20 we have \[d\leq n-k-\left\lceil\frac{k-1}{r}\right\rceil+3-\frac{n-k-\lceil(k-1)/r\rceil+3}{q+1}.\] First we assume \(k\not\equiv 1\pmod{r}\). By applying the assumptions on \(k\) we then obtain: \[d\leq n-k-\left\lceil\frac{k}{r}\right\rceil+3-\frac{n-q-\lceil(q-1)/r\rceil+3}{q+1}=n-k-\left\lceil\frac{k}{r}\right\rceil+3-\frac{n+3}{q+1}+\frac{q+\lceil(q-1)/r\rceil}{q+1}\leq n-k-\left\lceil\frac{k}{r}\right\rceil+2-\frac{n+3}{q+1}+3.\] Note that if \(3-(n+3)/(q+1)<0\) then we get a sharper bound than (2.1). Hence if \(\mathscr{C}\) is an optimal LRC we must have \(3-(n+3)/(q+1)\geq 0\), which is true if and only if \(n\leq 3q\). If \(k\equiv 1\pmod{r}\) we apply a similar argument, with the only difference that \(\lceil(k-1)/r\rceil=\lceil k/r\rceil-1\). We then get that \(n\leq 4q+1\). **Remark 5.22**.: The previous result shows that an optimal LRC \(\mathscr{C}\leq\mathbb{F}_{q}^{n}\) cannot exist when \(k\leq q<(n-1)/4\).
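As a small computational sanity check of Example 5.3 and Remark 5.19, the following brute-force enumeration (plain Python; our own verification script, not part of the original SageMath computations) confirms that the binary simplex code of dimension \(3\) has constant weight \(4=2q\) and second generalized weight \(d_{2}=6\).

```python
# Brute-force verification for the [7,3,4] binary simplex code of Example 5.3.
from itertools import product

G = [(1, 0, 0, 1, 0, 1, 1),
     (0, 1, 0, 1, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 1)]

def span(rows):
    """All F_2-linear combinations of the given rows (tuples of length 7)."""
    return {tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2
                  for i in range(7))
            for coeffs in product((0, 1), repeat=len(rows))}

code = span(G)
print({sum(v) for v in code if any(v)})       # {4}: constant weight 4 = 2q

# d_2: smallest support size over all 2-dimensional subcodes; two distinct
# non-zero binary vectors are automatically linearly independent.
d2 = min(sum(any(v[i] for v in span([x, y])) for i in range(7))
         for x in code for y in code if any(x) and any(y) and x != y)
print(d2)                                      # 6, as claimed in Example 5.3
```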
2308.00056
Quantum simulation of dissipation for Maxwell equations in dispersive media
In dispersive media, dissipation appears in the Schr\"odinger representation of classical Maxwell equations as a sparse diagonal operator occupying an $r$-dimensional subspace. A first order Suzuki-Trotter approximation for the evolution operator enables us to isolate the non-unitary operators (associated with dissipation) from the unitary operators (associated with lossless media). The unitary operators can be implemented through qubit lattice algorithm (QLA) on $n$ qubits. However, the non-unitary-dissipative part poses a challenge on how it should be implemented on a quantum computer. In this paper, two probabilistic dilation algorithms are considered for handling the dissipative operators. The first algorithm is based on treating the classical dissipation as a linear amplitude damping-type completely positive trace preserving (CPTP) quantum channel where the combined system-environment must undergo unitary evolution in the dilated space. The unspecified environment can be modeled by just one ancillary qubit, resulting in an implementation scaling of $\textit{O}(2^{n-1}n^2)$ elementary gates for the dilated unitary evolution operator. The second algorithm approximates the non-unitary operators by the Linear Combination of Unitaries (LCU). We obtain an optimized representation of the non-unitary part, which requires $\textit{O}(2^{n})$ elementary gates. Applying the LCU method for a simple dielectric medium with homogeneous dissipation rate, the implementation scaling can be further reduced into $\textit{O}[poly(n)]$ basic gates. For the particular case of weak dissipation we show that our proposed post-selective dilation algorithms can efficiently delve into the transient evolution dynamics of dissipative systems by calculating the respective implementation circuit depth. A connection of our results with the non-linear-in-normalization-only (NINO) quantum channels is also presented.
Efstratios Koukoutsis, Kyriakos Hizanidis, Abhay K. Ram, George Vahala
2023-07-31T18:22:40Z
http://arxiv.org/abs/2308.00056v2
# Quantum simulation of dissipation for Maxwell equations in dispersive media ###### Abstract The dissipative character of an electromagnetic medium breaks the unitary evolution structure that is present in lossless media, both dispersive and optical. In dispersive media, dissipation appears in the Schrodinger representation of Maxwell equations as a sparse diagonal operator occupying an \(r\)-dimensional subspace. Trotterization of the evolution enables us to isolate the non-unitary part associated with dissipation from the unitary part that corresponds to the lossless case. Given the fact that the unitary part can be implemented on \(n\) qubits based on the discretization and the dimensionality of the pertinent fields, we propose two dilation techniques for handling the dissipative evolution part. Our first algorithm is based on the realization of dissipation as a quantum amplitude damping channel, where an unspecified environment interacts with the lossless system and produces the non-unitary evolution. Therefore, the combined system-environment is now closed and evolves unitarily. This physical unitary dilation process requires only one ancillary qubit as the environment, resulting in a scaling of \(O(2^{n-1}n^{2})\) elementary gates in the implementation of the total system-environment unitary evolution operator. On the other hand, by treating the non-unitary part with the LCU method, exploiting the diagonal structure of dissipation, we obtain an optimized counterpart of the previous result. The non-unitary part can now be implemented in \(O(2^{n})\) elementary gates. In light of our results, a full-wave quantum simulation of electromagnetic wave propagation and scattering in complex media is deemed possible in the near future.

## I Introduction

Electromagnetic waves are ubiquitous in many real-life applications, from thermonuclear fusion to communications. In most of these applications electromagnetic waves propagate through complex media, giving rise to phenomena that can only be explored through computational techniques, which most of the time are inefficient both in resources and in running time. On the other hand, quantum computing has brought the possibility of overcoming the conventional computational costs [1; 2], motivating the application of quantum information science to traditionally classical fields. Electromagnetic wave propagation and scattering in complex media is a prominent candidate for quantum implementation due to the linear nature of Maxwell equations and the theoretical bedrock that has been established in the description of classical electromagnetism in the language of quantum mechanics [3; 4; 5]. A characteristic example of those quantum computing implementation efforts in electromagnetic wave propagation and scattering from dielectrics is the Qubit Lattice Algorithm (QLA) [6; 7; 8; 9]. Current quantum computers are optimized for unitary operations. Such operators are naturally embedded in the framework of closed quantum systems obeying the Schrodinger equation \[i\frac{\partial\ket{\mathbf{\psi}}}{\partial t}=\hat{H}(\mathbf{r})\ket{\mathbf{\psi}}\quad\text{with}\quad\hat{H}=\hat{H}^{\dagger}, \tag{1}\] and evolve unitarily under the action of the evolution operator \(\hat{\mathcal{U}}=\exp\Bigl{\{}-it\hat{H}\Bigr{\}}\) so that \[\ket{\mathbf{\psi}(t)}=\hat{\mathcal{U}}\ket{\mathbf{\psi}(0)}. \tag{2}\] The classical, linearized, evolution analog of Eq.(1) comprises energy-conserving systems, where unitary evolution corresponds to energy conservation.
This is exactly the case for Maxwell equations in lossless inhomogeneous media with optical response [8] and with dispersion [10; 11]. Then one can harness quantum computing techniques to pursue a computational advantage compared to the classical simulation schemes. Unfortunately, in nature most systems of interest exhibit dissipation (i.e., energy loss), and therefore quantum computing cannot be applied in a straightforward fashion. For these classical systems the Schrodinger-like evolution equation is still valid, but the generator of dynamics is now non-Hermitian, \(\hat{H}\neq\hat{H}^{\dagger}\), and possibly time-dependent. Quantum systems suffering from dissipation and decoherence phenomena due to interaction with an environment are considered open, and evolve through the Kraus representation [12] \[\rho_{S}(t)=\sum_{\mu}\hat{K}_{\mu}\rho_{S}(0)\hat{K}_{\mu}^{\dagger},\quad\sum_{\mu}\hat{K}_{\mu}^{\dagger}\hat{K}_{\mu}=I \tag{3}\] where the so-called Kraus operators are defined by \[\hat{K}_{\mu}=\,\bra{\mu}\hat{\mathcal{U}}_{S+E}\ket{e_{0}}. \tag{4}\] The operator \(\hat{\mathcal{U}}_{S+E}\) in Eq.(4) represents the unitary evolution of the composite system-environment \(\rho_{S}\otimes\rho_{E}\), which is now closed. The density matrix of the stationary environment is \(\rho_{E}=\left|e_{0}\right\rangle\left\langle e_{0}\right|\), and whenever the channel involves \(\mu\geq 2\) Kraus operators, the individual Kraus operators themselves are non-unitary. Representation (3) is completely positive (CP), trace preserving (TP), and always achievable for the dynamical evolution of the density matrix of the open system \(\rho_{S}\)[13]. In recent years a great amount of work has been done to embed the non-unitary evolution of open quantum systems into the unitary framework of quantum computers by resorting to dilation theory [14; 15; 16], where a non-unitary operator is considered as a projection of a unitary operator in an extended Hilbert space. This Hilbert space extension comes at the expense of introducing ancillary qubits to represent the extended state. In the same fashion, Maxwell equations in passive, dissipative and dispersive media can attain a Hermitian Schrodinger structure as in Eq.(1) by extending the Hilbert space of the primary fields \(\mathbf{E},\mathbf{H}\) with auxiliary fields [17; 11; 18] derived through functional analysis techniques. However, the Hermitian Hamiltonian operator \(\hat{H}\) in those mathematically oriented works possesses a complicated structure that prohibits any attempt at quantum computing implementation. For a possible quantum simulation of Maxwell equations in lossy and dispersive media, it is of paramount importance to construct a Schrodinger representation that is compatible with the tools of quantum information science. In this paper we propose probabilistic dilation algorithms for simulation of the Trotterized dissipative dynamics of Maxwell equations in dispersive media. Particularly, in Sec. II.1 we lay out the steps for a quantum representation of Maxwell equations in dispersive media as a Schrodinger equation through the introduction of auxiliary electromagnetic fields, related to the physical fields of polarization and polarization density current. In Sec. II.2 we examine how dissipation enters the Maxwell-Schrodinger equation, being responsible for an anti-Hermitian part in the Hamiltonian. This anti-Hermitian part generates a non-unitary factor in the Trotterized evolution dynamics of Sec. II.3 that has to be integrated into the unitary quantum computing framework. To that end, in Sec.
III we formulate electromagnetic dissipation as an augmented quantum amplitude damping channel [15]. The respective set of Kraus operators forms the basis of the dilation model, with one ancillary qubit as the environmental state. In Secs. III.1, III.2, a quantum circuit for the probabilistic simulation of dissipative dynamics is presented, along with the implementation scaling of the system-environment unitary operator into elementary quantum gates. Considerations about the simulation error \(\varepsilon\) of the total evolution are briefly described in Sec. III.3. Finally, in Sec. IV we reduce the implementation costs of the dilated unitary evolution for the dissipative dynamics using the LCU method.

## II Quantum representation of Maxwell equations in dispersive media

The building blocks of our formulation, as far as the classical electromagnetic theory of waves is concerned, are the source-free Maxwell equations \[\mathbf{\nabla\cdot d}(\mathbf{r},t)=0,\quad i\frac{\partial\mathbf{d}(\mathbf{r},t)}{\partial t}=\hat{M}\mathbf{u}(\mathbf{r},t), \tag{5}\] accompanied by the constitutive relations for a general dispersive medium in the temporal domain \[\mathbf{d}(\mathbf{r},t)=\hat{W}(\mathbf{r})\mathbf{u}(\mathbf{r},t)+\int_{0}^{t}\hat{G}(\mathbf{r},t-\tau)\mathbf{u}(\mathbf{r},\tau)\,d\tau. \tag{6}\] In Eqs.(5),(6) we employed the six-vector formulation of electromagnetic fields \(\mathbf{u}=(\mathbf{E},\mathbf{H})^{T}\) and their respective intensities \(\mathbf{d}=(\mathbf{D},\mathbf{B})^{T}\)[8]. The Maxwell operator \(\hat{M}\), \[\hat{M}=i\begin{bmatrix}0&\mathbf{\nabla\times}\\ -\mathbf{\nabla\times}&0\end{bmatrix} \tag{7}\] is Hermitian in \(L^{2}(\Omega\subseteq\mathbb{R}^{3},\mathbb{C})\) under the Dirichlet boundary condition \[\widehat{\mathbf{n}}(\mathbf{r})\times\mathbf{u}_{1}=0\text{ on the boundary }\partial\Omega, \tag{8}\] where \(\mathbf{u}_{1}=\mathbf{E}\) denotes the electric part of the six-vector \(\mathbf{u}\). The form of the constitutive relation (6) satisfies five physical postulates [19]: determinism, linearity, causality, locality in space, and invariance under time translations. The instantaneous, optical response of the medium is given by the \(6\times 6\) matrix \(\hat{W}=diag(\epsilon(\mathbf{r}),\mu(\mathbf{r}))\), whereas the susceptibility kernel \(\hat{G}\) represents the dispersive response, accommodating dissipation and memory effects. For simplicity we will assume that the high frequency limit of the medium's response is \(\hat{W}=\hat{W}_{0}=diag(\epsilon_{0},\mu_{0})\). For a discussion on the inhomogeneity in the optical response and the explicit unitary evolution of Maxwell equations we direct the reader to [8]. In addition, without loss of generality we assume scalar dissipative and dispersive media. Under those considerations, the general constitutive relation in the frequency domain for a lossy and dispersive medium reads [11] \[\epsilon(\mathbf{r},\omega)=\epsilon_{0}\Big{(}1+\sum_{l=1}^{N_{e}}\frac{\Omega_{e,l}^{2}(\mathbf{r})}{\omega_{e,l}^{2}(\mathbf{r})-2i\gamma_{e,l}(\mathbf{r})\omega-\omega^{2}}\Big{)}, \tag{9}\] \[\mu(\mathbf{r},\omega)=\mu_{0}\Big{(}1+\sum_{l=1}^{N_{m}}\frac{\Omega_{m,l}^{2}(\mathbf{r})}{\omega_{m,l}^{2}(\mathbf{r})-2i\gamma_{m,l}(\mathbf{r})\omega-\omega^{2}}\Big{)}, \tag{10}\] with \(\gamma_{e,l}(\mathbf{r})\), \(\gamma_{m,l}(\mathbf{r})\geq 0\); the coefficients \(\Omega_{e}\), \(\omega_{e}\), \(\Omega_{m}\), \(\omega_{m}\) characterize the medium.
When \(\gamma_{e}=0\) (\(\gamma_{m}=0\)) the medium is characterized as a lossless and dispersive (Lorentz) medium, whereas for \(\omega_{e}=0\) (\(\omega_{m}=0\)) we retrieve the Drude medium. The connection between the susceptibility kernel \(\hat{G}\) in Eq.(6) and the constitutive relations (9),(10) in the frequency domain is \[\hat{G}(\mathbf{r},t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\begin{bmatrix}\epsilon(\mathbf{r},\omega)-\epsilon_{0}&0\\ 0&\mu(\mathbf{r},\omega)-\mu_{0}\end{bmatrix}e^{-i\omega t}d\,\omega. \tag{11}\] In Sec. II.1 we express the time dependent Maxwell equations (5) in a Lorentz medium as a Schrodinger equation with Hermitian structure. The unitary evolution, as expected, represents the conservation of the electromagnetic energy, since we are dealing with a lossless medium. The breaking of the Hermitian structure due to dissipation is presented in Sec. II.2. In Sec. II.3 the Trotterized evolution is employed, in order to examine the form of the non-unitary evolution associated with the dissipation separately from the unitary part corresponding to the lossless case. ### Schrodinger representation of Maxwell equations in a Lorentz medium Following [11] we define the auxiliary fields \[\mathbb{P}_{l}(\mathbf{r},t)=\frac{1}{2\pi}\int_{0}^{t}\int_{-\infty}^{\infty}\frac{e^{-i\omega(t-\tau)}}{\omega_{e,l}^{2}-\omega^{2}}\mathbf{E}(\mathbf{r},\tau)d\,\omega d\,\tau \tag{12}\] and \[\mathbb{M}_{l}(\mathbf{r},t)=\frac{1}{2\pi}\int_{0}^{t}\int_{-\infty}^{\infty}\frac{e^{-i\omega(t-\tau)}}{\omega_{m,l}^{2}-\omega^{2}}\mathbf{H}(\mathbf{r},\tau)d\,\omega d\,\tau \tag{13}\] that are directly related to the polarization \(\mathbf{P}\) and magnetization \(\mathbf{M}\) of the medium \[\mathbf{P}(\mathbf{r},t)=\epsilon_{0}\sum_{l=1}^{N_{e}}\Omega_{e,l}^{2}\mathbb{P}_{l},\quad\mathbf{M}(\mathbf{r},t)=\mu_{0}\sum_{l=1}^{N_{m}}\Omega_{m,l}^{2}\mathbb{M}_{l}. \tag{14}\] Putting together Maxwell equations (5) along with the evolution equations for the auxiliary fields (12),(13), we obtain a closed system of partial differential equations, \[i\frac{\partial\mathbf{u}}{\partial t}=\hat{W}_{0}^{-1}\hat{M}\mathbf{u}-i\sum_{l=1}^{N}\hat{\Omega}_{l}^{2}\mathbf{\mathcal{P}}_{l,t} \tag{15}\] \[i\frac{\partial\mathbf{\mathcal{P}}_{l}}{\partial t}=i\mathbf{\mathcal{P}}_{l,t},\ l=1,2...N\] \[i\frac{\partial\mathbf{\mathcal{P}}_{l,t}}{\partial t}=i\mathbf{u}-i\hat{\omega}_{l}^{2}\mathbf{\mathcal{P}}_{l},\ l=1,2...N\] where \(\mathbf{u}=(\mathbf{E},\mathbf{H})^{T}\) as before, and the generalized auxiliary vector reads \(\mathbf{\mathcal{P}}_{l}=(\mathbb{P}_{l},\mathbb{M}_{l})^{T}\). Additionally, \(N=max\{N_{e},N_{m}\}\) and the diagonal matrices \(\hat{\Omega}_{l}^{2},\ \hat{\omega}_{l}^{2}\) are \[\hat{\Omega}_{l}^{2}=\begin{bmatrix}\Omega_{e,l}^{2}&0\\ 0&\Omega_{m,l}^{2}\end{bmatrix},\quad\hat{\omega}_{l}^{2}=\begin{bmatrix}\omega_{e,l}^{2}&0\\ 0&\omega_{m,l}^{2}\end{bmatrix}.
\tag{16}\] Applying the following Dyson transform [8] \[\hat{\rho}=diag(\hat{W}_{0}^{1/2},\hat{W}_{0}^{1/2}\hat{\Omega}_{l}\hat{\omega}_{l},\hat{W}_{0}^{1/2}\hat{\Omega}_{l}), \tag{17}\] the Maxwell system (15) acquires an explicitly Hermitian Schrodinger structure: \[i\frac{\partial}{\partial t}\begin{bmatrix}\hat{W}_{0}^{1/2}\mathbf{u}\\ \hat{W}_{0}^{1/2}\hat{\Omega}_{l}\hat{\omega}_{l}\mathbf{\mathcal{P}}_{l}\\ \hat{W}_{0}^{1/2}\hat{\Omega}_{l}\mathbf{\mathcal{P}}_{l,t}\end{bmatrix}=\begin{bmatrix}\hat{W}_{0}^{1/2}\hat{M}\hat{W}_{0}^{-1/2}&0&-i\hat{\Omega}_{l}\\ 0&0&i\hat{\omega}_{l}\\ i\hat{\Omega}_{l}&-i\hat{\omega}_{l}&0\end{bmatrix}\begin{bmatrix}\hat{W}_{0}^{1/2}\mathbf{u}\\ \hat{W}_{0}^{1/2}\hat{\Omega}_{l}\hat{\omega}_{l}\mathbf{\mathcal{P}}_{l}\\ \hat{W}_{0}^{1/2}\hat{\Omega}_{l}\mathbf{\mathcal{P}}_{l,t}\end{bmatrix},\quad\text{for }l=1,2,...,N\Rightarrow i\frac{\partial\mathbf{\psi}}{\partial t}=\hat{D}_{0}\mathbf{\psi}, \tag{18}\] with initial condition \(\mathbf{\psi}_{0}=(\hat{W}_{0}^{1/2}\mathbf{u}_{0},0,0)^{T}\). The unitary evolution operator \(\exp\Bigl{\{}-it\hat{D}_{0}\Bigr{\}}\) of the Schrodinger representation of Maxwell equations in a Lorentz dispersive medium (18) offers an optimistic possibility for quantum computing implementation, due to the relatively simple form of the Hermitian generator of dynamics \(\hat{D}_{0}\). Importantly, the auxiliary fields in the state vector \(\mathbf{\psi}\) are strongly related to the physical quantities of polarization and magnetization (Eq.(14)), as well as to the respective density currents. In that way, it is possible to retrieve rich physical information concerning various electromagnetic quantities from the amplitude estimation of the final output state. The unitary evolution of Eq.(18) corresponds to conservation of the extended electromagnetic energy \(E_{total}\), which can be calculated explicitly as \[E_{total}(t)=\frac{1}{2}\|\mathbf{\psi}\|^{2}=\frac{1}{2}\hat{W}_{0}\int_{\Omega}\|\mathbf{u}\|^{2}d\,\mathbf{r}+\frac{1}{2}\hat{W}_{0}\sum_{l=1}^{N}\int_{\Omega}\hat{\Omega}_{l}^{2}(\hat{\omega}_{l}^{2}\|\mathbf{\mathcal{P}}_{l}\|^{2}+\|\mathbf{\mathcal{P}}_{l,t}\|^{2})d\,\mathbf{r}. \tag{19}\] The first term in the right hand side of Eq.(19) represents the electromagnetic energy in vacuum, \[E_{el}(t)=\frac{1}{2}\int_{\Omega}(\epsilon_{0}\|\mathbf{E}\|^{2}+\mu_{0}\|\mathbf{H}\|^{2})d\,\mathbf{r}\leq E_{total}(0)=E_{el}(0). \tag{20}\] Energy expression (19) is valid beyond the plane-wave, harmonic and semi-harmonic approximations for the fields imposed by Landau and Brillouin [20]. We end this section by noting that a generalization for tensorial Lorentz media can be found in [10]. A characteristic tensorial Lorentz medium is the magnetized plasma. As a result, it is a matter of algebra to show that Maxwell equations in a cold magnetized plasma can obtain a Hermitian Schrodinger representation as in Eq.(18), with energy integral (19). This self-consistent formulation enables full-wave quantum computing simulation of wave propagation and scattering in a cold, inhomogeneous and magnetized plasma. Interestingly enough, the authors in [21; 22] explore the prospect of a quantum advantage in two key plasma physics problems, electrostatic Landau damping and wave propagation in the cold fluid plasma model, reinforcing our initial view that wave propagation can benefit from quantum computing.
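As an illustration, the following minimal numerical sketch (our own toy reduction: a single spatial Fourier mode, one Lorentz pole \(N=1\), scalar fields, units \(\epsilon_{0}=\mu_{0}=1\), and a \(2\times 2\) real symmetric stand-in for \(\hat{W}_{0}^{1/2}\hat{M}\hat{W}_{0}^{-1/2}\) chosen purely for illustration) verifies that the block structure of Eq.(18) is Hermitian and that \(\exp\{-it\hat{D}_{0}\}\) conserves the norm of \(\mathbf{\psi}\), i.e. the energy (19); it is not the QLA discretization envisioned for the full problem.

```python
# Toy single-mode check of Eq. (18): Hermitian generator and norm conservation.
import numpy as np
from scipy.linalg import expm

k, Omega, omega = 1.3, 0.8, 1.1       # mode number and Lorentz parameters (ours)

M = np.array([[0, -k], [-k, 0]], dtype=complex)  # stand-in for W0^1/2 M W0^-1/2
Z, I2 = np.zeros((2, 2)), np.eye(2)

# Block layout of Eq. (18): (W0^1/2 u, W0^1/2 Omega omega P, W0^1/2 Omega P_t)
D0 = np.block([[M,                Z,                 -1j * Omega * I2],
               [Z,                Z,                  1j * omega * I2],
               [1j * Omega * I2, -1j * omega * I2,    Z]])
assert np.allclose(D0, D0.conj().T)              # Hermitian, as in Eq. (18)

psi = np.zeros(6, dtype=complex); psi[0] = 1.0   # initial condition (u0, 0, 0)
for t in (0.5, 1.0, 5.0):
    print(t, np.linalg.norm(expm(-1j * t * D0) @ psi))  # stays 1: Eq. (19)
```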
### Introducing dissipation Switching dissipation on, \(\gamma_{e,l}\ (\gamma_{m,l})\neq 0\), we adjust the auxiliary fields \(\mathbb{P}_{l}\) and \(\mathbb{M}_{l}\) by adding the term \(-2i\gamma_{e,l}\omega\) (\(-2i\gamma_{m,l}\omega\)) in the denominator of Eqs.(12) and (13) respectively. The only difference between the lossless Maxwell system (15) and its dissipative counterpart is that new imaginary diagonal terms appear, breaking the Hermiticity of the Schrodinger representation (18). Specifically, the first two equations in the Maxwell system (15) remain unchanged, whereas the third one now reads \[i\frac{\partial\mathbf{\mathcal{P}}_{l,t}}{\partial t}=i\mathbf{u}-i\hat{\omega}_{l}^{2}\mathbf{\mathcal{P}}_{l}-2i\hat{\gamma}_{l}\mathbf{\mathcal{P}}_{l,t},\ l=1,2...N, \tag{21}\] where \[\hat{\gamma}_{l}=\begin{bmatrix}\gamma_{e,l}&0\\ 0&\gamma_{m,l}\end{bmatrix}. \tag{22}\] Absorbing the factor of two, \(2\hat{\gamma}_{l}\rightarrow\hat{\gamma}_{l}\), without changing the notation, the dissipative counterpart of the Hermitian Schrodinger equation (18) has the form \[i\frac{\partial\mathbf{\psi}}{\partial t}=[\hat{D}_{0}-i\hat{D}_{diss}]\mathbf{\psi}. \tag{23}\] The diagonal matrix \(\hat{D}_{diss}=diag(0,0,\hat{\gamma}_{l})\) is Hermitian and positive semi-definite (\(\gamma_{e,l},\,\gamma_{m,l}\geq 0\)), so the anti-Hermitian term \(-i\hat{D}_{diss}\) describes pure dissipation. Since the generator of dynamics in (23) is non-Hermitian, the generated evolution operator \(\hat{\mathcal{U}}(t)=\exp\Bigl{\{}-it[\hat{D}_{0}-i\hat{D}_{diss}]\Bigr{\}}\) is non-unitary. Thus, straightforward implementation of the evolution operator \(\hat{\mathcal{U}}\) on a quantum computer, in terms of approximating \(\hat{\mathcal{U}}\) with a sequence of unitary operations within an error \(\varepsilon\), fails. ### Trotterized evolution Employing on the \(\hat{\mathcal{U}}(t)\) operator a first order Trotter approximation for an infinitesimal evolution advancement \(0\rightarrow\delta t\), \[\exp\Bigl{\{}-i\delta t[\hat{D}_{0}-i\hat{D}_{diss}]\Bigr{\}}=e^{-i\delta t\hat{D}_{0}}e^{-\delta t\hat{D}_{diss}}+O(\delta t^{2}), \tag{24}\] we can treat the non-unitary term \(\exp\Bigl{\{}-\delta t\hat{D}_{diss}\Bigr{\}}\) separately. The quantum implementation of the unitary evolution \(\exp\Bigl{\{}-i\delta t\hat{D}_{0}\Bigr{\}}\) of the lossless case can be considered known. The diagonal dissipative operator \(\hat{D}_{diss}\) contains at most \(6N\) positive elements \(\gamma_{e,l},\,\gamma_{m,l}\) and permits an immediate evaluation of the exponential operator \(\exp\Bigl{\{}-\delta t\hat{D}_{diss}\Bigr{\}}\) as \[\exp\Bigl{\{}-\delta t\hat{D}_{diss}\Bigr{\}}=\hat{K}_{0}=diag(I_{6\times 6},I_{6N\times 6N},\hat{\Gamma}), \tag{25}\] with \[\hat{\Gamma}=\begin{bmatrix}e^{-\delta t\gamma_{e,l}}I_{3\times 3}&0\\ 0&e^{-\delta t\gamma_{m,l}}I_{3\times 3}\end{bmatrix},\quad l=1,2,...,N. \tag{26}\] The diagonal sub-matrix \(\hat{\Gamma}\) has a \(6N\times 6N\) dimensionality. We have denoted the non-unitary operator \(\exp\Bigl{\{}-\delta t\hat{D}_{diss}\Bigr{\}}\) as \(\hat{K}_{0}\) to indicate the resemblance with the Kraus operators \(\hat{K}_{\mu}\) in Eq.(3). In the following section we will show that, based on this association, it is possible to describe classical dissipation as the result of a measurement on the lossless system interacting with an unspecified environment.
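The first-order splitting (24) is easy to test numerically. The sketch below is ours: it uses a random Hermitian matrix and a small diagonal matrix as stand-ins for \(\hat{D}_{0}\) and \(\hat{D}_{diss}\), and confirms the \(O(\delta t^{2})\) local error, since halving \(\delta t\) reduces the discrepancy by roughly a factor of four.

```python
# Numerical check of the first-order splitting, Eq. (24): local error is O(dt^2).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
D0 = (A + A.conj().T) / 2                      # stand-in Hermitian lossless part
Ddiss = np.diag([0, 0, 0, 0, 0.4, 0.7])        # dissipation on the last block

for dt in (0.1, 0.05, 0.025):
    exact = expm(-1j * dt * (D0 - 1j * Ddiss))
    trotter = expm(-1j * dt * D0) @ expm(-dt * Ddiss)
    print(dt, np.linalg.norm(exact - trotter))  # shrinks ~4x per halving of dt
```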
## III Dissipation in the context of Kraus operators

After spatial discretization of the initial state vector \(\mathbf{\psi}(\mathbf{r},0)\), we express it as a normalized, \(n=\log_{2}[(6+12N)N_{x}]=\log_{2}d\), qubit superposition state \[\ket{\mathbf{\psi}_{0}}=\frac{1}{\sqrt{E_{0}}}\sum_{q=0}^{d-1}\psi_{0q}\ket{q}, \tag{27}\] where \(N_{x}\) corresponds to the number of grid nodes, \(N_{x}\Delta\mathbf{r}=\mathbf{L}\), formed by the intersection of grid cells of volume \(\Delta\mathbf{r}\), and \(\mathbf{L}\) is the characteristic spatial scale of the domain \(\Omega\). The initial energy is \(E_{0}=\bra{\mathbf{\psi}_{0}}\ket{\mathbf{\psi}_{0}}=\sum_{q}(\epsilon_{0}E_{q}^{2}+\mu_{0}H_{q}^{2})\), in accordance with the energy equation (20). By construction the initial state \(\ket{\mathbf{\psi}_{0}}\) is a pure state, and the Trotterized evolution (24) in the density matrix framework can be written as \[\bar{\rho}(\delta t)=e^{-i\delta t\hat{D}_{0}}(\hat{K}_{0}\rho(0)\hat{K}_{0}^{\dagger})e^{i\delta t\hat{D}_{0}}. \tag{28}\] We have denoted with a bar the matrix outcome in (28) because \(\bar{\rho}(\delta t)\) is not a proper quantum mechanical density matrix, as the inner map \(\rho(0)\mapsto\hat{K}_{0}\rho(0)\hat{K}_{0}^{\dagger}\) is not trace preserving. The non-trace-preserving character of (28) is inherently related to the modeling of the classical dissipative system (23). The classical system includes the variation of the electromagnetic quantities as well as the existence of dissipation, based on a phenomenological Lorentz oscillator model. None of the pertinent equations describes the fate of the lost energy; all of our attention has been focused on the behavior of the electromagnetic quantities. This is similar to a quantum mechanical system interacting with an unspecified environment, resulting in dissipation. Making a measurement on the whole system and tracing out the environment, we obtain a non-trace-preserving quantum channel [12]. Consequently we can think of classical dissipation as the observable result of the interaction between the quantum-represented lossless system and an unspecified environment. Since we are uninterested in, or to be precise unaware of, the physical processes that the dissipated energy obeys, we augment (28) with the term \[\hat{K}_{1}\rho(0)\hat{K}_{1}^{\dagger}, \tag{29}\] where the second Kraus operator \(\hat{K}_{1}\) satisfies \(\hat{K}_{1}^{\dagger}\hat{K}_{1}=I_{d\times d}-\hat{K}_{0}^{\dagger}\hat{K}_{0}\) and has the form \[\hat{K}_{1}=\begin{bmatrix}0&\sqrt{I_{r\times r}-\hat{\Gamma}^{2}}\\ 0&0\end{bmatrix} \tag{30}\] with \(r=6NN_{x}\), the dimension that dissipation occupies. The Kraus operator \(\hat{K}_{1}\) describes a transition (quantum jump) from the dissipative state of interest to a different one. The Kraus operators \(\hat{K}_{0}\), \(\hat{K}_{1}\) are the multi-dimensional analogs of the amplitude damping channel operators [12]. Therefore the augmented dissipative evolution reads \[\rho_{aug}(\delta t)=e^{-i\delta t\hat{D}_{0}}\rho_{diss}(\delta t)e^{i\delta t\hat{D}_{0}} \tag{31}\] with \[\rho_{diss}(\delta t)=\hat{K}_{0}\rho(0)\hat{K}_{0}^{\dagger}+\hat{K}_{1}\rho(0)\hat{K}_{1}^{\dagger}. \tag{32}\] The operator \(e^{-i\delta t\hat{D}_{0}}\) has been defined in Eq.(18). A similar idea has been presented in [15], but there each different flavor of decay corresponds to a different amplitude damping Kraus operator, resulting in a dilation procedure based on the Stinespring theorem [23].
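The completeness relation \(\hat{K}_{0}^{\dagger}\hat{K}_{0}+\hat{K}_{1}^{\dagger}\hat{K}_{1}=I\) can be verified directly from the block forms (25),(30); a minimal numerical sketch with illustrative sizes and rates (our own choices, not the physical values of a specific medium) follows.

```python
# Verify K0^dag K0 + K1^dag K1 = I for the Kraus pair of Eqs. (25), (30).
import numpy as np

d, r, dt = 8, 4, 0.3                              # illustrative dimensions
Gamma = np.diag(np.exp(-dt * np.array([0.2, 0.5, 0.9, 1.4])))   # Eq. (26)

K0 = np.block([[np.eye(d - r), np.zeros((d - r, r))],
               [np.zeros((r, d - r)), Gamma]])    # Eq. (25)
K1 = np.zeros((d, d))
K1[:r, d - r:] = np.sqrt(np.eye(r) - Gamma @ Gamma)             # Eq. (30)

print(np.allclose(K0.T @ K0 + K1.T @ K1, np.eye(d)))            # True
```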
In the following section, we argue that the minimal augmented construction presented in Eqs. (31), (32) is sufficient to capture the physical dissipative dynamics (28).

### The algorithm

Taking into consideration the Kraus operators \(\hat{K}_{0}\), \(\hat{K}_{1}\) in Eq.(32), we construct the minimal unitary dilation \(\hat{\mathcal{U}}_{diss}\) of the non-unitary \(\hat{K}_{0}\) operator in the Trotterized evolution (24) as \[\hat{\mathcal{U}}_{diss}=\begin{bmatrix}\hat{K}_{0}&-\hat{K}_{1}^{\dagger}\\ \hat{K}_{1}&\hat{\mathcal{X}}\hat{K}_{0}\hat{\mathcal{X}}\end{bmatrix}. \tag{33}\] The unitary operator \(\hat{\mathcal{U}}_{diss}\) is a \(2d\times 2d\) matrix operator, meaning that it acts on \(n+1\) qubits. This ancillary qubit represents the environment, so the lossless system plus the environment comprise a closed conservative system. This minimal dilation is directly related to the Sz.-Nagy dilation of \(\hat{K}_{0}\) through a rotational transformation. The operator \(\hat{\mathcal{X}}\) is an appropriate extension of the Pauli \(\hat{X}\) operator in \(d\) dimensions. We select the one-qubit environment to be stationary with density matrix \(\rho_{E}=\ket{0}\bra{0}\). Setting the initial state \(\ket{\boldsymbol{\Psi}_{0}}\) in the dilated space to be separable, we can write \[\ket{\boldsymbol{\Psi}_{0}}=\ket{0}\ket{\boldsymbol{\psi}_{0}}. \tag{34}\] The action of the unitary operator \(\hat{\mathcal{U}}_{diss}\) on the composite initial state (34) yields \[\ket{0}\hat{K}_{0}\ket{\boldsymbol{\psi}_{0}}+\ket{1}\hat{K}_{1}\ket{\boldsymbol{\psi}_{0}}. \tag{35}\] Next, we apply a controlled \(e^{-i\delta t\hat{D}_{0}}\) operation to Eq. (35), conditioned on the environment qubit being \(\ket{0}\), obtaining \[\ket{0}e^{-i\delta t\hat{D}_{0}}\hat{K}_{0}\ket{\boldsymbol{\psi}_{0}}+\ket{1}\hat{K}_{1}\ket{\boldsymbol{\psi}_{0}}. \tag{36}\] Finally, a projective measurement on the first qubit with the operator \(P_{0}=\ket{0}\bra{0}\otimes I_{d\times d}\), followed by tracing out the environment, correctly provides the non-unitary Trotterized evolution (24) for the lossy, dispersive medium. The steps in Eqs.(34)-(36), along with the post-selection of the output state, are depicted in the quantum circuit of Fig. 1.

Figure 1: Quantum circuit for simulation of the non-unitary classical evolution (24) in a dissipative and dispersive medium.

### Implementation of the \(\hat{\mathcal{U}}_{diss}\) operator

The explicit form of the unitary operator \(\hat{\mathcal{U}}_{diss}\) in Eq.(33) is \[\hat{\mathcal{U}}_{diss}=\begin{bmatrix}I_{(d-r)\times(d-r)}&0&0&0\\ 0&\hat{\Gamma}&-\sqrt{I_{r\times r}-\hat{\Gamma}^{2}}&0\\ 0&\sqrt{I_{r\times r}-\hat{\Gamma}^{2}}&\hat{\Gamma}&0\\ 0&0&0&I_{(d-r)\times(d-r)}\end{bmatrix}, \tag{37}\] with the diagonal operator \(\hat{\Gamma}\) given in Eq.(26). Setting \(\cos(\theta_{l}/2)=\hat{\Gamma}_{ll}\), we can decompose the \(\hat{\mathcal{U}}_{diss}\) operator into \(r\) two-level unitary \(y\)-rotations, \(\hat{\mathcal{R}}_{y}(\theta_{l})\), acting on \(n+1\) qubits, \[\hat{\mathcal{U}}_{diss}=\prod_{l=1}^{r}\hat{\mathcal{R}}_{y}(\theta_{l}). \tag{38}\] Hence, the implementation of \(\hat{\mathcal{U}}_{diss}\) can be accomplished, to leading order, in \(O(rn^{2})\) CNOTs and single-qubit rotations \(\hat{R}_{y}(\theta_{l})\). Since \(d=(6+12N)N_{x}=6N_{x}+2r=2^{n}\), we conclude that \(r=2^{n-1}\left(1-\frac{1}{1+2N}\right)\), so to leading order the implementation of \(\hat{\mathcal{U}}_{diss}\) is achieved in \(O(2^{n-1}n^{2})\) simple gates.
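The following sketch (our own toy dimensions, assuming the index pairing implied by Eq. (37)) assembles \(\hat{\mathcal{U}}_{diss}\) from \(r\) two-level \(y\)-rotations, confirms its unitarity, and reproduces the action (35) on \(\ket{0}\ket{\boldsymbol{\psi}_{0}}\).

```python
import numpy as np

dt, d, r = 0.1, 10, 4
rng = np.random.default_rng(2)
g = np.exp(-dt * rng.uniform(0.0, 2.0, size=r))   # diagonal entries of Gamma

U = np.eye(2 * d)
for i in range(r):
    a, b = d - r + i, d + i        # damped coordinate and its ancilla partner
    s = np.sqrt(1.0 - g[i] ** 2)
    # Two-level R_y rotation with cos(theta_l / 2) = Gamma_ll, as in Eq. (37)
    U[np.ix_([a, b], [a, b])] = [[g[i], -s], [s, g[i]]]

assert np.allclose(U.T @ U, np.eye(2 * d))   # the dilation is unitary

psi = rng.normal(size=d)
psi /= np.linalg.norm(psi)
out = U @ np.concatenate([psi, np.zeros(d)])  # action on |0>|psi_0>, Eq. (35)
# The |0> branch holds K0|psi_0>; the |1> branch holds the jump amplitudes K1|psi_0>
p0 = np.linalg.norm(out[:d]) ** 2             # post-selection success probability
print(p0)
```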
The decomposition of \(\hat{\mathcal{U}}_{diss}\) into a product of two-level rotations, Eq. (38), acting on \(n+1\) qubits is related to multiplex rotations. Following [24], a multiplex rotation acting on \(n+1\) qubits can be implemented in \(O(4^{n+1})\) CNOTs and single-qubit \(\hat{R}_{y}\) rotations. The two crucial aspects that enable us to implement the product structure (38) exponentially better are, firstly, that the dissipation subspace \(\mathcal{H}_{r}\) has a smaller dimension \(r<d\) than the overall system and, secondly, that the structure of the Hilbert space \(\mathcal{H}_{r}\) of the classical system enables a decomposition into separate spaces through a direct sum, \(\mathcal{H}_{r}=\bigoplus_{l=1}^{r}\mathcal{H}_{l}\). The latter is not always possible in quantum systems, and that is the cause of the Stinespring scaling in the amplitude damping dilation of [15], compared to our one-qubit amplitude damping dilation.

### Total evolution

The probability \(p_{0}(\delta t)=\langle\mathbf{\psi}_{0}|\hat{K}_{0}^{2}|\mathbf{\psi}_{0}\rangle\) for a successful post-selection is bounded between the values \(p_{0min}\) and \(p_{0max}\), \[p_{0min}(\delta t)=1+(e^{-2\gamma_{max}\delta t}-1)\sum_{q=d-r}^{d-1}\frac{|\psi_{0q}|^{2}}{E_{0}}, \tag{39}\] \[p_{0max}(\delta t)=1+(e^{-2\gamma_{min}\delta t}-1)\sum_{q=d-r}^{d-1}\frac{|\psi_{0q}|^{2}}{E_{0}}, \tag{40}\] with \(r=6NN_{x}\) and \(\gamma_{max}=max\{\gamma_{e,l},\gamma_{m,l}\}\), \(l=1,2,...,r\) (\(\gamma_{min}\) is defined analogously). The probability bounds, Eqs. (39), (40), dictate the optimal time step \(\delta t\) for a high success probability of the quantum implementation in Fig. 1. We distinguish two different cases:

1. \(\frac{\gamma_{min}}{\gamma_{max}}<1\): then the optimal time step \(\delta t\) is provided by the maximization of \(|p_{0min}-p_{0max}|\), \[\delta t=\frac{1}{2(\gamma_{min}-\gamma_{max})}\ln\Big(\frac{\gamma_{min}}{\gamma_{max}}\Big). \tag{41}\]
2. \(\frac{\gamma_{min}}{\gamma_{max}}=1\): then \(p_{0}=p_{0min}=p_{0max}\), and the optimal time step \(\delta t\) is provided by the expression \[\delta t=o\left(1/(2\gamma)\right)\quad\text{with}\quad\gamma=\gamma_{max}=\gamma_{min}. \tag{42}\]

The convergence of the Trotterized evolution to the real evolution demands a high number \(N_{t}\) of repetitions of the quantum circuit in Fig. 1. The error \(\varepsilon\) of this procedure for a total simulation time \(t_{total}\) is \[\varepsilon=\Big\|\hat{\mathcal{U}}-(e^{-i\delta t\hat{D}_{0}}\hat{K}_{0})^{N_{t}}\Big\|=O(t_{total}\,\delta t). \tag{43}\] Therefore, using the time step of Eq.(41) we obtain \[\varepsilon=O\Big[\frac{t_{total}}{2(\gamma_{min}-\gamma_{max})}\ln\Big(\frac{\gamma_{min}}{\gamma_{max}}\Big)\Big], \tag{44}\] whereas in the second case the error is bounded as \[\varepsilon=o\Big(\frac{t_{total}}{2\gamma}\Big). \tag{45}\]
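A small helper (ours, written directly from Eqs. (39)-(41)) evaluates the post-selection bounds and the optimal time step; the weight \(w=\sum_{q=d-r}^{d-1}|\psi_{0q}|^{2}/E_{0}\) of the initial state inside the dissipative subspace is an input assumption.

```python
import numpy as np

def p0_bounds(dt, gmin, gmax, w):
    """Bounds (39)-(40) on the post-selection probability."""
    pmin = 1.0 + (np.exp(-2.0 * gmax * dt) - 1.0) * w   # Eq. (39)
    pmax = 1.0 + (np.exp(-2.0 * gmin * dt) - 1.0) * w   # Eq. (40)
    return pmin, pmax

def optimal_dt(gmin, gmax):
    """Time step of Eq. (41); for equal rates any dt << 1/(2 gamma) works."""
    if gmin == gmax:
        return 0.1 / (2.0 * gmin)                        # case 2, Eq. (42)
    return np.log(gmin / gmax) / (2.0 * (gmin - gmax))   # case 1, Eq. (41)

dt = optimal_dt(0.5, 2.0)
print(dt, p0_bounds(dt, 0.5, 2.0, w=0.3))
```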
## IV An optimized approach

In the previous section, we have succeeded in showing how to convert the non-unitary, diagonal, dissipative evolution part \(\hat{K}_{0}=\exp\Bigl\{-\delta t\hat{D}_{diss}\Bigr\}\) into \(O(2^{n-1}n^{2})\) elementary unitary gates, based on the constructed interconnection between quantum amplitude damping and classical dissipation. A different approach is to refrain from associating classical dissipation with a quantum process and treat the non-unitary, diagonal, dissipative evolution operator \(\hat{K}_{0}\) with the Linear Combination of Unitaries (LCU) method [25].

Specifically, we notice that the operator \(\hat{K}_{0}\) can be written as an equally weighted sum of two unitary matrices, \[\hat{K}_{0}=\frac{1}{2}(\hat{K}_{0z}+\hat{K}_{0z}^{\dagger}), \tag{46}\] where \[\hat{K}_{0z}=\begin{bmatrix}I_{(d-r)\times(d-r)}&0\\ 0&diag\left(e^{-i\theta_{l}/2}\right)\end{bmatrix},\quad l=1,2,...,r. \tag{47}\] As in Sec. III.2, we have set \(\cos(\theta_{l}/2)=\hat{\Gamma}_{ll}\). We highlight that the unitary components in (46) remain diagonal. In order to apply the LCU method (Lemma 6 in [25]), we need one auxiliary qubit, in the same fashion as the dilation process in Sec. III. Then, we introduce the unitary operators \[\hat{U}_{prep}:\ket{0}\rightarrow\frac{1}{\sqrt{2}}(\ket{0}+\ket{1}), \tag{48}\] \[\hat{U}_{select}=\ket{0}\bra{0}\otimes\hat{K}_{0z}+\ket{1}\bra{1}\otimes\hat{K}_{0z}^{\dagger}. \tag{49}\] Note that the action of \(\hat{U}_{prep}\) is just the Hadamard gate \(\hat{H}\), \(\hat{U}_{prep}=\hat{H}\). We can probabilistically implement the non-unitary operator \(\hat{K}_{0}\) using the unitary dilated operator \[\hat{\mathcal{U}}_{diss}=(\hat{H}\otimes I_{d\times d})\hat{U}_{select}(\hat{H}\otimes I_{d\times d}). \tag{50}\] The action of the operator \(\hat{\mathcal{U}}_{diss}\) on the initial state \(\ket{0}\ket{\mathbf{\psi}_{0}}\) is \[\hat{\mathcal{U}}_{diss}\ket{0}\ket{\mathbf{\psi}_{0}}=\ket{0}\hat{K}_{0}\ket{\mathbf{\psi}_{0}}+\frac{1}{2}\ket{1}(\hat{K}_{0z}-\hat{K}_{0z}^{\dagger})\ket{\mathbf{\psi}_{0}}. \tag{51}\] Again, a measurement on the first qubit provides us with the desired result. The quantum circuit representation for the simulation of the Trotterized dynamics (24), taking into consideration Eqs. (50), (51), is depicted in Fig. 2.

Figure 2: Quantum circuit for simulation of the non-unitary classical evolution (24) in a dissipative and dispersive medium using the LCU method.

The probability of measuring the \(0\) value of the first qubit in the output state (51) is \(p_{0}=\langle\mathbf{\psi}_{0}|\hat{K}_{0}^{\dagger}\hat{K}_{0}|\mathbf{\psi}_{0}\rangle\), obeying the same bounds (Eqs.(39),(40)) as in Sec. III.2. The remaining question is whether the implementation of the operator \(\hat{U}_{select}\) scales favourably compared to that of \(\hat{\mathcal{U}}_{diss}\) from the previous section. Given the definition (49), \(\hat{U}_{select}\) is a \(2d\times 2d\) diagonal operator, \[\hat{U}_{select}=\begin{bmatrix}\hat{K}_{0z}&0\\ 0&\hat{K}_{0z}^{\dagger}\end{bmatrix}, \tag{52}\] which contains \(r\) two-level \(z\)-rotations, \(\hat{\mathcal{R}}_{z}(\theta_{l})\), compared to the \(r\) two-level \(y\)-rotations, \(\hat{\mathcal{R}}_{y}(\theta_{l})\), of (37). As a result, the diagonal nature of \(\hat{U}_{select}\) allows for an implementation in \(2^{n}(1-\frac{1}{1+2N})-3\) alternating CNOTs and single-qubit \(\hat{R}_{z}(\theta_{l})\) rotations [26]. Therefore, to leading order, a quadratic improvement is achieved as compared to the implementation scaling of the physical dilation (38). The LCU method reproduces the dilation method, specialized for diagonal operators, presented in [16]. Finally, when \(\{\gamma_{e,l},\gamma_{m,l}\}\) are homogeneous or very few in number, their values appear repetitively in the diagonal matrix \(\hat{U}_{select}\) after the discretization. It is then possible to reduce even further the implementation cost of the LCU method, to polynomial scaling \(O[p(n)]\)[27].
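The LCU identity can again be verified numerically. The sketch below (toy sizes, with Eq. (47) taken as \(diag(e^{-i\theta_{l}/2})\) in the dissipative block) builds \(\hat{U}_{select}\) and checks that the \(\ket{0}\bra{0}\) block of \((\hat{H}\otimes I)\hat{U}_{select}(\hat{H}\otimes I)\) implements \(\hat{K}_{0}\), as in Eq. (51).

```python
import numpy as np

dt, d, r = 0.1, 10, 4
rng = np.random.default_rng(3)
g = np.exp(-dt * rng.uniform(0.0, 2.0, size=r))
theta = 2.0 * np.arccos(g)                  # cos(theta_l / 2) = Gamma_ll

# K0z from Eq. (47): identity on the lossless block, phases on the damped one
K0z = np.diag(np.concatenate([np.ones(d - r), np.exp(-1j * theta / 2)]))
K0 = (K0z + K0z.conj().T).real / 2          # Eq. (46)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U_select = np.block([[K0z, np.zeros((d, d))],
                     [np.zeros((d, d)), K0z.conj().T]])      # Eq. (52)
U = np.kron(H, np.eye(d)) @ U_select @ np.kron(H, np.eye(d))  # Eq. (50)

assert np.allclose(U[:d, :d], K0)   # the |0><0| block implements K0, Eq. (51)
```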
## V Conclusions

Simulation of dynamics in dissipative classical systems is inherently challenging for quantum computing due to the loss of unitarity. We focus on dissipative and dispersive media, in which the energy loss appears as an anti-Hermitian, diagonal part in the Schrödinger representation of the Maxwell equations. Using Trotterization, we separate the unitary evolution part from the non-unitary one, enabling us to work solely with the non-unitary evolution. The interconnection between the dissipation in dispersive media and quantum amplitude damping serves as a first step in a dilation process, relying on an augmented Kraus representation (31) and the direct sum structure of the dissipative \(r\)-dimensional subspace. In that way, we only need one ancillary qubit for modeling the environment, in contrast to the respective formulation of non-unitary quantum evolution through quantum channels [15]. This physical dilation requires \(O(2^{n-1}n^{2})\) CNOTs and one-qubit \(y\)-rotations. The second algorithm leaves behind the physical connection between classical and quantum processes, implementing the non-unitary evolution through the LCU lemma in \(O(2^{n})\) CNOTs and one-qubit \(z\)-rotations. However, while the second method is quadratically better, it lacks the physical interpretation, which is essential in cases where the anti-Hermitian part of the Hamiltonian does not correspond to pure dissipation [28]. In such cases, a physical dilation based on the construction of a quantum Kraus representation for the classical dynamics may be advantageous, in terms of the physical information that can be extracted from the quantum implementation, compared to the LCU dilation.

The Schrödinger representation of the Maxwell equations for lossless, dispersive media in terms of the generator of dynamics (18) has a unitary structure without the need for complex operators. In that sense, combining the results from Secs. III, IV and reference [8], in conjunction with a quantum implementation of the simulation of the Maxwell equations for the lossless case, we can possibly realize full-wave simulation of electromagnetic wave propagation and scattering in complex media, such as magnetized plasmas and meta-materials, on a quantum computer. Our setup, Eqs.(5),(6), makes minimal assumptions, respecting the Kramers-Kronig causality relations [20] over the whole frequency spectrum and extending beyond \(\mathcal{PT}\)-symmetric models [29]. Finally, it is also important to note that both of the methods can be applied to other classical dissipative systems by diagonalizing, when this is computationally feasible, the Hermitian dissipative operator \(\hat{D}_{diss}\) in the respective Schrödinger equation (23) as \(\hat{D}_{diss}=\hat{V}\hat{\Delta}\hat{V}^{\dagger}\). Then, the diagonal operator \(\hat{\Delta}\) produces a diagonal non-unitary part as in Eq.(24) that can be implemented using the techniques described here and in [16]. The implementation of the unitary operators \(\hat{V},\hat{V}^{\dagger}\) is directly related to the dimension of the dissipative subspace.

###### Acknowledgements.

This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. A.K.R. is supported by the Department of Energy under Grant Nos. DE-SC0021647 and DE-FG02-91ER-54109. G.V. is supported by the Department of Energy under Grant No.
DE-SC0021651.
2309.08723
Sweeping Permutation Automata
This paper introduces sweeping permutation automata, which move over an input string in alternating left-to-right and right-to-left sweeps and have a bijective transition function. It is proved that these automata recognize the same family of languages as the classical one-way permutation automata (Thierrin, "Permutation automata", Mathematical Systems Theory, 1968). An n-state two-way permutation automaton is transformed to a one-way permutation automaton with F(n)=\max_(k+l=n, m <= l) k (l \choose m) (k - 1 \choose l - m) (l - m)! states. This number of states is proved to be necessary in the worst case, and its growth rate is estimated as F(n) = n^(n/2 - (1 + \ln 2)/2 \cdot n/(\ln n) \cdot (1 + o(1))).
Maria Radionova, Alexander Okhotin
2023-09-15T19:15:07Z
http://arxiv.org/abs/2309.08723v1
# Sweeping Permutation Automata ###### Abstract This paper introduces sweeping permutation automata, which move over an input string in alternating left-to-right and right-to-left sweeps and have a bijective transition function. It is proved that these automata recognize the same family of languages as the classical one-way permutation automata (Thierrin, "Permutation automata", _Mathematical Systems Theory_, 1968). An \(n\)-state two-way permutation automaton is transformed to a one-way permutation automaton with \(F(n)=\max_{k+\ell=n,m\leqslant\ell}k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m }\cdot(\ell-m)!\) states. This number of states is proved to be necessary in the worst case, and its growth rate is estimated as \(F(n)=n^{\frac{n}{2}-\frac{1+\ln 2}{2}\frac{n}{\ln\ln}(1+o(1))}\). ## 1 Introduction _Permutation automata_, introduced by Thierrin [16], are one-way deterministic finite automata, in which the transition function by each symbol forms a permutation of the set of states. They recognize a proper subfamily of regular languages: for instance, no finite language is recognized by any permutation automaton. The language family recognized by permutation automata is known as the _group languages_, because their syntactic monoid is a group, and it has received some attention in the literature on algebraic automata theory [10]. Recently, Hospodar and Mlynarcik [4] determined the state complexity of operations on these automata, while Rauch and Holzer [13] investigated the effect of operations on permutation automata on the number of accepting states. Permutation automata are reversible, in the sense that, indeed, knowing the current state and the last read symbol one can always reconstruct the state at the previous step. The more general _reversible automata_, studied by Angluin [2] and by Pin [11], additionally allow undefined transitions, so that the transition function by each symbol is injective. Reversible automata still cannot recognize all regular languages [12], but since they can recognize all finite languages, they are a more powerful model than permutation automata. The notion of reversible computation has also been studied for two-way finite automata. In general, a two-way automaton (2DFA) operates on a string delimited by a left end-marker (\(\vdash\)) and a right end-marker (\(\dashv\)), and may move its head to the left or to the right in any transition. For the reversible subclass of two-way finite automata (2RFA), Kondacs and Watrous [8] proved that 2RFA can recognize every regular language. Later, Kunc and Okhotin [9, Sect. 8.1] showed that every regular language can still be recognized by 2RFA with no undefined transitions on symbols of the alphabet, and with injective functions on the end-markers. But since the latter automata, in spite of having some kind of bijections in their transition functions, recognize all regular languages, they are no longer a model for the group languages. And there seems to be no reasonable way to have 2RFA act bijectively on both end-markers, because in this case it would be impossible to define both an accepting and a rejecting state. Can permutation automata have any two-way generalization at all? This paper gives a positive answer by investigating _sweeping permutation automata_. This new model is a subclass of sweeping automata, that is, two-way automata that may turn only at the end-markers. 
In a sweeping automaton, there are left-moving and right-moving states, and any transition in a right-moving state by any symbol other than an end-marker must move the head to the right and lead to another right-moving state (same for left-moving states). The transitions by each symbol other than an end-marker form two functions, one acting on the right-moving states, and the other on the left-moving states. In the proposed sweeping permutation automata, both functions must be bijections, whereas the transition functions at the end-markers must be injective. A formal definition of the new model is given in Section 2.

The main motivation for the study of sweeping permutation automata (2PerFA) is that these automata recognize the same family of languages as the classical one-way permutation automata (1PerFA). This result is established in Section 3 by showing that if the optimal transformation of two-way automata to one-way, as defined by Kapoutsis [6, 7], is carefully applied to a 2PerFA, then it always produces a 1PerFA.

The next question studied in this paper is the number of states in a 1PerFA needed to simulate an \(n\)-state 2PerFA. The number of states used in the transformation in Section 3 depends on the partition of \(n\) states of the 2PerFA into \(k\) right-moving and \(\ell\) left-moving states, and also on the number \(m\) of left-moving states in which there are no transitions by the left end-marker (\(\vdash\)). The resulting 1PerFA has \(k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\) states. A matching lower bound for each triple \((k,\ell,m)\), with \(k>\ell\) and \(m>0\), is established in Section 4, where it is proved that there exists a 2PerFA with \(k\) right-moving states and \(\ell\) left-moving states, and with the given value of \(m\), such that every one-way deterministic automaton (1DFA) recognizing the same language must have at least \(k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\) states.

The desired state complexity of transforming two-way permutation automata to one-way should give the number of states in a 1PerFA that is sufficient and in the worst case necessary to simulate every 2PerFA with \(n\) states. Note that the minimal 1DFA for a group language is always a 1PerFA [4], and hence this is the same state complexity tradeoff as from 2PerFA to 1DFA. The following function gives an upper bound on this state complexity. \[F(n)=\max_{\begin{subarray}{c}k+\ell=n\\ m\leqslant\ell\end{subarray}}k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] This bound is proved to be precise in Section 5, where it is shown that the maximum in the formula is reached for \(k=\lfloor\frac{n+2}{2}\rfloor\) and \(\ell=\lceil\frac{n-2}{2}\rceil\), that is, for \(k>\ell\). Since Section 4 provides witness languages for these values of \(k\) and \(\ell\) that require \(k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\) states in every 1PerFA, this gives a lower bound of \(F(n)\). Finally, the growth rate of the function \(F(n)\) is estimated as \(F(n)=n^{\frac{n}{2}-\frac{1+\ln 2}{2}\frac{n}{\ln n}(1+o(1))}\) using Stirling's approximation.

An alternative, more general definition of sweeping permutation automata, which allows acceptance both at the left end-marker and at the right end-marker, is presented in Section 6. A proof that they can still be transformed to 1PerFA is sketched, but the generalized transformation uses more states.
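For readers who want to evaluate the bound directly, the following short script (ours, not from the paper) computes \(F(n)\) and a maximizing triple \((k,\ell,m)\) by exhaustive search; its values agree with the \(F(n)\) column of Table 1 in Section 5.

```python
from math import comb, factorial

def F(n):
    """F(n) = max over k + l = n, m <= l of k * C(l, m) * C(k-1, l-m) * (l-m)!"""
    best = (0, None)
    for k in range(1, n + 1):
        l = n - k
        for m in range(0, l + 1):
            if l - m > k - 1:      # C(k-1, l-m) vanishes, i.e. m >= l - k + 1 fails
                continue
            v = k * comb(l, m) * comb(k - 1, l - m) * factorial(l - m)
            if v > best[0]:
                best = (v, (k, l, m))
    return best

for n in range(1, 13):
    print(n, *F(n))   # e.g. F(8) = 180 at (k, l, m) = (5, 3, 1)
```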
## 2 Definition

A one-way permutation automaton (1PerFA) is a one-way deterministic finite automaton (1DFA) in which the transition function by every symbol is a bijection [16]. This restriction is adapted to the more general _sweeping automata_[15], in which the set of states is divided into disjoint classes of right-moving (\(Q_{+}\)) and left-moving (\(Q_{-}\)) states, so that the automaton may turn only at the end-markers. In the proposed _sweeping permutation automata_, the transition function by each symbol forms one left-to-right bijection and another right-to-left bijection. Transitions at the end-markers are injective partial functions.

**Definition 1**.: A sweeping permutation automaton (2PerFA) is a 9-tuple \(\mathcal{A}=(\Sigma,Q_{+},Q_{-},q_{0},\langle\delta_{a}^{+}\rangle_{a\in\Sigma},\langle\delta_{a}^{-}\rangle_{a\in\Sigma},\delta_{\vdash},\delta_{\dashv},F)\), where

* \(\Sigma\) is the alphabet;
* \(Q_{+}\cup Q_{-}\) is the set of states, where \(Q_{+}\cap Q_{-}=\varnothing\);
* \(q_{0}\in Q_{+}\) is the initial state;
* for each symbol \(a\in\Sigma\), \(\delta_{a}^{+}\colon Q_{+}\to Q_{+}\) and \(\delta_{a}^{-}\colon Q_{-}\to Q_{-}\) are bijective transition functions;
* the transition functions at the end-markers \(\delta_{\vdash}\colon(Q_{-}\cup\{q_{0}\})\to Q_{+}\), \(\delta_{\dashv}\colon Q_{+}\to Q_{-}\) are partially defined and injective on their respective domains;
* \(F\subseteq Q_{+}\) is the set of accepting states, with \(\delta_{\dashv}(q)\) undefined for all \(q\in F\).

The computation of the automaton is defined in the same way as for sweeping automata of the general form. Given an input string \(w=a_{1}\ldots a_{m}\in\Sigma^{*}\), the automaton operates on a tape \(\vdash a_{1}\ldots a_{m}\dashv\). Its computation is a uniquely defined sequence of configurations, which are pairs \((q,i)\) of a current state \(q\in Q_{+}\cup Q_{-}\) and a position \(i\in\{0,1,\ldots,m+1\}\) on the tape. It starts in the configuration \((q_{0},0)\) and makes a transition to \((\delta_{\vdash}(q_{0}),1)\). If the automaton is in a configuration \((q,i)\) with \(q\in Q_{+}\) and \(i\in\{1,\ldots,m\}\), it moves to the next configuration \((\delta_{a_{i}}^{+}(q),i+1)\). Once the automaton is in a configuration \((q,m+1)\), it accepts if \(q\in F\), or moves to \((\delta_{\dashv}(q),m)\) if \(\delta_{\dashv}(q)\) is defined, and rejects otherwise. In a configuration \((q,i)\) with \(q\in Q_{-}\) and \(i\in\{1,\ldots,m\}\), the automaton moves to \((\delta_{a_{i}}^{-}(q),i-1)\). Finally, in a configuration \((q,0)\) with \(q\in Q_{-}\), the automaton turns back to \((\delta_{\vdash}(q),1)\) or rejects if this transition is undefined. The language recognized by an automaton \(\mathcal{A}\), denoted by \(L(\mathcal{A})\), is the set of all strings it accepts.

Figure 1: Transitions of a 2PerFA on an input string.

A one-way permutation automaton (1PerFA) is a 2PerFA in which \(Q_{-}=\varnothing\) and \(\delta_{\dashv}\) is undefined on every state. The left end-marker can be removed, making \(\delta_{\vdash}(q_{0})\) the new initial state.

Note that a 2PerFA never loops. If it did, then some configuration would appear twice in some computation. Consider the earliest such configuration. If it is not the initial configuration, then there exists only one possible previous configuration. It appears at least twice in the computation, and it precedes the configuration considered before, a contradiction.
The repeated configuration cannot be the initial configuration, in which the 2PerFA is at the left end-marker in the state \(q_{0}\in Q_{+}\), because the automaton may return to \(\vdash\) only in the states from \(Q_{-}\).

## 3 Transformation to one-way

Since a 2PerFA is a 2DFA, the well-known transformation to a one-way automaton can be applied to it [6, 14]: after reading a prefix \(u\) of a string, the 1DFA stores the first state in which the 2PerFA eventually goes right from the last symbol of the prefix, and the function which encodes the outcomes of all computations starting at the last symbol of the prefix and ending with the transition from that symbol to the right. For a sweeping automaton, all computations encoded by the functions start in \(Q_{-}\) and end in \(Q_{+}\). Moreover, computations starting in different states should end in different states. Therefore, a one-way automaton has to remember fewer different functions of a simpler form, and eventually turns out to be a permutation automaton.

**Lemma 1**.: _For every 2PerFA \(\mathcal{A}=(\Sigma,Q_{+},Q_{-},q_{0},\langle\delta_{a}^{+}\rangle_{a\in\Sigma},\langle\delta_{a}^{-}\rangle_{a\in\Sigma},\delta_{\vdash},\delta_{\dashv},F)\) with_ \[|Q_{+}|=k,\quad|Q_{-}|=\ell,\quad|Q_{-}^{\times}|=m,\] _where \(Q_{-}^{\times}\subseteq Q_{-}\) is the set of states from which there is no transition by \(\vdash\), there exists a 1PerFA recognizing the same language which uses states of the form \((q,f)\) satisfying the following restrictions:_

* \(q\in Q_{+}\),
* \(f\colon Q_{-}\to Q_{+}\) is a partially defined function,
* \(q\notin\operatorname{Im}f\),
* \(f\) is injective,
* \(f\) is undefined on exactly \(m\) states.

Proof.: We will construct a 1PerFA \(\mathcal{B}=(\Sigma,Q,\widetilde{q}_{0},\widetilde{\delta},\widetilde{F})\); states of \(\mathcal{B}\) shall be pairs \((q,f)\), where \(q\in Q_{+}\) and \(f\colon Q_{-}\to Q_{+}\) is a partial function. After reading a prefix \(s\in\Sigma^{*}\), the automaton \(\mathcal{B}\) should come to a state \((q,f)\), where \(q\) and \(f\) describe the outcomes of the following computations of \(\mathcal{A}\) on \(s\):

* if the 2PerFA starts on \(\vdash s\) in its initial configuration, then it eventually moves from the last symbol of \(\vdash s\) to the right in the state \(q\),
* if the 2PerFA starts at the last symbol of \(\vdash s\) in a state \(p\) from \(Q_{-}\), then it eventually leaves \(s\) in the state \(f(p)\in Q_{+}\). If the computation reaches an undefined transition at \(\vdash\), then the value \(f(p)\) is undefined.

The initial state is defined as \[\widetilde{q}_{0}=(\delta_{\vdash}(q_{0}),\delta_{\vdash}\big|_{Q_{-}}),\] where \(\delta_{\vdash}\big|_{Q_{-}}\) is \(\delta_{\vdash}\) restricted to the domain \(Q_{-}\). The transition function is defined as follows: \[\widetilde{\delta}_{a}((q,f))=(\delta_{a}^{+}(q),\delta_{a}^{+}\circ f\circ\delta_{a}^{-}),\quad\text{for all }a\in\Sigma.\]

**Claim 1**.: _Every pair \((q,f)\) reachable from \(\widetilde{q}_{0}\) by transitions in \(\widetilde{\delta}\) satisfies the following conditions:_

* \(q\in Q_{+}\),
* \(f\colon Q_{-}\to Q_{+}\) _is a partially defined function,_
* \(q\notin\operatorname{Im}f\),
* \(f\) _is injective,_
* \(f\) _is undefined on exactly_ \(m\) _states._

Proof.: Induction on the length of the string. Let \((q,f)\) be reachable in \(\mathcal{B}\) by a string \(u\).
If \(u=\epsilon\), then \((q,f)\) is the initial state \[(q,f)=(\delta_{\vdash}(q_{0}),\delta_{\vdash}\big|_{Q_{-}}).\] The first two conditions are satisfied because the domain of \(\delta_{\vdash}\) is split into \(\{q_{0}\}\) and \(Q_{-}\). The third and the fourth conditions follow from the injectivity of \(\delta_{\vdash}\) and the disjointness of \(\{q_{0}\}\) and \(Q_{-}\). The states on which \(\delta_{\vdash}\big|_{Q_{-}}\) is not defined are the states in \(Q_{-}^{\times}\) by definition, and there are \(m\) of them.

Let \((q,f)\) be reachable in \(\mathcal{B}\) by a string \(u\) and let \((q^{\prime},f^{\prime})\) be reachable from it by a transition by \(a\). The induction assumption is true for the state \((q,f)\), and \((q^{\prime},f^{\prime})\) is defined as \[(q^{\prime},f^{\prime})=(\delta_{a}^{+}(q),\delta_{a}^{+}\circ f\circ\delta_{a}^{-}).\] The state \(q^{\prime}\in Q_{+}\), because \(\delta_{a}^{+}\) is a total function which acts from \(Q_{+}\) to \(Q_{+}\). The function \(f^{\prime}\) acts from \(Q_{-}\) to \(Q_{+}\), because \(\delta_{a}^{-}\) acts from \(Q_{-}\) and \(\delta_{a}^{+}\) acts to \(Q_{+}\). To see that \(\delta_{a}^{+}(q)\notin\mathrm{Im}\ \delta_{a}^{+}\circ f\circ\delta_{a}^{-}\), consider that \(\{q\}\) and \(\mathrm{Im}\ f\) are disjoint by the induction assumption, and therefore their images under the bijection \(\delta_{a}^{+}\), that is, \(\{\delta_{a}^{+}(q)\}\) and \(\mathrm{Im}\ \delta_{a}^{+}\circ f\), are disjoint as well. The function \(\delta_{a}^{+}\circ f\circ\delta_{a}^{-}\) is injective as a composition of injective functions. The function \(\delta_{a}^{+}\circ f\circ\delta_{a}^{-}\) is undefined on exactly \(m\) states, because \(f\) is, and the functions \(\delta_{a}^{+}\) and \(\delta_{a}^{-}\) are total bijections.

Let \(Q\) be the set of all pairs \((q,f)\) satisfying Claim 1.

**Claim 2**.: _After reading a prefix \(s\in\Sigma^{*}\) the automaton \(\mathcal{B}\) comes to a state \((q,f)\), where_

* _if the 2PerFA starts on \(\vdash s\) in its initial configuration, then it eventually moves from the last symbol of \(\vdash s\) to the right in the state \(q\),_
* _if the 2PerFA starts at the last symbol of \(\vdash s\) in a state \(p\) from \(Q_{-}\), then it eventually leaves \(s\) in the state \(f(p)\in Q_{+}\). If the computation reaches an undefined transition at \(\vdash\), then the value \(f(p)\) is undefined._

Proof.: Induction on the length of the string. It is clear for the empty string and the initial state. Let \((q,f)\) be the state of \(\mathcal{B}\) after reading \(s\); then \((q^{\prime},f^{\prime})=\widetilde{\delta}_{a}((q,f))\) is the state after reading \(sa\). By the induction assumption, the state \((q,f)\) and the string \(s\) satisfy the property. Then \(\mathcal{B}\) reads the symbol \(a\) and comes to the state \((\delta_{a}^{+}(q),\delta_{a}^{+}\circ f\circ\delta_{a}^{-})\). The automaton eventually leaves \(\vdash s\) to the right in the state \(q\); then it comes to \(a\) in this state and makes a transition to \(\delta_{a}^{+}(q)\), thus leaving \(\vdash sa\) to the right. To prove the second condition, let the 2PerFA start on \(\vdash sa\) at the symbol \(a\) in a state \(p\in Q_{-}\). Then it moves left to the last symbol of \(\vdash s\) in the state \(\delta_{a}^{-}(p)\). Then the computation continues on the string \(\vdash s\) and its outcome is given by the function \(f\). Eventually the 2PerFA leaves \(\vdash s\) to the right and comes to \(a\) in the state \(f(\delta_{a}^{-}(p))\).
Then the 2PerFA looks at the symbol and goes to \(\delta_{a}^{+}(f(\delta_{a}^{-}(p)))\), moving to the right. If \(f(\delta_{a}^{-}(p))\) is undefined, then so is \(f^{\prime}(p)\). So, the function \(\delta_{a}^{+}\circ f\circ\delta_{a}^{-}\) indeed satisfies the second condition.

To define \((q,f)\) as an accepting or a rejecting state, consider the following sequence of states \(\{q_{i}\}_{i\geqslant 1}\) with \(q_{i}\in Q_{+}\). The first element is \[q_{1}=q.\] For each \(q_{i}\), if the 2PerFA has a transition by \(\dashv\) from \(q_{i}\) and the function \(f\) is defined on \(\delta_{\dashv}(q_{i})\), then \[q_{i+1}=f(\delta_{\dashv}(q_{i})).\] Otherwise the sequence ends.

The sequence \(\{q_{i}\}_{i\geqslant 1}\) is always finite, because if it loops, then some state \(\widetilde{q}\) appears at least twice. Consider the earliest repeated state. If it is not \(q_{1}\), then it has a predecessor in the sequence. The predecessor of \(\widetilde{q}\) is the same for all its appearances, as \(f\circ\delta_{\dashv}\) is an injective function; therefore, \(\widetilde{q}\) is not the earliest repeated state. So \(\widetilde{q}\) must be \(q_{1}\). But since \(q_{1}\notin\operatorname{Im}f\), the state \(q_{1}\) cannot be repeated, a contradiction.

If this sequence ends with an accepting state \(q_{i}\in F\), then the state \((q,f)\) is accepting in \(\mathcal{B}\). Otherwise, the state \((q,f)\) is rejecting. The constructed one-way automaton accepts the same language as the 2PerFA, because when it comes by some string \(s\) to a state \((q,f)\), then before accepting or rejecting on \(\vdash s\dashv\) the 2PerFA passes through the sequence of states \(\{q_{i}\}_{i\geqslant 1}\), with \(q_{i}\in Q_{+}\), at \(\dashv\).

**Claim 3**.: _The resulting one-way automaton is a permutation automaton._

Proof.: We will prove that the transition function \(\widetilde{\delta}_{a}\) is a bijection for each symbol \(a\in\Sigma\). Firstly, we show its injectivity. Let \[\widetilde{\delta}_{a}((q_{1},f_{1}))=\widetilde{\delta}_{a}((q_{2},f_{2})).\] Then, by the definition of \(\widetilde{\delta}_{a}\), \[(\delta_{a}^{+}(q_{1}),\delta_{a}^{+}\circ f_{1}\circ\delta_{a}^{-})=(\delta_{a}^{+}(q_{2}),\delta_{a}^{+}\circ f_{2}\circ\delta_{a}^{-}).\] Then \(\delta_{a}^{+}(q_{1})=\delta_{a}^{+}(q_{2})\), which means that \(q_{1}=q_{2}\), because \(\delta_{a}^{+}\) is a bijection. Then consider the equality \[\delta_{a}^{+}\circ f_{1}\circ\delta_{a}^{-}=\delta_{a}^{+}\circ f_{2}\circ\delta_{a}^{-}.\] Taking a composition of both sides of the equation with \((\delta_{a}^{+})^{-1}\) on the left and \((\delta_{a}^{-})^{-1}\) on the right yields \[(\delta_{a}^{+})^{-1}\circ\delta_{a}^{+}\circ f_{1}\circ\delta_{a}^{-}\circ(\delta_{a}^{-})^{-1}=(\delta_{a}^{+})^{-1}\circ\delta_{a}^{+}\circ f_{2}\circ\delta_{a}^{-}\circ(\delta_{a}^{-})^{-1}.\] Then \(f_{1}=f_{2}\), and the injectivity is proved. The function \(\widetilde{\delta}_{a}\) is total and injective, and its domain and range coincide with the finite set \(Q\); it is therefore a bijection.

This completes the proof of the lemma.
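As an illustration of the construction just proved, here is a runnable sketch (a toy instance chosen by us) of the simulated 1PerFA: partial functions are dictionaries, the start state and transitions follow Lemma 1, and acceptance follows the sequence \(q_{1},q_{2},\ldots\) at the right end-marker. The toy 2PerFA is ours and accepts exactly the strings with an odd number of \(a\)'s.

```python
# Toy 2PerFA over {a}: Q+ = {0, 1}, Q- = {'r'}; end-marker transitions are
# our own choice (m = 1, since 'r' has no transition by the left end-marker).
Qm = ['r']
q0 = 0
delta_L = {q0: 0}              # left end-marker; missing keys are undefined
delta_R = {0: 'r'}             # right end-marker; state 1 is accepting, so it has none
delta_p = {'a': {0: 1, 1: 0}}  # bijection on Q+ per symbol
delta_m = {'a': {'r': 'r'}}    # bijection on Q- per symbol
accepting = {1}

def accepts(word):
    # Start state of Lemma 1: (delta_L(q0), delta_L restricted to Q-)
    q, f = delta_L[q0], {p: delta_L[p] for p in Qm if p in delta_L}
    for sym in word:
        # (q, f) -> (delta_a_plus(q), delta_a_plus . f . delta_a_minus)
        f = {p: delta_p[sym][f[delta_m[sym][p]]]
             for p in Qm if delta_m[sym][p] in f}
        q = delta_p[sym][q]
    # Follow q_1, q_2, ... at the right end-marker; the proof shows no loop occurs
    while True:
        if q in accepting:
            return True
        if q not in delta_R or delta_R[q] not in f:
            return False
        q = f[delta_R[q]]

print([w for w in ('', 'a', 'aa', 'aaa', 'aaaa') if accepts(w)])  # ['a', 'aaa']
```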
**Theorem 1**.: _For every 2PerFA \(\mathcal{A}=(\Sigma,Q_{+},Q_{-},q_{0},\langle\delta_{a}^{+}\rangle_{a\in\Sigma},\langle\delta_{a}^{-}\rangle_{a\in\Sigma},\delta_{\vdash},\delta_{\dashv},F)\) with_ \[|Q_{+}|=k,\quad|Q_{-}|=\ell,\quad|Q_{-}^{\times}|=m,\] _where \(Q_{-}^{\times}\subseteq Q_{-}\) is the set of states from which there is no transition by \(\vdash\), there exists a 1PerFA with at most_ \[k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] _states that recognizes the same language._

Proof.: Consider the one-way automaton \(\mathcal{B}\) obtained for the 2PerFA \(\mathcal{A}\) in Lemma 1. Every state \((q,f)\) of \(\mathcal{B}\) satisfies the following conditions:

* \(q\in Q_{+}\),
* \(f\colon Q_{-}\to Q_{+}\) is a partially defined function,
* \(q\notin\operatorname{Im}f\),
* \(f\) is injective,
* and \(f\) is undefined on exactly \(m\) states.

For a fixed \(q\in Q_{+}\), let us count the number of functions satisfying the conditions above: firstly, we should choose \(m\) states from \(Q_{-}\) on which \(f\) is not defined. Secondly, from the \(k-1\) states of \(Q_{+}\setminus\{q\}\) we should choose \(\ell-m\) different values for the range of \(f\). And lastly, we can choose a bijection between these two sets in \((\ell-m)!\) ways. \[\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] By multiplying this number by \(k\) (the number of different states \(q\)), we get the claimed number of states.

If a two-way automaton has \(n\) states in total, then there is only a finite number of partitions into left-moving and right-moving states, and finitely many choices of \(m\), and hence the following number of states is sufficient to transform this automaton to one-way. \[F(n)=\max_{\begin{subarray}{c}k,\ell,m\\ k>0,\ \ell\geq m\geq 0\\ m\geq\ell-k+1\end{subarray}}k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\]

**Corollary 1**.: _For every \(n\)-state 2PerFA there exists a 1PerFA with \(F(n)\) states that recognizes the same language._

Later it will be proved that \(F(n)\) is a sharp bound, that is, for some \(n\)-state 2PerFA every 1PerFA recognizing the same language has to have at least \(F(n)\) states.

## 4 Lower bound on the number of states

In this section it will be shown that the upper bound on the number of states in a 1DFA needed to simulate a 2PerFA is sharp for each triple \((k,\ell,m)\), where \(k>\ell\) and \(\ell\geqslant m>0\). Only the case of \(k>\ell\) is considered, because, as will be shown later, the maximum over \((k,\ell,m)\) in \(F(n)\) is reached for \(k>\ell\) (in other words, a 2PerFA that requires the maximum number of states in a 1PerFA has \(|Q_{+}|>|Q_{-}|\)).

**Theorem 2**.: _For all \(k,\ell,m\) with \(k>\ell>0\) and \(\ell\geqslant m>0\) there exists a 2PerFA \(\mathcal{A}=(\Sigma,Q_{+},Q_{-},q_{0},\langle\delta_{a}^{+}\rangle_{a\in\Sigma},\langle\delta_{a}^{-}\rangle_{a\in\Sigma},\delta_{\vdash},\delta_{\dashv},F)\) such that_ \[|Q_{+}|=k,\quad|Q_{-}|=\ell,\] _the function \(\delta_{\vdash}\) is undefined on exactly \(m\) arguments from \(Q_{-}\), and every 1DFA recognizing \(L(\mathcal{A})\) must have at least_ \[k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] _states._

Proof.: Fix \(k,\ell,m\) and consider a 2PerFA \(\mathcal{A}=(\Sigma,Q_{+},Q_{-},q_{0},\langle\delta_{a}^{+}\rangle_{a\in\Sigma},\langle\delta_{a}^{-}\rangle_{a\in\Sigma},\delta_{\vdash},\delta_{\dashv},F)\), where

* \(Q_{+}=\{q_{0},\ldots,q_{k-1}\}\), \(Q_{-}=\{r_{0},\ldots,r_{\ell-1}\}\).
* The initial state is \(q_{0}\) and the accepting states are \(\{q_{\ell},\ldots,q_{k-1}\}\).
* The functions \(\delta_{a}^{+}\) and \(\delta_{b}^{+}\) are generators of the permutation group on the set \(Q_{+}\) (for example, these could be a cycle on all elements of \(Q_{+}\) and an elementary transposition). Similarly, \(\delta_{c}^{-},\delta_{d}^{-}\) are generators of the permutation group on the set \(Q_{-}\), and \(\delta_{a}^{-},\delta_{b}^{-},\delta_{c}^{+},\delta_{d}^{+}\) are identity functions.
* Transitions at the left end-marker are \(\delta_{\vdash}(q_{0})=q_{0}\) and \(\delta_{\vdash}(r_{i})=q_{i+1}\) for \(0\leqslant i<\ell-m\). There are no transitions by \(\vdash\) in the remaining \(m\) states.
* Transitions at the right end-marker are \(\delta_{\dashv}(q_{i})=r_{i}\) for \(0\leqslant i<\ell\). There are no transitions by \(\dashv\) in the remaining \(k-\ell\) states.

Figure 3: Symbols \(a,b,c,d\) in the construction of \(\mathcal{A}\).

The proof of the lower bound on the size of any 1DFA recognizing this language is by showing that the automaton \(\mathcal{B}=(\Sigma,Q,\widetilde{q}_{0},\widetilde{\delta},\widetilde{F})\), obtained from \(\mathcal{A}\) by the transformation in Lemma 1, is minimal. We will show that every state is reachable and that for every two states there exists a separating string.

First, we prove reachability. Consider a state \((q,f)\). It satisfies the following conditions from Lemma 1:

* \(q\in Q_{+}\),
* \(f\colon Q_{-}\to Q_{+}\) is a partially defined function,
* \(q\notin\operatorname{Im}f\),
* \(f\) is injective,
* \(f\) is undefined on exactly \(m\) states.

Let \(\sigma\) be a permutation on the set \(Q_{-}\) that maps the states on which \(f\) is not defined to the \(m\) states from \(Q_{-}\) without a transition by \(\vdash\). Take a string \(u_{\sigma}\in\{c,d\}^{*}\) that, when read from right to left, implements the permutation \(\sigma\), and acts as an identity on \(Q_{+}\) when read from left to right. Next, the goal is to define a permutation \(\pi\) on the set \(Q_{+}\) that maps \(q_{0}\) to \(q\), and, for each state \(q^{\prime}\in Q_{-}\) on which \(f\) is defined, it should map the state \(\delta_{\vdash}(\sigma(q^{\prime}))\) to the state \(f(q^{\prime})\), as shown in Figure 4. Note that \(\delta_{\vdash}(\sigma(q^{\prime}))\) is defined for all \(q^{\prime}\) in the domain of \(f\). We can introduce such a permutation because each state \(\delta_{\vdash}(\sigma(q^{\prime}))\) is not equal to \(q_{0}\) and they are all pairwise distinct (as \(\sigma\) is a permutation and \(\delta_{\vdash}\) is an injection). Also, each state \(f(q^{\prime})\) is not equal to \(q\) and they are all pairwise distinct too. Take the string \(v_{\pi}\in\{a,b\}^{*}\) that, when read from left to right, implements \(\pi\), and is an identity on \(Q_{-}\) if read from right to left. So, by the string \(v_{\pi}u_{\sigma}\) we reach the state \((q,f)\).

Next, the existence of a separating string for all pairs of states is proved. Consider different states \((q_{1},f_{1})\) and \((q_{2},f_{2})\). Let them be reached by strings \(s_{1}\) and \(s_{2}\), respectively. There are several cases.

* The states \(q_{1},q_{2}\) are different, as shown in Figure 5. Fix a state \(r\in Q_{-}\) with no transition by the left end-marker defined. Since \(\mathcal{A}\) is a permutation automaton, there exists a state \(\widetilde{r}\in Q_{-}\) such that after reading \(s_{1}\) from right to left starting in the state \(\widetilde{r}\), the automaton is in the state \(r\).
Also let \(q^{\prime}_{1}\in Q_{+}\) be the state from which there is a transition to \(\widetilde{r}\) by the right end-marker: such a state exists, because in the 2PerFA there are transitions to all states in \(Q_{-}\) by the right end-marker. Then let \(\pi\) be such a permutation on the set \(Q_{+}\) that maps \(q_{1}\) to \(q_{1}^{\prime}\) and \(q_{2}\) to an accepting state \(q_{2}^{\prime}\in Q_{+}\). Let \(v_{\pi}\in\{a,b\}^{*}\) be the string that implements the permutation \(\pi\) when read from left to right. So, the string \(s_{1}v_{\pi}\) will be rejected by 2PerFA and the string \(s_{2}v_{\pi}\) will be accepted. That is, \(v_{\pi}\) is a separating string. * The states \(q_{1},q_{2}\) are the same, but \(f_{1}\neq f_{2}\). Because \(f_{1}\neq f_{2}\) there exists a state \(r\in Q_{-}\) on which these functions differ, that is, either one of \(f_{1}(r),f_{2}(r)\) is defined and the other is not, or both are defined and are different states. 1. First, assume that \(f_{1}\) is defined on \(r\), but \(f_{2}\) is not (the case of \(f_{2}(r)\) defined and \(f_{1}(r)\) undefined is symmetric). It is true that \(f_{1}(r)\neq q_{1}\), because \(q_{1}\notin\operatorname{Im}f_{1}\) in every state of \(\mathcal{B}\). Fix the state \(q\) with a transition from it to \(r\) by the right end-marker. Then let \(\pi\) be such a permutation of the set \(Q_{+}\) that maps \(q_{1}\) to \(q\) and \(f_{1}(r)\) to an accepting state. And let the string \(v_{\pi}\in\{a,b\}^{*}\) implement \(\pi\) when the 2PerFA reads it from left to right as shown in Figure 6. The string \(s_{1}v_{\pi}\) will be accepted by the 2PerFA and the string \(s_{2}v_{\pi}\) will be rejected. So, \(v_{\pi}\) is a separating string. 2. Both functions \(f_{1}\) and \(f_{2}\) are defined on \(r\). It is true that \(f_{1}(r)\neq q_{1}\) and \(f_{2}(r)\neq q_{1}\). Again, fix the state \(q\) from which there is a transition to \(r\) by the right end-marker. Then fix a state \(\widetilde{r}\) from \(Q_{-}\) without a transition by the left end-marker (it exists because \(m\geqslant 1\)). Let \(r^{*}\in Q_{-}\) be the state from which the 2PerFA reads \(s_{1}\) and finishes in \(\widetilde{r}\). We can choose such a state because the 2PerFA is a permutation automaton, and this state cannot coincide with \(r\) because in this case \(f_{1}(r)\) would be undefined. Then let \(q^{*}\in Q_{+}\) be the state, from which there is a transition by the right end-marker to \(r^{*}\): it exists because there are such transitions to all states in \(Q_{-}\). Let \(v_{\pi}\in\{a,b\}^{*}\) implement a permutation \(\pi\) on \(Q_{+}\) that maps \(q_{1}\) to \(q\), \(f_{1}(r)\) to \(q^{*}\) and \(f_{2}(r)\) to an accepting state, as illustrated in Figure 7. The string \(s_{1}v_{\pi}\) will be rejected by the 2PerFA, and the string \(s_{2}v_{\pi}\) will be accepted, so, \(v_{\pi}\) is a separating string. ## 5 Optimal partition of \(n\) in \(F(n)\) and the logarithmic asymptotics of \(F(n)\) It has been proved above that every \(n\)-state 2PerFA can be transformed to an equivalent 1PerFA with \[F(n)=\max_{\begin{subarray}{c}k,\ell,m\\ k>0,\ \ell\geq m\geq 0\\ m\geq\ell-k+1\end{subarray}}G(k,\ell,m)\] states, where \[G(k,\ell,m)=k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] What is the optimal partition of \(n\) states into \(k\) right-moving and \(\ell\) left-moving states, and what is the optimal number \(m\) of unused states at the left end-marker? This question is answered in the following lemma. 
**Lemma 2**.: _For every fixed \(n=k+\ell\) the function_ \[G(k,\ell,m)=k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] _is defined for \(k>0\), \(\ell\geq m\geq 0\), \(m\geq\ell-k+1\), and reaches its maximum value on a triple \((k,\ell,m)\), where \(k>\ell\). If \(n\geq 8\), then the optimal values of the arguments are:_ \[k=\left\lfloor\frac{n+2}{2}\right\rfloor,\quad\ell=\left\lceil\frac{n-2}{2}\right\rceil,\quad m=\begin{cases}\lceil\frac{\sqrt{3+2n}-3}{2}\rceil,&\text{$n$ is odd}\\ \lceil\frac{\sqrt{4+2n}-4}{2}\rceil,&\text{$n$ is even}\end{cases}\]

Sketch of a proof.: To prove this, we first find the optimal value of \(m\) for a fixed pair \((k,\ell)\). Denote it by \(m_{\text{opt}}=m_{\text{opt}}(k,\ell)\). Then we analyse the ratio \[\frac{G(k,\ell,m_{\text{opt}}(k,\ell))}{G(k+1,\ell-1,m_{\text{opt}}(k+1,\ell-1))}.\]

Figure 6: A separating string for states \((q_{1},f_{1})\) and \((q_{2},f_{2})\), with \(q_{1}=q_{2}\), \(f_{1}(r)\) defined, \(f_{2}(r)\) undefined.

It is proved that this ratio is at least \(1\) if \(k>\ell\) and \(n\geqslant 8\), and at most \(1\) if \(k\leqslant\ell\). Therefore, the optimal partition \(n=k+\ell\) has \(k>\ell\). If \(n\geqslant 8\), then it has \(k=\ell+1\) or \(k=\ell+2\), depending on the parity of \(n\), and the optimal values of \(k\) and \(\ell\) are \[k=\left\lfloor\frac{n+2}{2}\right\rfloor,\quad\ell=\left\lceil\frac{n-2}{2}\right\rceil.\] There is a formula for \(m_{\mathrm{opt}}(k,\ell)\), and its value for approximately equal \(k\) and \(\ell\) is \[m_{\mathrm{opt}}(k,\ell)=\left\lceil\frac{\sqrt{D}+\ell-k-2}{2}\right\rceil,\quad\text{where }D=(k-\ell)^{2}+4(\ell+1).\] Then, for a given \(n\geqslant 8\), the claimed optimal value of \(m\) can be found by substituting the optimal values of \(k\) and \(\ell\) into the formula for \(m_{\mathrm{opt}}\).

With the optimal values of \(k\), \(\ell\) and \(m\) determined, the main result of this paper can now be stated.

**Theorem 3**.: _Let \(n\geqslant 1\). For every \(n\)-state 2PerFA there exists a 1PerFA with \(F(n)\) states that recognizes the same language, and in the worst case \(F(n)\) states in a 1PerFA are necessary._

Proof.: The upper bound is given in Corollary 1. For the lower bound, for every \(n\geqslant 8\), let \(k\), \(\ell\) and \(m\) be as in Lemma 2. Then, since \(k>\ell\), Theorem 2 presents the desired \(n\)-state 2PerFA, for which every 1PerFA recognizing the same language must have at least \(G(k,\ell,m)=F(n)\) states. For \(n=5,6,7\), a calculation of possible values \(k,\ell,m\) shows that the maximum of \(G(k,\ell,m)\) is reached for \(m=1\). Then, Theorem 2 is still applicable and provides the witness languages. For \(n=4\), the optimal values given by a calculation are \(k=3\), \(\ell=1\) and \(m=0\). Nevertheless, the same automaton as in Theorem 2 still provides the desired lower bound, which was checked by a computer calculation. Finally, \(F(n)=n\) for \(n=1,2,3\), and (trivial) witness languages are \((a^{n})^{*}\).

So, \(F(n)=G(k,\ell,m)\) for the specified values of \(k,\ell,m\). As \[G(k,\ell,m)=k\cdot\binom{\ell}{m}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!=\frac{k!\ell!}{(k-1-\ell+m)!m!(\ell-m)!},\] the asymptotics of \(F(n)\) can be determined by using Stirling's approximation of factorials for \(k,\ell,m\) from the optimal partition. The final result is given in the following theorem.
**Theorem 4**.: \(F(n)=n^{\frac{n}{2}-\frac{1+\ln 2}{2}\frac{n}{\ln n}(1+o(1))}\)_._

To compare, the transformation of 2DFA of the general form to 1DFA has the sharp bound proved by Kapoutsis [7]: \[n(n^{n}-(n-1)^{n})+1.\] The transformation of sweeping 2DFA to 1DFA [3] requires slightly fewer states, yet still of the order \(n^{n(1+o(1))}\): \[\varphi(n)=\max_{k=1}^{n}k^{n-k+1}+1=n^{n-\frac{n\ln\ln n}{\ln n}+O(\frac{n}{\ln n})}.\] Evidently, in the case of 2PerFA, the cost of the transformation to one-way is substantially reduced (with the exponent divided by two). The transformation complexity in these three cases is compared for small values of \(n\) in Table 1.

Table 1: The value of \(F(n)\) compared to the known transformations for irreversible 2DFA, for small values of \(n\).

| \(n\) | \(F(n)\) (2PerFA to 1DFA) | \(\max_{k=1}^{n}k^{n-k+1}+1\) (sweeping to 1DFA) | \(n(n^{n}-(n-1)^{n})+1\) (2DFA to 1DFA) |
| ---: | ---: | ---: | ---: |
| 1 | 1 | 2 | 2 |
| 2 | 2 | 3 | 7 |
| 3 | 3 | 5 | 58 |
| 4 | 6 | 10 | 701 |
| 5 | 12 | 28 | 10506 |
| 6 | 24 | 82 | 186187 |
| 7 | 72 | 257 | 3805250 |
| 8 | 180 | 1025 | 88099321 |
| 9 | 480 | 4097 | 2278824850 |
| 10 | 1440 | 16385 | 65132155991 |
| 11 | 3600 | 78126 | 2038428376722 |
| 12 | 12600 | 390626 | 69332064858421 |

## 6 A more general definition

Consider a variant of the definition of a 2PerFA, in which acceptance is also allowed at the left end-marker in states from \(Q_{-}\). It entails that, in a transformation from 2PerFA to 1PerFA, in each state \((q,f)\) the function \(f\) operates from \(Q_{-}\) to \(Q_{+}\cup\{\text{ACC},\text{REJ}\}\). Upon reading a string \(u\in\Sigma^{*}\), the 1PerFA comes to a state \((q,f)\), where \(q\in Q_{+}\) is the state in which the 2PerFA first moves to the right from the last symbol of \(\vdash u\), and for every state \(r\in Q_{-}\) and \(p\in Q_{+}\), if \(f(r)=p\), then the 2PerFA, after reading \(\vdash u\) starting at its last symbol in the state \(r\), finishes in the state \(p\). If \(f(r)=\text{REJ}\), then, after reading \(\vdash u\) from right to left starting in \(r\), the 2PerFA rejects at the left end-marker. And if \(f(r)=\text{ACC}\), then, after reading \(\vdash u\) from right to left starting in \(r\), the 2PerFA accepts at the left end-marker.

The transition function \(\delta\), the initial state and the set of accepting states will be defined similarly to Lemma 1. The automaton constructed by this transformation will be a permutation automaton. To prove this claim, \(\delta\) is first shown to be injective, and then bijectivity follows from the equality of its domain and range. As in the proof of Lemma 1, suppose that \(\delta\) is not injective and takes the same value on two different states: \[\delta((q_{1},f_{1}))=\delta((q_{2},f_{2})),\] \[(\delta_{a}^{+}(q_{1}),\delta_{a}^{+}\circ f_{1}\circ\delta_{a}^{-})=(\delta_{a}^{+}(q_{2}),\delta_{a}^{+}\circ f_{2}\circ\delta_{a}^{-}).\] From \(\delta_{a}^{+}(q_{1})=\delta_{a}^{+}(q_{2})\) it follows that \(q_{1}=q_{2}\), as \(\delta_{a}^{+}\) is a bijection.
And from \[\delta_{a}^{+}\circ f_{1}\circ\delta_{a}^{-}=\delta_{a}^{+}\circ f_{2}\circ\delta_{a}^{-},\] by composing with the inverse functions \((\delta_{a}^{+})^{-1}\) and \((\delta_{a}^{-})^{-1}\) on the left and on the right, respectively, the following equation is obtained: \[(\delta_{a}^{+})^{-1}\circ\delta_{a}^{+}\circ f_{1}\circ\delta_{a}^{-}\circ(\delta_{a}^{-})^{-1}=(\delta_{a}^{+})^{-1}\circ\delta_{a}^{+}\circ f_{2}\circ\delta_{a}^{-}\circ(\delta_{a}^{-})^{-1},\] \[f_{1}=f_{2}.\] So \((q_{1},f_{1})=(q_{2},f_{2})\); therefore, \(\delta\) is a bijection and the constructed automaton is a permutation automaton.

Denote the number of accepting states in \(Q_{-}\) by \(e\). The exact number of states in the constructed 1PerFA is given in the following theorem.

**Theorem 5**.: _For every 2PerFA \(\mathcal{A}=(\Sigma,Q_{+},Q_{-},q_{0},\langle\delta_{a}^{+}\rangle_{a\in\Sigma},\langle\delta_{a}^{-}\rangle_{a\in\Sigma},\delta_{\vdash},\delta_{\dashv},F)\) with \(F\subseteq Q_{+}\cup Q_{-}\) and_ \[|Q_{+}|=k,\quad|Q_{-}|=\ell,\quad|Q_{-}^{\times}|=m,\quad|F\cap Q_{-}|=e,\] _where \(Q_{-}^{\times}\subseteq Q_{-}\) is the set of rejecting states from which there is no transition by \(\vdash\), there exists a 1PerFA with at most_ \[k\cdot\binom{\ell}{m}\cdot\binom{m}{e}\cdot\binom{k-1}{\ell-m}\cdot(\ell-m)!\] _states that recognizes the same language._

## 7 Conclusion

The complexity of transforming sweeping permutation automata (2PerFA) to classical one-way permutation automata (1PerFA) has been determined precisely. A suggested question for future research is the state complexity of operations on 2PerFA. Indeed, the state complexity of operations on 1PerFA has recently been investigated [4, 13], and the state complexity of operations on 2DFA of the general form has been studied as well [5]; it would be interesting to know how the case of 2PerFA compares to these related models.
2308.16634
Effect of initial-state geometric configurations on the nuclear liquid-gas phase transition
Within the framework of an extended quantum molecular dynamics model, we simulated $^{40}$Ca + $^{16}$O collisions at beam energies ranging from 60 to 150 MeV/nucleon for $^{16}$O with different $\alpha$-cluster configurations. Results imply that different $\alpha$-cluster configurations lead to different yields of deuteron, triton, $^3$He and $^4$He, but not for proton and neutron. We discuss the effect of geometric fluctuations which are presented by double ratios of light nuclei, namely $\mathcal{O}_\text{p-d-t}$ and $\mathcal{O}_\text{p-d-He}$. It is found that magnitude hierarchy of geometric fluctuations is chain, kite, square and tetrahedron structure of $^{16}$O. $\mathcal{O}_\text{p-d-t}$ has maximum value around 80 -- 100 MeV/nucleon which could be related to liquid-gas phase transition, that is consistent with results from the charge distribution of the heaviest fragments in the collisions.
Y. T. Cao, X. G. Deng, Y. G. Ma
2023-08-31T11:02:09Z
http://arxiv.org/abs/2308.16634v1
# Effect of initial-state geometric configurations on the nuclear liquid-gas phase transition

###### Abstract

Within the framework of an extended quantum molecular dynamics model, we simulated \({}^{40}\)Ca + \({}^{16}\)O collisions at beam energies ranging from 60 to 150 MeV/nucleon for \({}^{16}\)O with different \(\alpha\)-cluster configurations. Results imply that different \(\alpha\)-cluster configurations lead to different yields of deuteron, triton, \({}^{3}\)He and \({}^{4}\)He, but not for proton and neutron. We discuss the effect of geometric fluctuations which are presented by double ratios of light nuclei, namely \(\mathcal{O}_{\text{p-d-t}}\) and \(\mathcal{O}_{\text{p-d-He}}\). It is found that magnitude hierarchy of geometric fluctuations is chain, kite, square and tetrahedron structure of \({}^{16}\)O. \(\mathcal{O}_{\text{p-d-t}}\) has maximum value around 80 - 100 MeV/nucleon which could be related to liquid-gas phase transition, that is consistent with results from the charge distribution of the heaviest fragments in the collisions.

pacs: 25.70.-z, 24.10.Lx, 21.30.Fe

## I Introduction

Phase transitions are a universal property of interacting matter and are generally studied in the thermodynamic limit of macroscopic systems. For the atomic nucleus, a finite-size system, phase transitions at the nucleonic level [1; 2; 3; 4; 5] or the quark level [6; 7; 8; 9; 10; 11] have been extensively discussed and investigated. The interaction between nucleons is similar to that between molecules in a van der Waals fluid, so Bertsch and Siemens [1] speculated that a nucleus may undergo a liquid-gas phase transition (LGPT) when it is heated. Theoretical and experimental efforts were made to confirm it, especially in the area of intermediate-energy heavy-ion collisions. In a certain excitation energy range, the nuclear caloric curve has a temperature plateau [2], which was interpreted as a possible indication of a phase transition [12; 13; 14; 15; 16; 17]. Experimentally, spinodal decomposition was found to have occurred in nuclear multifragmentation [18], indicating the existence of a liquid-gas coexistence region in finite nuclear systems. A negative microcanonical heat capacity has also been found in nuclear fragmentation [19], which may be related to the LGPT [20].

Clustering is a fundamental phenomenon in physics which has long attracted attention. It was proposed early on by Gamow [21] and discussed by Bethe and Bacher [22; 23], motivated by the high stability of the \(\alpha\) cluster compared with neighboring light nuclei. A cluster structure can emerge in excited states or in ground states of nuclei, especially light ones, where the nucleus resembles a molecule composed of clusters [24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. The configuration of \(\alpha\) clusters is a key problem for understanding the phenomenon of clustering in light nuclei. At present, there are many theoretical predictions on \(\alpha\)-cluster configurations in light nuclei. For instance, \({}^{16}\)O can be treated as a linear-chain structure of four \(\alpha\) clusters, as supported by the \(\alpha\) cluster model [34] and the cranked Skyrme Hartree-Fock method [35]. In the ground state, it can be regarded as a tetrahedral structure, according to nuclear chiral effective field theory [36] and covariant density functional theory [37]; the same structure is also found above the ground state by the Hartree-Fock-Bogoliubov method [38].
In the last decade, many studies have focused on density fluctuations to investigate the LGPT, as in Refs. [39; 40; 41]. Obviously, different \(\alpha\)-cluster configurations shall induce different geometric fluctuations, so we choose four \(\alpha\)-cluster configurations for the projectile \({}^{16}\)O, namely chain, kite, square and tetrahedron, to probe the density fluctuation. How different \(\alpha\)-cluster configurations affect the LGPT is considered in this work. In our study, we explore the effect of geometric fluctuation on the LGPT in low-intermediate energy heavy-ion collisions. Within the framework of the extended quantum molecular dynamics (EQMD) model, central \({}^{40}\)Ca + \({}^{16}\)O collisions at energies ranging from 60 to 150 MeV/nucleon are simulated, and the GEMINI model [42; 43; 44] is then used to de-excite heavy fragments. The organization of the paper is as follows: In Sect. II, we introduce our simulation model and method, including the EQMD model and the GEMINI model as well as the ratios of light nuclei. The effects of geometric fluctuation on the yields and (double) ratios of light nuclei are discussed in Sect. III. Moreover, the relation to the nuclear liquid-gas phase transition is pointed out via the charge distribution of the heaviest fragments in the same collisions. Finally, the conclusion is given in Sect. IV. ## II Model and Methodology ### EQMD model In the EQMD model, the wave packets of nucleons are Gaussian-like and the total wave function of the system is treated as the direct product of the wave packets of all nucleons [45] \[\Psi=\prod_{i}\varphi(\mathbf{r}_{i})=\prod_{i}(\frac{\nu_{i}+\nu_{i}^{*}}{2\pi})^{3/ 4}\text{exp}[-\frac{\nu_{i}}{2}(\mathbf{r}_{i}-\mathbf{R}_{i})^{2}+\frac{i}{\hbar}\mathbf{P} _{i}\mathbf{\cdot}\mathbf{r}_{i}]\,, \tag{1}\] where \(\mathbf{R}_{i}\) and \(\mathbf{P}_{i}\) are the centers of position and momentum of the \(i\)-th wave packet, respectively. The Gaussian width \(\nu_{i}\) is introduced as \(\nu_{i}\equiv\frac{1}{\lambda_{i}}+i\delta_{i}\), where \(\lambda_{i}\) and \(\delta_{i}\) are dynamical variables in the process of initialization. The expectation value of the Hamiltonian can be expressed as \[H =\left\langle\Psi\left|\sum_{i}-\frac{\hbar^{2}}{2m}\bigtriangledown _{i}^{2}-\hat{T}_{zero}+\hat{H}_{int}\right|\Psi\right\rangle \tag{2}\] \[=\sum_{i}\frac{\mathbf{P}_{i}^{2}}{2m}+\frac{3\hbar^{2}(1+\lambda_{i} ^{2}\delta_{i}^{2})}{4m\lambda_{i}}-T_{zero}+H_{int},\] where the first, second and third terms are the kinetic energy of the wave-packet center, the contribution of the dynamical wave packet, and the zero-point center-of-mass kinetic energy \(-T_{zero}\), respectively. The first term can be expressed as \(\left\langle\hat{\mathbf{p}}_{i}\right\rangle^{2}/2m\), the second term can be treated as \(\left(\left\langle\hat{\mathbf{p}}_{i}^{2}\right\rangle-\left\langle\hat{\mathbf{p}}_{ i}\right\rangle^{2}\right)/2m\), and the form of the third term can be found in detail in Ref. [45]. The effective interaction \(H_{int}\) consists of the Skyrme potential, the Coulomb potential, the symmetry energy, and the Pauli potential as follows \[H_{int}=H_{Skyrme}+H_{Coulomb}+H_{Symmetry}+H_{Pauli}.
\tag{3}\] The form of the Skyrme interaction is written as \[H_{Skyrme}=\frac{\alpha}{2\rho_{0}}\int\rho^{2}(\mathbf{r})d^{3}r+\frac{\beta}{( \gamma+1)\rho_{0}^{\gamma}}\int\rho^{\gamma+1}(\mathbf{r})d^{3}r, \tag{4}\] where \(\alpha=-124.3\) MeV, \(\beta=70.5\) MeV, and \(\gamma=2\), which are obtained by fitting the ground-state properties of finite nuclei. The Coulomb potential can be expressed as \[H_{Coulomb}=\frac{e^{2}}{2}\sum_{i}\sum_{j\neq i}\frac{Z_{i}Z_{j}}{r_{ij}}\, \text{erf}\!\left(\frac{r_{ij}}{\sqrt{4L}}\right)\,, \tag{5}\] where \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) and \(\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-u^{2}}du\). The symmetry potential can be written as \[H_{Symmetry}=\frac{C_{S}}{2\rho_{0}}\sum_{i,j\neq i}\int[2\delta(I_{i},I_{j}) -1]\rho_{i}(\mathbf{r})\rho_{j}(\mathbf{r})d^{3}r, \tag{6}\] where \(C_{S}\) is the symmetry energy coefficient, taken as 25 MeV in this work. It is known that the stability of nuclei in the model description is very important for studying the cluster structure effects of nuclei. As a result, in order that the saturation property and \(\alpha\)-cluster structures can be obtained after energy cooling [30], a phenomenological repulsive Pauli potential is introduced to prevent nucleons with the same spin-\(S\) and isospin-\(I\) from coming close to each other in phase space, which can be presented as \[H_{Pauli}=\frac{c_{P}}{2}\sum_{i}(f_{i}-f_{0})^{\mu}\theta(f_{i}-f_{0}), \tag{7}\] where \(f_{i}\) is the overlap of the \(i\)-th nucleon with other nucleons having the same spin and isospin, i.e. \(f_{i}\equiv\sum_{j}\delta(S_{i},S_{j})\delta(I_{i},I_{j})\left|\langle\phi_{i} |\phi_{j}\rangle\right|^{2}\), \(\theta\) is the unit step function, and \(c_{P}=15\) MeV is a coefficient denoting the strength of the Pauli potential. For the other two parameters, we take \(f_{0}=1.0\) and \(\mu=1.3\). The standard QMD model shows insufficient stability because the phase space obtained from Monte Carlo sampling is not at the energy minimum [45]. The EQMD model therefore takes into account the kinetic-energy term arising from the momentum variance of the wave packets in the Hamiltonian, which is ignored as a spurious constant term in the standard QMD [46; 47]. Besides, the wave-packet width is introduced into the Hamiltonian as a complex variable and treated as an independent dynamical variable. These modifications not only describe the ground state better, but also make the model successful in the study of nuclear cluster states. As a consequence, we treat the energy-minimum state as the ground state of the initial nucleus. A random configuration is first given to each nucleus, and under the time-dependent variational principle (TDVP) [48] the propagation of each nucleon can be described as [45] \[\dot{\mathbf{R}}_{i}=\frac{\partial H}{\partial\mathbf{P}_{i}}+\mu_{\mathbf{R}}\frac{ \partial H}{\partial\mathbf{R}_{i}},\dot{\mathbf{P}}_{i}=-\frac{\partial H}{\partial \mathbf{R}_{i}}+\mu_{\mathbf{P}}\frac{\partial H}{\partial\mathbf{P}_{i}}, \tag{8}\] where \(H\) is the expected value of the Hamiltonian, and \(\mu_{\mathbf{R}}\), \(\mu_{\mathbf{P}}\), \(\mu_{\lambda}\) and \(\mu_{\delta}\) are friction coefficients (the widths \(\lambda_{i}\) and \(\delta_{i}\) obey analogous equations of motion). During the friction cooling process, the system dissipates its energy through negative coefficients, making it evolve toward a stable state (an energy minimum or even an eigenstate) [49]. In contrast, in the subsequent nuclear reaction simulation stage, these coefficients are set to zero to maintain the energy conservation of the system.
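To illustrate how the Pauli potential of Eq. (7) acts on Gaussian wave packets, the following minimal Python sketch evaluates the phase-space overlap \(f_{i}\) and the resulting \(H_{Pauli}\). For simplicity it assumes a common, real wave-packet width \(\lambda\) (in the EQMD model the widths are complex and dynamical), so it is a simplified sketch rather than the model implementation.

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV*fm; momenta below are given as P*c in MeV

def overlap_sq(R1, P1, R2, P2, lam=2.0):
    """|<phi_i|phi_j>|^2 for two 3D Gaussian packets of common real width lam (fm^2)."""
    dR2 = np.sum((R1 - R2) ** 2)
    dP2 = np.sum((P1 - P2) ** 2)
    return np.exp(-dR2 / (2.0 * lam) - lam * dP2 / (2.0 * HBARC ** 2))

def h_pauli(R, P, spin, isospin, c_p=15.0, f0=1.0, mu=1.3, lam=2.0):
    """Pauli potential of Eq. (7): (c_p/2) * sum_i (f_i - f0)^mu * theta(f_i - f0)."""
    n = len(R)
    total = 0.0
    for i in range(n):
        # f_i includes the self-overlap (j = i) term, which equals 1,
        # so f0 = 1.0 means an isolated nucleon feels no penalty
        f_i = sum(overlap_sq(R[i], P[i], R[j], P[j], lam)
                  for j in range(n)
                  if spin[j] == spin[i] and isospin[j] == isospin[i])
        if f_i > f0:
            total += (f_i - f0) ** mu
    return 0.5 * c_p * total  # MeV

# Two same-spin, same-isospin nucleons placed close in phase space feel repulsion:
R = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
P = np.zeros((2, 3))
print(h_pauli(R, P, spin=[1, 1], isospin=[1, 1]))   # > 0: Pauli repulsion
print(h_pauli(R, P, spin=[1, -1], isospin=[1, 1]))  # = 0: opposite spins
```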
It is worth mentioning that an improvement in the performance of the inelastic process, especially the incoherent \(p\)-\(n\) bremsstrahlung process in the framework of the EQMD model, has been presented in Refs. [50; 51]. ### GEMINI model The calculation in this study is a two-step process, including both dynamical and statistical codes. At the end of the dynamical evolution, the nucleons are re-aggregated and condensed to form individual clusters [43]. The de-excitation of heavy clusters is handled by the GEMINI code of R. J. Charity [52; 53]. Given the information of a primary fragment, including its proton number \(Z\), mass number \(A\), excitation energy \(E^{*}\), and spin \(J_{CN}\), GEMINI de-excites the fragment through a series of sequential binary decays until the excitation energy of the hot fragments reaches zero. The GEMINI model deals with the evaporation of light particles in the Hauser-Feshbach formalism [54]. The partial decay width of a compound nucleus for the evaporation of particle \(i\) is expressed as \[\begin{split}\Gamma_{i}(E^{*},J_{CN})=\frac{1}{2\pi\rho_{CN}(E^{* },J_{CN})}\int d\varepsilon\sum_{J_{d}=0}^{\infty}\sum_{J=|J_{CN}-J_{d}|}^{J_{ CN}+J_{d}}\\ \times\sum_{\ell=|J-S_{i}|}^{J+S_{i}}T_{\ell}(\varepsilon)\rho_{d }(E^{*}-B_{i}-\varepsilon,J_{d})\,,\end{split} \tag{9}\] where \(J_{d}\) is the spin of the daughter nucleus, and \(S_{i}\), \(J\), and \(\ell\) are the spin, total angular momentum, and orbital angular momentum of the evaporated particle, respectively; \(\varepsilon\) and \(B_{i}\) are respectively its kinetic and separation energy; \(T_{\ell}\) is its transmission coefficient or barrier penetration factor, and \(\rho_{d}\) and \(\rho_{CN}\) are respectively the level densities of the daughter and compound nucleus. The description of intermediate-mass fragment emission follows the Moretto form [55; 56], which has been further extended to \[\begin{split}\Gamma_{Z,A}=\frac{1}{2\pi\rho_{CN}(E^{*},J_{CN})} \int d\varepsilon\,\rho_{sad}\\ (E^{*}-B_{Z,A}(J_{CN})-\varepsilon,J_{CN})\,,\end{split} \tag{10}\] where \(\rho_{sad}\) is the level density at the saddle point, \(\varepsilon\) is the kinetic energy in the fission degree of freedom at the saddle point, and \(B_{Z,A}(J_{CN})\) is the conditional barrier depending on both the mass and charge asymmetries, which can be expressed as \[B_{Z,A}(J_{CN})=B_{A}^{Sierk}(J_{CN})+\Delta M+\Delta E_{Coul}-\delta W-\delta P\,, \tag{11}\] where \(\Delta M\) and \(\Delta E_{Coul}\) are the mass and Coulomb corrections accounting for the different \(Z\) and \(A\) values of the two fragments, and \(\delta W\) and \(\delta P\) are the ground-state shell and pairing corrections to the liquid-drop barrier. The quantity \(B_{A}^{Sierk}\) is the interpolated Sierk barrier for the specified mass asymmetry. For the symmetric divisions in heavy nuclei, the GEMINI model uses the Bohr-Wheeler form [57] to predict the total symmetric fission yield \[\begin{split}\Gamma_{BW}=\frac{1}{2\pi\rho_{CN}(E^{*},J_{CN})} \int d\varepsilon\,\rho_{sad}\\ (E^{*}-B_{f}(J_{CN})-\varepsilon,J_{CN})\,,\end{split} \tag{12}\] where \(B_{f}(J_{CN})\) is the spin-dependent fission barrier, which reads \[B_{f}(J_{CN})=B_{f}^{Sierk}(J_{CN})-\delta W-\delta P\,.
\tag{13}\] ### Ratios and density fluctuation In the analytical coalescence formula COAL-SH [58] for cluster production, the yield \(N_{c}\) of a cluster at midrapidity, consisting of \(A\) constituent particles emitted from hadronic matter at kinetic freeze-out, i.e., from an emission source of effective temperature \(T_{eff}\) and volume \(V\) containing \(N_{i}\) particles of the \(i\)-th constituent species with mass \(m_{i}\), reads \[\begin{split} N_{c}=g_{rel}g_{size}g_{c}M^{3/2}\left[\prod_{i=1}^ {A}\frac{N_{i}}{m_{i}^{3/2}}\right]\\ \times\prod_{i=1}^{A-1}\frac{(4\pi/\omega)^{3/2}}{V\chi(1+\chi^{ 2})}\left(\frac{\chi^{2}}{1+\chi^{2}}\right)^{l_{i}}G(l_{i},\chi)\,.\end{split} \tag{14}\] In Eq. (14), \(M=\Sigma_{i=1}^{A}m_{i}\) is the rest mass of the cluster, \(l_{i}\) is the orbital angular momentum associated with the \(i\)-th relative coordinate, \(\omega\) is the oscillator frequency of the cluster's internal wave function and is inversely proportional to \(Mr_{rms}^{2}\), with \(r_{rms}\) being the root-mean-square (RMS) radius of the cluster, and \(G(l,\chi)=\Sigma_{k=0}^{l}\frac{1}{k!(l-k)!}\frac{1}{(2k+1)\chi^{2}}\), with \(\chi=(2T_{eff}/\omega)^{1/2}\), is the suppression factor due to the orbital angular momentum on the coalescence probability [59; 60]. Additionally, \(g_{c}=(2S+1)/(\Pi_{i=1}^{A}(2s_{i}+1))\) is the coalescence factor for constituents of spin \(s_{i}\) to form a cluster of spin \(S\), \(g_{rel}\) is the relativistic correction to the effective volume in momentum space, and \(g_{size}\) is the correction due to the finite size of the produced cluster. Taking density fluctuations of nucleons into account, the neutron and proton densities in the emission source can be expressed as [61; 62] \[n(\vec{r})=\frac{1}{V}\int n(\vec{r})d\vec{r}+\delta n(\vec{r})=\langle n \rangle+\delta n(\vec{r})\,. \tag{15}\] \[n_{p}(\vec{r})=\frac{1}{V}\int n_{p}(\vec{r})d\vec{r}+\delta n_{p}(\vec{r})= \langle n_{p}\rangle+\delta n_{p}(\vec{r})\,, \tag{16}\] where \(\langle\cdots\rangle\) represents the average value over space and \(\delta n(\vec{r})\) (\(\delta n_{p}(\vec{r})\)) with \(\langle\delta n(\vec{r})\rangle=0\) (\(\langle\delta n_{p}(\vec{r})\rangle=0\)) represents the fluctuation of the neutron (proton) density from its average value \(\langle n\rangle\) (\(\langle n_{p}\rangle\)). Then the yields of deuterons and tritons can be approximately written in an analytical coalescence framework as [61] \[\begin{split} N_{d}=\frac{3}{2^{1/2}}(\frac{2\pi}{m_{0}T_{eff}})^{3 /2}\int d\vec{r}n(\vec{r})n_{p}(\vec{r})\\ =\frac{3}{2^{1/2}}\left(\frac{2\pi}{m_{0}T_{eff}}\right)^{3/2}N_ {p}\langle n\rangle(1+\alpha\Delta n)\,,\end{split} \tag{17}\] \[\begin{split} N_{t}=\frac{3^{3/2}}{4}(\frac{2\pi}{m_{0}T_{eff}})^{3 }\int d\vec{r}n(\vec{r})^{2}n_{p}(\vec{r})\\ =\frac{3^{3/2}}{4}\left(\frac{2\pi}{m_{0}T_{eff}}\right)^{3}N_ {p}\langle n\rangle^{2}[1+(1+2\alpha)\Delta n]\,,\end{split} \tag{18}\] where \(\alpha\) is the correlation coefficient. In addition, \(\Delta n=\langle(\delta n)^{2}\rangle/\langle n\rangle^{2}\) is a dimensionless quantity that characterizes the relative density fluctuation of neutrons. Combining Eq. (17) and Eq. (18), an important double ratio can be defined as [61; 62] \[O_{1}\equiv\mathcal{O}_{\text{p-d-t}}=\frac{N_{p}N_{t}}{N_{d}^{2}}=g\,\frac{1+(1 +2\alpha)\Delta n}{(1+\alpha\Delta n)^{2}}\;, \tag{19}\] with \(g=4/9\times(3/4)^{1.5}\approx 0.29\). When \(\alpha\Delta n\) is much smaller than unity, the correction from \(\alpha\) in Eq.
(19) is second-order [61], and \(O_{1}\) can be approximated as \[O_{1}\approx 0.29(1+\Delta n)\,. \tag{20}\] In this way, \(O_{1}\) has a very simple linear dependence on \(\Delta n\), suggesting that this yield ratio of light nuclei can be taken as a direct probe of large density fluctuations, which might be associated with critical phenomena [61]. Besides, another double ratio of light nuclei involving the \(\alpha\)-particle was also proposed [63], \[O_{4}\equiv\mathcal{O}_{\text{p-d-He}}=\frac{N_{{}^{4}He}N_{p}^{2}}{N_{d}^{3 }}\,. \tag{21}\] From the results in Ref. [63], the above ratio could be taken as a potential probe of critical phenomena [64; 65; 66; 67]. From the statistical point of view, both ratios O\({}_{1}\) and O\({}_{4}\) are considered in this work. Moreover, in our simulations, single ratios such as \(N_{n}/N_{p}\) and \(N_{{}^{4}He}/N_{{}^{3}He}\) are also considered. ## III Results and discussion In the EQMD model, the Pauli potential prevents the system from collapsing into a Pauli-blocked state at low energies and gives the model the capability to describe \(\alpha\)-clustering. Before frictional cooling, the nucleon distribution of \({}^{16}\)O is random, but after friction cooling it forms something like a four-\(\alpha\) configuration. For the four-\(\alpha\) states of \({}^{16}\)O, we have chosen four configurations: chain, square, kite and tetrahedron. After the system evolves for a sufficiently long time, up to 500 fm/c, the final-state heavy fragments whose excitation energy is greater than zero and whose mass number is greater than 4 are further de-excited by the GEMINI model. For a given \(\alpha\)-cluster configuration and incident energy point, the number of simulated events is 300,000. It should be noted that, for O\({}_{1}\) and O\({}_{4}\), events with a vanishing denominator are discarded, and the spectra are filled event by event using only events with non-zero denominators. ### The effect of a chain \(\alpha\)-clustering projectile with different polarization modes In this work, we refer to the plane spanned by the \(x\) and \(z\) axes as the collision plane. Here, we polarize the chain-like \({}^{16}\)O projectile both transversely and longitudinally, as shown in Fig. 1. For comparison, the projectile is also randomly rotated over the full \(4\pi\) solid angle. It can be imagined that the projection of the projectile on the \(x-y\) plane is only one \(\alpha\)-cluster point in the case of transverse polarization, while there are four \(\alpha\)-cluster points for longitudinal polarization. In this way, different initial fluctuations among these three cases can be set, and one can determine whether they have any effect on the LGPT. Firstly, the yields of various types of fragments as a function of beam energy in collisions of chain-like \({}^{16}\)O on \({}^{40}\)Ca under the three polarization modes are shown in Fig. 2. One can see that the yields of proton and neutron increase with the incident energy and reach stable values in the energy region of 60 - 150 MeV/nucleon. The yields of deuteron, triton and \({}^{3}\)He first increase and then decrease as the incident energy increases. For deuteron, triton and \({}^{3}\)He, when the incident energy is less than 100 MeV/nucleon, their yields increase with the incident energy, because the composite system formed by \({}^{16}\)O and \({}^{40}\)Ca is in a state of fusion evaporation [68; 69; 70].
At this stage, the compression and temperature of the collision system increase as the incident energy increases. Thus it evaporates more light clusters, such as proton, neutron, deuteron, triton and \({}^{3}\)He [71]. However, with a further increase of the incident energy, the excitation energy of the system becomes so large that the system moves towards multiple fragmentation [68; 69; 70]. The phase-space volume occupied by proton and neutron becomes larger [71], which reduces the formation probability of deuteron, triton and \({}^{3}\)He. These features have been observed in previous experiments [72]. In addition, for deuteron, triton and \({}^{3}\)He under the same conditions, the larger the mass number, the smaller the yield, which is consistent with the prediction of the thermal model [73]. Figure 1: Schematic plot of the projections of (a) a transversely polarized and (b) a longitudinally polarized chain-like \({}^{16}\)O in the \(x\)-\(z\) plane at the initial stage. Different from the previous paragraph, for \({}^{4}\)He, the yield starts at almost zero below 70 MeV/nucleon, then increases with the beam energy, and finally levels off or drops slightly (see Fig. 5(f)). Moreover, the yield of \({}^{4}\)He is about ten times that of \({}^{3}\)He, which is exactly opposite to the prediction of the thermal model [73]. The yield of \({}^{4}\)He is greater than that of triton and \({}^{3}\)He, which can be attributed to the weaker Mott effect [74] on \({}^{4}\)He than on triton and \({}^{3}\)He, i.e., a light nucleus would no longer be bound if the phase-space density of its surrounding nucleons is too large [75; 76; 77]. This is because \({}^{4}\)He is well bound and compact while the other light fragments are weakly bound and loose. Furthermore, from the trend of the \({}^{4}\)He yield, we speculate that \({}^{4}\)He may be produced mainly through multiple fragmentation rather than fusion evaporation. At the beginning, when the incident energy is low, no multiple fragmentation has occurred, so the yield of \({}^{4}\)He is almost zero. With increasing incident energy, multiple fragmentation starts to occur and gradually dominates, so the yield increases. When the incident energy is large, it is difficult to decompose \({}^{4}\)He due to its large binding energy, so the yield changes little or only slightly. Figure 2: Dependence of the yields of (a) proton, (b) neutron, (c) deuteron, (d) triton, (e) \({}^{3}\)He and (f) \({}^{4}\)He on the incident energy when \({}^{16}\)O is polarized transversely (TP), longitudinally (LP) and unpolarized (NP), respectively. Figure 3: Dependence of (a) N\({}_{n}\)/N\({}_{p}\) and (b) N\({}_{{}^{4}He}\)/N\({}_{{}^{3}He}\) on the incident energy when \({}^{16}\)O is polarized transversely (TP), longitudinally (LP) and unpolarized (NP), respectively. Figure 4: Dependence of (a) O\({}_{1}\) and (b) O\({}_{4}\) on the incident energy when \({}^{16}\)O is polarized transversely (TP), longitudinally (LP) and unpolarized (NP), respectively. In Fig. 2 (a) and (b), proton and neutron are insensitive to the polarization modes. However, deuteron, triton, \({}^{3}\)He, and \({}^{4}\)He display obvious differences among the longitudinal, transverse, and unpolarized modes. The deuteron is more sensitive in the low energy region, while the opposite holds for triton, \({}^{3}\)He, and \({}^{4}\)He.
Figure 5: Dependence of the yields of (a) proton, (b) neutron, (c) deuteron, (d) triton, (e) \({}^{3}\)He and (f) \({}^{4}\)He on the incident energy when \({}^{16}\)O has four different \(\alpha\)-cluster configurations. Figure 6: Dependence of (a) N\({}_{n}\)/N\({}_{p}\) and (b) N\({}_{4He}\)/N\({}_{3He}\) on the incident energy when \({}^{16}\)O has four different \(\alpha\)-cluster configurations. Figure 7: Dependence of (a) O\({}_{1}\) and (b) O\({}_{4}\) on the incident energy when \({}^{16}\)O has four different \(\alpha\)-cluster configurations. For the ratio N\({}_{n}\)/N\({}_{p}\), which is usually taken as a sensitive probe of the neutron skin [78; 79; 80; 81], we can see from Fig. 3 (a) that it increases with the incident energy and eventually converges to 1, since the projectile and target are symmetric in this work. There is no significant difference in the value of N\({}_{n}\)/N\({}_{p}\) among the different polarization modes. Additionally, as shown in Fig. 3(b), the ratio of \({}^{4}\)He to \({}^{3}\)He has a trend similar to N\({}_{n}\)/N\({}_{p}\) but shows an obvious difference among the polarization modes, and the curve is similar to the dependence of the \({}^{4}\)He yield on the incident energy in Fig. 2 (f), indicating that the change of the \({}^{4}\)He yield is dominant. Furthermore, the ratios O\({}_{1}\) and O\({}_{4}\) as functions of the incident energy under different polarization modes (i.e., with different initial geometric fluctuations) are shown in Fig. 4; they could reflect the nucleonic density fluctuation. One expects such geometric fluctuation to be strongly related to the nucleonic density fluctuation. As mentioned above, the chain-like \({}^{16}\)O projectile polarized in the longitudinal direction has a larger geometric fluctuation than the transversely polarized one, and the geometric fluctuation for the unpolarized case lies between them. Here, one should notice that the ratios O\({}_{1}\) and O\({}_{4}\) are based on an equilibrium source, and the collision system at low energy may not reach the equilibrium condition. Without this condition, one can still construct the light-nuclei ratios, but their interpretation is less direct. From Fig. 4 (a), one can see that the ratio O\({}_{1}\) for the unpolarized case has the largest value below 80 MeV/nucleon. As the beam energy increases, however, O\({}_{1}\) for longitudinal polarization gives the largest value and that for transverse polarization the smallest, as expected. This shows that O\({}_{1}\) is sensitive to the initial-state geometric fluctuation of the projectile at higher incident energies. In Refs. [82; 83], the density fluctuation is enhanced as the beam energy or temperature increases, which is associated with the LGPT in nuclear matter. In Fig. 4 (a), the ratio O\({}_{1}\) reaches its maximum value around 90 MeV/nucleon, depending on the polarization mode. Such a turning point could have physical meaning, possibly associated with the LGPT, and will be cross-checked by the charge distribution of the heaviest fragment below. The ratio O\({}_{4}\), by contrast, tends to a stable value as the beam energy increases, without a turning point, but it is clearly sensitive to the polarization mode.
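To make the construction of these observables explicit, the following minimal Python sketch computes O\({}_{1}\) and O\({}_{4}\) event by event from fragment multiplicities, discarding zero-denominator events as described above, and inverts the linear approximation of Eq. (20) for \(\Delta n\). The event multiplicities are placeholder inputs, not actual simulation output.

```python
import numpy as np

G_PDT = 4.0 / 9.0 * (3.0 / 4.0) ** 1.5  # g ~ 0.29 in Eq. (19)

def double_ratios(events):
    """Event-by-event O1 = Np*Nt/Nd^2 and O4 = N4He*Np^2/Nd^3.

    `events` is an iterable of dicts with multiplicities for keys
    'p', 'd', 't', 'he4'. Events with a vanishing denominator are
    discarded, as described in the text.
    """
    o1, o4 = [], []
    for ev in events:
        if ev["d"] > 0:
            o1.append(ev["p"] * ev["t"] / ev["d"] ** 2)
            o4.append(ev["he4"] * ev["p"] ** 2 / ev["d"] ** 3)
    return np.mean(o1), np.mean(o4)

def delta_n_from_o1(o1):
    """Invert the linear approximation O1 ~ 0.29 (1 + dn) of Eq. (20)."""
    return o1 / G_PDT - 1.0

# placeholder multiplicities for three illustrative events
events = [dict(p=12, d=4, t=2, he4=3),
          dict(p=15, d=5, t=2, he4=4),
          dict(p=10, d=3, t=1, he4=2)]
o1, o4 = double_ratios(events)
print(f"O1 = {o1:.3f}, O4 = {o4:.3f}, dn = {delta_n_from_o1(o1):.3f}")
```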
One can also see that the trends of O\({}_{1}\) are similar to those of the triton yield, and the trends of O\({}_{4}\) are similar to those of the \({}^{4}\)He yield, from which we infer that the final-state yields of triton and \({}^{4}\)He are more sensitive to geometric fluctuation. In addition, it can be seen from Figs. 2, 3 and 4 that when the incident energy is low and the system is in the fusion-evaporation stage, the yields and various ratios of different fragments are not sensitive to the geometric configuration of \({}^{16}\)O, while they become sensitive only when the incident energy is high and the system is in the multiple-fragmentation stage. ### The effect of a projectile with different \(\alpha\)-clustering configurations As in Sect. III.1, we first investigate the dependence of the yields of different types of fragments on the incident energy for \({}^{16}\)O with different \(\alpha\)-cluster configurations; the results are shown in Fig. 5. For proton and neutron, the yields increase with the incident energy and show no significant difference among the different \(\alpha\)-cluster configurations. For deuteron, triton and \({}^{3}\)He, the yields first increase and then decrease with the incident energy. For \({}^{4}\)He, the yield first increases and then becomes stable with the incident energy. Furthermore, when the incident energy is greater than 100 MeV/nucleon, the relationship among the yields of triton, \({}^{3}\)He and \({}^{4}\)He for \({}^{16}\)O with different \(\alpha\)-cluster configurations is "chain \(>\) kite \(>\) square \(>\) tetrahedron", with an obvious difference. As shown in Fig. 6, the trends of N\({}_{n}\)/N\({}_{p}\) and N\({}_{4He}\)/N\({}_{3He}\) are similar to those described in Sect. III.1. There is also no significant difference in the value of N\({}_{n}\)/N\({}_{p}\) between the different \(\alpha\)-cluster configurations, as seen in Fig. 6 (a). The N\({}_{4He}\)/N\({}_{3He}\) ratio for the chain-like configuration displays the largest values, and that for the tetrahedron-like configuration the smallest. Figure 8: Z\({}_{1}\)/Z\({}_{s}\) distribution for (a) chain-like \({}^{16}\)O with different incident energies and (b) \({}^{16}\)O with four different \(\alpha\)-cluster configurations when the incident energy is 80 MeV/nucleon. The ratios O\({}_{1}\) and O\({}_{4}\) as functions of the incident energy under different \(\alpha\)-cluster configurations are shown in Fig. 7. O\({}_{1}\) first increases and then decreases with the incident energy. Below 100 MeV/nucleon, O\({}_{1}\) for the chain-like configuration gives the smallest value and that for the tetrahedron-like configuration the largest; however, the hierarchy is opposite from 100 MeV/nucleon up to 150 MeV/nucleon. In addition, there are obvious peaks around 80 to 100 MeV/nucleon, which may be related to the LGPT as mentioned above. O\({}_{4}\) first increases and then tends to be stable with the incident energy, except for the tetrahedron configuration, which slightly decreases as the beam energy increases beyond 100 MeV/nucleon. Moreover, the peak energy of O\({}_{1}\) is somewhat different for the various cluster configurations. For O\({}_{4}\), the influence of the different cluster configurations begins to appear at 80 MeV/nucleon and becomes stable after 100 MeV/nucleon. As mentioned in Ref.
[84], the charge distribution of the heaviest fragment in intermediate energy heavy-ion collisions has been observed to be bimodal, which is expected as a generic signal of a phase transition. We therefore plot the probability distribution of Z\({}_{1}\) over Z\({}_{s}\) for different incident energies and different \(\alpha\)-cluster configurations in Fig. 8, where Z\({}_{1}\) is the charge of the heaviest fragment in each collision event and Z\({}_{s}\) is the sum of the charges of the projectile and target. It can be clearly seen from Fig. 8(a) that for chain-like \({}^{16}\)O, the probability distribution of Z\({}_{1}\)/Z\({}_{s}\) starts to show a bimodal structure when the incident energy is greater than 80 MeV/nucleon, and this structure disappears once the incident energy exceeds 100 MeV/nucleon, further indicating that the LGPT occurs within this incident energy range. Furthermore, as shown in Fig. 8(b), when the incident energy is 80 MeV/nucleon, the bimodal structure of the probability distribution curve corresponding to the square-like and tetrahedron-like projectiles is the most obvious, followed by the kite-like, with the chain-like being the least obvious. Combined with the magnitude of geometric fluctuation for the different \(\alpha\)-cluster configurations derived previously, it can be inferred that the larger the geometric fluctuation, the larger the incident energy at which the LGPT occurs, which can also be verified with the peak energy of O\({}_{1}\) in Fig. 7(a). ## IV Conclusion The difference of geometric fluctuation caused by different \(\alpha\)-cluster configurations is mainly reflected in the effects on the yields of deuteron, triton, \({}^{3}\)He and \({}^{4}\)He, while it is absent for the yields of proton and neutron. By investigating the double ratios \(\mathcal{O}_{\text{p-d-t}}\) and \(\mathcal{O}_{\text{p-d-He}}\) of light nuclei, we disclose that the magnitude hierarchy of geometric fluctuations is "chain \(>\) kite \(>\) square \(>\) tetrahedron" for reactions of \({}^{40}\)Ca induced by \({}^{16}\)O with different \(\alpha\)-configurations. The maximum value of \(\mathcal{O}_{\text{p-d-t}}\) is around 80 - 100 MeV/nucleon, which could be related to the LGPT, and it is consistent with results from the charge distribution of the heaviest fragment in the same reaction. The current work sheds light on the effects of geometric fluctuation on the LGPT in low-intermediate energy heavy-ion collisions. In the future, the yields of light nuclei produced in \({}^{40}\)Ca + \({}^{16}\)O central collisions at different incident energies can be measured through experimental programs at HIRFL-CSR, FRIB at MSU, as well as other facilities. Since many previous studies indicated that \({}^{16}\)O in the ground state could have a tetrahedral 4\(\alpha\) structure, we expect the experimental data to be compatible with the conclusions drawn in the previous sections for \({}^{16}\)O with the tetrahedral configuration. Meanwhile, the yields of charged light nuclei are intuitive and easily measurable physical quantities, and the single ratio \({}^{4}\)He/\({}^{3}\)He as well as the double ratios \(\mathcal{O}_{\text{p-d-t}}\) and \(\mathcal{O}_{\text{p-d-He}}\) are better observables since detector inefficiency effects largely cancel in such ratios. We expect that the trend or saturation value of the excitation function of these ratios could give hints of geometric fluctuation.
Of course, collective observables, such as the elliptic flow, may also be necessary for further study of the phenomena discussed in this work. The authors thank Dr. Kai-Jia Sun and Song Zhang for communications. This work was supported in part by the National Natural Science Foundation of China under contract Nos. 11890710, 11890714, 12147101, and 12205049, and the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008.
2309.07404
Holographic Einstein rings of Non-commutative black holes
With the help of the AdS/CFT correspondence, we derive the desired response function of the QFT on the boundary. Using a virtual optical system with a convex lens, we are able to obtain the image of the black hole from the response function and further study the Einstein ring of non-commutative black holes. All the results show that there are both common and distinct features compared to previous studies of other background black holes. With the change of the observation position, the holographic Einstein ring changes into a luminosity-deformed ring or light points. In addition to these similarities, there are some distinct features which are due to the singularity of the event horizon temperature. Explicitly, the relation between the temperature and the event horizon, $T-z_h$, has two branches when the non-commutative parameter $n$ is fixed. These in turn have an effect on the behavior of the response function and the Einstein ring. The amplitude of $|\langle O\rangle|$ increases with decreasing temperature $T$ for the left branch of the $T-z_h$ relation, while it decreases with decreasing $T$ for the right branch. These differences are also reflected in the Einstein ring and can therefore be used to distinguish different black hole backgrounds. Furthermore, we show that the non-commutative parameter has an effect on the brightness and the position of the Einstein ring.
Xin-Yun Hu, Xiao-Xiong Zeng, Li-Fang Li, Peng Xu
2023-09-14T03:16:22Z
http://arxiv.org/abs/2309.07404v5
# Holographic Einstein rings of Non-commutative black holes ###### Abstract With the help of the AdS/CFT correspondence, we derive the desired response function of the QFT on the boundary. Using a virtual optical system with a convex lens, we are able to obtain the image of the black hole from the response function and further study the Einstein ring of non-commutative black holes. All the results show that there are both common and distinct features compared to previous studies of other background black holes. The common features include that the holographic ring, surrounded by concentric stripes, always appears when the observer is located at the north pole, and that an extremely bright ring appears at the position of the photon sphere of the black hole. With the change of the observation position, this ring changes into a luminosity-deformed ring or light points. In addition to these similarities, there are some distinct features which are due to the singularity of the event horizon temperature. Explicitly, the relation between the temperature and the event horizon, \(T-z_{h}\), has two branches when the non-commutative parameter \(n\) is fixed. These in turn have an effect on the behavior of the response function and the Einstein ring. For example, the amplitude of the response function \(|\langle O\rangle|\) increases with the decrease of the non-commutative strength parameter \(n\) for both branches of the \(T-n\) relation. However, the amplitude of \(|\langle O\rangle|\) increases with the decrease of the temperature \(T\) for the left branch of the \(T-z_{h}\) relation, while it decreases with the decrease of \(T\) for the right branch. These differences are also reflected in the Einstein ring and can therefore be used to distinguish different black hole backgrounds. Furthermore, we show that the non-commutative parameter affects the brightness and the position of the Einstein ring. Introduction In recent years, non-commutative spacetime in gravity theories has been a vital research subject [1], in that it is considered an alternative route to quantum gravity [2]. Several investigations of noncommutative spacetime in gravity theories have appeared in the recent literature; see the comprehensive reviews [3; 4]. In particular, the effects of non-commutativity on black hole physics have been an area of great interest, mainly because of the possibility of a better understanding of the final stage of the black hole due to its evaporation. As is well known, non-commutativity eliminates point-like structures in favor of smeared objects in flat spacetime [5; 6] and can be implemented in General Relativity by modifying the matter source [7]. Therefore, non-commutativity is introduced by modifying the mass density so that the Dirac delta function is replaced by a Gaussian distribution [7] or, alternatively, by a Lorentzian distribution [8; 9]. In this way the mass density takes the form \(\rho_{n}(r)=\frac{M\sqrt{n}}{\pi^{3/2}(r^{2}+\pi n)^{2}}\), where \(n\) is the noncommutative parameter and \(M\) is the total mass diffused throughout a region of linear size \(\sqrt{n}\). With this model in hand, we aim to analyze the lensing effect of the noncommutative black hole in the holographic framework, closely following [10; 11]. In [10; 11], the authors proposed a direct procedure to construct holographic images of the black hole in the bulk from a given response function of the QFT on the boundary.
The response function with respect to an external source corresponds to the asymptotic data of the bulk field generated by the source on the AdS boundary. For a thermal state on a two-dimensional sphere dual to the Schwarzschild-AdS\({}_{4}\) black hole, they demonstrated that holographic images gravitationally lensed by the black hole can be constructed from the response function, and all these results are consistent with the size of the photon sphere of the black hole calculated in geometrical optics. Following these breakthroughs, the authors of [12; 13; 14; 15; 16] showed that such holographic images do exist in different gravitational backgrounds. However, the photon sphere varies according to the specific bulk dual geometry, and the detailed behavior of the Einstein ring varies accordingly. Therefore, in this paper, we investigate the behavior of the lensed response for the noncommutative black hole and study the effect of the noncommutative parameter on the lensed response. This paper is arranged as follows. In section II, we briefly review the noncommutative solution for the spherically symmetric AdS black hole. In section III, we give the holographic setup of this model and analyze the lensed response function. With the optical system, we observe the Einstein ring in our model and compare our results with the optical approximation in section IV. Our results show that the position of the photon ring obtained from geometrical optics is in full agreement with that of the holographic ring. Section V is the comparison between the holographic and the optical results. Section VI is devoted to our conclusions. ## II Review of the holographic construction of Einstein ring in AdS black holes Gravitational lensing is one of the fundamental phenomena caused by strong gravity. Supposing there is a light source behind a gravitational body, observers will see a ring-like image of the light source, the so-called Einstein ring, when the light source, the gravitational body, and the observer are in alignment. If the gravitational body is a black hole, some light rays are so strongly bent that they can go around the black hole many times, and even infinitely many times on the photon sphere. As a result, multiple Einstein rings, corresponding to the winding numbers of the light-ray orbits, emerge and infinitely concentrate on the photon sphere. Recently, an observational project for imaging black holes, called the Event Horizon Telescope (EHT) [17], has captured the first image of the supermassive black hole in M87. The dark area inside the photon sphere is named the black hole shadow [18], and the shadow of a black hole contains a lot of information. The study of the shadow not only enables us to comprehend the geometric structure of spacetime, but also helps us to explore various gravity models more deeply. In the framework of wave optics, holographic images of an AdS black hole in the bulk were constructed from a wave emitted by a source at the AdS boundary, which enters and then propagates in the bulk, by means of the AdS/CFT correspondence [10; 11; 12]. Here we first review explicitly the construction of holographic "images" of the dual black hole from the response function of the boundary QFT with external sources.
Considering a \((2+1)\)-dimensional boundary conformal field theory on a 2-sphere \(S^{2}\) at a finite temperature, we study a one-point function of a scalar operator \(\mathcal{O}\) with conformal dimension \(\Delta_{\mathcal{O}}=3\), under a time-dependent localized source \(J_{\mathcal{O}}\). The gravity dual is a black hole in the global AdS\({}_{4}\) with a probe massless bulk scalar field in the spacetime. The schematic picture of our setup is shown in Fig. 1. For the source \(J_{\mathcal{O}}\), we employ a time-periodic localized Gaussian source with frequency \(\omega\), which amounts to an AdS boundary condition for the scalar field (see Fig. 2). For simplicity, we employ the monochromatic and axisymmetric Gaussian wave packet centered on the south pole \(\theta_{S}=\pi\) as the source \[J_{\mathcal{O}}(v_{e},\theta)=e^{-i\omega v_{e}}\frac{1}{2\pi\delta^{2}}\exp \left[-\frac{(\pi-\theta)^{2}}{2\delta^{2}}\right]=e^{-i\omega v_{e}}\sum_{m=0 }^{\infty}c_{m0}Y_{m0}(\theta), \tag{1}\] where \(v_{e}\) and \(\theta\) are coordinates. Moreover, \(\delta\) is the width of the wave produced by the Gaussian source and \(Y_{m0}\) is the spherical harmonic function. We set the wave packet size \(\delta\) to satisfy \(\delta\ll\pi\), and the coefficients of the spherical harmonics \(Y_{m0}(\theta)\) are given as \[c_{m0}=(-1)^{m}\sqrt{\frac{m+1/2}{2\pi}}\exp\left[-\frac{1}{2}(m+1/2)^{2} \delta^{2}\right]. \tag{2}\] With this time-periodic boundary condition, a bulk scalar wave is injected into the bulk from the AdS boundary. The scalar wave propagates inside the black hole spacetime and reaches other points on the \(S^{2}\) of the AdS boundary (please see Fig. 2). Figure 1: The observer and its telescope. Figure 2: A monochromatic Gaussian source is located at a point on the AdS boundary, and its response is observed at another point on the same boundary. We are able to measure the local response function \(e^{-i\omega t}\langle{\cal O}(\vec{x})\rangle\), which contains information about the bulk geometry of the black hole spacetime. Using a wave-optical method, we get a formula which converts the response function \(\langle{\cal O}(\vec{x})\rangle\) to the image of the dual black hole \(|\Psi_{\rm V}(\vec{x}_{\rm V})|^{2}\) on a virtual screen, as follows: \[\Psi_{\rm V}(\vec{x}_{\rm V})=\int_{|\vec{x}|<\zeta}d^{2}x\,\langle{\cal O}( \vec{x})\rangle e^{-\frac{i\omega}{f}\vec{x}\cdot\vec{x}_{\rm V}}\, \tag{3}\] here \(\vec{x}=(x,y)\) and \(\vec{x}_{\rm V}=(x_{\rm V},y_{\rm V})\) are Cartesian-like coordinates on the boundary \(S^{2}\) and on the virtual screen, respectively. We have set the origin of the coordinates to the observation point. This operation is mathematically implemented by a Fourier transformation of the response function on a small patch of radius \(\zeta\) around the observation point, and \(f\) describes the magnification of the image on the screen. As shown in Fig. 1, this optical system helps us to derive the images of the AdS black hole. Explicitly, such an optical system consists of a convex lens and a spherical screen, so that we can construct images of the black hole from the response function with wave optics [19]. In Fig. 1, in the middle is the lens with focal length \(f\), regarded as a "converter" between plane and spherical waves, located at \(z=0\). In this paper we assume that the lens is infinitely thin and that the size of the lens is much smaller than the focal length \(f\).
Consider a plane wave incident on the lens from the left-hand side, as shown in Fig. 1. Such a plane wave is converted into a spherical wave that converges at the focus located at \(z=f\). We denote by \(\Psi_{I}\) and \(\Psi_{T}\) the incident and the transmitted wave, respectively. Then, the action of the convex lens on wave functions with frequency \(\omega\) on the lens can be mathematically expressed as \[\Psi_{T}(\vec{x})=e^{-i\omega\frac{|\vec{x}|^{2}}{2f}}\Psi_{I}(\vec{x}). \tag{4}\] We consider a spherical screen located at \((x,y,z)=(x_{V},y_{V},z_{V})\) with \(x_{V}^{2}+y_{V}^{2}+z_{V}^{2}=f^{2}\). The transmitted wave converted by the lens is focused and imaged on this screen. The wave function \(\Psi_{V}(\vec{x_{V}})\) on the screen is given by \[\Psi_{V}(\vec{x}_{V})=\int_{|\vec{x}|\leq\zeta}d^{2}x\Psi_{T}(\vec{x})e^{i \omega L}, \tag{5}\] where \(L\) is the distance between \((x,y,0)\) on the lens and \((x_{V},y_{V},z_{V})\) on the screen. Substituting Eq.(4) into Eq.(5), we have \[\Psi_{V}(\vec{x}_{V})=\int_{|\vec{x}|\leq\zeta}d^{2}x\Psi_{I}(\vec{x})e^{- \frac{i\omega}{f}\vec{x}\cdot\vec{x}_{V}}. \tag{6}\] Comparing Eq.(6) with Eq.(3) implies that the image on the screen can be obtained by a Fourier transformation of the incident wave within a finite domain of the lens, which motivates us to regard the response function as the dual quantity of the Einstein ring. ## III The holographic setup of non-commutative black holes In this section, we construct the holographic model for the non-commutative Schwarzschild black hole and study the properties of the response function carefully, which further helps us to study the Einstein ring. The mass density of a static, spherically symmetric, particle-like gravitational source is no longer a delta-function distribution, but is given by a Lorentzian distribution [7; 8; 9] \[\rho_{n}=\frac{\sqrt{n}M}{\pi^{3/2}\left(\pi n+r^{2}\right)^{2}}, \tag{7}\] where \(n\) is the strength of the non-commutativity of spacetime and \(M\) is the total mass diffused throughout a region of linear size \(\sqrt{n}\). For the smeared matter distribution, we further obtain [9] \[\mathcal{M}_{n}=\int_{0}^{r}\rho_{n}(r)4\pi r^{2}dr=\frac{2M}{\pi}\left(\tan^{ -1}(\frac{r}{\sqrt{\pi n}})-\frac{\sqrt{\pi n}r}{\pi n+r^{2}}\right)=-\frac{4 \sqrt{n}M}{\sqrt{\pi}r}+M+\mathcal{O}(n^{3/2}). \tag{8}\] In this case, the non-commutative Schwarzschild black hole metric is given by \[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d \xi^{2}\right), \tag{9}\] with \[f(r)=1-\frac{2\mathcal{M}_{n}}{r}=1-\frac{2M}{r}+\frac{8\sqrt{n}M}{\sqrt{\pi} r^{2}}+\mathcal{O}(n^{3/2}). \tag{10}\] The horizons of this spacetime are obtained from \(f(r)=0\); the larger root corresponds to the event horizon of the black hole and the smaller root to the so-called Cauchy (or inner) horizon, namely \[r_{h}=M+\frac{\sqrt{M^{2}\pi-8\sqrt{\pi}\sqrt{n}M}}{\sqrt{\pi}}, \tag{11}\] and \[r_{c}=M-\frac{\sqrt{M^{2}\pi-8\sqrt{\pi}\sqrt{n}M}}{\sqrt{\pi}}. \tag{12}\] At the event horizon, the corresponding Hawking temperature is \[T=\frac{\sqrt{\pi}\left(z_{h}^{2}+3\right)-8\sqrt{n}z_{h}\left(z_{h}^{2}+2 \right)}{4\pi z_{h}\left(\sqrt{\pi}-4\sqrt{n}z_{h}\right)}, \tag{13}\] in which \(z_{h}=1/r_{h}\). From Eq.(13), we clearly see that when \(z_{h}\left(\sqrt{\pi}-4\sqrt{n}z_{h}\right)=0\), i.e. when \(z_{h}\to 0\) or \(z_{h}=\sqrt{\pi}/(4\sqrt{n})\), the denominator vanishes and the temperature diverges, as shown in Fig. 3 and Fig. 4. First, we fix the parameter \(z_{h}=1\).
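As a quick numerical cross-check of this two-branch structure, the following minimal Python sketch evaluates the temperature of Eq. (13) and locates the pole at \(z_{h}=\sqrt{\pi}/(4\sqrt{n})\) that separates the two branches; the sampled values are illustrative.

```python
import numpy as np

def hawking_T(zh, n):
    """Hawking temperature of Eq. (13) as a function of zh = 1/r_h and n."""
    num = np.sqrt(np.pi) * (zh**2 + 3) - 8 * np.sqrt(n) * zh * (zh**2 + 2)
    den = 4 * np.pi * zh * (np.sqrt(np.pi) - 4 * np.sqrt(n) * zh)
    return num / den

n = 0.6
zh_pole = np.sqrt(np.pi) / (4 * np.sqrt(n))  # divergence separating the branches
print(f"pole at zh = {zh_pole:.4f}")

# sample the temperature on either side of the pole
for zh in (0.2, 0.5, zh_pole * 0.99, zh_pole * 1.01, 1.0, 5.0):
    print(f"zh = {zh:7.4f}  T = {hawking_T(zh, n):.4f}")
```

For \(n=0.6\) the pole sits near \(z_{h}\approx 0.572\), and the code reproduces \(T\approx 0.690\) at \(z_{h}=1\), the reference point used below.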
The temperature \(T\) decreases as the strength parameter of the non-commutativity of spacetime increases in each branch, as shown in Fig. 3. In Fig. 4, we show the relationship between the temperature and the event horizon \(z_{h}\) for fixed \(n=0.6\). For the left branch, we see a divergence as the horizon \(z_{h}\) approaches zero, while for the right branch the temperature first decreases and then increases with the increase of the event horizon. This feature may affect the response function derived below with the holographic method, in a way that differs from previous studies [12; 14], and may be used as a method to distinguish the non-commutative Schwarzschild solution from other black hole solutions. Figure 3: The relation between the temperature \(T\) and the noncommutative parameter \(n\) for the case \(z_{h}=1\). Figure 4: The relation between the temperature \(T\) and \(z_{h}\) for a fixed noncommutative parameter \(n=0.6\). In the following we move to the holographic setup for this noncommutative Schwarzschild black hole solution. We take a complex scalar field as a probe field in the above non-commutative Schwarzschild background. The corresponding dynamics is determined by the Klein-Gordon equation \[D_{b}D^{b}\Phi-\mathscr{M}^{2}\Phi=0, \tag{14}\] where \(\mathscr{M}\) is the mass of the scalar particle. We prefer the ingoing Eddington coordinate in order to solve Eq. (14) in a more convenient way, that is, \[v_{e}=t+z_{*}=t-\int\frac{1}{f(z)}\mathrm{d}z, \tag{15}\] here \(z=1/r\) and \[f(z)=\frac{8M\sqrt{n}z^{4}}{\sqrt{\pi}}-2Mz^{3}+z^{2}+1. \tag{16}\] Therefore the non-vanishing bulk background fields are transformed into the following smooth form \[\mathrm{d}s^{2}=\frac{1}{z^{2}}\left[-f(z)e^{-\chi(z)}\mathrm{d}v _{e}^{2}-2e^{-\chi(z)/2}\mathrm{d}z\mathrm{d}v_{e}+\mathrm{d}\Omega^{2}\right], \tag{17}\] \[A_{b}=-A(z)(\mathrm{d}v_{e})_{b}, \tag{18}\] where the gauge transformation is also applied to the electromagnetic 4-potential. Here we take \(\mathscr{M}^{2}=-2\). With \(\Phi=z\phi\), the asymptotic behaviour of \(\phi\) near the AdS boundary is expressed as \[\phi(v_{e},z,\theta,\xi)=J_{\mathcal{O}}(v_{e},\theta,\xi)+\langle\mathcal{O} \rangle z+O(z^{2}). \tag{19}\] According to the holographic dictionary, \(J_{\mathcal{O}}\) is interpreted as the source for the boundary field theory, and the corresponding expectation value of the dual operator, namely the response function, is given by \[\langle\mathcal{O}\rangle_{J_{\mathcal{O}}}=\langle\mathcal{O} \rangle-\partial_{v_{e}}J_{\mathcal{O}}, \tag{20}\] where \(\langle\mathcal{O}\rangle\) corresponds to the expectation value of the dual operator with the source turned off. With the source given by Eq. (1), the bulk solution is \[\phi(v_{e},z,\theta)=e^{-i\omega v_{e}}\sum_{m=0}^{\infty}c_{m0} \mathcal{Z}_{m}(z)Y_{m0}(\theta), \tag{21}\] where \(\mathcal{Z}_{m}\) satisfies the equation of motion \[z^{2}f\mathcal{Z}_{m}^{\prime\prime}+z^{2}[f^{\prime}+2i\omega] \mathcal{Z}_{m}^{\prime}+[(2-2f)+zf^{\prime}-z^{2}m(m+1)]\mathcal{Z}_{m}=0, \tag{22}\] and its asymptotic behaviour near the AdS boundary is \[\mathcal{Z}_{m}=1+\langle\mathcal{O}\rangle_{m}z+O(z^{2}). \tag{23}\] The resulting response \(\langle\mathcal{O}\rangle_{J_{\mathcal{O}}}\) is then expressed as \[\langle\mathcal{O}\rangle_{J_{\mathcal{O}}}=e^{-i\omega v_{e}} \sum_{m=0}^{\infty}c_{m0}\langle\mathcal{O}\rangle_{m}Y_{m0}(\theta). \tag{24}\] Our main task is to solve the radial equation Eq.
(22) with the boundary condition \(\mathcal{Z}_{m}(0)=1\) at the AdS boundary and a regular boundary condition on the black hole event horizon. With the pseudo-spectral method [12], we are able to derive the corresponding numerical solution for \(\mathcal{Z}_{m}\) and extract \(\langle\mathcal{O}\rangle_{m}\). With the help of the extracted \(\langle\mathcal{O}\rangle_{m}\), the total response is obtained from Eq. (24). We plot a typical profile of the total response \(\langle\mathcal{O}\rangle\) in Fig. 5 to Fig. 7. Following the method in [12], we find that the interference pattern indeed arises from the diffraction of our scalar field off the black hole. Explicitly, Fig. 5 shows the amplitude of \(|\langle\mathcal{O}\rangle|\) for different \(\omega\) with \(n=0.6\), \(z_{h}=1\) and \(T=0.690\). This figure shows that increasing the frequency \(\omega\) of the Gaussian source reduces the wave period, which means the total response function depends closely on the Gaussian source. Next we investigate the dependence of the total response function on the non-commutative strength parameter \(n\). As stated before, the relation between \(T\) and \(n\) consists of two disconnected branches, as shown in Fig. 3. This in turn affects the relationship between the total response function \(\langle\mathcal{O}\rangle\) and \(n\), which is shown in Fig. 6. In the left subfigure (6a), \(n\) is smaller and corresponds to the left branch in Fig. 3, while in the right subfigure (6b), \(n\) is bigger and corresponds to the right branch in Fig. 3. We see that the amplitude \(|\langle\mathcal{O}\rangle|\) increases with the decrease of the noncommutativity strength parameter \(n\) for both branches. Similarly, we study the dependence of the total response function \(\langle\mathcal{O}\rangle\) on the horizon temperature \(T\), shown in Fig. 7 for fixed \(n=0.6\) and \(\omega=80\). The left subfigure (7a) corresponds to the left branch in Fig. 4 and the subfigure (7b) corresponds to the right branch in Fig. 4. We see that for the left subfigure (7a), the amplitude \(|\langle\mathcal{O}\rangle|\) increases with the decrease of the temperature \(T\), while for the right subfigure (7b), the amplitude \(|\langle\mathcal{O}\rangle|\) decreases with the decrease of the temperature \(T\). The difference between subfigures (7a) and (7b) is easy to explain. For each branch of Fig. 3, the temperature depends monotonically on the parameter \(n\), which means that the temperature decreases as the parameter \(n\) increases. However, in the \(T-z_{h}\) figure, the temperature \(T\) decreases as \(z_{h}\) increases for the left branch, while the temperature \(T\) first decreases and then increases as \(z_{h}\) increases for the right branch, as shown in Fig. 4. These differences lead to the different behaviors of the \(|\langle\mathcal{O}\rangle|-n\) and \(|\langle\mathcal{O}\rangle|-T\) relations. In all, the total response function depends closely on the Gaussian source and the spacetime geometry. Therefore, if this response function can be transformed into observable images, it will be a useful tool reflecting the features of the spacetime geometry. To achieve this goal, we require a special imaging system, which is described in the next section. ## IV Holographic Einstein ring in AdS black hole According to Eq.(6), the observed wave on the screen is connected with the incident wave by a Fourier transformation.
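The imaging step itself is straightforward to sketch numerically. The following minimal Python code evaluates Eq. (6) as a discrete Fourier transform of the incident wave restricted to a lens aperture of radius \(\zeta\); the oscillatory Gaussian used for the response is a placeholder standing in for the numerically extracted \(\langle\mathcal{O}(\vec{x})\rangle\).

```python
import numpy as np

def image_on_screen(response, xs, omega, f, zeta, xv):
    """Discrete version of Eq. (6): Psi_V(xv) = sum over the lens aperture of
    response(x) * exp(-i*omega/f * x.xv) * dA, restricted to |x| < zeta."""
    dx = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    aperture = (X**2 + Y**2) < zeta**2          # lens window of radius zeta
    phase = np.exp(-1j * omega / f * (X * xv[0] + Y * xv[1]))
    return np.sum(response * aperture * phase) * dx * dx

# placeholder response: an oscillatory Gaussian mimicking <O(x)> near a pole
omega, f, zeta = 80.0, 1.0, 0.6
xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
response = np.exp(-(X**2 + Y**2) / 0.1) * np.exp(1j * omega * X)

# brightness |Psi_V|^2 sampled along a line on the screen
for xv in (0.0, 0.5, 1.0):
    val = image_on_screen(response, xs, omega, f, zeta, (xv, 0.0))
    print(f"x_V = {xv:4.2f}  |Psi_V|^2 = {abs(val)**2:.3e}")
```

With this toy input the brightness peaks where the oscillation phase of the response matches the Fourier kernel, the same mechanism that places the bright ring at the angle of the photon sphere.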
We will capture the images of the dual black hole on the screen with Eq.(6). As stated in the last section, we first consider the effect of the wave source on the characteristics of the holographic Einstein image observed from the north pole, with the non-commutativity strength parameter \(n=0.6\) and \(z_{h}=1\), as shown in Fig. 8. The corresponding curves showing the brightness of the lensed response on the screen for the same parameters are given in Fig. 9. Figure 8: The images of the lensed response observed at the observation angle \(\theta_{obs}=0\) for different \(\omega\) with \(n=0.6\) and \(z_{h}=1\). Figure 9: The brightness of the lensed response on the screen for different \(\omega\) with \(n=0.6\) and \(z_{h}=1\). With \(\delta=0.02\) for the source and \(\zeta=0.6\) for the convex lens, the higher the frequency becomes, the sharper the resulting ring becomes. This is reasonable because the image can be well captured by the geometric-optics approximation in the high-frequency limit, which is also shown in Fig. 9, where the brightness peak becomes steeper. Next, we discuss the influence of the non-commutativity strength parameter \(n\) on the Einstein ring, shown in Fig. 10. Suppose the observer is located at different positions on the AdS boundary while the non-commutativity strength parameter \(n\) is varied, for fixed horizon temperature \(T=0.690\) and frequency \(\omega=80\). When the observer is located at the position \(\theta=0^{\circ}\), i.e., at the north pole of the AdS boundary, a series of axisymmetric concentric rings appear in the image, and one of them is particularly bright, as shown in the left-most column of Fig. 10. Explicitly, from top to bottom, the noncommutativity strength parameter \(n\) increases while the brightness of the ring decreases. Moreover, as the parameter \(n\) increases, the bright ring moves away from the center. All this can be clearly seen in Fig. 11, which also shows that the brightness peak of the lensed response moves away from the center as the parameter \(n\) increases. Next we fix the observation position at \(\theta=30^{\circ}\) (the second column from the left in Fig. 10). When the non-commutativity strength parameter \(n\) increases, we see the same phenomena as for \(\theta=0^{\circ}\); that is to say, a series of axisymmetric concentric rings still exists in the image, and from top to bottom the brightness of the ring decreases as the parameter \(n\) increases. However, when we move to \(\theta=60^{\circ}\), we see a luminosity-deformed ring instead of a strictly axisymmetric ring: only bright light arcs appear, which is consistent with [13; 14]. As the parameter \(n\) increases, the bright light arcs change from two pairs to one pair, as \(n\) goes from \(n=0.3\) to \(n=0.5\) and \(n=0.9\). When the observer is at \(\theta=90^{\circ}\), all that is left is a bright spot, shown in the right-most column of Fig. 10, and as the parameter \(n\) increases, the bright spot moves away from the center. To better understand the above holographic Einstein images, we study the impact of the horizon temperature on the images of the lensed response at the fixed observation angle \(\theta=0^{\circ}\), fixed noncommutativity strength parameter \(n=0.6\) and fixed frequency \(\omega=80\), shown in Fig. 12 and Fig. 14, which correspond to the left branch and the right branch of the \(T-z_{h}\) relation shown in Fig. 4.
In Fig. 12, \(T=0.7761\) corresponds to the horizon radius \(z_{h}=0.9\), \(T=0.9346\) to \(z_{h}=0.8\), \(T=1.3235\) to \(z_{h}=0.7\), and \(T=4.3187\) to \(z_{h}=0.6\). Figure 10: The images of the lensed response observed at various observation angles for different \(n\) with \(T=0.690\) and \(\omega=80\). Figure 11: The brightness of the lensed response on the screen for different \(n\) with \(\omega=80\) and \(z_{h}=1\). It is easy to see that along this branch the temperature increases while the horizon decreases. We study the image of the dual black hole as the horizon gradually decreases, which corresponds to the left branch of Fig. 4. We see that as the temperature increases, the brightest ring at first hardly moves; when the temperature rises to \(T=4.3187\), the brightest ring moves toward the center, which can also be seen clearly in Fig. 13. In Fig. 14, we also plot the images of the lensed response for the right branch in Fig. 4, with the same parameters as in Fig. 12. From Fig. 4, it is easy to read off that \(T=0.9129\) corresponds to \(z_{h}=5\), \(T=1.6722\) to \(z_{h}=10\), \(T=4.8317\) to \(z_{h}=30\), and \(T=8.0102\) to \(z_{h}=50\). We study the image of the dual black hole as the horizon gradually increases. When the horizon \(z_{h}=5\) is small, we see a bright axisymmetric ring in the center. When the horizon grows to \(z_{h}=10\), a smaller set of axisymmetric concentric rings appears in the image and the bright ring moves closer to the center. When we further increase the horizon to \(z_{h}=30\), the brightest ring becomes smaller and closer still to the center. When \(z_{h}=50\), we clearly see a bright spot in the center surrounded by a weak aperture, as also seen in Fig. 15. Figure 12: The images of the lensed response observed at the observation angle \(\theta_{obs}=0\) for different \(T\) with \(n=0.6\) and \(\omega=80\). From (a) to (d), the horizons correspond to \(z_{h}=0.9,0.8,0.7,0.6\), respectively. Figure 13: The brightness of the lensed response on the screen for different \(T\) with \(\omega=80\) and \(n=0.6\). From (a) to (d), the horizons correspond to \(z_{h}=0.9,0.8,0.7,0.6\), respectively.

## V The comparison between the holographic results and optical results

In this section, we compare the results from the holographic dual with those from geometrical optics. The brightest ring in the image lies at the position of the photon sphere of the black hole; here we verify this from the perspective of geometric optics. In a spacetime with the metric in Eq. (9), the ingoing angle of photons from the boundary is expressed through the conserved energy \(\omega\) and the angular momentum \(\mathcal{L}\). Without loss of generality, we choose the coordinate system such that the photon orbit lies on the equatorial plane \(\theta=\pi/2\). The 4-vector \(u^{a}=(d/d\nu)^{a}\) satisfies \[-F(r)\left(\frac{\mathrm{d}t}{\mathrm{d}\nu}\right)^{2}+\frac{1}{F(r)}\left(\frac{\mathrm{d}r}{\mathrm{d}\nu}\right)^{2}+r^{2}\sin^{2}\theta\left(\frac{\mathrm{d}\phi}{\mathrm{d}\nu}\right)^{2}=0, \tag{25}\] or, equivalently, \[\dot{r}^{2}=\omega^{2}-\mathcal{L}^{2}\mathcal{R}, \tag{26}\] where \(\mathcal{R}=F(r)\big{/}r^{2}\), \(\omega=F(r)\dot{t}\), \(\mathcal{L}=r^{2}\dot{\phi}\), and \(\dot{r}\equiv\partial r/\partial\nu\), \(\dot{t}\equiv\partial t/\partial\nu\), \(\dot{\phi}\equiv\partial\phi/\partial\nu\).
Figure 14: The images of the lensed response observed at the observation angle \(\theta_{obs}=0\) for different \(T\) with \(n=0.6\) and \(\omega=80\). From (a) to (d), the horizons correspond to \(z_{h}=5,10,30,50\), respectively. Figure 15: The brightness of the lensed response on the screen for different \(T\) with \(\omega=80\) and \(n=0.6\). From (a) to (d), the horizons correspond to \(z_{h}=5,10,30,50\), respectively. The ingoing angle \(\theta_{\rm in}\) with respect to the normal vector of the boundary \(n^{b}=\partial/\partial r^{b}\) is given by \[\cos\theta_{\rm in}=\frac{g_{ij}u^{i}n^{j}}{|u||n|}\bigg{|}_{r=\infty}=\sqrt{\frac{\dot{r}^{2}/F}{\dot{r}^{2}/F+\mathcal{L}^{2}/r^{2}}}\bigg{|}_{r=\infty}, \tag{27}\] which means that \[\sin^{2}\theta_{\rm in}=1-\cos^{2}\theta_{\rm in}=\frac{\mathcal{L}^{2}\mathcal{R}}{\dot{r}^{2}+\mathcal{L}^{2}\mathcal{R}}\bigg{|}_{r=\infty}=\frac{\mathcal{L}^{2}}{\omega^{2}}. \tag{28}\] Therefore the ingoing angle \(\theta_{\rm in}\) of a photon orbit from the boundary satisfies \[\sin\theta_{in}=\frac{\mathcal{L}}{\omega}, \tag{29}\] as shown in Fig. 16. This relation remains valid when the light is located at the photon sphere. We label the angular momentum in that case as \(\mathcal{L}_{s}\); it is determined by the conditions \[\dot{r}=0,\quad\frac{d\mathcal{R}}{dr}=0. \tag{30}\] In geometrical optics, the angle \(\theta_{in}\) gives the angular distance of the image of the incident ray from the zenith when an observer on the AdS boundary looks up into the AdS bulk. If the two end points of the geodesic and the center of the black hole are aligned, the observer sees a ring with a radius corresponding to the incident angle \(\theta_{in}\), owing to axisymmetry [10]. In addition, with Fig. 17, we can obtain the angle of the Einstein ring, namely \[\sin\theta_{R}=\frac{r_{R}}{f}. \tag{31}\] Figure 16: The ingoing angle \(\theta_{in}\) and outgoing angle \(\theta_{out}\) of the photon at the photon sphere. According to [10], for a sufficiently large \(l\) we have \(\sin\theta_{R}=\sin\theta_{in}\), and thus the relation \[\frac{r_{R}}{f}=\frac{\mathcal{L}_{s}}{\omega}, \tag{32}\] where \(\mathcal{L}_{s}\) is the angular momentum at the photon sphere. This relation can also be verified numerically. In particular, the values of \(\frac{r_{R}}{f}\) of the Einstein ring are shown in Fig. 18 and Fig. 19, where both the radius of the black hole horizon \(r_{h}\) and that of the circular orbit \(r_{R}\) are exhibited as functions of temperature. From Fig. 18 and Fig. 19, the Einstein ring radius decreases as the temperature increases. As expected, the Einstein ring radius obtained by our wave optics fits well with that obtained by geometric optics. Figure 17: The relation between \(\theta_{R}\) and \(r_{R}\). Figure 18: Comparison between the results obtained by wave optics and geometric optics for different \(n\) with \(\omega=80\).
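As a sanity check on Eqs. (29)-(32), the sketch below locates the photon sphere from \(d\mathcal{R}/dr=0\) and evaluates \(\sin\theta_{R}=\mathcal{L}_{s}/\omega=1/\sqrt{\mathcal{R}(r_{ph})}\), which follows from setting \(\dot{r}=0\) in Eq. (26). Since the noncommutative blackening factor is not reproduced in this excerpt, a Schwarzschild-AdS \(F(r)\) is used as a placeholder; substituting the paper's \(F(r)\) would give the geometric-optics curves compared in Figs. 18 and 19.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder blackening factor: Schwarzschild-AdS with assumed M and AdS radius.
M, Lads = 1.0, 1.0
F = lambda r: 1.0 - 2.0 * M / r + r**2 / Lads**2
R = lambda r: F(r) / r**2

# Photon sphere: maximum of R(r), i.e. dR/dr = 0 (Eq. (30)); for this F(r) it sits at r = 3M
res = minimize_scalar(lambda r: -R(r), bounds=(1.5, 10.0), method="bounded")
r_ph = res.x
sin_theta = 1.0 / np.sqrt(R(r_ph))   # sin(theta_in) = L_s / omega at the photon sphere, Eqs. (26), (29)
print(f"photon sphere r = {r_ph:.4f}, ring angle sin(theta_R) = {sin_theta:.4f}")
```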
## VI Conclusions

Studying the response function of noncommutative gravity may help us distinguish this theory from other gravity theories. Following previous studies, within the framework of the AdS/CFT correspondence we have studied the holographic Einstein images of a non-commutative Schwarzschild black hole. We consider a \((2+1)\)-dimensional boundary conformal field theory on a 2-sphere \(S^{2}\) at finite temperature, and study the one-point function of a scalar operator \(\mathcal{O}\) under a time-dependent, localized Gaussian source \(J_{\mathcal{O}}\) with frequency \(\omega\), from which we derive the local response function \(e^{-i\omega t}\mathcal{O}(\bar{x})\). With this response function, we see that for the left subfigure (7a) the amplitude \(|\langle\mathcal{O}\rangle|\) increases with increasing temperature \(T\), whereas for the right subfigure (7b) the amplitude \(|\langle\mathcal{O}\rangle|\) decreases with increasing temperature. This differs from the dependence of the total response function \(|\langle\mathcal{O}\rangle|\) on the noncommutativity strength parameter \(n\), and it is easy to explain. The monotonic relationship between \(T\) and \(n\) is similar on each branch in Fig. 3: the temperature decreases as \(n\) increases. However, in the \(T-z_{h}\) plane the temperature \(T\) decreases as \(z_{h}\) increases on the left branch, while it first decreases and then increases as \(z_{h}\) increases on the right branch. These differences result in different dependencies of the amplitude \(|\langle\mathcal{O}\rangle|\) on the parameters \(n\) and \(T\). Furthermore, we study the corresponding effects on the Einstein rings, shown in Fig. 12 and Fig. 14; for the right branch, the brightest ring moves toward the center as the temperature increases. We also study the effect of the noncommutative parameter \(n\) on the Einstein image: our results show that the noncommutative parameter affects both the brightness and the position of the Einstein ring. Figure 19: Comparison between the results obtained by wave optics and geometric optics for different \(\omega\) with \(n=0.3\). All these results imply that holographic images can be used as an effective tool to distinguish different types of black holes for a fixed wave source and optical system.

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grants Nos. 11675140, 11705005, 12375043), the Innovation and Development Joint Foundation of the Chongqing Natural Science Foundation (Grant No. CSTB2022NSCQ-LZX0021), and the Basic Research Project of the Science and Technology Committee of Chongqing (Grant No. CSTB2023NSCQ-MSX0324).
2309.09429
Quantum mechanical Gaussian wavepackets of single relativistic particles
We study the evolutions of selected quasi-(1+1) dimensional wavepacket solutions to the Klein-Gordon equation for a relativistic charged particle in uniform motion or accelerated by a uniform electric field in Minkowski space. We explore how good the charge density of a Klein-Gordon wavepacket can be approximated by a Gaussian state with the single-particle interpretation. We find that the minimal initial width of a wavepacket for a good Gaussian approximation in position space is about the Compton wavelength of the particle divided by its Lorentz factor at the initial moment. Relativistic length contraction also manifests in the spreading of the wavepacket's charge density.
Yu-Che Huang, Fong-Ming He, Shih-Yuin Lin
2023-09-18T01:58:31Z
http://arxiv.org/abs/2309.09429v2
# Quantum mechanical Gaussian wavepackets of single relativistic particles ###### Abstract We study the evolutions of selected quasi-(1+1) dimensional wavepacket solutions to the Klein-Gordon equation for a relativistic charged particle in uniform motion or accelerated by a uniform electric field in Minkowski space. We explore how well the charge density of a Klein-Gordon wavepacket can be approximated by a Gaussian state with the single-particle interpretation. We find that the minimal initial width of a wavepacket for a good Gaussian approximation in position space is about the Compton wavelength of the particle divided by its Lorentz factor at the initial moment. Relativistic length contraction also manifests in the spreading of the wavepacket's charge density. keywords: Relativistic quantum mechanics ## 1 Introduction Gaussian states in quantum systems are mathematically simple yet capable of describing a broad range of physics, including the ground state, squeezed states, and thermal states of harmonic oscillators (HOs) or field modes. For Gaussian states of two HOs, their degree of quantum entanglement is well defined even when they are in a mixed state [1; 2]. Taking advantage of these properties, a particle-field interacting system in a Gaussian state can be analyzed in detail, and even non-perturbatively if the combined system is linear (e.g. [2; 3; 4]). Quantum mechanical wavepackets of a single relativistic particle described by the Klein-Gordon (KG) equation or the Salpeter equation can behave very differently from the Gaussian wavepackets of non-relativistic quantum mechanics. A closed-form example has been provided by Rosenstein and Usher in Ref. [5], where they explicitly demonstrated that a wavepacket of a particle at rest in (1+1) dimensional Minkowski space with initial width smaller than its Compton wavelength \(\lambda_{C}\) will not peak around the classical trajectory but rather around the lightcone emanating from the initial position. Nevertheless, such highly non-Gaussian wavepackets may not be observable in laboratories. To detect an electron wavepacket no wider than its Compton wavelength, the energy density would need to be so high that particle-antiparticle creation would occur in the experiment [6; 7; 8], leading to a breakdown of the single-particle interpretation. In Ref. [9], we constructed a linearized effective theory for single electrons moving in quantum electromagnetic (EM) fields, where Gaussian states of single electrons and EM fields are considered. To justify the Gaussian approximation for single electrons in our effective theory, and to determine the condition under which the single-particle interpretation applies to our electrons, the wavepackets in Refs. [5; 10] for relativistic particles at rest are insufficient. In this paper, we generalize the wavepacket solution of Ref. [5] to one for relativistic particles in uniform motion, and then construct wavepackets that are initially Gaussian for free particles at constant speed and for accelerated particles in a uniform electric field. We calculate the charge densities of our moving wavepacket solutions and examine whether they behave like Gaussian functions. If they do, we can safely approximate them with Gaussian wavepackets in the Schrodinger equation, associated with Gaussian probability densities in the single-particle interpretation. This paper is organized as follows. In Section 2.1, we give our wavepacket solutions to the Salpeter equation for free relativistic particles in uniform motion.
Our solutions generalize the ones in [5]. After we study the properties of these wavepacket solutions, in Section 2.2 we consider an alternative class of wavepackets for free particles in uniform motion, which are exactly Gaussian at the initial moment. Similar wavepacket solutions to the Klein-Gordon equation for relativistic charged particles accelerated in a uniform electric field are given and studied in Section 3. Combining our results, we find that the minimal initial width for a wavepacket evolving like a Gaussian wavepacket centered at its classical trajectory is about \(\lambda_{C}/\gamma_{0}\), where \(\gamma_{0}=1/\sqrt{1-(v_{0}/c)^{2}}\) is the Lorentz factor of the particle initially at speed \(v_{0}\). Our findings are summarized in Section 4. Finally, in Appendix A, we remark on how the phase of a Klein-Gordon wavepacket solution, when evaluated around the particle's classical trajectory, evolves in a manner akin to the corresponding classical action. ## 2 Wavepackets of free relativistic particles The Klein-Gordon equation for relativistic particles of mass \(m\) and charge \(q\) moving in electromagnetic fields \(A^{\mu}\) reads [8] \[\Big{[}(\hat{p}_{\mu}-qA_{\mu})(\hat{p}^{\mu}-qA^{\mu})c^{2}+m^{2}c^{4}\Big{]}\Psi=0 \tag{1}\] with the signature \((-,+,+,+)\) and \(\hat{p}_{\mu}=-i\hbar\partial_{\mu}\). The charge density is defined as \[\rho(t,\mathbf{x})\equiv\mathrm{Re}\,\frac{q}{mc^{2}}\Psi^{*}(t,\mathbf{x})\Big{[}i\hbar\partial_{t}-qA^{0}(t,\mathbf{x})\Big{]}\Psi(t,\mathbf{x}), \tag{2}\] which indicates the dominance of antiparticles in the regions where \(\rho\) is negative [8]. If \(\rho\) is positive definite everywhere and normalizable, then \(\rho(t,{\bf x})\) can approximately represent the probability density of finding the particle at \((t,{\bf x})\). For free relativistic particles, (1) reduces to \[0=\Big{[}\hat{p}_{\mu}\hat{p}^{\mu}c^{2}+m^{2}c^{4}\Big{]}\Psi=\left[(\hbar\partial_{t})^{2}-\hbar^{2}c^{2}\nabla^{2}+m^{2}c^{4}\right]\Psi \tag{3}\] The solutions of positive energy to (3) also satisfy the Salpeter equation, or square-root KG equation [11], \[i\hbar\partial_{t}\Psi(t,x)=\left(mc^{2}\sqrt{1+\left[\frac{\hbar}{imc}\right]^{2}\nabla^{2}}\,\right)\Psi(t,x), \tag{4}\] whose form is similar to the Schrodinger equation. ### A wavepacket in uniform motion A wavepacket solution to (4) has been provided by Rosenstein and Usher in Ref. [5]. Following the same method, we generalize their result below to the case of free particles in uniform motion. We start with the ansatz (independent of \(y\) and \(z\)) \[u_{p}(t,x)=\exp\left[-\frac{1}{\hbar}(\vartheta+it)W(p)+\frac{i}{\hbar}p(x-v_{0}t)\right], \tag{5}\] where \(v_{0}\) is the velocity of the classical trajectory of the particle in the \(x\)-direction, and \(c\vartheta\) corresponds to the initial width of the wavepacket [5] in the case of \(v_{0}=0\). Note that the dependence of \(c\vartheta\) on the initial width is not linear (for example, see the caption of Figure 2), though \(c\vartheta\) is a monotonically increasing function of the initial width in the parameter range we explored. Inserting the above ansatz into (4), we obtain \[W(p)=mc^{2}\sqrt{1+\Big{(}\frac{p}{mc}\Big{)}^{2}}-pv_{0}, \tag{6}\] which is positive for all \(p=mv\gamma\) since \(v_{0},v<c\). When \(p=p_{0}=mv_{0}\gamma_{0}\) with \(\gamma_{0}\equiv 1/\sqrt{1-(v_{0}/c)^{2}}\), one has \(W(p_{0})=-L_{cl}\) in value, where the classical Lagrangian \(L_{cl}=-mc^{2}/\gamma_{0}\) can be read off from (A.2).
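As a quick numerical check of Eq. (6), the following sketch (in units \(m=c=\hbar=1\), with an assumed \(v_{0}\)) verifies that \(W(p)\) attains its minimum at \(p_{0}=mv_{0}\gamma_{0}\) and that \(W(p_{0})=mc^{2}/\gamma_{0}=-L_{cl}\).

```python
import numpy as np

m = c = hbar = 1.0
v0 = 0.25
gamma0 = 1.0 / np.sqrt(1.0 - (v0 / c) ** 2)
p0 = m * v0 * gamma0

W = lambda p: m * c**2 * np.sqrt(1.0 + (p / (m * c)) ** 2) - p * v0

p = np.linspace(-2.0, 2.0, 400001)
print(p[np.argmin(W(p))], p0)            # numerical minimum sits at p0 = m v0 gamma0
print(W(p0), m * c**2 / gamma0)          # W(p0) = mc^2 / gamma0 = -L_cl
```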
Taking the superposition of \(u_{p}(t,x)\), \[\Psi(t,x)={\cal N}\int dp\,u_{p}(t,x)\tilde{\psi}(p) \tag{7}\] with the simplest choice \(\tilde{\psi}(p)=1\) for all \(p\), we get \[\Psi(t,x)={\cal N}\int dk\exp\left[-\frac{mc^{2}}{\hbar}(\vartheta+it)\sqrt{1+k^{2}}+\frac{mc}{\hbar}(\vartheta v_{0}+ix)k\right] \tag{8}\] where \(k\equiv p/(mc)\). Letting \(k=\sinh\kappa^{\prime}\) and integrating over \(\kappa^{\prime}\), we obtain the closed form of the wavepacket as \[\Psi(t,x)=\sqrt{\frac{mc}{\hbar\pi\gamma_{0}K_{1}\left(\frac{2mc^{2}\vartheta}{\hbar\gamma_{0}}\right)}}\,\frac{(\vartheta+it)c}{F(t,x)}K_{1}\left(\frac{mc}{\hbar}F(t,x)\right), \tag{9}\] where \(K_{1}(z)\equiv\int_{0}^{\infty}d\kappa\,e^{-z\cosh\kappa}\cosh\kappa\) is the modified Bessel function of the second kind, and \(F(t,x)\equiv\sqrt{\left(x-iv_{0}\vartheta\right)^{2}-c^{2}\left(t-i\vartheta\right)^{2}}\). The normalization factor \(\mathcal{N}\) is obtained by requiring \(\int_{-\infty}^{\infty}dx|\Psi|^{2}=1\), most conveniently using the wavefunction \(\Psi\) in the form of (7). A similar integration gives \(\langle x\rangle=\int_{-\infty}^{\infty}dx\,x|\Psi|^{2}=v_{0}t\). #### 2.1.1 Initial width and non-Gaussianity As shown in Figure 1, the charge density of wavepacket solution (9) for a relativistic free particle behaves like a Gaussian function peaked around the classical trajectory if the value of the parameter \(c\vartheta\) (\(\gamma\) in [5]) is sufficiently large (e.g., the dashed curve with \(c\vartheta=100\) in Figure 1). This is consistent with the observation in Ref. [5]. Note that in Figure 1 the Lorentz factor of the particle is \(\gamma_{0}\approx 1.03\), so the particle here is not in highly relativistic motion. If the initial width is well below the Compton wavelength of the particle [e.g., the case of \(c\vartheta=0.1\) in Figure 1, corresponding to the width \(2\sigma=0.092\bar{\lambda}_{C}\ll 2\pi\bar{\lambda}_{C}=\lambda_{C}\) at \(t=0\) in Figure 2 (right)], the charge density behaves very differently from a moving Gaussian function. At \(t=0\), while the shape of \(|\Psi|^{2}\) is still close to a Gaussian function and the wavefunction \(\Psi\) here should include only the positive-energy modes, as these are solutions to (4), the shape of the corresponding charge density \(\rho\) is already non-Gaussian: there exist regions where the charge density is negative, indicating the presence of antiparticles [black solid curve in Figure 1 (left)]. So the single-particle interpretation fails here: neither \(\rho(t,\mathbf{x})\) nor \(|\Psi(t,\mathbf{x})|^{2}\) can be interpreted as a probability density of finding the particle at \((t,\mathbf{x})\). Figure 1: The (scaled) spatial distribution of the charge density \(\rho(t,x)\) (normalized by \(\int_{-\infty}^{\infty}dx\rho=1\)) for \(t=0\) (left), \(10\) (middle), and \(20\) (right) with various initial widths of wavefunction (9). Here \(c=\hbar=m\equiv 1\), and so \(x\) is in units of the reduced Compton wavelength of the particle \(\bar{\lambda}_{C}=\hbar/(mc)=\lambda_{C}/(2\pi)\). The black dashed, gray, black dotted, and black solid curves in each plot represent the charge densities \(\rho\) of wavefunction (9) with \(c\vartheta=100\), \(10\), \(1\), and \(0.1\), all moving at a constant speed \(v_{0}=c/4\) to the right. Note that the black solid curves have been scaled down to \(1/20\) of their original values. The green curves represent the Gaussian function \(\rho_{{}_{G}}(t,x)=(\sigma_{0}\sqrt{\pi})^{-1}e^{-(x-v_{0}t)^{2}/\sigma_{0}^{2}}\), where we choose \(\sigma_{0}=9.468\) (constant in time) as a reference for the black dashed curve (\(c\vartheta=100\)). At \(t=20\), the spreading of the black dashed wavepacket is not yet significant.
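The closed form (9) is straightforward to evaluate numerically; the sketch below does so with SciPy's modified Bessel function, dropping the overall normalisation (which cancels once the charge density is normalised) and using the principal branch of the complex square root, which is adequate away from the lightcone. The charge density then follows from Eq. (2) with \(A^{0}=0\) by a finite difference in \(t\); the parameter values are assumptions matching the gray curves of Figure 1.

```python
import numpy as np
from scipy.special import kv

hbar = m = c = 1.0
v0, theta = 0.25, 10.0                 # c * theta sets the initial width

def psi(t, x):
    # Eq. (9) up to the overall normalisation N (principal branch of the square root)
    F = np.sqrt((x - 1j * v0 * theta) ** 2 - (c * (t - 1j * theta)) ** 2 + 0j)
    return (theta + 1j * t) * c / F * kv(1, m * c * F / hbar)

x = np.linspace(-40.0, 60.0, 4000)
t, dt = 20.0, 1e-4
dpsi_dt = (psi(t + dt, x) - psi(t - dt, x)) / (2.0 * dt)
rho = np.real(1j * np.conj(psi(t, x)) * dpsi_dt)   # Eq. (2) with A^0 = 0, constants dropped
rho /= rho.sum() * (x[1] - x[0])                    # normalise as in Figure 1
```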
Furthermore, when \(t>0\), the charge density \(\rho\) splits into two peaks around the left and right edges of the lightcone in the \(t\)-\(x\) diagram [Figure 1 (middle) and (right)]. Obviously, such double-peak distributions are highly non-Gaussian. If we take the values of \(c\vartheta\) from 100 down to 0.1 continuously, one can see in Figure 1 that the shape of the charge-density distribution changes continuously from almost Gaussian to highly non-Gaussian. Actually, the gray curves of \(c\vartheta=10\) in Figure 1 are quite close to Gaussian functions, though a small asymmetry about the classical particle position can be seen. #### 2.1.2 Similarity to Gaussian functions To see how close the above wavefunctions are to a moving Gaussian wavepacket of half-width \(\sigma\) and momentum \(\bar{p}(t)\) centered at \(x=\bar{x}(t)\), namely, \[\varphi_{G}(t,x)\equiv\frac{1}{\sqrt{\sigma\sqrt{\pi}}}\exp\left[-\frac{1}{2\sigma^{2}}\big{(}x-\bar{x}(t)\big{)}^{2}+\frac{i}{\hbar}\bar{p}(t)x\right] \tag{10}\] with \(\bar{x}(t)=\bar{v}t\) and \(\bar{p}(t)=m\bar{v}\bar{\gamma}=m\bar{v}/\sqrt{1-(\bar{v}/c)^{2}}\) for a free particle at constant velocity \(\bar{v}\) in the \(x\)-direction, we calculate the projection \[\mathcal{G}_{\Psi}=\max_{\sigma}\left|\int dx\varphi_{G}^{*}(t,x)\Psi(t,x)\right|^{2} \tag{11}\] with the best fit of \(\sigma\) producing the maximum value of \(\mathcal{G}_{\Psi}\). The similarity of the charge densities \(\rho\) of the above wavefunctions to a normalized Gaussian function of half-width \(\sigma\) centered at \(x=\bar{x}(t)\), \[\rho_{G}(t,x)\equiv\left|\varphi_{G}(t,x)\right|^{2}=\frac{1}{\sigma\sqrt{\pi}}\exp\left[-\frac{1}{\sigma^{2}}\big{(}x-\bar{x}(t)\big{)}^{2}\right], \tag{12}\] may also be estimated by calculating \[\mathcal{G}_{\rho}(t)=\max_{\sigma}\frac{\int dx\,\sqrt{\rho_{G}(t,x)\rho(t,x)}}{\sqrt{\int dx\,|\rho(t,x)|}}. \tag{13}\] Once \(\rho(t,x)\) has negative regions in \(x\), the square root in the integrand of the numerator generates complex values, though their imaginary parts are all negligible in Figures 2, 5, and 7. Figure 2: (Left) Time evolution of \(\mathcal{G}_{\Psi}\) [dashed curves, see Eq. (11)] and \(\mathcal{G}_{\rho}\) [solid curves, see Eq. (13)] of the wavepackets in Figure 1. The black, red, green, and blue dots and curves represent the cases of \(c\vartheta=100\), \(10\), \(1\), and \(0.1\), respectively. (Right) The half-widths \(\sigma\) producing the values of \(\mathcal{G}_{\Psi}\) (dashed curves) and \(\mathcal{G}_{\rho}\) (solid) in the left plot. At \(t=0\), the initial widths of the best-fit Gaussian functions to the charge densities \(\rho\) for wavefunctions (9) with \(c\vartheta=100\), \(10\), \(1\), and \(0.1\) are \(2\sigma=18.94\), \(5.654\), \(1.048\), and \(0.092\), respectively. In Figure 2 (left), we show the quantities \(\mathcal{G}_{\Psi}\) (dashed curves) and \(\mathcal{G}_{\rho}\) (solid curves) of the same wavepackets as in Figure 1. One can see that the larger the value of \(c\vartheta\) [corresponding to the larger initial width \(2\sigma\) in Figure 2 (right)], the lower the decay rate of the similarity \(\mathcal{G}_{\Psi}\) or \(\mathcal{G}_{\rho}\) to Gaussian functions.
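A minimal sketch of how the similarity measure of Eq. (13) and its best-fit half-width could be evaluated on a uniform grid is given below; the bounds of the \(\sigma\) search are arbitrary assumptions. Applied to the `rho` array of the previous sketch with `x_bar = v0 * t`, it returns the pair \((\mathcal{G}_{\rho},\sigma)\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def g_rho(x, rho, x_bar):
    # Eq. (13): similarity of rho to the best-fit normalised Gaussian centred at x_bar;
    # x is a uniform grid and rho the (possibly partly negative) charge density on it.
    dx = x[1] - x[0]
    denom = np.sqrt(np.sum(np.abs(rho)) * dx)
    def overlap(sigma):
        rho_g = np.exp(-((x - x_bar) / sigma) ** 2) / (sigma * np.sqrt(np.pi))
        # complex square root where rho < 0; only the real part is kept, as in the text
        return np.real(np.sum(np.sqrt(rho_g * (rho + 0j))) * dx) / denom
    res = minimize_scalar(lambda s: -overlap(s), bounds=(0.05, 50.0), method="bounded")
    return -res.fun, res.x            # (G_rho, best-fit half-width sigma)
```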
For the same wavepacket solution, \(\mathcal{G}_{\Psi}\) drops faster than \(\mathcal{G}_{\rho}\) as \(t\) increases. This is because the phase of \(\Psi\) becomes highly nonlinear in \(x\) for \(t>0\) and evolves quickly in \(t\) in a complicated way, so that the projection in (11) degrades rapidly, while \(\mathcal{G}_{\rho}\) is not sensitive to the phase. The corresponding half-widths \(\sigma\) for \(\Psi\) (dashed curves) and \(\rho\) (solid curves) are shown in Figure 2 (right), where one can see that the best-fit half-widths \(\sigma\) for the wavefunctions \(\Psi\) tend to overlap as \(t\to\infty\). In contrast, for each wavepacket solution the half-width \(\sigma\) of the charge density \(\rho\) grows linearly in time for sufficiently large \(t\), at a rate depending on the parameter \(c\vartheta\): the larger \(c\vartheta\) is, the larger the initial width \(2\sigma\) and the lower the spreading rate of \(\rho\). When the initial width \(2\sigma\) is well below the Compton wavelength, the spreading rate can exceed the speed of light (blue solid curve) as the shape of \(\rho\) becomes highly non-Gaussian in Figure 1. #### 2.1.3 Momentum spectrum The peak-splitting of a wavepacket with initial width well below its Compton wavelength indicates that the momentum spectrum of the wavepacket is so broad that many modes of high positive and negative momenta are included. The speeds of these high-momentum modes are all close to the speed of light, so they accumulate, or constructively interfere, around the lightcone in the \(t\)-\(x\) diagram in Figure 1. To see this more clearly, let \[\tilde{\Psi}(t,p)=\int\frac{dx}{\sqrt{2\pi\hbar}}e^{-\frac{i}{\hbar}px}\Psi(t,x)=\mathcal{N}\sqrt{2\pi\hbar}\,u_{p}(t,0) \tag{14}\] with \(u_{p}(t,x)\) given in (5). The momentum spectrum of \(\Psi(t,x)\) can be observed via the distribution function \[\tilde{\rho}(t,p)\equiv\left|\tilde{\Psi}(t,p)\right|^{2}=2\pi\hbar|\mathcal{N}|^{2}u_{p}^{*}(t,0)u_{p}(t,0)=2\pi\hbar|\mathcal{N}|^{2}e^{-\frac{2}{\hbar}\vartheta W(p)}, \tag{15}\] which has its maximum at \(p=p_{0}=mv_{0}\gamma_{0}\) since \(W(p)\) has its minimum there (\(W^{\prime}(p_{0})=0\)). As shown in Figure 3 (left), the smaller the value of \(c\vartheta\), the wider the range of modes with \(p\) around \(p_{0}\) that contribute. When \(c\vartheta<O(\bar{\lambda}_{C})\), many modes with \(|p|>mc=\hbar/\bar{\lambda}_{C}\) are involved in the wavepacket. These modes move at speeds close to \(c\) and constructively interfere around the lightcone. In particular, the momentum spectra of the cases with \(c\vartheta=1\) (black dotted curve) and \(0.1\) (black solid) are significantly nonzero in the region \(p<-mc=-1\). This is associated with the significant left-moving peaks, apart from the main peaks, of the charge densities with \(c\vartheta=1\) and \(0.1\) in Figure 1. ### Wavepackets initially Gaussian In Figures 1 and 2 we have seen that the minimal initial width for which the charge density of wavefunction (9) can be approximated by a Gaussian function is roughly \(5.65\times\) to \(18.94\times\)\(\lambda_{C}/(2\pi)\approx 1\times\) to \(3\times\)\(\lambda_{C}\), corresponding to \(c\vartheta=10\) to \(100\). This is not a definite lower limit, however. In this section we show that the minimal initial width for a good Gaussian approximation to another class of slowly moving wavepackets can also reach the Compton wavelength \(\lambda_{C}\).
If the wavepackets in that class are in highly relativistic motion, the minimal initial width for a Gaussian approximation can be even smaller in the rest frame. Starting with (7), one can choose \(\vartheta=0\) and set \(\tilde{\psi}(p)=\exp[-\frac{1}{\hbar}\tilde{\vartheta}W(p)]\). Then \(\left.u_{p}(t,x)\right|_{\vartheta=0}=\tilde{u}_{p}(t,x)\equiv\exp\left\{-\frac{i}{\hbar}\left[E(p)t-px\right]\right\}\) with \(E(p)=\sqrt{m^{2}c^{4}+p^{2}c^{2}}\) becomes the conventional plane-wave solution, and one still obtains \(\Psi(t,x)\) in (9) with \(\vartheta\) replaced by \(\tilde{\vartheta}\). Thus, we can liberate ourselves from (9) by choosing \(\vartheta=0\) and introducing an alternative \(\tilde{\psi}(p)\), which may be designed to make the wavepacket act more "classically". Suppose the initial wavepacket at \(t=0\) is exactly a Gaussian function of half-width \(\sigma_{0}>0\) and momentum \(p_{0}\), centered at \(x=x_{0}\), \[\Psi(0,x)=\frac{1}{\sqrt{\sigma_{0}\sqrt{\pi}}}\exp\left[-\frac{(x-x_{0})^{2}}{2\sigma_{0}^{2}}+\frac{i}{\hbar}p_{0}(x-x_{0})\right]. \tag{16}\] Figure 3: (Left) The black dashed, gray, black dotted, and black solid curves represent \(\tilde{\rho}(t,p)\) in (15) for the wavepackets with the parameter \(c\vartheta=100\), \(10\), \(1\), and \(0.1\), respectively, for a free particle with \(v_{0}=c/4\) (\(\gamma_{0}\approx 1.03\)). Other parameters have the same values as those in Figure 1. (Right) Comparison between the Gaussian distribution (17) with \(\sigma_{0}=0.3\) (black) and the distribution function (15) with \(c\vartheta=0.3\) (gray dashed) and \(c\vartheta=96\) (gray dotted) for a particle in relativistic motion with \(\gamma_{0}=10\). Here the distribution functions are normalized to \(1\) at \(p=p_{0}\approx 9.95\) (\(m=c=\hbar=1\)). One can see that in this regime the value of \(c\vartheta\) in (15) can be very different from the initial width of the wavepacket in position space, which is roughly \(2\sigma_{0}=0.6\). We also compare (17) with \(\sigma_{0}=3\) (green) and (15) with \(c\vartheta=10\) (blue dotted) for \(v_{0}=0\). These two curves almost overlap, and the tails in the region \(|p|>1\) are negligible.
With initial width well below the Compton wavelength, in contrast, the charge density \(\rho\) of a free wavepacket (18) at zero speed (\(p_{0}=0\) and \(\gamma_{0}=1\)) is very non-Gaussian even at the initial moment \(t=0\) when (18) is exactly Figure 4: Time evolution of the charge density \(\rho(t,x)\) of wavepacket (18) with the parameter values \(x_{0}=0\) and \((\sigma_{0},\gamma_{0})=(3,1)\) (top-left), \((3,10)\) (lower-left), \((0.3,10)\) (lower-right), and \((0.3,1)\) (upper-right). The curves from dark to light represents \(\rho\) at \(t=0,4,8,12\), and \(16\). Gaussian. In the upper-right plot of Figure 4 (\(\sigma_{0}=0.3\) and \(\gamma_{0}=1\)), one can see that \(\rho\) at \(t=0\) has negative regions where the antiparticle density dominates, indicating that the single-particle interpretation fails in this case. In the same plot, \(\rho(t,x)\) also splits into two peaks moving apart from each other as \(t\) increases, as its wide momentum distribution includes significant contributions from the modes of \(|p|>mc\). These non-Gaussian features are similar to the case of \(c\vartheta=0.1\) in Figure 1 where the wavepackets have \(p_{0}/m=v_{0}\gamma_{0}<1\) (non-relativistic motion). For the initial momentum \(p_{0}\gg mc\), we observed that the left-moving peak moving apart from the main peak will be well suppressed if the Gaussian distribution in momentum space has a half-width \(\hbar/\sigma_{0}\) less than about \(p_{0}/3\) [the black curve in Figure 3 (right)], namely, \(|\tilde{\psi}(-1)|^{2}/|\tilde{\psi}(p_{0})|^{2}\approx e^{-\sigma_{0}^{2}p_ {0}^{2}/\hbar^{2}}<e^{-9}\approx 10^{-4}\) with \(\tilde{\psi}(p)\) in (17). Thus, to get a good Gaussian approximation with a charge density having only one peak (around \(x=v_{0}t\)), the minimal initial half-width \(\sigma_{0}\approx 3\hbar/p_{0}\approx 3\hbar/(mc\gamma_{0})=3\lambda_{C}/(2 \pi\gamma_{0})\) in position space can be very small in the direction of motion for a highly relativistic particle (\(\gamma_{0}\gg 1\)) [see Figures 4 (lower-right) and Figure 3 (right).] Indeed, with the same initial width below the Compton wavelength, the non-Gaussianity in Figure 4 (upper-right) for the case of \((\sigma_{0},\gamma_{0})=(0.3,1)\) is suppressed in Figure 4 (lower-right) for \((\sigma_{0},\gamma_{0})=(0.3,10)\). The Gaussian approximation for the charge density in Figure 4 (lower-right) is as good as the one in Figure 4 (upper-left), and \(\sigma_{0}=3/\gamma_{0}\) in both cases [also compare the green and black curves in Figure 5 (left).] Combining the above observations, we find that the Gaussian approximation for the charge density \(\rho\) of wavefunction (18) will be good if the initial width of the wavepacket \(2\sigma_{0}\) is above \(O(\lambda_{C})/\gamma_{0}\). The presence of \(\gamma_{0}\) may be considered as a manifestation of length contraction. Compare the upper and the lower plots in the same column in Figure 4, one can see that for the same initial half-width \(\sigma_{0}\), the similarity of \(\rho\) to Gaussian functions drops slower for a faster particle in the laboratory frame. This can be Figure 5: (Left) Time evolution of \(\mathcal{G}_{\rho}\) (solid curve) and \(\mathcal{G}_{\Psi}\) (dashed) of the wavepackets in Figure 4. The red solid line and the red dashed line cannot be distinguished in this plot. (Right) Time evolution of the half-width \(\sigma\) producing the values of \(\mathcal{G}_{\rho}\) (solid curves) and \(\mathcal{G}_{\Psi}\) (dashed) in the left plot. 
Moreover, for sufficiently large \(t\), the half-width \(\sigma\) of the best-fit Gaussian to the charge density \(\rho\) grows linearly in \(t\), as shown in Figure 5 (right), while the spreading rate in the non-classical case (blue curve) can exceed the speed of light. With the same initial half-width \(\sigma_{0}\), the spreading rate of \(\rho\) is lower for a faster-moving particle. For particles in uniform motion, however, one cannot tell whether this dependence of the spreading rate on the particle speed stems from time dilation or from length contraction. This will become clearer in the case of accelerated particles.

## 3 Klein-Gordon wavepacket of particles in uniform electric field in Minkowski coordinates

In shaping the initial wavefunction at \(t=0\) for a charged particle in a uniform electric field, it is convenient to choose the EM four-potential \(A^{\mu}=(0,-\mathcal{E}t,0,0)\), which gives the electric field \(\mathbf{E}=-cF_{0j}\hat{x}^{j}=\mathcal{E}\hat{x}\). Figure 6: Time evolution of the charge density \(\rho(t,x)\) of wavepacket (22) with (21) and (23). Here \(F=0.1\), \((\sigma_{0},\gamma_{0})=(3,1)\) (upper-left), \((0.3,1)\) (upper-right), and \((0.3,10)\) (lower-right). Their momentum spectra \(\tilde{\rho}(t,p)\equiv|\psi_{\mathbf{p}}(t)|^{2}\) with \(\psi_{\mathbf{p}}(t)\) in (21) are shown in the lower-left plot, where the black, blue, and green curves represent the cases of \((\sigma_{0},\gamma_{0})=(3,1)\), \((0.3,1)\), and \((0.3,10)\), respectively, with the peak values of each case at \(t=0\) normalized to \(1\). In the lower-left plot, only the blue curves have significant tails in the region of \(p<-mc=-1\), corresponding to the double-peak structure in the upper-right plot for the charge density \(\rho(t,x)\) of \((\sigma_{0},\gamma_{0})=(0.3,1)\). The solid curves from dark to light represent \(\rho\) or \(\tilde{\rho}\) at \(t=0,4,8,12,16,\cdots\), while the dotted curves in the lower-right plot represent \(\rho\) at \(t=-4,-8\), and \(-12\), from dark to light. The corresponding Klein-Gordon equation reads (\(c=\hbar=1\), \({\bf x}\equiv(x,y,z)\)) \[\left[\partial_{t}^{2}+(-i\partial_{x}+q{\cal E}t)^{2}-\partial_{y}^{2}-\partial_{z}^{2}+m^{2}\right]\Psi(t,{\bf x})=0. \tag{19}\] Inserting the ansatz \(\Psi=e^{i{\bf p}\cdot{\bf x}}\psi_{\bf p}(t)\) into the above equation, one has \[\left[\partial_{t}^{2}+(p_{x}+q{\cal E}t)^{2}+M^{2}\right]\psi_{\bf p}(t)=0 \tag{20}\] with \(M^{2}=m^{2}+p_{y}^{2}+p_{z}^{2}\). Below we take \(p_{y}=p_{z}=0\) for simplicity. The general solution to the above equation is [7] \[\psi_{\bf p}(t)=c_{\bf p}^{+}D_{-\frac{1}{2}-\frac{iM^{2}}{2F}}\left[\frac{i+1}{\sqrt{F}}(p_{x}+Ft)\right]+c_{\bf p}^{-}D_{-\frac{1}{2}+\frac{iM^{2}}{2F}}\left[\frac{i-1}{\sqrt{F}}(p_{x}+Ft)\right], \tag{21}\] where \(D_{\nu}(z)\) are the parabolic cylinder functions, \(F\equiv q{\cal E}\), and \(c_{\bf p}^{\pm}\) are constants in time. Suppose the initial wavefunction at \(t=0\) is Eq. (16) again.
Then the wavefunction for \(t\geq 0\) will be \[\Psi(t,{\bf x})=\int d^{3}p\,e^{i{\bf p}\cdot{\bf x}}\psi_{\bf p}(t) \tag{22}\] with the constants \[c_{\bf p}^{\pm}={\cal N}\delta(p_{y})\delta(p_{z})\int\frac{dx}{2\pi}\,\Psi(0,{\bf x})\left(e^{ip_{x}x}D_{-\frac{1}{2}\mp\frac{iM^{2}}{2F}}\left[\frac{i\pm 1}{\sqrt{F}}p_{x}\right]\right)^{*}\] \[={\cal N}\sqrt{2\sigma_{0}\sqrt{\pi}}\,\delta(p_{y})\delta(p_{z})D_{-\frac{1}{2}\pm\frac{iM^{2}}{2F}}\left[\frac{-i\pm 1}{\sqrt{F}}p_{x}\right]e^{-\frac{\sigma_{0}^{2}}{2}(p_{x}-p_{0})^{2}-i(p_{x}-p_{0})x_{0}}, \tag{23}\] where \({\cal N}\) is the normalization constant. Since \(A^{0}=0\) in this gauge, the charge density (2) is simply \(\rho=\frac{q\hbar}{mc}{\rm Re}\,\,i\Psi^{*}\partial_{t}\Psi\) here. In Figure 6, we show three examples of the charge density \(\rho\) of wavepacket (22) with (23), \(F=0.1\), and other parameter values similar to those in Figure 4. These examples correspond to the classical trajectories \(\bar{x}(t)=c\sqrt{\alpha^{-2}+t^{2}}\) (the upper-left and upper-right plots) and \(\bar{x}(t)=c\sqrt{\alpha^{-2}+(t+t_{0})^{2}}-c\sqrt{\alpha^{-2}+t_{0}^{2}}+c\alpha^{-1}\) (the lower-right plot) with \(\alpha\equiv F/(mc)\) and \(t_{0}\equiv p_{0}/F\) for uniformly accelerated charges, and so \(\bar{x}(0)=c\alpha^{-1}=10\) in all three examples in Figure 6. For a charged particle initially at rest, we find that \(\rho\) can be close to a Gaussian function with the value of \(\sigma_{0}\) as small as 3 [Figure 6 (upper-left)], _i.e._ with the initial width about \(2\sigma_{0}=6\) in units of the reduced Compton wavelength of the particle. This result is the same as we observed in Figure 4 (upper-left) for free particles. For \(\sigma_{0}<3\), as \(\sigma_{0}\) decreases, the deviation of the charge density \(\rho\) from a Gaussian function becomes more and more significant at \(t=0\) (though \(|\Psi(0,x)|^{2}\) is still Gaussian), and the non-Gaussianity of \(\rho\) grows more and more quickly as \(t\) increases [Figure 7 (left)]. In particular, as shown in Figure 6 (upper-right), a wavepacket of initial width \(\sigma_{0}\) well below the Compton wavelength and initial momentum \(p_{0}=0\) will quickly evolve to a double-peak structure, though the expectation value of position still moves around \(\bar{x}(t)\) like a classical accelerated charge. This behavior is similar to that of the free wavepackets with small initial widths in Figures 1 and 4. The significant tail of the corresponding momentum spectrum \(\tilde{\rho}\) (blue curves) in the region of \(p<-mc=-1\) in Figure 6 (lower-left) is also similar to the momentum spectra of the free wavepackets with small initial widths in Figure 3. When both the initial speed and the strength of the uniform electric field are small, the charge density \(\rho\) of wavepacket (22) with initial width \(2\sigma_{0}\) above \(O(\lambda_{C})\) spreads as \(t\) increases, while the spreading rate decreases as the speed of the position expectation value goes to the speed of light [Figure 6 (upper-left)]. In Figure 7 (right) we can see that the half-width \(\sigma\) of the charge density \(\rho\) in this case (black curve) evolves like \(t/\bar{\gamma}(t)\) (red-dotted), rather than like the proper time \(\tau=\alpha^{-1}\sinh^{-1}\alpha t\), for sufficiently large \(t\) (see footnote 1). This may be considered a manifestation of length contraction, rather than time dilation, relative to the case of free particles at zero speed, whose half-width evolves like \(t\) for sufficiently large \(t\) (cf. Figure 5).
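Numerically, one branch of the mode function (21) can be evaluated with mpmath's parabolic cylinder function \(D_{\nu}(z)\), which accepts the complex order and argument appearing here; a minimal sketch follows. The full \(\psi_{\mathbf{p}}(t)\) would mix both branches with the coefficients \(c_{\mathbf{p}}^{\pm}\) of Eq. (23), and the parameter values below are assumptions.

```python
import mpmath as mp

F, M = 0.1, 1.0                        # F = qE; M = m when p_y = p_z = 0
nu_plus = -0.5 - 0.5j * M**2 / F       # order of the first branch in Eq. (21)

def mode_plus(t, px):
    # the c_p^+ branch of Eq. (21) only; coefficients from Eq. (23) are omitted
    z = (1j + 1.0) / mp.sqrt(F) * (px + F * t)
    return mp.pcfd(nu_plus, z)         # mpmath's D_nu(z) in Whittaker's notation

print(mode_plus(0.0, 0.5))
```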
Footnote 1: The worldline of our uniformly accelerated charge at proper acceleration \(a\) in Figure 6 (upper-left) and (upper-right) is \(\bar{z}^{\mu}(\tau)=(c\alpha^{-1}\sinh\alpha\tau,c\alpha^{-1}\cosh\alpha\tau,0,0)\) parametrized by its proper time \(\tau\), or \(\bar{z}^{\mu}(t)=(ct,c\sqrt{\alpha^{-2}+t^{2}},0,0)\) parametrized by the Minkowski time \(t\), with \(\alpha\equiv a/c\) and \(\bar{z}^{\mu}(0)=(0,c\alpha^{-1},0,0)\). It is straightforward to obtain the three-velocity \(\bar{v}^{i}\equiv d\bar{z}^{i}(t)/dt=(ct/\sqrt{\alpha^{-2}+t^{2}},0,0)\) and the Lorentz factor \(\bar{\gamma}(t)=\big{(}1-\frac{v_{i}}{c}\,\frac{v^{i}}{c}\big{)}^{-1/2}=\sqrt{1+(\alpha t)^{2}}\). If the initial speed \(v_{0}\) of the charged particle at \(t=0\) is close to the speed of light, then again, the minimal initial width for a long-lasting Gaussian wavepacket goes down to \(O(1/\gamma_{0})\) of the Compton wavelength, as shown in Figure 6 (lower-right), where \(\sigma_{0}=3/\gamma_{0}=0.3\). This may also be considered a manifestation of length contraction. ## 4 Summary We have demonstrated with selected examples that the charge density of a wavepacket of a relativistic particle in uniform motion can be approximated by a Gaussian wavefunction with the single-particle interpretation if the initial width is above \(O(\lambda_{C})/\gamma_{0}\), where \(\lambda_{C}\) is the Compton wavelength of the particle and \(\gamma_{0}\) is the Lorentz factor of the particle at the initial moment. For a particle with non-negative initial momentum \(p_{0}\), an initial width above \(O(\lambda_{C})/\gamma_{0}\) in position space corresponds to a momentum spectrum whose tail in the region \(p<-mc\) is negligibly small. For wavepackets of particles in uniform motion with all parameter values the same except the initial speed, the Gaussian approximation remains good for a longer time for a particle with a higher initial speed, when observed in the laboratory (rest) frame. A wavepacket of a particle linearly accelerated in a uniform electric field shows similar behaviors in both position space and momentum space. With an initial width above the minimal one, the spreading of the charge density of a uniformly accelerated Gaussian wavepacket appears frozen in the laboratory frame as the group velocity of the wavepacket approaches the speed of light. We find that this is a manifestation of length contraction rather than time dilation. The above findings in the context of relativistic quantum mechanics suggest that the UV cutoff of the electron-photon interaction in electrodynamics could also be \(O(\lambda_{C})/\bar{\gamma}(t)\) in an effective theory quantized in Minkowski coordinates for single electrons interacting with EM fields [9]. **Acknowledgment** SYL thanks Bei-Lok Hu for illuminating discussions. YCH and SYL are supported by the National Science and Technology Council of Taiwan under grant No. NSTC 112-2112-M-018-003 and in part by the National Center for Theoretical Sciences, Taiwan. ## Appendix A Phase and classical action The classical action for a charged particle moving in EM fields in the Minkowski-time gauge reads [12; 9] \[S_{cl}=\int dt\left\{-mc^{2}\sqrt{1-\frac{1}{c^{2}}\frac{dz_{i}}{dt}\frac{dz^{i}}{dt}}+qcA_{0}(t,\mathbf{z}(t))+q\frac{dz^{i}}{dt}A_{i}(t,\mathbf{z}(t))\right\}\equiv\int dt\,L_{cl}, \tag{A.1}\] where \(L_{cl}\) is the classical Lagrangian.
For a free particle in uniform motion at speed \(v_{0}\), one has \(A^{\mu}=0\) and \[S_{cl}=\int dt(-mc^{2})\sqrt{1-\frac{v_{0}^{2}}{c^{2}}}=-\frac{mc^{2}}{\gamma_{0}}t, \tag{A.2}\] where \(\gamma_{0}\) is the Lorentz factor of the particle motion. Comparing (A.2) with (A.1), one gets \(L_{cl}=-mc^{2}/\gamma_{0}\) in this case. Now write (9) for the free particle as \[\Psi=R(t,\mathbf{x})e^{i\phi(t,\mathbf{x})} \tag{A.3}\] where the real functions \(R\) and \(\phi\) represent the magnitude and phase, respectively. In Figure A.8 (left), one can see that the phase of wavefunction (9) evaluated along the particle's classical trajectory, namely \(\phi(t,\bar{\mathbf{x}}(t))\), behaves similarly to the particle's classical action \(S_{cl}\) in (A.2) for sufficiently large \(t\), as expected. Indeed, when \(t\) is sufficiently large and \(x=\bar{x}(t)=x_{0}+v_{0}t\), one has \(F(t,\bar{x}(t))\approx ic\sqrt{1-(v_{0}/c)^{2}}\,t\) in (9), and the phase of \(\Psi(t,\bar{x}(t),0,0)\) behaves like \(-mc^{2}\sqrt{1-(v_{0}/c)^{2}}\,t/\hbar=-mc^{2}\tau/\hbar=S_{cl}/\hbar\) with the particle's proper time \(\tau\) along \(\mathbf{x}=(\bar{x},0,0)\), according to the asymptotic behavior of the modified Bessel functions [13], from which one can also see \(\phi(t,\bar{\mathbf{x}}(t))-S_{cl}(t)\to-\pi/4\) as \(t\to\infty\). As shown in Figure A.8 (middle), the phase of wavefunction (18) for a free particle also behaves like the classical action (A.2) when \(t\) is sufficiently large. For a charged particle moving in a uniform electric field as considered in Section 3, we have \(A^{\mu}=(0,-\mathcal{E}t,0,0)\), \(\frac{d}{dt}\bar{x}=c\alpha(t+t_{0})/\sqrt{1+[\alpha(t+t_{0})]^{2}}\), and \(\frac{d}{dt}\bar{y}=\frac{d}{dt}\bar{z}=0\), so (A.1) reads \[S_{cl}=-mc^{2}\int_{0}^{t}d\bar{t}\,\frac{1+\alpha^{2}\bar{t}\left(\bar{t}+t_{0}\right)}{\sqrt{1+\alpha^{2}\left(\bar{t}+t_{0}\right)^{2}}}. \tag{A.4}\] When \(\alpha(t+t_{0})\) is sufficiently large, \(S_{cl}\sim-mc^{2}\int^{t}d\tilde{t}\,\alpha\tilde{t}=-mc^{2}\alpha t^{2}/2=-cFt^{2}/2\). In Figure A.8 (right), one can see that the phase \(\phi(t,\bar{\mathbf{x}}(t))\) of wavefunction (22) with (21) and (23) also behaves like \(S_{cl}\) in (A.4) for sufficiently large \(t\).
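The late-time behaviour of (A.4) quoted above is easy to confirm by direct quadrature; a minimal sketch, using the paper's parameter choice \(F=0.1\) and assuming \(p_{0}=0\) (so \(t_{0}=0\)), is given below.

```python
import numpy as np
from scipy.integrate import quad

m = c = 1.0
F, p0 = 0.1, 0.0
alpha, t0 = F / (m * c), p0 / F

def S_cl(t):
    # Eq. (A.4) by numerical quadrature
    integrand = lambda tb: (1 + alpha**2 * tb * (tb + t0)) / np.sqrt(1 + alpha**2 * (tb + t0) ** 2)
    val, _ = quad(integrand, 0.0, t)
    return -m * c**2 * val

for t in (10.0, 50.0, 200.0):
    print(t, S_cl(t), -c * F * t**2 / 2)   # approaches -c F t^2 / 2 at late times
```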
2308.16620
GA-NIFS: JWST/NIRSpec IFU observations of HFLS3 reveal a dense galaxy group at z~6.3
Massive, starbursting galaxies in the early Universe represent some of the most extreme objects in the study of galaxy evolution. One such source is HFLS3 (z~6.34), which was originally identified as an extreme starburst galaxy with mild gravitational magnification ($\mu$~2.2). Here, we present new observations of HFLS3 with the JWST/NIRSpec IFU in both low (PRISM/CLEAR; R~100) and high spectral resolution (G395H/290LP; R~2700), with high spatial resolution (~0.1") and sensitivity. Thanks to the combination of the NIRSpec data and a new lensing model with accurate spectroscopic redshifts, we find that the 3"x3" field is crowded, with a lensed arc (C, $z=6.3425\pm0.0002$), two galaxies to the south (S1 and S2, $z=6.3592\pm0.0001$), two galaxies to the west (W1, $z=6.3550\pm0.0001$; W2, $z=6.3628\pm0.0001$), and two low-redshift interlopers (G1, $z=3.4806\pm0.0001$; G2, $z=2.00\pm0.01$). We present spectral fits and morpho-kinematic maps for each bright emission line from the R2700 data for all sources except G2. From a line ratio analysis, the galaxies in component C are likely powered by star formation, while we cannot rule out or confirm the presence of AGN in the other high-redshift sources. We perform gravitational lens modelling, finding evidence for a two-source composition of the lensed central object and a comparable magnification factor ($\mu$=2.1-2.4) to previous work. The projected distances and velocity offsets of each galaxy suggest that they will merge within the next ~1Gyr. Finally, we examine the dust extinction-corrected SFR(Ha) of each z>6 source, finding that the total star formation ($510\pm140$Msol/yr, magnification-corrected) is distributed across the six z~6.34-6.36 objects over a region of diameter ~11kpc. Altogether, this suggests that HFLS3 is not a single starburst galaxy, but instead is a merging system of star-forming galaxies in the Epoch of Reionisation.
G. C. Jones, H. Ubler, M. Perna, S. Arribas, A. J. Bunker, S. Carniani, S. Charlot, R. Maiolino, B. Rodriguez Del Pino, C. Willott, R. A. A. Bowler, T. Boker, A. J. Cameron, J. Chevallard, G. Cresci, M. Curti, F. D'Eugenio, N. Kumari, A. Saxena, J. Scholtz, G. Venturi, J. Witstok
2023-08-31T10:28:16Z
http://arxiv.org/abs/2308.16620v2
# GA-NIFS: JWST/NIRSpec IFU observations of HFLS3 reveal a dense galaxy group at \(z\sim 6.3\) ###### Abstract Massive, starbursting galaxies in the early Universe represent some of the most extreme objects in the study of galaxy evolution. One such source is HFLS3 (\(z\sim 6.34\)), which was originally identified as an extreme starburst galaxy with mild gravitational magnification (\(\mu\sim 2.2\)). Here, we present new observations of HFLS3 with the JWST/NIRSpec IFU in both low (PRISM/CLEAR; \(R\sim 100\)) and high spectral resolution (G395H/290LP; \(R\sim 2700\)), with high spatial resolution (\(\sim 0.1\arcsec\)) and sensitivity. Thanks to the combination of the NIRSpec data and a new lensing model with accurate spectroscopic redshifts, we find that the \(3\arcsec\times 3\arcsec\) field is crowded, with a lensed arc (C, \(z=6.3425\pm 0.0002\)), two galaxies to the south (S\({}_{1}\) and S\({}_{2}\), \(z=6.3592\pm 0.0001\)), two galaxies to the west (W\({}_{1}\), \(z=6.3550\pm 0.0001\); W\({}_{2}\), \(z=6.3628\pm 0.0001\)), and two low-redshift interlopers (G1, \(z=3.4806\pm 0.0001\); G2, \(z=2.00\pm 0.01\)). We present spectral fits and morpho-kinematic maps for each bright emission line (e.g., [OIII]\(\lambda\)5007, H\(\alpha\), and [NII]\(\lambda\)6584) from the R2700 data for all sources except G2 (whose spectral lines fall outside the observed wavelengths of the R2700 data). From a line ratio analysis, the galaxies in component C are likely powered by star formation, while we cannot rule out or confirm the presence of AGN in the other high-redshift sources. We perform gravitational lens modelling, finding evidence for a two-source composition of the lensed central object and a magnification factor (\(\mu=2.1-2.4\)) comparable to previous work. The projected distances and velocity offsets of each galaxy suggest that they will merge within the next \(\sim 1\) Gyr. Finally, we examine the dust extinction-corrected SFR\({}_{\rm{H\alpha}}\) of each \(z>6\) source, finding that the total star formation (\(460\pm 90\) M\({}_{\odot}\) yr\({}^{-1}\), magnification-corrected) is distributed across the six \(z\sim 6.34-6.36\) objects over a region of diameter \(\sim 11\) kpc. Altogether, this suggests that HFLS3 is not a single starburst galaxy but rather a merging system of star-forming galaxies in the Epoch of Reionization. ## 1 Introduction Observations have revealed that the mode of galaxy evolution in the first 2 Gyr of the Universe (\(z\gtrsim 3\)) was drastically different from that of the following epochs. This can be seen in the similar evolution of the global star formation rate density (SFRD) and molecular gas mass, both of which increased with cosmic time during this epoch and decreased for \(z<1\) (e.g., Bouwens et al., 2015; Decarli et al., 2019). At higher redshifts, the major merger rate is higher (e.g., Duncan et al., 2019), as are the mean star formation rate (SFR) and black hole accretion rate (BHAR) at a given stellar mass (e.g., Speagle et al., 2014; Yang et al., 2018; Popesso et al., 2023). Together, these results suggest that galaxies in the first 2 Gyr formed rapidly, accreting gas via filaments and mergers, resulting in the buildup of stellar and black hole mass on shorter timescales than in the following epochs.
The unique environment of the early Universe allowed galaxies to start forming stars quickly, resulting in high-redshift galaxies that have exhausted their gas supply and have already stopped forming stars by \(z\gtrsim 3\) (e.g., see recent JWST results of Looser et al., 2023; Strait et al., 2023; Carnall et al., 2023). While this quenched appearance could be caused by a temporary minimum in a stochastic star formation history (e.g., Arata et al., 2020; Dome et al., 2023), there have also been a number of \(z>4\) hyperluminous infrared galaxies (HyLIRGs) detected with large infrared luminosities (\(L_{IR}>10^{13}\) L\({}_{\odot}\)) and SFRs of \(\gtrsim 10^{3}\) M\({}_{\odot}\) yr\({}^{-1}\) (e.g., Wagg et al., 2014; Venemans et al., 2019; Carniani et al., 2019; Riechers et al., 2020; Cheng et al., 2020) that could deplete gas reservoirs rapidly. The elevated SFR of these sources was believed to be the result of ongoing hierarchical merging (e.g., Hopkins et al., 2006), although the discovery of ordered rotation in some HyLIRGs (e.g., Tsukui & Iguchi, 2021) suggests that they could be fueled by secular accretion. One of the most extreme high-redshift HyLIRGs is 1HERMES S350 J170647.8+584623, or HFLS3. This source lies in the Spitzer First Look Survey (FLS; Fadda et al., 2004) extragalactic field (a four-square-degree field centred on 17h18m00s +59deg30'00.0''), and was included in the Herschel Multi-tiered Extragalactic Survey (HerMES; Oliver et al., 2012). Based on very red Herschel/SPIRE colors (\(S_{500\mu m}>S_{350\mu m}>S_{250\mu m}\) and \(S_{500\mu m}/S_{350\mu m}>1.3\)), Riechers et al. (2013) identified this source as a high-redshift dusty galaxy with a nearby low-redshift companion to the north (G1B, \(z\sim 2.092\) based on detection of CIV\(\lambda\)1549 with Keck/LRIS). A comprehensive suite of observations (e.g., the Plateau de Bure Interferometer, PdBI; the Very Large Array, VLA; Keck; the William Herschel Telescope) enabled the creation of a NIR-radio spectral energy distribution (SED) and the measurement of a number of emission lines for HFLS3 (\(z_{spec}=6.3369\pm 0.0009\)). Based on their analysis of the line and continuum emission, Riechers et al. (2013) report a high mass of molecular gas (M\({}_{\rm{H_{2}}}=(1.04\pm 0.09)\times 10^{11}/\alpha_{\rm{CO}}\) M\({}_{\odot}\)), dust (M\({}_{\rm dust}=1.31^{+0.32}_{-0.30}\times 10^{9}\) M\({}_{\odot}\)), and stars (M\({}_{*}\sim 3.7\times 10^{10}\) M\({}_{\odot}\)), as well as a high FIR luminosity (\(2.86^{+0.32}_{-0.31}\times 10^{13}\) L\({}_{\odot}\)) and SFR\({}_{\rm FIR}\sim 2900\) M\({}_{\odot}\) yr\({}^{-1}\), and evidence for a velocity gradient from PdBI [CII]158 \(\mu\)m data (see footnote 1). Some of the spectral lines were asymmetric, which Riechers et al. (2013) interpret as evidence for a possible close-separation merger. However, some of the data were taken at low resolution (\(>2\arcsec\)) and thus did not allow for detailed source differentiation and characterization. Footnote 1: The values reported by Riechers et al. (2013) assume no gravitational lensing. Follow-up Keck/NIRC2 and Hubble Space Telescope (HST) WFC3 imaging by Cooray et al. (2014) revealed the presence of three nearby companions: two to the north (G1 & G2) and one to the south (R1). The northern sources were assumed to be at the same redshift (\(z\sim 2.1\)), while a photometric redshift of \(z_{phot}\sim 6\) was found for R1.
Gravitational lens modelling revealed that HFLS3 may be composed of two sources, which are slightly magnified (\(\mu\sim 2.2\pm 0.3\)) by the foreground sources G1 & G2. The idea that the starburst nature of HFLS3 could be caused by a history of major mergers in a dense environment inspired multiple searches for an overdensity of galaxies. Robson et al. (2014) used JCMT/SCUBA-2 to perform a low-resolution (\(\sim 14\arcsec\)), wide-area (67 arcmin\({}^{2}\)) search for significant submm emission at two wavelengths (450 \(\mu\)m and 850 \(\mu\)m), but found no overdensity. This was followed by a search at optical wavelengths with HST and GTC (Laporte et al., 2015), finding no large-scale overdensity but a possible small-scale (\(\sim 36\) kpc) overdensity of faint objects. These observations suggest that HFLS3 is a gas-rich, starbursting, dusty galaxy with a velocity gradient (suggesting merging activity) at a time when the Universe was only \(\sim 850\) Myr old. Because of these exceptional properties, it was chosen as a target for the JWST/NIRSpec GTO program 'Galaxy Assembly with NIRSpec IFS' (GA-NIFS; PI: S. Arribas & R. Maiolino). This program aims to observe (in cycles 1 and 3) a sample of 55 galaxies at \(z\sim 3-11\) spanning a variety of types (e.g., QSOs, AGN, SFGs, strongly lensed galaxies, quenched sources, major mergers), to show the power of the integral field unit (IFU; Boker et al., 2022) on NIRSpec (Jakobsen et al., 2022) for exploring resolved kinematics and gas properties. The survey is ongoing, and detailed studies of \(z=3-7\) AGN and QSOs are now published (Marshall et al., 2023; Ubler et al., 2023; Perna et al., 2023). Here, we present the JWST/NIRSpec IFU observations of HFLS3. The high resolution and sensitivity of these observations reveal a more complex system than implied by previous data, with six sources at \(z\simeq 6.3-6.5\) and two low-redshift interlopers (\(z\sim 2.0-3.5\)) within a \(3\arcsec\times 3\arcsec\) field. We present the details of our dataset in Section 2, and characterize the field in Section 3. Section 4 contains further analysis (i.e., morpho-kinematic maps, gravitational lens modeling, line ratio-based excitation conditions, star formation rate derivation, and a galaxy merger discussion), and we conclude in Section 5. We use a standard concordance cosmology (\(h_{o}\), \(\Omega_{m}\), \(\Omega_{\Lambda}=0.7\), 0.3, 0.7) throughout, where \(1\arcsec\) corresponds to \(\sim 5.5\) kpc at \(z\sim 6.34\), \(\sim 7.3\) kpc at \(z\sim 3.48\), and \(\sim 8.3\) kpc at \(z\sim 2.1\). To match the notation of other works (e.g., Curti et al., 2020), emission lines are named based on their air wavelengths, while we use their vacuum wavelengths for analysis (e.g., \(\lambda_{[OIII]\lambda 5007}=5008.24\) Å). We adopt a Salpeter stellar initial mass function (IMF; Salpeter, 1955). ## 2 JWST/NIRSpec IFU data description HFLS3 was observed with the JWST/NIRSpec IFU using two disperser/filter combinations (G395H/290LP, hereafter R2700; and PRISM/CLEAR, hereafter R100) on 1 September 2022 as part of PID 1264 (PI: L. Colina; see Table 1). Both settings used a 4-point dither with a medium 'cycling' pattern and a starting point of 1. We use v1.8.2 of the GTO pipeline with CRDS context 1105 to create R2700 and R100 cubes with drizzle weighting (Fruchter & Hook, 2002). A patch was included to correct some important bugs affecting this specific pipeline version (see details in Perna et al., 2023). We corrected the count-rate images for 1/f noise through a polynomial fit.
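As an illustration of this last step, a simplified stand-in for such a 1/f correction is sketched below: a low-order polynomial is fit to unilluminated pixels along each detector column and subtracted. The actual correction applied to the data may differ in detail, and the function and argument names are hypothetical.

```python
import numpy as np

def subtract_one_over_f(rate, mask, order=2):
    """Column-wise polynomial background fit, a simplified stand-in for the
    1/f correction described in the text. `rate` is a 2D count-rate image and
    `mask` flags unilluminated pixels usable for the fit."""
    cleaned = rate.copy()
    rows = np.arange(rate.shape[0])
    for j in range(rate.shape[1]):
        good = mask[:, j] & np.isfinite(rate[:, j])
        if good.sum() > order + 1:
            coef = np.polyfit(rows[good], rate[good, j], order)
            cleaned[:, j] -= np.polyval(coef, rows)
    return cleaned
```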
During stage 2, we removed all data in regions of known failed open MSA shutters. We also masked pixels at the edge of the slices (one pixel wide) to conservatively exclude pixels with unreliable sflat corrections, and implemented the outlier rejection of D'Eugenio et al. (2023). The combination of a dither and drizzle weighting allowed us to sub-sample the detector pixels, resulting in cube spaxels of 0.05\arcsec. ### Astrometric correction A comparison of the JWST/NIRSpec IFU data with archival HST data (Footnote 2) revealed a positional offset between images at comparable wavelengths. Such astrometric errors were common in NIRSpec IFU data at the time of analysis (e.g., Wylezalek et al., 2022; Perna et al., 2023). Since we wish to estimate spatially resolved quantities, it is crucial to have each image aligned to a common reference frame. By shifting to the Gaia DR3 frame, we find an offset of \(0.17\pm 0.07\arcsec\) (see Appendix A.1 for details). Footnote 2: HST images from Program 13045 (PI: A. R. Cooray) were retrieved from the MAST archive. ### Background subtraction Since these observations did not include a dedicated background exposure, no background subtraction was performed by the pipeline. To estimate the background emission, we extract a mean spectrum from the R100 and R2700 data cubes using a 25 spaxel (\(1.25\arcsec\)) diameter aperture (MPDAF task _aperture_; Bacon et al., 2016) from a signal-free region to the southeast. We assume that the background emission is relatively uniform across the field of view (FoV) and create a correction by subtracting this background spectrum from the spectrum of each spaxel. Note that for the R2700 spectrum, we only include channels that are not affected by chip gap/edge issues (\(\lambda_{obs}\approx 2.87057-3.9778\) and \(4.2078-5.26643\) \(\mu\)m). ## 3 Source distribution Previous analyses of the HFLS3 field found evidence for multiple sources (Riechers et al., 2013; Cooray et al., 2014): the primary lensed starburst (HFLS3, \(z\sim 6.34\)), two low-redshift sources to the north (G1 and G2, reported to be \(z\sim 2.0\)), and a fourth source only detected in rest-frame UV emission to the southeast (R1). Initial exploration of the IFU data cubes shows evidence for a complex distribution of flux, with multiple spatial components. It is not clear _a priori_ whether these are the same as previously detected, so we proceed with an uninformed search for emission. ### Field characterization As an example of the complex source distribution, we present an integrated map of the R2700 cube over the wavelength range that contains H\(\alpha\) redshifted to \(z=6.34\) (i.e., \(\lambda_{obs}=4.79954-4.84467\,\mu\)m, or \(z_{H\alpha}\sim 6.31-6.38\); left panel in Fig. 1). There are four separate areas of emission: a bright source to the north, an east-west arc near the centre, a southern source, and a double-lobed source to the west. If we instead integrate the R100 cube over a wavelength range similar to the HST/WFC3 F160W transmission (i.e., \(\lambda_{obs}=1.4-1.6\,\mu\)m), the distribution is drastically changed (Fig. 1, right panel). Note that this distribution is identical to that seen in previous HST/WFC3 F160W observations (see Fig. A.2), but at higher spatial resolution. The northern source is still present, and there is an extended object with a bright core between this source and the central arc (as seen in the HST images; Cooray et al. 2014). The other three sets of sources are much weaker.
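Integrated maps like these can be produced by collapsing the cube over the relevant wavelength channels. A minimal sketch is given below; the file name, the "SCI" extension layout, and the channel handling are illustrative assumptions rather than the actual GA-NIFS products:

```python
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS

def collapse_cube(cube_file, lam_min, lam_max):
    """Sum a spectral cube over [lam_min, lam_max] (micron) to make a line map.
    Assumes a standard IFU cube layout with the spectral axis described by
    the FITS WCS (hypothetical file name and extension)."""
    with fits.open(cube_file) as hdul:
        data = hdul["SCI"].data               # shape (n_lambda, ny, nx)
        wcs = WCS(hdul["SCI"].header)
    # Wavelength of each channel from the spectral sub-WCS, in micron
    lam = wcs.spectral.pixel_to_world(np.arange(data.shape[0])).to_value("micron")
    sel = (lam >= lam_min) & (lam <= lam_max)
    # Integrate flux density times channel width over the selected channels
    dlam = np.gradient(lam)
    return np.nansum(data[sel] * dlam[sel, None, None], axis=0)

# e.g., H-alpha at z ~ 6.34 in the R2700 cube
halpha_map = collapse_cube("hfls3_r2700_s3d.fits", 4.79954, 4.84467)
```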
From these maps, we define five regions (see Fig. 1): the bright northern component ('G1'), the core of the extended object in the R100 data ('G2'), the arc ('C'), the southern sources ('S'), and the western sources ('W'). The regions G1 and G2 are identical to those of Cooray et al. (2014), while the C region contains the HFLS3 source. The previously found source R1 is the brightest region within the 'S' mask, but we use a larger area to encompass nearby emission. Laporte et al. (2015) identified the S and W sources as possible faint sources (ID2 and ID3, respectively) in HST maps. We also note that weaker line emission is detected from targets to the northeast of C (\(z\sim 6.34\), based on H\(\alpha\) and [OIII]\(\lambda\lambda\)4959, 5007) and southwest of S (\(z\sim 6.37\), if the observed line is H\(\alpha\)). ### R2700 integrated spectra To investigate the distribution of flux in this field, we extract spectra from the R2700 cube using each of the spatial masks (see top panel of Fig. 1) except 'G2', which will be examined in the next subsection. These spectra are fit with multiple Gaussians (one Gaussian profile per line) simultaneously using lmfit with a least-squares minimization, assuming a single redshift for most lines and a power-law continuum normalized to the 4 \(\mu\)m value (see the illustrative fitting sketch after Table 2): \[\mathrm{F}(\lambda_{obs})=\mathrm{F}_{4\mu m}\left(\frac{\lambda_{obs}}{4\,\mu\mathrm{m}}\right)^{\alpha} \tag{1}\] The models are convolved with the line spread function (LSF; Footnote 3) before comparison to the data. The velocity widths of line pairs (i.e., [SII]\(\lambda\lambda\)6716, 6731, [NII]\(\lambda\lambda\)6548, 6584, [OIII]\(\lambda\lambda\)4959, 5007, and each Balmer line) are fixed to be identical. We adopt the standard assumptions of [NII]\(\lambda\)6584/[NII]\(\lambda\)6548 = 2.94 (e.g., Dojcinovic et al. 2023) and [OIII]\(\lambda\)5007/[OIII]\(\lambda\)4959 = 2.98 (e.g., Dimitrijevic et al. 2007). Following other NIRSpec IFU investigations (Übler et al. 2023; Perna et al. 2023), we rescale the error spectrum generated by the calibration pipeline ('ERR') using the standard deviation of the line-free regions in the observed spectrum. Uncertainties on each parameter are estimated using the standard error output from lmfit. Only lines with \(>1\sigma\) emission at the expected centroid wavelength are fit. Footnote 3: As recorded in the JWST documentation; [https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters](https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters) The best-fit spectra are shown in Fig. 2, while the redshifts and continuum fluxes for each source are presented in Table 2 and the line properties are listed in Table 3. With a redshift of \(z_{G1}=3.4806\pm 0.0001\), the northern galaxy 'G1' is well-detected in H\(\alpha\), [NII]\(\lambda\lambda\)6548, 6584, [SII]\(\lambda\lambda\)6716, 6731, [SIII]\(\lambda\)9532, and HeI\(\lambda\)10829. Because HeI\(\lambda\)10829 may be resonantly scattered (e.g., Rudy et al. 1989; Belli et al. 2023), we allow the centroid of this line to vary (resulting in an offset of \(127\pm 21\,\mathrm{km\,s^{-1}}\)). In addition, [SIII]\(\lambda\)9532 may be blended with Pa-\(\epsilon\) (e.g., Kehrig et al. 2006), so we allow the model of this line to feature a velocity offset (which is not significant; \(39\pm 17\,\mathrm{km\,s^{-1}}\)).
The best-fitting model shows significant residuals for the H\(\alpha\) complex, possibly suggesting the presence of an additional galaxy or outflow. The central component 'C' (\(z_{C}=6.3425\pm 0.0002\)) features well-detected H\(\alpha\), H\(\beta\), [OIII]\(\lambda\)5007, and [NII]\(\lambda\)6584 emission with broad profiles (FWHM\(\sim 500-700\,\mathrm{km\,s^{-1}}\)). We find evidence for weak [OIII]\(\lambda\)4959, [NII]\(\lambda\)6548, and [SII]\(\lambda\lambda\)6716, 6731, with no H\(\gamma\). The [SII]\(\lambda\lambda\)6716, 6731 emission is broad and low-level, so fits with a free FWHM returned unphysical values (\(\sim 1500\,\mathrm{km\,s^{-1}}\)). To better constrain these lines, we fixed the width of each to be identical to that of H\(\alpha\). The 4 \(\mu\)m continuum is strong but features no significant slope in F\({}_{\lambda}\). While the best-fit model spectrum does not show significant residuals (Fig. 2), we note that this component is found to consist of two discrete sources with a velocity separation smaller than our LSF (see Section 4.2.3). With this in mind, the large linewidth is likely due to the blending of the two galaxies. The S and W components are quite similar to each other, with very strong detections in H\(\beta\), [OIII]\(\lambda\lambda\)4959, 5007, and H\(\alpha\). No significant [NII]\(\lambda\lambda\)6548, 6584 or [SII]\(\lambda\lambda\)6716, 6731 emission is detected. But while the S emission is well-fit by a single redshift (\(z_{S}=6.3592\pm 0.0001\)), the double-peaked appearance of the W component suggests two galaxies (\(z_{W1}=6.3551\pm 0.0001\) and \(z_{W2}=6.3628\pm 0.0001\), with a velocity difference of \(363\pm 7\,\mathrm{km\,s^{-1}}\)). The best-fit models of both components show low spectral residuals, with possible outflow signatures in the residuals of the strongest lines (i.e., a negative residual at \(v\sim 0\) surrounded by positive features). As shown in Section 4.1, both of these spectra contain two spatially separate galaxies, so this may hint at outflowing material from gravitational interactions. \begin{table} \begin{tabular}{c|c c c c c} Grating/Filter & Readout Pattern & Groups/Int & Ints/Exp & Exposures & Total Time [s] \\ \hline G395H/F290LP & NRSIRS2 & 25 & 1 & 4 & 7352.801 \\ PRISM/CLEAR & NRSIRS2RAPID & 60 & 1 & 4 & 3559.689 \\ \hline \end{tabular} \end{table} Table 1: JWST/NIRSpec IFU observation properties. \begin{table} \begin{tabular}{c|c c c} & Redshift & 4 \(\mu\)m Continuum Flux [10\({}^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\)] & Spectral Slope \\ \hline G1 & \(3.4806\pm 0.0001\) & \(1.37\pm 0.02\) & \(-0.7\pm 0.1\) \\ G2 & \(2.00\pm 0.01\) & \(-\) & \(-\) \\ C & \(6.3425\pm 0.0002\) & \(2.63\pm 0.04\) & \(-0.2\pm 0.1\) \\ S & \(6.3592\pm 0.0001\) & \(0.60\pm 0.03\) & \(1.3\pm 0.4\) \\ W\({}_{1}\) & \(6.3550\pm 0.0001\) & \(0.77\pm 0.04\) & \(1.0\pm 0.4\) \\ W\({}_{2}\) & \(6.3628\pm 0.0001\) & \(-\) & \(-\) \\ \end{tabular} \end{table} Table 2: Best-fit redshifts and continuum flux levels for each of the HFLS3 field components, as derived through a simultaneous multi-line and continuum fit of the R2700 spectra (except for G2, which used the R100 spectrum). The spectral slope is given for \(F_{\lambda}(\lambda_{obs})\) (see equation 1). Since the results of W\({}_{1}\) and W\({}_{2}\) come from the same fit, the best-fit continuum properties listed for W\({}_{1}\) describe the combined continuum.
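To make the simultaneous line-plus-continuum fitting of Section 3.2 concrete, below is a minimal lmfit sketch for a single redshift, one shared line width, and a tied [NII] doublet ratio. It is a simplified illustration only: the LSF convolution, the remaining lines, and all numerical values (including the vacuum rest wavelengths, quoted here to illustrative precision) are assumptions rather than the exact fitting code used in this work.

```python
import numpy as np
from lmfit import Model

C_KMS = 299792.458
# Approximate vacuum rest wavelengths in micron (illustrative values)
REST = {"Halpha": 0.6564614, "NII6548": 0.6549861, "NII6584": 0.6585273}

def gauss(lam, center, flux, fwhm_kms):
    """Gaussian line profile normalized to a total flux."""
    sigma = center * fwhm_kms / (C_KMS * 2.3548)  # FWHM [km/s] -> sigma [micron]
    return flux * np.exp(-0.5 * ((lam - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def spec_model(lam, z, f4um, alpha, f_ha, f_n2, fwhm_kms):
    """Eq. (1) power-law continuum plus Halpha and the [NII] doublet,
    with a single redshift, one shared FWHM, and [NII]6584/6548 = 2.94."""
    cont = f4um * (lam / 4.0) ** alpha
    lines = (gauss(lam, REST["Halpha"] * (1 + z), f_ha, fwhm_kms)
             + gauss(lam, REST["NII6584"] * (1 + z), f_n2, fwhm_kms)
             + gauss(lam, REST["NII6548"] * (1 + z), f_n2 / 2.94, fwhm_kms))
    return cont + lines

model = Model(spec_model)
params = model.make_params(z=6.34, f4um=2.6, alpha=-0.2,
                           f_ha=1e3, f_n2=3e2, fwhm_kms=500.0)
# result = model.fit(flux, params, lam=lam_obs, weights=1.0/err)
# print(result.fit_report())
```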
Figure 1: Integrated emission of the HFLS3 field, using two illustrative wavelength ranges: redshifted H\(\alpha\) for \(z\sim 6.34\) for the R2700 cube (\(\lambda_{obs}=4.79954-4.84467\,\mu\)m; left panel) and the approximate wavelength range of HST/WFC3 F160W for the R100 cube (\(\lambda_{obs}=1.4-1.6\,\mu\)m; right panel). The adopted source masks are shown with white boundaries. North is up and east is to the left. The right panel is nearly identical to the observed HST/WFC3 F160W image, but at higher spatial resolution (see Fig. A.2). Figure 2: Integrated spectra of the R2700 cube using the masks of Fig. 1 (magenta; excluding G2), with 1\(\sigma\) errors from the associated error spectrum shown as a shaded region. Best-fit models (line emission and continuum) are shown by green lines. The centroids of each line are depicted by dashed lines, with red lines indicating that the spectral line was not fit. The lower panel shows the residual. \begin{table} \begin{tabular}{c|c|c c c c c c c c c c c} \hline & & H\(\gamma\) & H\(\beta\) & [OIII]\(\lambda\)4959 & [OIII]\(\lambda\)5007 & [NII]\(\lambda\)6548 & H\(\alpha\) & [NII]\(\lambda\)6584 & [SII]\(\lambda\)6716 & [SII]\(\lambda\)6731 & [SIII]\(\lambda\)9532 & HeI\(\lambda\)10829 \\ \hline G1 & FWHM & \(-\) & \(-\) & \(-\) & \(-\) & \(359\pm 158\) & \(470\pm 27\) & \(537\pm 81\) & \(597\pm 47\) & \(422\pm 17\) & \(448\pm 31\) \\ & I & \(-\) & \(-\) & \(-\) & \(53\pm 24\) & \(855\pm 77\) & \(232\pm 46\) & \(51\pm 17\) & \(58\pm 24\) & \(204\pm 10\) & \(316\pm 28\) \\ \hline C & FWHM & \((500)\) & \(710\pm 35\) & \(526\pm 34\) & \(526\pm 34\) & \(580\pm 89\) & \(710\pm 35\) & \(580\pm 89\) & \(710\pm 35\) & \(710\pm 35\) & \(-\) & \(-\) \\ & I & \(<172\) & \(255\pm 23\) & \(130\pm 11\) & \(387\pm 32\) & \(165\pm 33\) & \(1448\pm 96\) & \(489\pm 98\) & \(422\pm 54\) & \(168\pm 50\) & \(-\) & \(-\) \\ \hline S & FWHM & \(159\pm 7\) & \(159\pm 19\) & \(134\pm 33\) & \(134\pm 3\) & \((150)\) & \(159\pm 7\) & \((150)\) & \((150)\) & \((150)\) & \(-\) & \(-\) \\ & I & \(137\pm 22\) & \(283\pm 19\) & \(676\pm 22\) & \(2014\pm 64\) & \(<91\) & \(1039\pm 59\) & \(<91\) & \(<86\) & \(<86\) & \(-\) & \(-\) \\ \hline W1 & FWHM & \(345\pm 23\) & \(345\pm 23\) & \(336\pm 12\) & \(336\pm 12\) & \((300)\) & \(345\pm 23\) & \((300)\) & \((300)\) & \((300)\) & \(-\) & \(-\) \\ & I & \(82\pm 24\) & \(275\pm 25\) & \(449\pm 18\) & \(1338\pm 54\) & \(<197\) & \(1085\pm 89\) & \(<227\) & \(<236\) & \(<199\) & \(-\) & \(-\) \\ \hline W2 & FWHM & \(176\pm 12\) & \(176\pm 12\) & \(143\pm 6\) & \(143\pm 6\) & \((200)\) & \(176\pm 12\) & \((200)\) & \((200)\) & \((200)\) & \(-\) & \(-\) \\ & I & \(54\pm 17\) & \(161\pm 17\) & \(513\pm 26\) & \(1530\pm 78\) & \(<131\) & \(856\pm 72\) & \(<151\) & \(<157\) & \(<133\) & \(-\) & \(-\) \\ \hline \end{tabular} \end{table} Table 3: Best-fit FWHM (km s\({}^{-1}\)) and integrated fluxes (\(10^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\)) of the observed spectral lines, based on a simultaneous fit of a power-law continuum and a 1-D Gaussian for each line. For undetected lines, we list the 3\(\sigma\) upper limit on the amplitude (based on the error spectrum) and a 3\(\sigma\) upper limit on the integrated flux based on this limit and an assumed FWHM (see value in parentheses). No correction for dust reddening or gravitational magnification has been applied. ### R100 integrated spectra Due to the coarse LSF of the R100 data (\(\sim 10^{4}\) km s\({}^{-1}\); Böker et al. 2023), lines are frequently blended.
In order to fit models to these data, one must create a high-resolution model and convolve it with the line spread function (e.g., Heintz et al. 2023; Jones et al. 2023; Umeda et al. 2023). While the R100 data are therefore not useful for kinematics or precise redshift derivations, we may examine the spectrum of 'G2' to find a general redshift. 'G2' features two clear emission lines in a wavelength regime (\(\lambda_{\rm obs}=1.4-2.0\) \(\mu\)m) that is not covered by the R2700 data. To fit this, we first create a higher-resolution model spectrum (\(\delta\lambda=0.001\) \(\mu\)m) that is populated with the strongest expected lines (i.e., H\(\beta\), [OIII]\(\lambda\lambda 4959,5007\), [NII]\(\lambda\lambda 6548,6584\), and H\(\alpha\); Footnote 4). We find evidence for a spectral break at \(\lambda_{\rm obs}\sim 1.1\) \(\mu\)m, which we tentatively interpret as a Balmer break. The continuum is thus modelled as a power law redwards of this break (\(\lambda_{\rm rest}=364.5\) nm) and a separate power law bluewards of this value. The combined line + continuum model is then convolved with the resolving power to account for the LSF, and is rebinned to the R100 spectral bins. We use lmfit with a least-squares minimization to find the best-fit continuum and line parameters and redshift, resulting in the fit of Fig. 3. The error spectrum was rescaled to match the standard deviation of the residual spectrum. Footnote 4: We find no evidence for significant [OII]\(\lambda\lambda 3726,3729\) emission in G2. Due to the heavy blending of the lines, we do not present best-fit widths or integrated fluxes. However, we find a best-fit redshift of \(z=2.00\pm 0.01\), which is comparable to the CIV-based redshift of Riechers et al. (2013) (\(z\sim 2.092\)). The redshift difference may be ascribed to a different extraction aperture or the simplicity of our current model. We note that the rich R100 spectrum of each source (which contains more emission lines) will be modelled with advanced codes in future works; our current model is solely used to confirm the approximate redshift of the 'G2' object. ## 4 Analysis ### Morpho-kinematic maps Next, we investigate the morpho-kinematics of these components using the R2700 data. This is commonly done using non-parametric measures (e.g., cumulative velocity distributions or moment maps), which are well-suited to sources with isolated, asymmetric lines. Since the emission lines of our sources appear to be well-fit by Gaussians (with the possible exception of G1, see Fig. 2) and some spectra feature overlapping lines (e.g., H\(\alpha\) and [NII]\(\lambda\lambda 6548,6584\)), we adopt a Gaussian-based approach to create maps of each source (see details in Appendix B). The resulting integrated intensity, velocity offset, velocity dispersion, and continuum maps are shown in Figs. 4, 5, 6, and 7. Some weaker lines (e.g., [SII]\(\lambda\)6731 in G1) are detected in the integrated spectrum, but lack the S/N to be significantly detected in the spectra of multiple individual spaxels. The maps for component G1 are shown in Fig. 4. For the two strongest lines (H\(\alpha\) and [NII]\(\lambda\)6584), we see that the emission is circular with a slight concentration along the northeast-southwest diagonal. There is no strong velocity gradient, and both the velocity dispersion and continuum emission feature a central peak. The two lower-S/N lines ([SIII]\(\lambda\)9532 and HeI\(\lambda\)10829) are detected in the core, with a similar distribution to the core of the bright lines.
This elongation may be influenced by a non-circular PSF (e.g., D'Eugenio et al. 2023). These combined morpho-kinematics may be interpreted either as a face-on disk or a dispersion-dominated galaxy. From the maps of component C (Fig. 5), an east-west velocity gradient is apparent. This has previously been seen in PdBI [CII] observations (Riechers et al. 2013), and the higher spatial resolution of our data allows for a more in-depth investigation. We find that while the continuum emission peaks in the center (possibly with a contribution from G2, see Figure 1), the line intensity peaks on the east and west sides. In addition, the velocity dispersion does not feature a central maximum. This argues for the presence of two separate galaxies, which we investigate further in Section 4.2. The S component is clearly composed of two spatially separated galaxies at a similar redshift (i.e., low velocity dispersion, no apparent velocity gradient; Fig. 6) with continuum and H\(\alpha\) intensity minima between the galaxies. The brighter galaxy (to the southeast) was originally identified as 'R1' by Cooray et al. (2014). We note that since these sources have a negligible velocity difference and a small spatial offset, we examined their combined integrated spectrum in Section 3.2. The W component features a strong velocity gradient (Figure 7). However, the spatially offset integrated intensity peak (to the north) and higher velocity dispersion in the south argue against rotation. Instead, this appears to be two galaxies that are spatially and spectrally separated. This is supported by the asymmetric double-peaked profile seen in the integrated spectrum of this component (Fig. 2) and the two spatial peaks seen in Fig. 1. Altogether, our analysis suggests that the HFLS3 field is composed of several components: two low-redshift sources to the north (G1, \(z\sim 3.481\); G2, \(z\sim 2.00\)), a lensed source with complex kinematics suggesting two components (C\({}_{1}\) and C\({}_{2}\), \(z\sim 6.342\)), two galaxies to the south (S\({}_{1}\) and S\({}_{2}\), \(z\sim 6.359\)), and two galaxies to the west (W\({}_{1}\), \(z\sim 6.355\); W\({}_{2}\), \(z\sim 6.363\)). Figure 3: Integrated spectrum extracted from the R100 cube using the 'G2' mask of Fig. 1, with illustrative \(1\sigma\) errors shown as a shaded region. A basic fit to the data is shown by the green line. The lower panel shows the residual, with the centroids of each line depicted by dashed lines. From left to right, the lines are H\(\beta\), [OIII]\(\lambda\lambda 4959,5007\), [NII]\(\lambda 6548\), H\(\alpha\), and [NII]\(\lambda 6584\). The best-fit redshift is \(z_{G2}=2.00\pm 0.01\). ### Gravitational lens modeling The gravitational lensing of the HFLS3 field was previously analyzed by Cooray et al. (2014), who assumed that G1 and G2 were at the same redshift (\(z\sim 2.1\)) and used the marginally resolved PdBI [CII]158\(\mu\)m image of HFLS3 to derive a low magnification factor (\(\mu\sim 2.2\)). With the precise spectroscopic redshifts of G1 and G2 and a resolved map of HFLS3, we refine this model using the public lens modeling software pyautolens (Footnote 5; Nightingale & Dye, 2015; Nightingale et al., 2018, 2021). Footnote 5: [https://github.com/Jammy2211/PyAutoLens](https://github.com/Jammy2211/PyAutoLens) #### 4.2.1 Methods Using pyautolens, we may derive the intrinsic (source-plane) mass and light profiles for each source in the field of view.
Each light profile is given by a Sérsic profile (Sérsic, 1963), while we assume elliptical isothermal mass profiles. The mass and light profiles of each object are assumed to have shared centres, axis ratios, and position angles. We model the PSF as a circular Gaussian with FWHM\(\sim 0.1\arcsec\). The fitting process results in the best-fit centroid, intrinsic axis ratio (\(q\equiv a/b\)), and position angle (\(\phi\)) of the mass and light profiles, the effective radius (r\({}_{\rm eff}\)) and Sérsic index (\(n\)) of the light profile, and the Einstein radius (r\({}_{\rm Ein}\)) of the mass profile. By dividing the total fluxes of the best-fit image-plane and source-plane models for each component, we calculate a total magnification factor. Figure 4: Morpho-kinematic maps of the strong emission lines detected in component G1 (\(z=3.4806\)). Integrated intensities are given per spaxel. The corresponding maps of [SII]\(\lambda\lambda 6716,6731\) are not presented due to low S/N. Due to the construction of our model, [NII]\(\lambda 6548\) has the same integrated velocity field, velocity dispersion map, and continuum map as [NII]\(\lambda 6584\), but an integrated intensity map that is a factor 2.94 lower. For [SIII]\(\lambda 9532\) and HeI\(\lambda 10829\), \(v_{50}\) values are given with respect to their best-fit redshifts (\(z=3.4810\) and 3.4821, respectively). Figure 5: Morpho-kinematic maps of the strong emission lines detected in the C component (\(z=6.3425\)). Integrated intensities are given per spaxel. The corresponding maps of H\(\beta\), [OIII]\(\lambda\lambda\)4959, 5007, and [SII]\(\lambda\lambda\)6716, 6731 are not presented due to low S/N. Due to the construction of our model, [NII]\(\lambda\)6548 has the same integrated velocity field, velocity dispersion map, and continuum map as [NII]\(\lambda\)6584, but an integrated intensity map that is a factor 2.94 lower. Figure 6: Morpho-kinematic maps of the strong emission lines detected in the S components (\(z=6.3592\)). Integrated intensities are given per spaxel. The corresponding maps of H\(\gamma\) are not presented due to low S/N. Due to the construction of our model, [OIII]\(\lambda\)4959 has the same velocity field, velocity dispersion map, and continuum map as [OIII]\(\lambda\)5007, but an integrated intensity map that is a factor 2.98 lower. Due to the complexity of the field, we begin by examining the two low-redshift sources (G1 and G2). These are isolated by collapsing the R100 data cube over \(\lambda_{\rm obs}=0.8-1.1\,\mu\)m (which is not covered by the R2700 cube). This collapsed image (top left panel of Fig. 8) has little contribution from the \(z>6\) components. Because of this, we may examine how G2 (\(z\sim 2.0\), the northernmost source in the top row of Fig. 8) lenses G1 (\(z\sim 3.5\), the elongated central source in the top row of Fig. 8). Our model contains the mass and light profile of G2 and the light profile of G1. The resulting fit is shown in the top row of Fig. 8, with parameters listed in Table 4 and source-plane models of G1 and G2 given in Figs. C.1 and C.2. We next turn to a wavelength range in the R2700 cube containing H\(\alpha\) at \(z\sim 6.34\) (\(\lambda_{obs}=4.80-4.85\,\mu\)m) to examine whether component C is better modeled by a single or double source. This wavelength range contains strong H\(\alpha\) emission from component C, moderate continuum from G1, and no strong emission from G2.
We model the light and mass profile of G1, assuming the best-fit mass model for G2 from the R100 collapsed image but with a variable Einstein radius. Component C is fit using either a single Sérsic profile or two spatially offset Sérsic profiles. The resulting fits for a single and double component for C are shown in the middle and lower rows of Fig. 8, with parameters listed in Table 4. Figure 7: Morpho-kinematic maps of the strong emission lines detected in the W components (\(z_{W1}=6.3550,z_{W2}=6.3628\)). Integrated intensities are given per spaxel. The corresponding maps of H\(\gamma\) are not presented due to low S/N. Due to the construction of our model, [OIII]\(\lambda\)4959 has the same velocity field, velocity dispersion map, and continuum map as [OIII]\(\lambda\)5007, but an integrated intensity map that is a factor 2.98 lower. Figure 8: Results of the pyautolens lensing analysis of collapsed images of the R100 cube (\(\lambda_{\rm obs}=0.8-1.1\,\mu\)m; top row) and the R2700 cube (\(\lambda_{\rm obs}=4.80-4.85\,\mu\)m) assuming that the component C is composed of a single source (middle row) or two sources (C\({}_{1}\) to the west and C\({}_{2}\) to the east; lower row). The observed data, best-fit model, and residual (all in the image plane) are presented from left to right. Light and mass profiles are assumed to be Sérsic and isothermal ellipse profiles, respectively. The outline of the spatial mask is shown by red markers. \begin{table} \begin{tabular}{l l l c c c c c c} & & \((\delta x,\delta y)\) [\(\arcsec\)] & \(q\) & \(\phi\) [\({}^{\circ}\)] & \(r_{\rm eff}\) [\(\arcsec\)] & \(n\) & \(\mu\) & \(r_{\rm Ein}\) [\(\arcsec\)] \\ \hline R100 & G1 & \((0.14\pm 0.004,0.301\pm 0.009)\) & \(1.43\pm 0.01\) & \(57.02\pm 1.51\) & \(1.028\pm 0.018\) & \(5.06\pm 0.16\) & 1.2 & \(-\) \\ & G2 & \((0.194\pm 0.002,0.004\pm 0.002)\) & \(2.39\pm 0.05\) & \(-30.52\pm 0.95\) & \(0.251\pm 0.007\) & \(1.22\pm 0.06\) & \(-\) & \(0.12\pm 0.011\) \\ \hline R2700 (Single C) & G1 & \((0.181\pm 0.002,0.268\pm 0.004)\) & \(1.94\pm 0.08\) & \(36.13\pm 0.85\) & \(1.072\pm 0.041\) & \(3.19\pm 0.1\) & 1.12 & \(0.145\pm 0.008\) \\ & G2 & & & & & & & \(0.11\pm 0.01\) \\ & C & \((0.13\pm 0.005,-0.017\pm 0.007)\) & \(2.26\pm 0.04\) & \(69.65\pm 0.77\) & \(0.298\pm 0.002\) & \(1.19\pm 0.04\) & 1.89 & \(-\) \\ \hline R2700 (Double C) & G1 & \((0.176\pm 0.003,0.298\pm 0.009)\) & \(1.77\pm 0.09\) & \(29.2\pm 0.42\) & \(2.673\pm 0.26\) & \(4.73\pm 0.23\) & 1.1 & \(0.169\pm 0.006\) \\ & G2 & & & & & & & \(0.12\pm 0.01\) \\ & C1 & \((0.102\pm 0.005,0.012\pm 0.004)\) & \(2.00\pm 0.05\) & \(57.1\pm 1.2\) & \(0.293\pm 0.006\) & \(1.34\pm 0.07\) & 2.09 & \(-\) \\ & C2 & \((0.293\pm 0.009,0.004\pm 0.002)\) & \(1.49\pm 0.04\) & \(46.22\pm 4.7\) & \(0.211\pm 0.09\) & \(3.17\pm 0.26\) & 2.43 & \(-\) \\ \hline \end{tabular} \end{table} Table 4: Best-fit parameters from the pyautolens analysis: offset from the central position (17h06m47.99s, +58°46′23.27″) in arcseconds, axis ratio (\(q\equiv a/b\)), position angle (\(\phi\)), effective radius (\(r_{\rm eff}\)), Sérsic index (\(n\)), lensing magnification (\(\mu\)), and Einstein radius (\(r_{\rm Ein}\)).
We present the lensing analysis results for a collapsed region of the R100 cube that contains mainly emission from G1 and G2 (\(\lambda_{obs}=0.8-1.1\,\mu\)m; top section) and for a collapsed region of the R2700 cube that primarily contains H\(\alpha\) emission from C and continuum emission from G1 (\(\lambda_{obs}=4.80-4.85\,\mu\)m). The middle section shows the results assuming C is composed of one source, while the lower section assumes two components. For each R2700 result, we assume that the mass profile of G2 is identical to the best-fit light profile of the R100 image, but with a variable Einstein radius. #### 4.2.2 Results From this analysis, we find that the lowest-redshift source (G2) is elongated (\(q\sim 2.4\)), compact (\(r_{\rm eff}\sim 0.25\arcsec\), or \(\sim 2.1\) kpc at \(z=2.00\)), and not centrally peaked (\(n\sim 1.2\)). On the other hand, the source-plane morphology of G1 is characterized as elongated (\(q\sim 1.4-1.9\)), extended (\(r_{\rm eff}\sim 1.0-2.7\arcsec\), or \(\sim 7.3-19.8\) kpc at \(z=3.48\)), centrally peaked (\(n\sim 3-5\)), and only moderately magnified by G2 (\(\mu\sim 1.1-1.2\)). Component C in the collapsed R2700 image appears as a curved arc with two bright spots. We first examine whether this may be explained as a single lensed source, yielding a best-fit source-plane morphology that is elongated (\(q\sim 2.3\)), compact (\(r_{\rm eff}\sim 0.3\arcsec\), or \(\sim 1.6\) kpc at \(z=6.34\)), with an exponential profile (\(n\sim 1.2\)), and a low magnification factor (\(\mu=1.89\)). As seen in the middle row of Fig. 8, the image-plane morphology of this source is indeed arced, but has a central peak rather than two spatially separate clumps. This results in a significant residual at the location of the western spot in the observed arc. Next, we assume that component C is composed of two spatially separate Sérsic profiles (C\({}_{1}\) to the west and C\({}_{2}\) to the east). The best-fit model returns one source (C\({}_{1}\)) that is similar to the best-fit single-component model, with \(q\sim 2.0\), \(r_{\rm eff}\sim 0.3\arcsec\), \(n\sim 1.3\), and a higher magnification factor (\(\mu=2.09\)). However, this model contains a second component (C\({}_{2}\)) that is more circular (\(q\sim 1.5\)), smaller (\(r_{\rm eff}\sim 0.2\arcsec\)), more centrally peaked (\(n\sim 3\)), and more magnified (\(\mu=2.43\)). The resulting image-plane morphology of this best-fit model is double-peaked, resulting in a better representation of the data (i.e., a lower \(\chi^{2}\)). The two best-fit components have a small source-plane projected distance (\(0.2\arcsec\), or \(\sim 1.1\) kpc at \(z=6.34\)) that is comparable to their effective radii. This is discussed in the next subsection. We note that this analysis suggests that none of the emission peaks in the observed field represent multiply-imaged objects. The \(z<6\) sources (G1 and G2) feature significantly different redshifts and are quite compact, implying that they are individual galaxies. The C emission is better fit by two separate components, rather than a single galaxy imaged twice. The effects of the gravitational lens are quite small at larger spatial separations from G2, so it is unlikely that the S and W components are magnified or multiply imaged (as supported by the different morpho-kinematics of each sub-component).
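To make the model composition of Section 4.2.1 concrete, the following is a hedged pyautolens-style sketch of the double-source configuration (one lensing galaxy plus two background Sérsic sources). It is illustrative only: API names vary between PyAutoLens versions (a recent release is assumed here), and all numerical values are rough stand-ins for the best-fit parameters of Table 4, not the actual fitting script.

```python
import autolens as al

# Foreground lens: isothermal mass + Sersic light (values illustrative)
lens = al.Galaxy(
    redshift=2.00,
    mass=al.mp.Isothermal(centre=(0.0, 0.2), einstein_radius=0.12),
    bulge=al.lp.Sersic(centre=(0.0, 0.2), intensity=0.1,
                       effective_radius=0.25, sersic_index=1.2),
)

# Background component C as two offset Sersic sources (C1 and C2)
source_c1 = al.Galaxy(
    redshift=6.34,
    bulge=al.lp.Sersic(centre=(0.01, 0.10), intensity=0.3,
                       effective_radius=0.29, sersic_index=1.3),
)
source_c2 = al.Galaxy(
    redshift=6.34,
    bulge=al.lp.Sersic(centre=(0.00, 0.29), intensity=0.2,
                       effective_radius=0.21, sersic_index=3.2),
)

# Ray-trace the system onto a 0.05"/pixel grid and render the image plane
grid = al.Grid2D.uniform(shape_native=(60, 60), pixel_scales=0.05)
tracer = al.Tracer.from_galaxies(galaxies=[lens, source_c1, source_c2])
image_plane = tracer.image_2d_from(grid=grid)

# Rough magnification estimate: lensed flux over intrinsic source flux
mu = image_plane.sum() / sum(
    g.image_2d_from(grid=grid).sum() for g in (source_c1, source_c2))
```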
#### 4.2.3 Nature of component C This lensing analysis of the H\(\alpha\) emission in the central component suggests that it is likely composed of two spatially separate (\(\sim 1\) kpc projected offset) galaxies. By examining the best-fit morphologies (Fig. C.3), it is clear that their proximity to each other results in overlapping emission in the image plane. So while the far east and west sides of the central arc in the image plane are dominated by the emission from each source, the space between the two apparent galaxies contains emission from both sources. Returning to the morpho-kinematic maps (Fig. 5), we see that each line shows a weak (\(\delta v\sim 100\) km s\({}^{-1}\)) east-west velocity gradient (as seen in previous [CII] observations; Riechers et al. 2013). Since none of the integrated intensity or velocity dispersion maps show a central peak, as expected for a single rotating disk (see the best-fit single-component model in Fig. 8), these maps instead imply multiple galaxies separated in velocity. This is supported by the previous morpho-kinematic investigation of FIR lines by Riechers et al. (2013). An east-west velocity gradient was found in [CII], but with a high velocity dispersion which peaked in the southeast. In addition, the integrated spectrum exhibited two velocity components in multiple FIR lines (\(z=6.3335\) and \(z=6.3427\), resulting in a weighted average of \(z=6.3369\pm 0.0009\)). From our best-fit redshift (\(z_{C}=6.3425\pm 0.0003\)), this suggests that the rest-frame optical emission is dominated by the redder component. To summarize, the lensing and morpho-kinematic analyses of our new high-resolution data agree with the previous results of Cooray et al. (2014), who claimed that the component C of HFLS3 is composed of two sources. The best-fit magnification of our model is also similar (\(\mu\sim 2.1-2.4\), compared to \(\mu=2.2\pm 0.3\)). Due to the overlapping image-plane distributions of the two components (see bottom-centre and bottom-right panels of Fig. C.3), they are not easily separable without lens modelling. ### Excitation conditions Using the well-detected lines for our sample, we may place constraints on the excitation conditions of each source using well-studied line ratio diagnostic diagrams. To do this, we first calculate three line ratios: \[N2=\log_{10}([NII]\lambda 6584/{\rm H}\alpha) \tag{2}\] \[S2=\log_{10}([SII]\lambda\lambda 6716,6731/{\rm H}\alpha) \tag{3}\] \[R3=\log_{10}([OIII]\lambda 5007/{\rm H}\beta) \tag{4}\] These are used to create the [NII]-BPT (R3 vs. N2; Baldwin et al. 1981) and [SII]-VO87 (R3 vs. S2; Veilleux & Osterbrock 1987) plots (see Fig. 9) for each high-redshift (\(z>6\)) source in the field. Since the two galaxies in W (W\({}_{1}\) and W\({}_{2}\)) have a strong velocity offset, their respective emission lines are clearly separable in their combined spectrum (Fig. 2), and we may examine the line ratios of each galaxy. On the other hand, the galaxies in C and S do not have a strong difference in relative velocity, so we present the combined line ratios of each of these close pairs. Since we are examining ratios of nearby emission line fluxes, we do not apply corrections for dust reddening or gravitational lensing. For the [NII]-BPT, sources that lie above the solid Kewley et al. (2001) line are believed to be dominated by AGN excitation, while those that lie beneath the dashed Kauffmann et al. (2003a) line are mainly star-forming, and those that lie between the lines are a combination of the two excitation sources (i.e., 'composite'). Similarly, sources below (above) the solid Kewley et al. (2001) line in the [SII]-VO87 diagram are thought to be star-forming (AGN-dominated).
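These demarcations are simple analytic curves, so classifying a source is straightforward. A minimal sketch follows; the curve coefficients are the standard published ones from Kewley et al. (2001) and Kauffmann et al. (2003a), while the helper names and the example input values (taken from Table 3 for component C) are our own:

```python
import numpy as np

def kewley01_n2(n2):
    """Kewley et al. (2001) maximal-starburst line in the [NII]-BPT plane
    (valid for n2 < 0.47)."""
    return 0.61 / (n2 - 0.47) + 1.19

def kewley01_s2(s2):
    """Kewley et al. (2001) line in the [SII]-VO87 plane (valid for s2 < 0.32)."""
    return 0.72 / (s2 - 0.32) + 1.30

def kauffmann03_n2(n2):
    """Kauffmann et al. (2003a) empirical star-forming line (valid for n2 < 0.05)."""
    return 0.61 / (n2 - 0.05) + 1.30

def classify_nii_bpt(n2, r3):
    """Classify a point (eqs. 2 and 4) in the [NII]-BPT plane."""
    if n2 < 0.05 and r3 < kauffmann03_n2(n2):
        return "star-forming"
    if n2 < 0.47 and r3 < kewley01_n2(n2):
        return "composite"
    return "AGN"

# e.g., component C: N2 = log10(489/1448), R3 = log10(387/255)
print(classify_nii_bpt(np.log10(489 / 1448), np.log10(387 / 255)))  # "composite"
```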
We find that while the sources in S and W have high R3 ratios (comparable to the R3 ratios of \(z\sim 5.5-7.0\) galaxies observed with JWST/NIRSpec MSA as part of the JADES survey; Cameron et al. 2023), the high upper limits on N2 and S2 do not allow us to rule out or confirm the presence of AGN. Component C lies in the composite region, but with large errors that may place it in the star-forming regime. We note that these demarcation lines were derived for \(z\sim 0\) galaxies with approximately solar metallicity. Recent results suggest that these lines may not separate SF- and AGN-driven ionization for high-redshift, low-metallicity sources (e.g., Kewley et al. 2013; Nakajima & Maiolino 2022; Übler et al. 2023). This interpretation is further complicated by the fact that the C and S components are each composed of two sub-components (see Sections 4.1 and 4.2), which may exhibit different ionization sources and/or metallicities. These results suggest that the line ratios of components S and W do not allow us to robustly claim the presence or absence of AGN, while component C is likely composed of star-forming galaxies but may include an AGN. ### Distribution of star formation HFLS3 was originally found to be an extreme starburst, with an SFR\({}_{\rm FIR}=2900\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) (Riechers et al., 2013). Further studies found that the central galaxy was lensed, resulting in a lower intrinsic SFR \(\sim 1300\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) or a 100 Myr-averaged SFR \(\sim 660\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) (Cooray et al., 2014). However, both of these SFRs are based on fits to SEDs that contain low-resolution observations (e.g., Herschel/SPIRE with \(>20\arcsec\) beams, Spitzer/IRAC with \(\sim 2\arcsec\)), which would capture emission from multiple components of the HFLS3 system (i.e., the lensed emission in C, both galaxies in S, and both galaxies in W). This raises the question of how the star formation activity is distributed in the HFLS3 system: a single dominant starburst or multiple star-forming galaxies within an area of projected diameter \(\sim 11\,{\rm kpc}\)? This may be explored using the H\(\alpha\) luminosity (which mainly traces star formation on timescales of \(\sim 10-20\,{\rm Myr}\); e.g., Kennicutt, 1998; Glazebrook et al., 1999), corrected for dust attenuation. To investigate this, we first derive an estimate of the B-V color excess from the Balmer decrement (e.g., Dominguez et al., 2013): \[E(B-V)_{BD}=\frac{2.5}{k(\lambda_{H\beta})-k(\lambda_{H\alpha})}\log_{10}\left(\frac{F_{H\alpha,obs}/F_{H\beta,obs}}{2.86}\right) \tag{5}\] where \(k(\lambda)\) is the assumed dust attenuation curve (Calzetti et al., 2000), \(F_{H\alpha,obs}/F_{H\beta,obs}\) is the observed line flux ratio, and we have assumed an intrinsic line ratio of 2.86 (Osterbrock, 1989). We then use this value to estimate the intrinsic line flux of H\(\alpha\), assuming case B recombination and \(T_{e}=10^{4}\,{\rm K}\): \[F_{H\alpha,int}=F_{H\alpha,obs}\,10^{k(\lambda_{H\alpha})E(B-V)_{BD}/2.5} \tag{6}\] Finally, this dust-corrected flux is used to estimate the SFR assuming a Salpeter IMF (Kennicutt, 1998): \[SFR_{H\alpha}=\frac{4\pi D_{L}^{2}F_{H\alpha,int}}{1.26\times 10^{41}\,{\rm erg}\,{\rm s}^{-1}} \tag{7}\] where \(D_{L}\) is the luminosity distance. The resulting values are presented in Table 5.
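Equations (5)-(7) translate directly into a short numerical helper. The sketch below implements them with the Calzetti et al. (2000) curve, whose published coefficients are reproduced here; the function names, the illustrative luminosity distance, and the choice to fold the magnification correction into the last step are our own conventions rather than the exact code used in this work:

```python
import numpy as np

def calzetti_k(lam_um):
    """Calzetti et al. (2000) attenuation curve k(lambda), lambda in micron
    (long branch for 0.63-2.20 um, short branch for 0.12-0.63 um)."""
    lam = np.atleast_1d(lam_um).astype(float)
    k = np.where(
        lam >= 0.63,
        2.659 * (-1.857 + 1.040 / lam) + 4.05,
        2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + 4.05,
    )
    return k if k.size > 1 else float(k)

def sfr_from_balmer(f_ha_obs, f_hb_obs, d_l_cm, mu=1.0):
    """Eqs. (5)-(7): Balmer-decrement E(B-V), dust-corrected Halpha flux
    [erg/s/cm^2], and Salpeter-IMF SFR [Msun/yr]; mu is the magnification."""
    k_ha, k_hb = calzetti_k(0.6563), calzetti_k(0.4861)
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha_obs / f_hb_obs) / 2.86)  # eq. (5)
    f_int = f_ha_obs * 10 ** (k_ha * ebv / 2.5)                          # eq. (6)
    sfr = 4 * np.pi * d_l_cm**2 * f_int / (1.26e41 * mu)                 # eq. (7)
    return ebv, f_int, sfr

# e.g., component C (Table 3 fluxes in 1e-20 erg/s/cm^2; D_L and mu illustrative)
ebv, f_int, sfr = sfr_from_balmer(1448e-20, 255e-20, d_l_cm=1.9e29, mu=2.26)
```

For component C this reproduces \(E(B-V)\approx 0.59\), consistent with Table 5.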
The Calzetti et al. (2000) dust attenuation curve was chosen to ease comparison to previous results, but other curves could be adopted. For example, the quadratic law of Reddy et al. (2020) results in slightly higher (\(\lesssim 1\sigma\) difference) values of SFR, while the high-mass, high-redshift analogue law of Salim et al. (2018) returns slightly lower (\(\lesssim 2\sigma\) difference) values of SFR. From this analysis, we see that the central component indeed features a high SFR. But as found in Section 4.2, this emission is lensed, resulting in a dust- and magnification-corrected SFR of \(140\pm 50\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) (assuming an average \(\mu\sim 2.26\pm 0.17\)). The combined SFR of the \(z\sim 6.3\) galaxies is \(460\pm 90\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), which is comparable to the previous 100 Myr-averaged SFR of Cooray et al. (2014) (\(\sim 660\,{\rm M}_{\odot}\,{\rm yr}^{-1}\); see above). But if S, W\({}_{1}\), and W\({}_{2}\) contain AGN (which is not ruled out by their line ratios, see Section 4.3), then their SFR\({}_{H\alpha}\) values may be contaminated (e.g., Garn et al., 2010). On the other hand, if these sources contain star formation in optically thick regions, then the true SFR may be higher. In summary, the star formation (as traced by H\(\alpha\)) is not concentrated in a single source but is distributed among multiple objects: a central lensed arc that has two components (\(\sim\)30% of the SFR), two galaxies to the south (\(\sim\)16% of the SFR), and two galaxies to the west (\(\sim\)54% of the SFR). While the absolute SFR of this system is dependent on the presence of dust and AGN (which require further high-resolution radio and submm observations to be confirmed), the current data imply that HFLS3 is not a single starbursting galaxy, as previously reported. ### HFLS3: A galaxy group In the previous sections, we have found that the HFLS3 field contains at least six sources at \(z\approx 6.35\). Here, we discuss the implications of this apparent high density of galaxies. Previous large-scale optical searches around HFLS3 found little evidence for an overdensity on the scale of \(\gtrsim 100\,{\rm kpc}\) (Robson et al., 2014; Laporte et al., 2015). However, Laporte et al. (2015) detected three faint optical companions to HFLS3 on small scales (\(\sim 36\) kpc), based on HST imaging with comparable spatial resolution to the current work. Two of these sources are detected in our data (S and W), while the third source falls outside the IFU FoV. Since we have found that each of these sources and the central source are composed of two galaxies, the true density is likely higher. Whether this represents a true galaxy overdensity requires knowledge of the H\(\alpha\) emitter space density at \(z>6\), which is not yet well predicted (e.g., Pozzetti et al., 2016). Alternatively, we may compare the observed galaxy distribution to a cosmological simulation that includes H\(\alpha\) emission and dust extinction (e.g., Hashimoto et al., 2023). This will be deferred to a future work. Figure 9: [SII]-VO87 (top) and [NII]-BPT (lower) plots created using the best-fit line fluxes for each source (see values in Table 3). The distribution of low-redshift galaxies from SDSS (MPA-JHU DR8 catalog; Kauffmann et al., 2003b; Brinchmann et al., 2004) is shown by background points. Left-facing arrows represent \(3\sigma\) upper limits on N2 and S2. We include the demarcation lines for \(z\sim 0\) galaxies of Kewley et al. (2001, solid lines) and Kauffmann et al. (2003a, dashed line).
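The close-pair test applied below reduces to two numbers per galaxy pair: a projected separation from the angular offset, and a line-of-sight velocity difference from the redshifts. A minimal sketch, using the concordance cosmology of Section 1 (the angular separation in the example is illustrative):

```python
import astropy.units as u
from astropy.cosmology import LambdaCDM

C_KMS = 299792.458
cosmo = LambdaCDM(H0=70, Om0=0.3, Ode0=0.7)  # concordance values from Section 1

def pair_offsets(z1, z2, theta_arcsec):
    """Projected separation [kpc] and line-of-sight velocity offset [km/s]
    for a candidate close pair."""
    z_mean = 0.5 * (z1 + z2)
    dv = C_KMS * (z2 - z1) / (1.0 + z_mean)
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z_mean).to_value(u.kpc / u.arcsec)
    return theta_arcsec * kpc_per_arcsec, dv

# e.g., W1 vs W2 (redshifts from Table 2; angular separation is illustrative)
dr_kpc, dv_kms = pair_offsets(6.3550, 6.3628, theta_arcsec=0.5)
is_close_pair = (dr_kpc < 20.0) and (abs(dv_kms) < 500.0)
```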
At this time, we may use the spatial and velocity offsets of each galaxy to see if they are likely gravitationally interacting. Since these values are projected, they represent lower limits on the true three-dimensional offset and relative velocity of galaxy pairs, but may be used as a first test of close association. Studies of galaxy pairs usually adopt criteria of \(\Delta r\lesssim 20\) kpc and \(\Delta v\lesssim 500\) km s\({}^{-1}\) for 'close' pairs (e.g., Duncan et al., 2019; Ventou et al., 2019; Romano et al., 2021). The six \(z>6\) galaxies detected here are all located within a circle of diameter \(\sim 2\arcsec\) (\(\sim 11\) kpc at \(z=6.34\)) with a maximum redshift difference of \(\delta z\sim 0.02\) (i.e., a line-of-sight velocity difference of \(\sim 800\) km s\({}^{-1}\)). So while each galaxy in this field firmly meets the distance criterion, the total velocity difference is high. However, each galaxy is a member of a pair with smaller spatial and redshift offsets: C\({}_{1}\) and C\({}_{2}\) (\(\delta z\sim 0.0092\), using the redshifts of Riechers et al., 2013), S\({}_{1}\) and S\({}_{2}\) (negligible \(\delta z\)), and W\({}_{1}\) and W\({}_{2}\) (\(\delta z\sim 0.0078\)). Since the distance and velocity criteria are met for each pair, they are likely interacting. In addition, the galaxy pairs in the S and W regions are also likely interacting with each other, while C features a slightly higher velocity offset (\(\Delta v=591\pm 11\) km s\({}^{-1}\) with respect to \(z_{W1}\)). Based on these small velocity and spatial offsets, it is likely that these galaxies represent an interacting system that will merge within \(\sim 1\) Gyr (e.g., Conselice, 2006). ## 5 Conclusion In this work, we present JWST/NIRSpec IFU observations of a field containing the \(z=6.34\) source HFLS3, as part of the GA-NIFS program. By exploring both the low (\(R\sim 100\)) and high spectral resolution (\(R\sim 2700\)) data, we find a crowded field, with two low-redshift sources (G1, \(z\sim 3.4806\); G2, \(z\sim 2.00\)), a central gravitationally lensed arc that is likely composed of two sources (C, \(z\sim 6.3425\)), two close galaxies at the same redshift to the south (S\({}_{1}\) & S\({}_{2}\), \(z\sim 6.3592\)), and two close galaxies with a velocity offset to the west (W\({}_{1}\), \(z\sim 6.3550\); W\({}_{2}\), \(z\sim 6.3628\)). All of the \(z>6\) galaxies are located within an area of \(\sim 2\arcsec\) in diameter, or \(\sim 11\) kpc at \(z\sim 6.34\). The spectral fits and morpho-kinematic map analysis of these data reveal a variety of kinematic features. G1 has no strong velocity gradient and is likely a dispersion-dominated source. However, C features a strong velocity gradient across the length of its arc, hinting at possible merging activity. The galaxies in component S are distinct, but are at similar redshifts with no velocity gradient. This is different from the galaxies in component W, which are closely associated but have a strong velocity gradient, as seen in the double-peaked profile of each line. The red component (W\({}_{2}\)) is brighter but has a lower velocity dispersion. Because of the asymmetry in the integrated intensity and velocity dispersion maps, this likely represents a merger between two galaxies, rather than a single rotating disk. Next, we use our high-quality IFU data and updated source redshifts to examine the gravitational lensing of the HFLS3 field.
Our best-fit models show that G1 is moderately magnified by G2 (\(\mu\sim 1.1-1.2\)), while the H\(\alpha\) emission of C is more strongly magnified (\(\mu\sim 2.1-2.4\), comparable to the value of \(\sim 2.2\) from Cooray et al., 2014) and is likely composed of two closely separated components. The integrated line fluxes are plotted on [NII]-BPT and [SII]-VO87 plots, showing that component C is likely powered by star formation, while we may not rule out or confirm the presence of AGN in the S and W components. The observed Balmer decrements are then used to derive extinction-corrected SFR\({}_{\rm H\alpha}\) values for each source. This shows that the star formation is distributed across the field (SFR\({}_{\rm H\alpha}=460\pm 90\) M\({}_{\odot}\) yr\({}^{-1}\), corrected for lensing and extinction), with the largest contribution (\(\sim 54\%\)) from the W galaxies. However, the presence of AGN in some sources may inflate these SFRs. We use the projected spatial offsets and relative line-of-sight velocities of each \(z>6\) galaxy to investigate whether they likely represent a close, merging group. Using the standard 'close pair' criteria of \(\delta r<20\) kpc and \(\delta v<500\) km s\({}^{-1}\), we find that the field contains multiple likely mergers: \(C_{1}/C_{2}\), \(S_{1}/S_{2}\), and \(W_{1}/W_{2}\). In addition, the combined W and S components meet these criteria and will likely merge. While the C components feature a slightly higher velocity offset (\(\sim 600\) km s\({}^{-1}\)), the HFLS3 group contains multiple close pairs. Taken together, our results require a drastic reinterpretation of the HFLS3 field. It is composed of at least six distinct sources within \(z\sim 6.34-6.36\) that are lensed by two foreground galaxies at \(z\sim 2.0\) and \(z\sim 3.5\). All of the \(z>6\) galaxies either feature strong velocity gradients and/or are closely associated with another galaxy, implying ongoing interaction. This behaviour has been seen at high redshift (e.g., Ginolfi et al., 2022), including four galaxies at \(z\sim 7.9\) detected with the NIRSpec IFU (Hashimoto et al., 2023). Thus, HFLS3 is likely not an extreme starburst, but instead represents one of the densest groups of interacting star-forming galaxies within the first 1 Gyr of the Universe. Recent and ongoing high-resolution observations with JWST/MIRI, NOEMA, and the JVLA (as well as a future in-depth study of the R100 data cube briefly explored here) will help to further characterize this unique field. \begin{table} \begin{tabular}{c|c c c} Component & E(B-V)\({}_{BD}\) & \(F_{H\alpha,int}\) & SFR\({}_{H\alpha}\) \\ \hline C & \(0.59\pm 0.10\) & \(3.8\pm 1.3\) & \(140\pm 50\) \\ S & \(0.21\pm 0.08\) & \(2.0\pm 0.5\) & \(73\pm 20\) \\ W\({}_{1}\) & \(0.27\pm 0.11\) & \(2.5\pm 1.0\) & \(90\pm 40\) \\ W\({}_{2}\) & \(0.53\pm 0.12\) & \(4.3\pm 1.8\) & \(160\pm 60\) \\ \end{tabular} \end{table} Table 5: Observed B-V color excess derived from the Balmer decrement, as well as the integrated H\(\alpha\) flux and resulting SFR corrected for dust reddening and gravitational magnification. Flux and SFR values are in units of [\(10^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\)] and [M\({}_{\odot}\) yr\({}^{-1}\)], respectively. We assume a Salpeter (1955) IMF and a Calzetti et al. (2000) extinction law. ## Acknowledgements GCJ, AJB, AJJC, JC, and AS acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.
789056). HU gratefully acknowledges support by the Isaac Newton Trust and by the Kavli Foundation through a Newton-Kavli Junior Fellowship. MP, SA, and BRdP acknowledge grant PID2021-127718NB-I00 funded by the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI/10.13039/501100011033). MP also acknowledges Programa Atracción de Talento de la Comunidad de Madrid via grant 2018-T2/TIC-11715. SC and GV acknowledge support from the European Union (ERC, WINGS, 101040227). RM, FDE, JS, and JW acknowledge support by the Science and Technology Facilities Council (STFC), from the ERC Advanced Grant 695671 "QUENCH", and by the UKRI Frontier Research grant RISEandFALL. RM also acknowledges funding from a research professorship from the Royal Society. RB acknowledges support from an STFC Ernest Rutherford Fellowship [grant number ST/T003596/1]. GC acknowledges the support of the INAF Large Grant 2022 "The metal circle: a new sharp view of the baryon cycle up to Cosmic Dawn with the latest generation IFU facilities". This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. ## References * Arata, S., Yajima, H., Nagamine, K., Abe, M., & Khochfar, S. 2020, MNRAS, 498, 5541 * Bacon, R., Piqueras, L., Conseil, S., Richard, J., & Shepherd, M. 2016, MPDAF: MUSE Python Data Analysis Framework, Astrophysics Source Code Library, record ascl:1611.003 * Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5 * Belli, S., Park, M., Davies, R. L., et al. 2023, arXiv e-prints, arXiv:2308.05795 * Böker, T., Arribas, S., Lützgendorf, N., et al. 2022, A&A, 661, A82 * Böker, T., Beck, T. L., Birkmann, S. M., et al. 2023, PASP, 135, 038001 * Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, ApJ, 803, 34 * Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151 * Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682 * Cameron, A. J., Saxena, A., Bunker, A. J., et al. 2023, arXiv e-prints, arXiv:2302.04298 * Carnall, A. C., McLeod, D. J., McLure, R. J., et al. 2023, MNRAS, 520, 3974 * Carniani, S., Gallerani, S., Vallini, L., et al. 2019, MNRAS, 498, 3939 * Cheng, C., Cao, X., Lu, S., et al. 2020, ApJ, 898, 33 * Conselice, C. J. 2006, ApJ, 638, 686 * Cooray, A., Calanog, J., Wardlow, J. L., et al. 2014, ApJ, 790, 40 * Curti, M., Mannucci, F., Cresci, G., & Maiolino, R. 2020, MNRAS, 491, 944 * Decarli, R., Walter, F., Gonzalez-Lopez, J., et al. 2019, ApJ, 882, 138 * D'Eugenio, F., Perez-Gonzalez, P., Maiolino, R., et al. 2023, arXiv e-prints, arXiv:2308.06317 * Dimitrijevic, M. S., Popovic, L. C., Kovacevic, J., Dacic, M., & Ilic, D. 2007, MNRAS, 374, 1181
* Dojcinovic, I., Kovacevic-Dojcinovic, J., & Popovic, L. C. 2023, Advances in Space Research, 71, 1219 * Dome, T., Tacchella, S., Fialkov, A., et al. 2023, arXiv e-prints, arXiv:2305.07066 * Dominguez, A., Siana, B., Henry, A. L., et al. 2013, ApJ, 763, 145 * Duncan, K., Conselice, C. J., Mundy, C., et al. 2019, ApJ, 876, 110 * Fadda, D., Jannuzi, B. T., Ford, A., & Storrie-Lombardi, L. J. 2004, AJ, 128, 1 * Fruchter, A. S. & Hook, R. N. 2002, PASP, 114, 144 * Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1 * Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1 * Garn, T., Sobral, D., Best, P. N., et al. 2010, MNRAS, 402, 2017 * Ginolfi, M., Piconcelli, E., Zappacosta, L., et al. 2022, Nature Communications, 13, 4574 * Glazebrook, K., Blake, C., Economou, F., Lilly, S., & Colless, M. 1999, MNRAS, 306, 843 * Hashimoto, T., Alvarez-Marquez, J., Fudamoto, Y., et al. 2023, arXiv e-prints, arXiv:2305.04741 * Heintz, K. E., Watson, D., Brammer, G., et al. 2023, arXiv e-prints, arXiv:2306.00647 * Hopkins, P. F., Hernquist, L., Cox, T. J., et al. 2006, ApJS, 163, 1 * Jakobsen, P., Ferruit, P., Alves de Oliveira, C., et al. 2022, A&A, 661, A80 * Jones, G. C., Bunker, A. J., Saxena, A., et al. 2023, arXiv e-prints, arXiv:2306.02471 * Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003a, MNRAS, 346, 1055 * Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003b, MNRAS, 341, 33 * Kehrig, C., Vilchez, J. M., Telles, E., Cuisinier, F., & Perez-Montero, E. 2006, A&A, 457, 477 * Kennicutt, R. C., Jr. 1998, ApJ, 498, 541 * Kewley, L. J., Dopita, M. A., Leitherer, C., et al. 2013, ApJ, 774, 100 * Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, ApJ, 556, 121 * Laporte, N., Perez-Fournon, I., Calanog, J. A., et al. 2015, ApJ, 810, 130 * Looser, T. J., D'Eugenio, F., Maiolino, R., et al. 2023, arXiv e-prints, arXiv:2302.14155 * Marshall, M. A., Perna, M., Willott, C. J., et al. 2023, arXiv e-prints, arXiv:2302.04795 * Nakajima, K. & Maiolino, R. 2022, MNRAS, 513, 5134 * Nightingale, J. W. & Dye, S. 2015, MNRAS, 452, 2940 * Nightingale, J. W., Dye, S., & Massey, R. J. 2018, MNRAS, 478, 4738 * Nightingale, J. W., Hayes, R. G., Kelly, A., et al. 2021, J. Open Source Softw., 6, 2825 * Oliver, S. J., Bock, J., Altieri, B., et al. 2012, MNRAS, 424, 1614 * Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei * Perna, M., Arribas, S., Marshall, M., et al. 2023, arXiv e-prints, arXiv:2304.06756 * Popesso, P., Concas, A., Cresci, G., et al. 2023, MNRAS, 519, 1526 * Pozzetti, L., Hirata, C. M., Geach, J. E., et al.
2016, A&A, 590 * [7] Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK * [8] NRC Herzberg, 5071 West Saanich Rd, Victoria, BC V9E 2E7, Canada * [9] Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester, M13 9PL, UK * [10] European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA * [11] Osservatorio Astrofisico di Arcetri, largo E. Fermi 5, 50127 Firenze, Italy * [12] European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching, Germany * [13] AURA for European Space Agency, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21210, USA ## Appendix A Astrometry correction ### HST astrometry verification In order to compare the JWST/NIRSpec IFU data with archival data, a common astrometric reference frame is required (i.e., Gaia DR3; Gaia Collaboration et al., 2016, 2021). We retrieve HST images from the STScI archive, but it is not clear if all of them have been aligned to the Gaia DR3 frame (i.e., some are lacking this comment in their headers). To verify that they have been correctly aligned, we retrieve the locations of the five closest objects in the Gaia archive (Footnote 6; see Table A.1) and examine the locations in each HST image. Footnote 6: [https://gea.esac.esa.int/archive/](https://gea.esac.esa.int/archive/) As seen in Fig. A.1, the HST images show significant emission at all five Gaia locations with small centroid offsets (\(<0.2\arcsec\)). From this, we conclude that the images have been properly aligned. ### Application of spatial shift To shift the JWST/NIRSpec data cubes to the Gaia frame, we first create narrowband JWST images by convolving the R100 data cube with the transmission profiles of the three HST/WFC3 filters used by Cooray et al. (2014) (see color maps in Fig. A.2; Footnote 7). These maps were then compared with the corresponding HST images (see contours in Fig. A.2). An offset is estimated by finding the brightest spaxel in the JWST and HST maps for each filter and determining the distance between the centers of these spaxels. Footnote 7: Filter profiles were obtained from the SVO Filter Profile Service (Rodrigo et al., 2012; Rodrigo & Solano, 2020). We only use the WFC3 filters due to a higher S/N in the IFU data over the corresponding wavelengths. In all three cases, this offset is \(0.17\arcsec\). Because the spaxel sizes are \(0.05\arcsec\) (JWST) and \(0.12825\arcsec\) (HST/WFC3-IR), we estimate an uncertainty on this offset based on half the spaxel sizes, added in quadrature (\(\pm 0.07\arcsec\)). Thus, the offset is consistent with the pointing accuracy of \(\sim 0.1\arcsec\) (Rigby et al., 2023). We have verified that the R100 and R2700 cubes are aligned to the same initial reference frame, so throughout this work we apply this offset correction to both the R100 and R2700 IFU data cubes. We briefly note that the coordinates of each source presented in Cooray et al. (2014) were calibrated to the SDSS frame rather than the Gaia frame. ## Appendix B Morpho-kinematic map creation In order to examine the morpho-kinematics of each source and line, we may produce maps of integrated intensity, velocity offset, and velocity dispersion.
## Appendix B Morpho-kinematic map creation

In order to examine the morpho-kinematics of each source and line, we may produce maps of integrated intensity, velocity offset, and velocity dispersion. These are derived by extracting spectra from each spaxel (\(f(v)\)) and measuring a normalized cumulative velocity distribution, \(F(v)=\int_{-\infty}^{v}f(v^{\prime})\,dv^{\prime}\,/\int_{-\infty}^{\infty}f(v^{\prime})\,dv^{\prime}\) (e.g., Zakamska and Greene, 2014), which is used to calculate the velocity at which \(N\%\) of the flux is captured (\(v_{\rm N}\); e.g., \(v_{80}\)) as well as associated widths (e.g., \(w_{80}\equiv v_{90}-v_{10}\)). However, since there are overlapping lines in some spectra, we will adopt a Gaussian-based approach. For each spaxel, we extract a spectrum and fit a model containing a flat continuum and one or more Gaussian lines using a least-squares minimizer. Lines that are closely associated or related through a flux ratio are fit concurrently (i.e., [OIII]\(\lambda\)4959/[OIII]\(\lambda\)5007, [NII]\(\lambda\)6548/H\(\alpha\)/[NII]\(\lambda\)6584, and [SII]\(\lambda\)6716/[SII]\(\lambda\)6731) with identical continuum values. Line pairs (i.e., [OIII]\(\lambda\)4959/[OIII]\(\lambda\)5007, [NII]\(\lambda\)6548/[NII]\(\lambda\)6584, and [SII]\(\lambda\)6716/[SII]\(\lambda\)6731) are assumed to have identical kinematics. We adopt the standard assumptions of [NII]\(\lambda\)6584/[NII]\(\lambda\)6548 = 2.94 (e.g., Dojcinovic et al., 2023) and [OIII]\(\lambda\)5007/[OIII]\(\lambda\)4959 = 2.98 (e.g., Dimitrijevic et al., 2007) for each fit. The LSF is accounted for when calculating the linewidths. Continuum maps are generated using the best-fit constant continuum value for each fit and spaxel. Only fits with \(r^{2}\) (i.e., the coefficient of determination) values \(>0.5\) are presented.

Using the best-fit parameters of each model Gaussian (amplitude \(A_{g}\,[{\rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}]\), centroid wavelength \(C_{g}\,[\mu{\rm m}]\), and width \(W_{g}\,[\mu{\rm m}]\)), we may generate morpho-kinematic maps. The integrated intensity map is a direct sum of all values in the best fit:

\[\frac{I(x,y)}{{\rm erg\,s^{-1}\,cm^{-2}}}=10^{4}\sqrt{2\pi}\,A_{g}(x,y)\,W_{g}(x,y) \tag{1}\]

The velocity field (\(v_{50}\)) represents the dominant line-of-sight velocity for a given spaxel. For a redshifted rest-frame wavelength of \(\lambda_{o}\):

\[\frac{v_{50}(x,y)}{{\rm km\,s^{-1}}}=c\left(\frac{C_{g}(x,y)}{\lambda_{o}}-1\right) \tag{2}\]

where \(c\,[{\rm km\,s^{-1}}]\) is the speed of light. Finally, we examine the velocity dispersion of each line by finding \(w_{80}\) of the best-fit Gaussian:

\[\frac{w_{80}(x,y)}{{\rm km\,s^{-1}}}=\frac{2.176\sqrt{2\ln(2)}\,W_{g}(x,y)\,c}{\lambda_{o}} \tag{3}\]

In this process, we assume that each spectral line may be fit with a single Gaussian. While this is motivated by the lack of an obvious broad component (as seen in, e.g., Marshall et al., 2023), it is possible that a spaxel may contain contributions from both narrow and broad emission. Since there are no strong AGN (from previous observations) included in the HFLS3 field, this is expected to be a small effect.
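To illustrate how Eqs. (1)-(3) act on the per-spaxel fit results, the following minimal sketch converts Gaussian best-fit parameters into the three maps. The input values (a 3x3 grid around [OIII]\(\lambda\)5007 at \(z\approx 6.34\)) are hypothetical placeholders, not the actual fit output.

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def kinematic_maps(A_g, C_g, W_g, lam_o):
    """Per-spaxel Gaussian parameters -> maps of Eqs. (1)-(3).
    A_g [erg/s/cm^2/AA], C_g and W_g [micron], lam_o = redshifted
    rest-frame wavelength [micron]; inputs are 2-D spaxel arrays."""
    intensity = 1e4 * np.sqrt(2 * np.pi) * A_g * W_g              # Eq. (1)
    v50 = C_KMS * (C_g / lam_o - 1.0)                             # Eq. (2)
    w80 = 2.176 * np.sqrt(2 * np.log(2)) * W_g * C_KMS / lam_o    # Eq. (3)
    return intensity, v50, w80

rng = np.random.default_rng(0)
lam_o = 0.5007 * (1 + 6.34)                     # [OIII]5007 at z = 6.34 [micron]
A = np.full((3, 3), 1e-18)                      # line amplitudes
C = lam_o * (1 + rng.normal(0, 1e-4, (3, 3)))   # centroids with velocity scatter
W = np.full((3, 3), 2e-4)                       # Gaussian widths [micron]
I_map, v50_map, w80_map = kinematic_maps(A, C, W, lam_o)
```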
## Appendix C Best-fit lens models of components

In Section 4.2, we used PyAutoLens to examine the gravitational lensing effect created by the \(z<6\) sources (G1 and G2) on the C source. The observed maps were compared with the best-fit image-plane model. Here, we show source- and image-plane maps of each individual component: G1 (Fig. 1), G2 (Fig. 2), and C (Fig. 3).

\begin{table} \begin{tabular}{c|c c} Number & RA [deg] & Dec [deg] \\ \hline 1 & 256.7108238422430 & 58.767424792645000 \\ 2 & 256.6918696535650 & 58.7610752035986 \\ 3 & 256.68468210287400 & 58.77599807468070 \\ 4 & 256.6942094613340 & 58.78483710562300 \\ 5 & 256.7115031731110 & 58.785864102389900 \\ \end{tabular} \end{table} Table 1: Positions of the five objects in the Gaia archive that are closest to HFLS3.

Figure 1: Verification that HST images have been aligned to the Gaia DR3 reference frame. Each panel contains a \(1^{\prime\prime}\times 1^{\prime\prime}\) view of an HST image (see labels for each row) that is focused on a given location from the Gaia archive (see labels for each column). The Gaia position is shown as a \(0.1^{\prime\prime}\) diameter red circle. The locations of each Gaia source are listed in Table 1.

Figure 2: JWST/NIRSpec IFU R100 data cubes integrated over the corresponding HST filter bandpasses: F105W (left), F125W (centre), and F160W (right). In each panel, the collapsed JWST emission is depicted as the background colors, while the HST data is shown by red contours. The JWST emission in the top row is shown without the astrometric correction, while the lower row includes the alignment to Gaia DR3.
2310.00208
On the Quantum Uncertainty of the Neutron Electric Dipole Moment
The continued interest in placing bounds on the neutron's Electric Dipole Moment (EDM) is due to the implications regarding the characteristics of the strong interaction and, in particular, its behavior under the CP symmetry. In this work, we discuss the apparent tension resulting from the discrepancy of about 13 orders of magnitude between the current bounds and the expected quantum uncertainty in the relevant quantity. We offer a resolution of the "puzzle" in terms of the notion of a weak measurement, using a version of the corresponding formalism adapted to consideration of the nEDM experiment at the Spallation Neutron Source at the Oak Ridge National Laboratory.
Octavio Guerrero, Libertad Barrón-Palos, Daniel Sudarsky
2023-09-30T00:55:43Z
http://arxiv.org/abs/2310.00208v2
# On the Electric Dipole Moment of the Neutron and its Quantum Uncertainty

###### Abstract

The continued interest in placing bounds on the neutron's Electric Dipole Moment (EDM) is due to the implications regarding the characteristics of the strong interaction and, in particular, its behavior under the CP symmetry. In this work, we discuss the apparent tension resulting from the discrepancy of about 13 orders of magnitude between the current bounds and the expected quantum uncertainty in the relevant quantity. We offer a resolution of the "puzzle" in terms of the notion of a weak measurement, using a version of the corresponding formalism adapted to consideration of the nEDM experiment at the Spallation Neutron Source at the Oak Ridge National Laboratory.

keywords: nEDM, Weak Measurement, Uncertainty Principle

+ Footnote †: journal: Physics Letters B

## 1 Introduction

The search for an indication of a non-zero value of the neutron EDM1 is motivated by the fact that it would represent a new source of CP symmetry violation with origins in nontrivial topological features of the QCD vacuum (Callan et al., 1976). That, in turn, has implications for theories beyond the standard model of particle physics, as well as for the standard model itself (Engel et al., 2013; 't Hooft, 1976).

Footnote 1: At a much higher level than that which could be accounted for as a result of the CP violation in the electroweak interactions.

In the quest to study this quantity, quite significant projects have been developed relying on multiple techniques that have achieved remarkable levels of control of the statistical and systematic errors (Abel, C. et al., 2019; Ahmed et al., 2019). Recently, using the Paul Scherrer Institut's ultracold neutron source, the nEDM collaboration has reported the most stringent bound on the neutron EDM to date, \(d_{n}=(0.0\pm 1.1_{stat}\pm 0.2_{sys})\times 10^{-26}\)\(e\)-cm (Abel et al., 2020). Moreover, the nEDM experiment at the Spallation Neutron Source of the Oak Ridge National Laboratory (nEDM@SNS) seeks to study the neutron EDM to an accuracy of \(O(10^{-28}\)\(e\)-cm) once the experiment operates at maximum capacity (Leung et al., 2019).

## 2 Statement of the problem

In this paper, we will discuss a tension that is already present between the uncertainty (dispersion (Sakurai and Tuan, 1994)) associated with the neutron EDM and the experimentally measured results. The analysis starts by considering the order of magnitude of the neutron mean square radius, which can be estimated from simple back-of-the-envelope calculations and can also be inferred from the deep inelastic scattering of electrons on protons, together with the well-known facts about the extreme similarity between protons and neutrons (as represented, for instance, in the SU(2) Isospin symmetry of the strong interactions). That effective radius is of the order of femtometers (Filin et al., 2020), \(\mathcal{O}(10^{-13}\) cm). Thus, we consider a very simplified characterization of the neutron in terms of its valence quarks (two down quarks with charge \(-1/3\)\(e\) and an up quark with charge \(2/3\)\(e\)) and the corresponding explicit form of the electric dipole moment operator:

\[\hat{\vec{D}}=\sum_{i=1}^{3}q_{i}\hat{\vec{x}}_{i}, \tag{1}\]

where \(q_{i}\) are the respective charges of the quarks and \(\hat{\vec{x}}_{i}\) the positions of each one of them (for the sake of simplicity, and unless clarity requires it, we will drop the operator-hat notation).
We can easily make a back-of-the-envelope estimation of the value of the nEDM uncertainty, finding it to be about \(\Delta|D|\sim\mathcal{O}(10^{-13}\)\(e\)-cm). That represents a discrepancy of 13 orders of magnitude between the precision of the last reported value of this quantity and the intrinsic quantum uncertainty in the same quantity. This seems quite puzzling when one considers that, as per the usual expectations from quantum theory, the quantum uncertainty (in the corresponding system's quantum state) represents, among other things, a fundamental limit to the precision with which any observable can be measured for a system in the corresponding state. So, what are we to make of the situation at hand? Let us consider the most straightforward replies one might offer.

i) One might argue that such an analysis is too simplistic and contemplate, for instance, a simple harmonic oscillator with frequency \(\omega\) and mass \(M\) which is prepared in its ground state, in which the position \(X\) has an expectation value \(\langle\hat{X}\rangle=0\) and the quantum uncertainty is \(\Delta X=\sqrt{\hbar/(2M\omega)}\). One might argue that it is true that \(\Delta X\) represents the degree to which the particle's position is ill-defined in the initial state \(|0\rangle\); however, that does not mean we cannot perform an experiment measuring the position to a higher accuracy. In fact, we can certainly do that, and in principle measure the position of the system to an arbitrary accuracy \(\delta X\) (at least in the context of non-relativistic quantum mechanics2). All that quantum mechanics tells us is that our prediction of the value resulting from a measurement of the position must be taken as uncertain at the level \(\Delta X\), indicating that if we repeat the experiment a large number of times (with identically prepared systems), we will obtain a series of results whose mean value is \(\bar{X}=0\) and whose statistical dispersion is given by \(\Delta X\). However, in each one of the measurements, the position might end up being well determined within an uncertainty \(\delta X\) (corresponding to the accuracy of the measuring device), and thus there is no conflict at all between quantum theory and the fact that we have measured \(X\) with an accuracy that far exceeds \(\Delta X\). Note, however, that as a result of such measurements, the quantum state of each oscillator would have changed to one centered about some definite value of the position, say \(X_{i}\), with an uncertainty \(\delta X\ll\Delta X\), and the collection of values obtained in the ensemble of measurements \(\{X_{1},X_{2},\ldots,X_{N}\}\) would display a statistical dispersion \(\Delta X\). In particular, all the harmonic oscillators would now be in states that are quite different from the ground state, and thus their corresponding energy expectation values would be higher than \((1/2)\hbar\omega\). The small numerical sketch below illustrates this point.

Footnote 2: Inclusion of relativistic considerations indicates that we should not be able to measure the position of a particle with a precision that exceeds the particle's Compton wavelength, but let us ignore this issue for the moment.
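The following minimal simulation (in natural units, with hypothetical parameter values) makes the point concrete: each strong measurement yields an outcome known to the instrument accuracy \(\delta X\), yet the ensemble of outcomes displays the dispersion \(\Delta X\) of the initial state.

```python
import numpy as np

rng = np.random.default_rng(0)

hbar = M = omega = 1.0                      # natural units (illustrative)
Delta_X = np.sqrt(hbar / (2 * M * omega))   # ground-state uncertainty
delta_X = 1e-3 * Delta_X                    # instrument accuracy << Delta_X

# Born rule for the ground state: outcomes ~ N(0, Delta_X^2); each is
# reported only to within the instrument resolution delta_X.
outcomes = rng.normal(0.0, Delta_X, size=100_000)
outcomes = np.round(outcomes / delta_X) * delta_X

print(f"mean of outcomes:       {outcomes.mean():+.4f}")  # ~ <X> = 0
print(f"dispersion of outcomes:  {outcomes.std():.4f}")   # ~ Delta_X, not delta_X
```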
In the situation at hand, it is quite clear that what was described above cannot be what is going on. There are various reasons for that. First, note that if the experiment involves a large number of neutrons, corresponding to a repetition of the measurement of the nEDM (with accuracies of order \(\mathcal{O}(10^{-28}\)\(e\)-cm)), we would, just as with the example of the harmonic oscillator, obtain an ensemble of different results with a statistical dispersion of order \(\Delta|D|\sim\mathcal{O}(10^{-13}\)\(e\)-cm) (and our experimental colleagues would not be able to report the result they do). Furthermore, as we just noted, in the case of the harmonic oscillators, such a kind of measurement (with accuracies that are much higher than the quantum uncertainty of the original state) led to changes in the state of the system, and, when the original system was in the ground state, this implied an increase in the expectation value of the system's energy. The energy scales controlling the internal structure of a neutron are of the order of MeVs, which is an enormous scale compared with the energies that the experimental devices used in the type of experiments under consideration might "transfer" to the neutrons. In fact, neutrons are the ground state of that type of quark arrangement, and the higher excitation levels correspond to particles such as \(\Delta^{0}\), as well as those known as resonances \(N^{*}\), etc. All these particles have significantly higher masses compared to a neutron. In fact, it is exceptionally challenging even to contemplate the possibility that the internal structure of a neutron might undergo substantial modification due to the relatively weak electric and magnetic fields utilized in these experiments. Consequently, drawing an analogy with the type of measurement we considered in the context of the harmonic oscillator breaks down entirely. In the case of the harmonic oscillator, the measurement profoundly alters the system's state, whereas in this scenario, such a drastic transformation cannot occur.

Could it be that we are simply overestimating the nEDM uncertainty? Could that quantity actually vanish? For the latter to be the case, the neutron would have to be an eigenstate of the electric dipole moment operator, and that seems rather problematic3, as there appears to be no reason whatsoever that could account for it.

Footnote 3: We might dismiss the possibility given that, at the electro-weak level, the CP symmetry is violated, and this should induce a non-vanishing expectation value for the nEDM; however, as that effect is known to be minuscule, even compared with the tight bounds we are considering here, we will from now on simply ignore the electro-weak CP violation in the discussion.

We will now consider two kinds of analysis offering strong evidence against that possibility and supporting our original order-of-magnitude estimates. The first involves consideration of the correlations that must be present in the wave function characterizing the constitutive parts of the neutron. Such correlations are evidenced by the condition imposed by the CP symmetry on the expectation value of the neutron EDM:

\[\left\langle\vec{D}\right\rangle=q_{1}\left\langle\vec{x}_{1}\right\rangle+q_{2}\left\langle\vec{x}_{2}\right\rangle+q_{3}\left\langle\vec{x}_{3}\right\rangle=0, \tag{2}\]

where we have, for simplicity, considered just the valence quarks and treated the two \(d\) quarks as non-identical particles.
Furthermore, these correlations cannot be trivial; i.e., even though the sum of the charges of the quarks that compose the neutron equals zero, \(\sum_{i=1}^{3}q_{i}=0\), the expectation values of the positions of the three quarks cannot be identical. Otherwise, the scattering experiments that have measured the mean squared radius of the proton (considering the strong resemblance between both nucleons) would have resulted in a much smaller absolute value of this quantity (Abrahamyan et al., 2012; Kurasawa et al., 2020). Therefore, the quantum state of the system must entangle the positions of the quarks to ensure the condition (2). Note that, as we work under the assumption of the exact validity of the CP symmetry (so \(\left\langle\vec{D}\right\rangle=0\)), the quantum uncertainty of the neutron EDM is:

\[\Delta|D|=\sqrt{\left\langle\vec{D}^{2}\right\rangle}. \tag{3}\]

Using the expression (1), the right-hand side of the last equation can be expressed as

\[\left\langle\vec{D}^{2}\right\rangle=q_{1}^{2}\left\langle\vec{x}_{1}^{\,2}\right\rangle+q_{2}^{2}\left\langle\vec{x}_{2}^{\,2}\right\rangle+q_{3}^{2}\left\langle\vec{x}_{3}^{\,2}\right\rangle+2q_{1}q_{2}\left\langle\vec{x}_{1}\cdot\vec{x}_{2}\right\rangle+2q_{1}q_{3}\left\langle\vec{x}_{1}\cdot\vec{x}_{3}\right\rangle+2q_{3}q_{2}\left\langle\vec{x}_{3}\cdot\vec{x}_{2}\right\rangle. \tag{4}\]

Consider now the previous expression's fourth, fifth, and sixth terms. They have the form \(2q_{i}q_{j}\langle\vec{x}_{i}\cdot\vec{x}_{j}\rangle\). At this point, we start by noting the mathematical inequality \(\langle\vec{x}_{i}\cdot\vec{x}_{j}\rangle\leq\sqrt{\langle\|\vec{x}_{i}\|^{2}\rangle\langle\|\vec{x}_{j}\|^{2}\rangle}\), where the equality would only be achieved if there were a complete correlation between the quantities involved. That seems quite difficult to imagine, because such complete correlations would only apply to the full wave function, which in our simplified model involves the three valence quarks. Thus, when the position of the third quark is integrated over, the density matrix for the remaining two particles would of course encode a strong correlation, which, however, ought in general to undergo a certain level of degradation as compared to the very rigid correlations in the state of the complete system4.

Footnote 4: This feature can be illustrated even at the classical level by considering a set of billiard tables set initially with the balls in identical positions, having a player hit the white ball on each table, imparting in all cases the same fixed energy to that ball, with the only quantity that differs from table to table being the initial direction of the hit. Under these conditions, there will be a complete correlation among the conditions of the three balls in all tables, with the entire collection described in terms of a one-parameter family (the initial angle of the billiard stick). However, if we decide to limit the consideration to only two of the balls (ignoring the white ball in all cases, for instance), at any time the correlation between the states of the other two balls will be less than perfect, i.e., it would have been degraded by ignoring relevant degrees of freedom.

A much more physical argument, based on the property of asymptotic freedom of QCD, might be brought to bear on the discussion.
That property indicates that, as the separation between the quarks decreases and thus the relevant energy scales of the QCD interaction increase, the strength of the interaction decreases; thus the force responsible for the neutron's internal structure, and with it the correlations present in the wave function of its constituents, can be expected to weaken, and the quark-quark relative position correlation must become smaller than at large distances. The scale of the transition might therefore be estimated to correspond to energies of the order of, say, 10 GeV, corresponding to an inter-quark separation of order \(10^{-14}\) cm. This leads us to estimate5 \(\langle\vec{x}_{i}\cdot\vec{x}_{j}\rangle\sim 10^{-14}\sqrt{\langle\|\vec{x}_{i}\|^{2}\rangle\langle\|\vec{x}_{j}\|^{2}\rangle}\).

Footnote 5: We thank Alejandro Perez for this observation.

As the valence quarks can be taken as simply confined to a region of the size of the neutron radius, the terms of the form \(\left\langle\vec{x}_{i}^{\,2}\right\rangle\) have to be of the order of the square of the neutron radius. All this supports our original estimate that \(\Delta|D_{n}|\) cannot be much smaller than that implied by the mean square radius of the neutron, \((1/9\,e+1/9\,e+4/9\,e)\times 10^{-13}\) cm \(\sim\mathcal{O}(10^{-13}\)\(e\)-cm). In this sense, there is a manifest discrepancy of \(12-13\) orders of magnitude with respect to the values measured at the Paul Scherrer Institut's experiment and of 15 orders of magnitude for the nEDM@SNS collaboration's expected results. At this stage, we will accept that order-of-magnitude estimate and then try to confront the seemingly paradoxical situation we described at the start of this section.

## 3 A proposal for resolving the puzzle

Once we face this huge discrepancy between what in principle should be possible to measure and what the experimental groups have measured, the need to clarify what is going on becomes evident. We are either confronting some sort of misunderstanding or a serious problem with the theory at its most basic level. Here we will argue that it is the former and that, in fact, what is needed is the recognition that the relevant type of experiments are embodiments of what is called "a weak measurement"; once that is recognized, the mystery is completely resolved. In short, a weak measurement is a type of experiment in which one focuses on performing an ordinary measurement of a certain quantity (referred to as the ancilla or auxiliary observable), which is only indirectly related to the quantity of interest \(O\) and which yields direct information about the expectation value of that quantity in the original state of the system \(|\Psi\rangle\), namely \(\langle\Psi|O|\Psi\rangle\), while hardly disrupting the initial state of the system of interest \(|\Psi\rangle\). The notion was introduced in (Aharonov et al., 1988) and further developed in several subsequent works; see also (Duck et al., 1989). We will illustrate the idea as applied to the situation at hand by focusing on the experiment of the nEDM@SNS collaboration6, making use of the weak measurement formalism. The objective is not to offer a precise characterization of the experiment but to present an idealized version showing how the main ideas work in a toy model situation, which is, however, sufficiently close to that of the actual experiments in question.
That means we will take the liberty to modify in several ways the setups involved, and we will make use of several simplifications in the treatment that, although not strictly rigorous, will allow for a rather complete description of the relevant issues.

Footnote 6: A model for the Paul Scherrer Institut collaboration experiment can also be considered.

A more precise analysis is, in principle, possible, although quite likely impractical, due to the sheer complexity of the actual experiments and the concomitant theoretical characterization that such an endeavor would entail. However, it is worthwhile describing in broad terms the way in which we envision such a realistic analysis could be carried out. In order to do so, we next offer a brief but broadly accurate description of the situation we are concerned with.

### A more detailed description of the experimental setup

The experiment (Leung et al., 2019; Golub and Lamoreaux, 1994) is based on the fact that if there were a non-zero nEDM (\(\vec{D}\neq 0\)), the precession frequency of the neutron spin would be affected when an electric field is applied. More specifically, the associated spin precession when the particle is immersed in constant electric and magnetic fields would be given by the Larmor frequency. When \(\vec{E}\) and \(\vec{B}\) are parallel, the Larmor frequency is

\[\hbar\omega_{\uparrow\uparrow}=2|\mu_{n}B+DE|, \tag{5}\]

where \(E=\|\vec{E}\|\), \(B=\|\vec{B}\|\), and \(\mu_{n}\) is the magnetic dipole moment of the neutron. On the other hand, the corresponding frequency when the electric field is reversed (antiparallel fields) is

\[\hbar\omega_{\uparrow\downarrow}=2|\mu_{n}B-DE|. \tag{6}\]

A comparison of these quantities in the two cases would serve to determine \(\vec{D}\). However, even though the measurement principle is simple, the experiment is not simple at all. For an nEDM of the order of the current best limit, \(|\vec{D}|\sim 10^{-26}\)\(e\)-cm (Abel et al., 2020), the Larmor frequencies in equations (5) and (6) differ by only \(10^{-7}\) Hz, considering a typical experimental electric field of 10 kV/cm (the short numerical check below reproduces this figure). This difference is nothing more than the shift in Larmor frequency that a neutron precessing in a constant magnetic field of 1 mT (or 10 G) would experience if the field had fluctuations of the order of a few fT.
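A one-line estimate reproduces the quoted figure. From Eqs. (5) and (6), \(h\nu_{\uparrow\uparrow}-h\nu_{\uparrow\downarrow}=4DE\) (for \(\mu_{n}B>DE\)), so with the illustrative values \(d_{n}=10^{-26}\)\(e\)-cm and \(E=10\) kV/cm:

```python
# From Eqs. (5)-(6): h * (nu_parallel - nu_antiparallel) = 4 * d_n * E.
h_eV_s = 4.135667696e-15   # Planck constant [eV s]

d_n = 1e-26                # [e-cm], order of the current best limit
E = 10e3                   # [V/cm], the typical field quoted above

# d_n [e-cm] times E [V/cm] is an energy in eV.
delta_nu = 4 * d_n * E / h_eV_s
print(f"frequency difference: {delta_nu:.1e} Hz")   # ~ 1e-7 Hz
```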
In fact, measurement of magnetic fields to that level can only be achieved via a co-magnetometer, i.e., a nuclear or atomic species with a well-known magnetic dipole moment and an electric dipole moment that can be considered zero for practical purposes7. If the co-magnetometer species is set to precess together with neutrons in a homogeneous magnetic field, the changes in its Larmor precession frequency, which can be accurately determined using a SQUID (Gallop and Radcliffe, 1985), can be used to monitor changes in the holding magnetic field. The nEDM@SNS experiment will make use of \({}^{3}\)He as a co-magnetometer, which, in addition to providing a sensitive probe of magnetic field fluctuations, can also provide a means to measure the relative neutron precession frequency. The capture of neutrons on \({}^{3}\)He at low energies proceeds through the ground state of \({}^{4}\)He, which has nuclear spin \(J=0\). This produces a strong spin dependence in the \(n+{}^{3}\)He\(\to p+{}^{3}\)H nuclear reaction; in fact, its cross section is given by

Footnote 7: This can happen in atoms like \({}^{3}\)He due to the atomic Schiff screening effect (Schiff, 1963).

\[\sigma_{n\text{-}^{3}\text{He}}(v)=\frac{\sigma_{0}v_{0}}{v}(1+P_{{}^{3}\text{He}}), \tag{7}\]

where \(\sigma_{0}=5333\) b is the capture cross section at the thermal neutron velocity \(v_{0}=2200\) m/s (Sears, 1986) and \(P_{{}^{3}\text{He}}\) is the polarization of the spins of the \({}^{3}\)He atoms with respect to the direction of the neutron spins. The precession frequency of neutrons or \({}^{3}\)He in a magnetic field \(B\) is \(\omega_{i}=\gamma_{i}B\), with \(\gamma_{i}\) the gyromagnetic ratio of the corresponding species8. Each species precesses at its own rate in the same magnetic field, with their spins oscillating between parallel and antiparallel, and the capture cross section for the nuclear reaction among them also oscillates according to equation (7). The rate of occurrence of the nuclear reaction, and therefore the relative angle between neutron and \({}^{3}\)He spins, can be measured in the experiment, since the produced charged particles, \(p\) and \({}^{3}\)H, produce detectable light in the superfluid \({}^{4}\)He bath in which both \(n\) and \({}^{3}\)He are immersed. Thus \({}^{3}\)He as co-magnetometer provides information on very small fluctuations in the magnetic field while also providing a relative neutron precession frequency, which can be compared in the two configurations of an additional electric field \(\vec{E}\), parallel or antiparallel to the magnetic field \(\vec{B}\). Here we should note that the monitoring of the degree of alignment between the spins of the two species is done by monitoring the photons generated in connection with the nuclear reaction, which, in turn, proceeds when the corresponding spins are antiparallel. Thus, one can naturally consider that the actual measurement, namely the place where the macro-objectification takes place, is in the (macroscopic) photon detectors that are interacting with the electromagnetic field in the relevant region of spacetime.

Footnote 8: In fact, neutrons and \({}^{3}\)He have gyromagnetic ratios that differ by only about 10%.

The measurement mode described above is not the only one on which the nEDM@SNS experiment will rely. To achieve the most stringent limits, the so-called "spin dressed" method will be used (Eckel et al., 2012). This method is based on the fact that the spin precession frequency (of either the neutron or \({}^{3}\)He) can be altered using a time-varying magnetic field perpendicular to the direction of the original homogeneous magnetic field. The precession frequency is affected through the effective gyromagnetic ratio of each species, \(\gamma_{i}^{\prime}\), which scales from the standard ratio \(\gamma_{i}\) by a factor that depends on the frequency and amplitude of the applied time-varying field (\(B_{RF}\) and \(\omega_{RF}\)). The scaling or "dressing" of the spin is characterized by a zeroth-order Bessel function, \(\gamma_{i}^{\prime}=\gamma_{i}J_{0}(\gamma_{i}B_{RF}/\omega_{RF})\). It is possible to find critical points where the Bessel functions of both species (\(n\) and \({}^{3}\)He) have the same value, and thus so do their precession frequencies; so by measuring relative frequencies (through the scintillation light produced by reaction products) in the vicinity of critical points, in the two electric field configurations, it is possible to extract the value of the nEDM. A detailed description of the experiment can be found in (Leung et al., 2019).
### A schematic analysis of the actual experiment as a weak measurement

As we have seen, the experiment involves a large number of subsystems, all of which should, in principle, be given a quantum treatment, while, in order to avoid having to confront the measurement problem in quantum theory (Maudlin, 1995), we prescribe a reasonable place to set the "Heisenberg cut" and stipulate that certain suitable elements act as measuring devices, which generate "effective classical outputs". In the experimental setup described above, it seems quite natural to single out the light detectors (i.e., the photo-multipliers) as the actual measuring devices, which can be considered as constantly monitoring the quantum state of the electromagnetic (EM) field in the corresponding frequency range. Moreover, we could take the externally applied electric and magnetic fields and give them a classical treatment (or, alternatively, take them to be described quantum mechanically by a suitable coherent state of the extremely low-frequency modes of the EM field). Then we would consider a quantum system corresponding to the second-quantized neutron field, effective second-quantized \({}^{3}\)He, \(p\), and \({}^{3}\)H fields, as well as the EM field modes in a suitable energy range, interacting (via a suitable effective Hamiltonian describing the nuclear reaction and the photon emission) in the presence of the (very low energy) external EM fields. The initial state would correspond to a single neutron in a well-localized wave packet with spin aligned along the appropriate direction, a single \({}^{3}\)He nucleus, the vacuum state for the \(p\) and \({}^{3}\)H fields, the vacuum for the EM high-energy modes, and the unexcited state for the detectors. One would then write the initial state of the whole system including the detectors (described as low-level Unruh-DeWitt detectors interacting with the EM field), consider the evolution of such a state for a certain period of time, and then compute the amplitude, and from it the probability, for one of the detectors to be excited (following a von Neumann measurement scheme in which one assumes that the detectors are, at the end of the experiment (i.e., at the appropriate time), either excited or unexcited). That result would depend on various quantities, including, as we will see in the next section, the expectation value of the EDM of the neutron. As is the case with any weak measurement, a single experiment will provide very little information about the quantity of interest, but a large sequence of identical experiments can provide arbitrarily accurate information about it. As is well known, one can move the "Heisenberg cut" in a large number of ways, obtaining, in practice, equivalent results. In that fashion one could simplify or complicate the analysis. One should, however, avoid moving the "cut" to the point that one would end up describing a strong measurement of the neutron EDM, as that would misrepresent what is actually taking place experimentally. The point we have been making is that the experiments in question could not possibly correspond to such a strong measurement.

### A simplified analysis of the experiment as a weak measurement

As a first simplification, in our analysis we will focus only on the neutron, its internal degrees of freedom, and the spatial orientation of its spin.
We will treat the external electromagnetic field classically and assume that the magnetic field \(\vec{B}=0\), so that the precession of the spin direction depends only on \(\vec{D}\) and \(\vec{E}\). Then, if \(\vec{D}=0\), there will be no precession. The next simplification we will make is that the direction of polarization of the neutrons will be fixed in the same plane as that of the \({}^{3}\)He atoms' polarization, but with an angular difference of zero, so that they are totally parallel. Thus, for the capture reaction \(n+{}^{3}He\to p+{}^{3}H\) to occur with a probability different from zero, the neutron would need to acquire a component of its spin in the opposite direction.

To carry out a simplified version of the weak measurement of the neutron's EDM, we will distinguish between the neutron's internal degrees of freedom, which determine the magnitude of the EDM, and the degrees of freedom that characterize the spatial orientation of the neutron. This approach relies on a simplified form of the Wigner-Eckart theorem, known as the _Projection Theorem_ (Sakurai and Tuan, 1994). According to this theorem, for any vector operator \(\hat{\vec{V}}\) and \(j\neq 0\),

\[\langle\alpha^{\prime},jm^{\prime}|\,V_{q}\,|\alpha,jm\rangle=\frac{\langle\alpha^{\prime},jm|\,\hat{\vec{J}}\cdot\hat{\vec{V}}\,|\alpha,jm\rangle}{j(j+1)}\,\langle jm^{\prime}|\,J_{q}\,|jm\rangle\,, \tag{8}\]

where \(\hat{\vec{V}}\) and \(\hat{\vec{J}}\) are in the spherical basis, and \(q=(-1,0,1)\). Since we are focusing on just one of these components, we can select it as our zeroth entry.

In order to proceed with our analysis, it is convenient to introduce the operator defined by \(\hat{\mathcal{D}}=(\hat{\vec{S}}\cdot\hat{\vec{D}}+\hat{\vec{D}}\cdot\hat{\vec{S}})/2\). We note that it is a Hermitian operator which serves to codify the magnitude of the electric dipole moment while ignoring its orientation (relying for this on the Wigner-Eckart theorem). In fact, by leveraging the vectorial nature of the electric dipole moment operator \(\hat{\vec{D}}\) (for clarity, we have reintroduced the hat notation over the operators in the following calculations) and the previous theorem, we can express the expectation value of each of its components in the following manner:

\[\left\langle\hat{D}_{i}\right\rangle=\frac{\left\langle\hat{\mathcal{D}}\right\rangle}{j(j+1)}\left\langle\hat{J}_{i}\right\rangle, \tag{9}\]

where \(\hat{\vec{J}}\) represents the angular momentum operator, which in our context corresponds to the neutron's spin (\(\hat{\vec{S}}\)); thus \(j=1/2\). The states for which we will calculate the expectation values will be defined below, but, as the theorem requires, they are angular momentum eigenstates. It is evident that \([\hat{\mathcal{D}},\hat{\vec{J}}]=0\), reinforcing the notion that \(\hat{\mathcal{D}}\) is a scalar operator, as implied by the notation. However, it is worth noting that the equality (9) cannot be regarded as an operator identity. This is because the components of \(\hat{\vec{D}}\) commute among themselves, unlike those of \(\hat{\vec{J}}\), which do not. The significance of this theorem lies in its utility for calculating the matrix elements \(\langle\alpha|\hat{\mathcal{D}}|\beta\rangle\) required for our computation.
The following analysis aims to exhibit how the quantum uncertainty of the EDM enters into the analysis of the weak measurement, in order to ascertain its influence on the accuracy with which the quantity of interest can be measured. To do this, we proceed to write the quantum state of the neutron, explicitly separating the degrees of freedom as indicated above. That is, we express the state of a neutron essentially at rest in the lab and with a spin orientation \(\vec{\sigma}\) as \(|\psi\rangle=N\int d(d)\,e^{-(d-d_{n})^{2}/4\Delta^{2}}|d\rangle\otimes|\vec{\sigma}\rangle\). Here, \(|d\rangle\) represents the eigenstates of \(\hat{\mathcal{D}}\) (normalized according to \(\langle d^{\prime}|d\rangle=\delta(d-d^{\prime})\)), \(d_{n}=\langle n|\hat{\mathcal{D}}|n\rangle\) signifies the expectation value of the dipole moment scalar in the neutron basis state, and \(\Delta\) represents its quantum uncertainty. It is important to note that, for any neutron state with a specific orientation, the relevant quantities can be obtained using equation (9). The notation \(|\vec{\sigma}\rangle\) denotes eigenstates of the spin along the direction \(\vec{\sigma}\), where \(\vec{\sigma}\cdot\hat{\vec{S}}|\vec{\sigma}\rangle=(1/2)|\vec{\sigma}\rangle\). At this juncture, it is also noteworthy that, due to the Projection Theorem, \(|\langle\vec{D}\rangle|\) is fixed by \(d_{n}\)9.

Footnote 9: Here, we have defined \(|\langle\vec{D}\rangle|\equiv d_{n}\).

Our analysis starts by writing the initial state representing a pre-selected neutron, prepared in the lab to be essentially at rest and with its spin pointing in the direction \(+z\), namely \(|\psi_{0}\rangle=N\int d(d)\,e^{-(d-d_{n})^{2}/4\Delta^{2}}|d\rangle\otimes|z+\rangle\). Proper state normalization implies \(N^{2}\int d(d)\,e^{-(d-d_{n})^{2}/2\Delta^{2}}=1\). The neutron is then subjected to the effect of an external electric field pointing in the direction \(+y\) that is applied during a finite time interval from \(t=0\) to \(t=t_{f}\), which we take to be switched on and off slowly enough to avoid transient effects. The interaction of the neutron with this electric field is represented by the interaction Hamiltonian

\[H_{i}=\vec{E}(t)\cdot\hat{\vec{D}}, \tag{10}\]

where, as noted, the electric field points in the direction \(+y\). We are interested in computing the probability amplitude for finding the neutron in the post-selected state \(|\psi^{\prime}\rangle=N\int d(d)\,e^{-(d-d_{n})^{2}/4\Delta^{2}}|d\rangle\otimes|z-\rangle\) at \(t=t_{f}\). Thus we are dealing with a version of a weak measurement of the electric dipole moment involving pre- and post-selected states (Aharonov et al., 1988). The direct calculation of this amplitude gives:

\[\begin{split}\mathcal{A}&=\langle\psi^{\prime}|\mathcal{T}e^{i\int H_{i}\,dt}|\psi_{0}\rangle=\langle\psi^{\prime}|\mathcal{T}e^{i\int E_{y}(t)\hat{D}_{y}\,dt}|\psi_{0}\rangle\\ &=\langle\psi^{\prime}|e^{i\hat{D}_{y}\int E_{y}(t)\,dt}\,N\int d(d)\,e^{-(d-d_{n})^{2}/4\Delta^{2}}\\ &\quad\times|d\rangle\otimes\frac{1}{\sqrt{2}}(|y+\rangle+|y-\rangle),\end{split} \tag{11}\]

where \(\mathcal{T}\) stands for the time-ordered product (which will be irrelevant in this treatment, where the electric field is taken as classical and with fixed orientation). In the last line we have used \(|z+\rangle=\frac{1}{\sqrt{2}}(|y+\rangle+|y-\rangle)\).
We define now \(A\equiv\int_{0}^{t_{f}}E(t)\,dt\) and \(B\equiv\frac{A}{2\sqrt{(1/2)(1/2+1)}}\), and then write

\[\begin{split}\mathcal{A}&=\langle\psi^{\prime}|e^{iA\hat{\mathcal{D}}\hat{S}_{y}/\sqrt{(1/2)(1/2+1)}}\,N\int d(d)\,e^{-(d-d_{n})^{2}/4\Delta^{2}}\\ &\quad\times|d\rangle\otimes\frac{1}{\sqrt{2}}(|y+\rangle+|y-\rangle)\\ &=\langle\psi^{\prime}|\,N\int d(d)\,e^{-(d-d_{n})^{2}/4\Delta^{2}}\\ &\quad\times|d\rangle\otimes\frac{1}{\sqrt{2}}\left(e^{i\hat{\mathcal{D}}B}|y+\rangle+e^{-i\hat{\mathcal{D}}B}|y-\rangle\right).\end{split} \tag{12}\]

Now we write \(\langle\psi^{\prime}|=N\int d(d^{\prime})\,e^{-(d^{\prime}-d_{n})^{2}/4\Delta^{2}}\langle d^{\prime}|\otimes\frac{1}{\sqrt{2}}(\langle y+|-\langle y-|)\), so that

\[\begin{split}\mathcal{A}&=N^{2}\int d(d)\,e^{-(d-d_{n})^{2}/2\Delta^{2}}\,\frac{1}{2}\left(e^{idB}-e^{-idB}\right)\\ &=\frac{1}{2}\left(e^{id_{n}B}-e^{-id_{n}B}\right)e^{-B^{2}\Delta^{2}/2}=i\sin{(d_{n}B)}\,e^{-B^{2}\Delta^{2}/2}.\end{split} \tag{13}\]

This expression illustrates the feasibility of measuring \(d_{n}\), the expectation value of the electric dipole moment of the neutron, to arbitrarily high precision, regardless of the magnitude of the uncertainty \(\Delta\). This task is accomplished by measuring the probability of this transition with arbitrarily high accuracy, something that can be achieved simply by repeating the experiment sufficiently many times (or using a sufficiently large number of neutrons). Note that, as expected, if the applied electric field and the neutron electric dipole moment satisfy \(d_{n}B=n\pi\), the amplitude would vanish, as this would correspond to a precession that returned the neutron's spin orientation to its original one.

A valid concern may arise regarding the expression for the amplitude and, consequently, the probability, as it appears to rely on the variable \(\Delta\), which remains, in principle, an unknown quantity. However, the key insight is that the term \(B\) depends on the applied electric field's magnitude and the duration of its application. Notably, the functional dependence on \(B\) is distinctly determined by the two parameters \(d_{n}\) and \(\Delta\). This distinct functional form enables the separate extraction of the values of these two quantities using the dependence on the electric field's characteristics and duration. In fact, in the limit where \(B\) is very small (so that the lowest-order term in a power expansion can be taken as reliable), we have:

\[\mathcal{A}\approx i\,d_{n}B\,e^{-B^{2}\Delta^{2}/2}, \tag{14}\]

so that the probability

\[\mathcal{P}\approx|d_{n}B|^{2}e^{-B^{2}\Delta^{2}}\approx|d_{n}B|^{2} \tag{15}\]

can be converted directly into a determination of \(d_{n}\) (assuming \(B\), which depends only on the applied electric field, is known). In practice, the experiments performed up to this date have served to set bounds on such a probability, which, in turn, become bounds on the quantity \(d_{n}\). We note that this quantity is positive definite irrespective of the sign of \(d_{n}\) (i.e., of whether the EDM is aligned in the same direction as the spin or the opposite one). It is worth mentioning that when \(B\) is not small enough, the effect of the electric dipole moment's quantum uncertainty \(\Delta\), as shown in (13) above, reduces the probability by a factor \(e^{-B^{2}\Delta^{2}}\). This is not a very intuitive feature, and a deeper understanding would probably require a more general analysis, which is beyond the scope of the present manuscript.
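As a brief numerical illustration of Eqs. (13)-(15), the sketch below evaluates the transition probability as a function of the pulse area \(B\) (in units where \(\hbar=1\), so that \(d_{n}B\) is a phase), using the order-of-magnitude values \(d_{n}\sim 10^{-26}\)\(e\)-cm and \(\Delta\sim 10^{-13}\)\(e\)-cm discussed in the text. The departure of \(\mathcal{P}(B)\) from the small-\(B\) limit \(|d_{n}B|^{2}\) as \(B\Delta\) approaches unity is what would, in principle, allow the separate determination of \(d_{n}\) and \(\Delta\).

```python
import numpy as np

# Eq. (13): P(B) = sin^2(d_n * B) * exp(-(B * Delta)^2)
d_n, Delta = 1e-26, 1e-13          # [e-cm], illustrative magnitudes
B = np.logspace(10, 13.6, 8)       # pulse areas [1/(e-cm)]

P_full = np.sin(d_n * B) ** 2 * np.exp(-(B * Delta) ** 2)
P_small_B = (d_n * B) ** 2         # Eq. (15): valid while B * Delta << 1

for b, pf, ps in zip(B, P_full, P_small_B):
    print(f"B = {b:9.3e}   P = {pf:.3e}   small-B limit = {ps:.3e}")
```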
## 4 Discussion and conclusions

The recognition that there is a profound difference between a strong measurement of a physical quantity--a process that often involves altering the state of the system as a result of the measuring process--and a weak measurement of the expectation value of that quantity, making use of one or several auxiliary systems (_ancillas_) that are made to interact very weakly with the system of interest and which lead to negligible changes in the state of the system, represents, we believe, the solution to what seemed to be a serious discrepancy between the magnitude of the quantum uncertainty and the bounds that are extracted from the experiments on the nEDM. We started with something that seemed to be a conflict between experimental data and what quantum theory itself was expected to allow to be measured, and ended with what we think is a clear understanding of what is going on.

Moreover, we think several lessons can be taken to heart from this analysis; many of these are well-known general lessons that are sometimes overlooked in practice. First, we should not confuse the value of a physical observable \(O\) represented by the operator \(\hat{O}\) with its expectation value \(\langle\psi|\hat{O}|\psi\rangle\) in a certain state \(|\psi\rangle\). According to quantum theory, a system only has a definite value of an observable \(O\) if its state is one of the eigenstates of that operator, corresponding to one of the eigenvalues \(\{o_{i}\}\) of the observable; that is, if \(|\psi\rangle\) is such that \(\hat{O}|\psi\rangle=o_{i}|\psi\rangle\). The expectation value, on the other hand, is always a well-defined quantity \(\langle\psi|\hat{O}|\psi\rangle\), and there does not seem to be any basic principle preventing its measurement with arbitrarily high precision. In fact, one can do that by preparing a large enough number of copies of the system, all in the same state, and performing on each one a strong measurement. Such strong measurements in general do alter the individual system's state in a serious way. The point, however, is that in such a scheme one expects the quantum uncertainty of the initial state to show up as a dispersion in the distribution of results, which should translate into statistical errors emerging from the analysis of the experiment. This issue could become particularly problematic when, instead of determining a finite value, one is attempting to place bounds on a quantity that is compatible with zero. In that case, the statistical dispersion can be expected to overwhelm the sought-for bounds. That is the basis of the puzzling situation we were describing at the beginning of this work. However, the quantity \(\langle\psi|\hat{O}|\psi\rangle\) can also be measured by other means, as illustrated in the weak measurement process discussed in this work. In the case at hand, although people often talk about measuring the nEDM, motivated by the requirement for it to vanish assuming CP symmetry of the strong interactions10, what we have, in fact, are measurements of the expectation value of the nEDM. Moreover, it should be emphasized that the quantity that is required to vanish by CP symmetry of the strong interactions is precisely such an expectation value. The CP symmetry does not require the neutron to be an eigenstate of the EDM operator with vanishing eigenvalue.

Footnote 10: As everywhere else in our discussion, we are ignoring the effects of CP violation in the electro-weak sector.
We hope the discussion we have presented serves to clarify what seemed, at first sight, a rather puzzling situation.

## Acknowledgements

We acknowledge helpful discussions with Prof. David Albert, Prof. Lev Vaidman, and Prof. Alejandro Perez. D.S. received partial support from the Conahcyt grant 140630. L.B.P. acknowledges the support of PAPIIT-UNAM grant AG102023.
2309.05213
Towards Federated Learning Under Resource Constraints via Layer-wise Training and Depth Dropout
Large machine learning models trained on diverse data have recently seen unprecedented success. Federated learning enables training on private data that may otherwise be inaccessible, such as domain-specific datasets decentralized across many clients. However, federated learning can be difficult to scale to large models when clients have limited resources. This challenge often results in a trade-off between model size and access to diverse data. To mitigate this issue and facilitate training of large models on edge devices, we introduce a simple yet effective strategy, Federated Layer-wise Learning, to simultaneously reduce per-client memory, computation, and communication costs. Clients train just a single layer each round, reducing resource costs considerably with minimal performance degradation. We also introduce Federated Depth Dropout, a complementary technique that randomly drops frozen layers during training, to further reduce resource usage. Coupling these two techniques enables us to effectively train significantly larger models on edge devices. Specifically, we reduce training memory usage by 5x or more in federated self-supervised representation learning and demonstrate that performance in downstream tasks is comparable to conventional federated self-supervised learning.
Pengfei Guo, Warren Richard Morningstar, Raviteja Vemulapalli, Karan Singhal, Vishal M. Patel, Philip Andrew Mansfield
2023-09-11T03:17:45Z
http://arxiv.org/abs/2309.05213v1
# Towards Federated Learning Under Resource Constraints via Layer-wise Training and Depth Dropout

###### Abstract

Large machine learning models trained on diverse data have recently seen unprecedented success. Federated learning enables training on private data that may otherwise be inaccessible, such as domain-specific datasets decentralized across many clients. However, federated learning can be difficult to scale to large models when clients have limited resources. This challenge often results in a trade-off between model size and access to diverse data. To mitigate this issue and facilitate training of large models on edge devices, we introduce a simple yet effective strategy, _Federated Layer-wise Learning_, to simultaneously reduce per-client memory, computation, and communication costs. Clients train just a single layer each round, reducing resource costs considerably with minimal performance degradation. We also introduce _Federated Depth Dropout_, a complementary technique that randomly drops frozen layers during training, to further reduce resource usage. Coupling these two techniques enables us to effectively train significantly larger models on edge devices. Specifically, we reduce training memory usage by 5\(\times\) or more in federated self-supervised representation learning, and demonstrate that performance in downstream tasks is comparable to conventional federated self-supervised learning.

## 1 Introduction

Over the last several years, deep learning has witnessed a rapid paradigm shift towards large foundational models trained on massive datasets (Brown et al., 2020; Chowdhery et al., 2022). These models learn representations which often extend to diverse downstream tasks. However, when pre-training data is distributed across a large number of devices, it becomes impractical to train models using centralized learning. In these cases, Federated Learning (FL; Konecny et al., 2016) allows participating clients to train a model together without exchanging raw data. This privacy-preserving property makes FL a popular choice for a range of applications, including face recognition (Mei et al., 2022), autonomous driving (Li et al., 2021), recommendation systems (Ning et al., 2021), and self-supervised representation learning (Vemulapalli et al., 2022). In self-supervised learning, SimCLR (Chen et al., 2020), BYOL (Grill et al., 2020), and SimSiam (Chen and He, 2021) are widely used approaches that can be adapted for use in FL settings using algorithms like Federated Averaging (FedAvg; McMahan et al., 2017). Representation learning benefits from large models due to their capacity to learn more nuanced and reliable representations of the data (Chen et al., 2022; Tran et al., 2022). However, in cross-device FL settings, the limited resources of edge devices (including memory, computation capacity, and network bandwidth) impede the development of large models (Wang et al., 2021; Kairouz et al., 2021). In this work, we focus on federated training of large representation learning models on a large number of edge devices under resource constraints. Typically in FL, clients' models share a single global architecture and perform end-to-end training in each communication round (McMahan et al., 2017). However, many edge devices (e.g., Internet of Things (IoT) devices, mobile phones, tablets, and personal computers) lack sufficient memory and compute to train most existing large ML models.
For example, the Google Pixel 6 has 12 GB of memory, which is insufficient to naively train a multi-billion parameter model. Communication of such a model and its gradient updates during every round of FL is also prohibitively data-intensive and time-consuming. These resource constraints create obstacles for real-world federated learning applications with large-scale models.

**Related Work.** One direction to manage resource constraints for federated learning on edge devices is to carefully select the model architecture and hyperparameters (Cheng et al., 2022) to ensure that the model can be trained and run efficiently on edge devices. Another direction is to use techniques such as model compression (Xu et al., 2020) and pruning (Jiang et al., 2022) to reduce the size and complexity of the model, making it more suited for training and deployment on edge devices. This can be done by removing redundant or unnecessary layers within the model, or by using low-precision arithmetic to reduce the amount of memory and computation required. In both cases, model performance degradation is usually unavoidable. Some methods rely on partially local models to avoid communicating entire models with a central server (Singhal et al., 2021), but these approaches do not reduce local memory usage on edge devices. Other approaches involve retaining part of a model on a central server (Augenstein et al., 2022), which can reduce the need for local resource usage and is thus complementary to our work, but does not itself enable training larger local models. In addition, FL methods designed for heterogeneous systems (Caldas et al., 2018; Horvath et al., 2021; Mei et al., 2022) are able to construct sub-models at different complexities from one unified base model. One of the early works in this direction is Federated Dropout (Caldas et al., 2018), which allows users to train using smaller subsets of the global model, reducing the client communication and computation costs. Empirically, this method can drop up to 50% of model parameters, but doing so degrades model performance. FjORD (Horvath et al., 2021) improves upon Federated Dropout by introducing an ordered dropout technique that drops adjacent components of the model instead of random neurons. Experiments by Horvath et al. (2021) show that ordered dropout can bring computational benefits and better model performance. More recently, FLANC (Mei et al., 2022) formulates networks at different capacities as linear combinations of a shared neural basis set, so sub-models can be composed by using capacity-specific coefficients. While these methods can reduce average local resource usage in FL, full model training is still needed for certain clients, and thus the resource usage upper bound is still determined by the base model size.

**Our Contributions.** Full utilization of available resources in cross-device FL remains a challenging task. In this paper, we propose _Federated Layer-wise Learning_, a strategy for resource-saving federated training. In particular, training is divided into several phases. In each phase, we update only one active layer and freeze the parameters in the fixed layers. As shown in Fig. 2(b), our experimental evaluation demonstrates that Federated Layer-wise Learning (FLL) can significantly reduce the resource usage of a single client compared to federated end-to-end learning (FEL) in all aspects. Specifically, FLL uses only 7-22% of the memory, 8-39% of the computation, and 8-54% of the communication of FEL.
In addition, we demonstrate that _Depth Dropout_ is an effective complementary strategy to federated layer-wise learning, which further reduces resource usage upper bounds without degrading model performance.

## 2 Methods

We consider the canonical cross-device FL scenario, in which a large distributed population of clients contributes to training a global model using their locally stored data (Kairouz et al., 2021). In such scenarios, the general training process involves the following steps: first, a coordinating server sends the current set of model parameters to each contributing device. Next, each device runs a local training algorithm and sends the result back to the server. Finally, the server aggregates the model updates received from all devices to determine the new set of model parameters and restarts the cycle. As previously discussed, device resource constraints limit real-world large-scale federated learning applications and lead to a trade-off between model complexity and data accessibility.

### 2.1 Federated Layer-wise Learning

To address resource constraints when training models in FL, we propose a simple yet effective Federated Layer-wise Learning technique. We motivate and apply the method to self-supervised learning (Chen et al., 2020) in this work, but this approach is also broadly applicable. In contrast to downstream vision tasks (_e.g._, classification) that require the extraction of compact features (_i.e._, the interpretation of the input) from the output of neural networks, contrastive representation learning employs the principle of learning representations of the input data by comparing and contrasting it with other similar and dissimilar examples. Since this loss only refers to layer activations, it can be attached to any encoder layer. When applied to residual networks, we expect the effect of applying the loss on successive layers to be progressive. This motivates our Federated Layer-wise Learning method, as depicted in Fig. 1(a). The proposed method divides the holistic training process into several phases and progressively grows the model in an incremental schedule, starting from the shallow layers and moving to deeper layers. Each layer is trained for a predefined number of communication rounds before proceeding to the next layer. We only need to compute gradients and upload them to the server for the active layer, which simultaneously reduces memory usage, compute, and communication costs. We can control resource usage by varying the number of active and fixed layers during training, potentially treating multiple layers as active at a given round. As an aside, in cross-silo FL (where clients participate repeatedly in training), the fact that only one active layer is being trained enables us to avoid communicating the rest of the model to devices on most rounds, further reducing communication. A minimal sketch of the per-round client update under this scheme is given below.
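The following is a framework-level illustration in PyTorch of the freezing logic described above, not the authors' implementation: the tiny linear encoder, the squared-activation loss standing in for the contrastive objective, and all hyperparameter values are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def set_active_layer(model: nn.Sequential, active_idx: int) -> None:
    # Freeze every layer except the active one; only the active layer's
    # parameters receive gradients and need to be uploaded to the server.
    for idx, layer in enumerate(model):
        for p in layer.parameters():
            p.requires_grad_(idx == active_idx)

# A hypothetical 4-layer encoder standing in for the ViT blocks.
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(4)])

for active in range(4):                        # one training phase per layer
    set_active_layer(model, active)
    opt = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3)
    for _ in range(10):                        # local steps within a round
        x = torch.randn(16, 64)
        loss = model(x).pow(2).mean()          # stand-in for the contrastive loss
        opt.zero_grad(); loss.backward(); opt.step()
    # Only the active layer's parameters form the client-to-server update.
    update = {name: p.detach().clone()
              for name, p in model.named_parameters() if p.requires_grad}
```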
### Depth Dropout

While the proposed Federated Layer-wise Learning significantly alleviates resource usage, our target scenario is cross-device FL, in which only a relatively small subset of active clients is selected from a large pool of participants. It is likely that a given client will not be selected twice during the entire federated training process (Wang et al., 2021). Thus, it is necessary to download both fixed and active layers from the server to the clients. This can still present a challenge for clients with resource constraints, as downloading a large number of fixed layers and performing forward passes can be computationally intensive at the end of the training process. To this end, we propose _Depth Dropout_ to address the increasing resource usage introduced by a large number of fixed layers. Fig. 1(b) shows how to apply Depth Dropout to a 5-layer model with a budget of 3 layers. Training begins by progressively expanding the model to reach its maximum capacity, which in this case is 3 layers. During the initial three phases, we perform standard layer-wise training. In the last two phases, we randomly remove certain fixed layers. However, the first layer, which includes the Transformer patch encoding and position embedding, is never removed. For example, in phase 4, we have the option to remove either layer 1 or layer 2, while in phase 5, we have three candidates to remove. This randomization process is akin to the Dropout technique used in neural networks, and is only applied during training. During inference, the full model with 5 layers is utilized.

Figure 1: (a) Overview of Federated Layer-wise Learning. (b) Schematics of the training procedure with Depth Dropout for a 5-layer model with a budget of 3 layers.
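The layer sampling itself is a few lines; the following sketch (our naming, with layers 0-indexed so that layer 0 holds the patch and position embeddings) shows one way to pick which layers participate in a training step under a fixed budget:

```python
import random

def sample_forward_layers(num_built, active_idx, budget):
    """Choose the layers used in one training forward pass.

    Layer 0 (patch + position embedding) and the active layer are always
    kept; randomly chosen fixed layers are dropped until at most `budget`
    layers remain. Applied only during training; inference uses all layers.
    """
    keep = {0, active_idx}
    candidates = [i for i in range(num_built) if i not in keep]
    n_extra = max(0, budget - len(keep))
    keep.update(random.sample(candidates, min(n_extra, len(candidates))))
    return sorted(keep)

# Phase 5 of Fig. 1(b): 5 layers built, last layer active, budget of 3.
# Layers 1, 2, and 3 are the three removal candidates; two of them are dropped.
print(sample_forward_layers(num_built=5, active_idx=4, budget=3))  # e.g., [0, 2, 4]
```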
## 3 Experiments

**Datasets and Implementation.** We partition the standard CIFAR-100 (Krizhevsky et al., 2009) training set into 125 clients to simulate a cross-device FL scenario. The original test set in CIFAR-100 (Krizhevsky et al., 2009) is considered the global test set used to measure performance. ViT-Ti/16 (Dosovitskiy et al., 2020) is used as the representation learning backbone. All models are trained using the following settings: SGD optimizer for the server and clients; client learning rate of \(1\times 10^{-3}\); batch size of 16; 32 active clients per round.

**The Effectiveness of Layer-wise Training.** Here we compare our approach with federated end-to-end learning on standard benchmarks. Results with different setups on CIFAR-100 are shown in Table 1. We can make the following observations: (i) Both models pre-trained by Federated Layer-wise Learning and federated end-to-end learning significantly outperform the model without pre-training, indicating the effectiveness of self-supervised representation learning in federated settings. (ii) While the Federated Layer-wise Learning approach is an approximation of federated end-to-end learning, it achieves performance on par with the end-to-end method in downstream evaluation tasks. In particular, the performance gap is less than 1% when using the representation from the last layer (layer 12) of the network. (iii) We found that intermediate representations from the Federated Layer-wise Learning model performed better than those from the federated end-to-end learning model in certain downstream tasks. For example, in a linear downstream task using the representation from layer 3, the Federated Layer-wise Learning model achieved 28.3% accuracy, while the federated end-to-end learning model achieved 23.9%. This trend was also observed in other downstream tasks using different intermediate representations. The superior performance of intermediate representations is due to the contrastive loss being applied to all layers during the layer-wise pre-training process. These results suggest that models trained using the proposed method can easily compose sub-models of varying complexities.

Table 1: Experimental results on CIFAR-100 with different pre-training strategies. For image classification, we report standard Top-1 accuracy (%).

| Pre-training Method | Linear (Layer 1) | Linear (Layer 3) | Linear (Layer 6) | Linear (Layer 12) | Finetune (Layer 1) | Finetune (Layer 3) | Finetune (Layer 6) | Finetune (Layer 12) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Federated Layer-wise Learning | 25.3 | 28.3 | 29.2 | 29.8 | 30.1 | 35.6 | 37.2 | 37.8 |
| Federated End-to-end Learning | 18.0 | 23.9 | 27.8 | 30.3 | 25.4 | 32.2 | 35.7 | 38.6 |
| Training from scratch (without pre-training) | 9.2 | 10.0 | 10.5 | 11.3 | 18.7 | 24.2 | 27.6 | 29.2 |

We conducted additional experiments to further investigate the effect of model size (number of layers) and number of training rounds per layer on the performance of Federated Layer-wise and end-to-end learning. The results of these experiments are shown in Figure 2(a). Increasing the number of layers generally led to improved performance for both learning approaches. We also found that the difference in performance between the two approaches was minimal when the number of training rounds per layer was small (4k) but became more pronounced when the number of training rounds per layer was increased (12k). Based on these results, it appears that layer-wise learning may require slightly more training rounds per layer to reach the same performance as end-to-end learning. This may be due to the fact that layer-wise learning is an approximation of end-to-end learning. However, the performance gap between the two approaches is generally less than 1%.

**The Effectiveness of Depth Dropout.** We evaluate Depth Dropout together with Federated Layer-wise Learning. We conducted two sets of experiments: applying Depth Dropout to a 6-layer model and a 12-layer model, with a fixed dropout rate of 50% (meaning half of the fixed layers were dropped). The results, shown in Tables 2 and 3, demonstrate that Depth Dropout does not significantly impact model performance. For example, the 6-layer model with Depth Dropout achieved 37.0% accuracy after finetuning, while the 6-layer model trained with only Layer-wise Learning achieved 37.2% accuracy. We observed similar results for the 12-layer model with Depth Dropout, which achieved 37.6% accuracy after finetuning, compared to 37.8% for the model trained with normal Layer-wise Learning. Additionally, Depth Dropout significantly reduced resource usage. It is worth noting that the resource usage of the 12-layer model with a budget of 6 layers was equivalent to the resource usage of a 6-layer model without Depth Dropout. As shown in Fig. 2(b), Depth Dropout reduced the upper bounds of resource usage in all three categories, especially communication cost. The original upper bound for layer-wise training was 54%, but it was reduced to 29% when the dropout rate was set to 50%.
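The linear rows in Tables 1 and 3 correspond to freezing the pre-trained encoder and fitting a classifier on activations taken at a chosen depth; a minimal sketch of such a probe, reusing the list-of-layers encoder assumed in the earlier sketch (our scaffolding, not the authors' evaluation code):

```python
import torch
import torch.nn as nn

def linear_probe(encoder, layer_idx, train_loader, num_classes=100, epochs=10, lr=1e-3):
    """Fit a linear classifier on frozen features from layer `layer_idx`."""
    encoder.eval()

    def features(x):
        with torch.no_grad():
            for layer in encoder.layers[:layer_idx + 1]:
                x = layer(x)
        return x.flatten(1)

    # Infer the feature width from one batch, then train only the probe.
    x0, _ = next(iter(train_loader))
    probe = nn.Linear(features(x0).size(1), num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            loss = loss_fn(probe(features(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```

The finetune columns differ only in that the encoder's parameters are unfrozen and optimized jointly with the classifier.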
## 4 Conclusion

Our study presents Federated Layer-wise Learning for devices with limited resources, which simultaneously reduces the demands on memory, computation, and communication for individual clients without significantly compromising performance compared to end-to-end training. We demonstrate that our proposed Depth Dropout technique is an effective complement to Federated Layer-wise Learning, as it further reduces resource usage across all categories with minimal loss of performance, even when dropping half of the fixed layers. Future work can evaluate these methods on larger-scale and naturally partitioned datasets, which would enable more realistic analysis of generalization performance across devices (Yuan et al., 2021). Additionally, we intend to investigate the effects of varying the dropout rate for the Depth Dropout technique. Furthermore, our method can be integrated with other memory-efficient training techniques, such as model compression (Deng et al., 2020) and activation paging (Patil et al., 2022), to potentially further reduce resource usage.

Table 2: Accuracy of Depth Dropout with Federated Layer-wise Learning, under finetuning downstream evaluation. The budget specifies the maximum number of layers involved in training.

| Model Size | 6 layers | 6 layers (Budget: 3 layers) | 3 layers | 12 layers | 12 layers (Budget: 6 layers) | 6 layers |
| --- | --- | --- | --- | --- | --- | --- |
| Accuracy | 37.2 | 37.0 | 32.8 | 37.8 | 37.6 | 37.2 |

Table 3: Accuracy of Depth Dropout with Federated Layer-wise Learning, under linear downstream evaluation. The budget specifies the maximum number of layers involved in training.

| Model Size | 6 layers | 6 layers (Budget: 3 layers) | 3 layers | 12 layers | 12 layers (Budget: 6 layers) | 6 layers |
| --- | --- | --- | --- | --- | --- | --- |
| Accuracy | 29.2 | 29.1 | 28.3 | 29.8 | 29.7 | 29.2 |

Figure 2: (a) Comparison between Federated Layer-wise Learning (FLL) and federated end-to-end learning (FEL) under different numbers of training rounds and model sizes. Results are reported on CIFAR-100 with downstream finetuning evaluation. (b) Resource usage comparison of a client per round after combining Federated Layer-wise Learning and Depth Dropout.
2309.13471
Cloud Watching: Understanding Attacks Against Cloud-Hosted Services
Cloud computing has dramatically changed service deployment patterns. In this work, we analyze how attackers identify and target cloud services in contrast to traditional enterprise networks and network telescopes. Using a diverse set of cloud honeypots in 5 providers and 23 countries as well as 2 educational networks and 1 network telescope, we analyze how IP address assignment, geography, network, and service-port selection influence what services are targeted in the cloud. We find that scanners that target cloud compute are selective: they avoid scanning networks without legitimate services and they discriminate between geographic regions. Further, attackers mine Internet-service search engines to find exploitable services and, in some cases, they avoid targeting IANA-assigned protocols, causing researchers to misclassify at least 15% of traffic on select ports. Based on our results, we derive recommendations for researchers and operators.
Liz Izhikevich, Manda Tran, Michalis Kallitsis, Aurore Fass, Zakir Durumeric
2023-09-23T20:06:00Z
http://arxiv.org/abs/2309.13471v2
# Cloud Watching: Understanding Attacks Against Cloud-Hosted Services

###### Abstract.

Cloud computing has dramatically changed service deployment patterns. In this work, we analyze how attackers identify and target cloud services in contrast to traditional enterprise networks and network telescopes. Using a diverse set of cloud honeypots in 5 providers and 23 countries as well as 2 educational networks and 1 network telescope, we analyze how IP address assignment, geography, network, and service-port selection influence what services are targeted in the cloud. We find that scanners that target cloud compute are selective: they avoid scanning networks without legitimate services and they discriminate between geographic regions. Further, attackers mine Internet-service search engines to find exploitable services and, in some cases, they avoid targeting IANA-assigned protocols, causing researchers to misclassify at least 15% of traffic on select ports. Based on our results, we derive recommendations for researchers and operators.

cloud, security, honeypot, darknet, scanning

as attacker biases require statistical validation to extract larger trends. Operators should monitor unexpected ports/protocols, since attacker traffic may be unanticipated, and continue to monitor IP reputation, since scanners send an order of magnitude more traffic to IPs found on Shodan or Censys. We release our dataset of scanning traffic targeting the cloud to enable future research.

## 2. Related Work

A significant fraction of Internet measurement research uses Internet telescopes, honeypots, and passive network analysis to understand topics that range from attacker behavior to Internet outages. While several prior studies have hinted that attackers exhibit bias during target selection (Safra et al., 2018; Datta et al., 2019), there has been little focus on cloud networks specifically. Our work builds off of existing research in three areas: telescope measurements, cloud computing, and Internet scanning, which we describe here.
**Telescopes.** Network telescopes, also known as darknets, have been used to understand Internet background radiation (Datta et al., 2019), malicious scanning patterns (Datta et al., 2019), DDoS attacks (Safra et al., 2018; Datta et al., 2019), worms (Datta et al., 2019), and botnets (Datta et al., 2019). To ensure scientific validity, researchers have extensively studied the caveats of telescope deployment: understanding how the size (Datta et al., 2019), network (Datta et al., 2019), and geographic location (Safra et al., 2018; Datta et al., 2019) of darknets influence unsolicited scans and attacks. Calibration studies have primarily compared darknets to other darknets (Datta et al., 2019) or darknets to honeypots within similar networks (Datta et al., 2019). However, our work shows that attackers targeting the cloud frequently avoid darknets altogether and exhibit unique preferences within cloud networks. Most closely related, in 2019, Richter et al. showed that there are significant differences between scans that target darknets and a CDN (Griffon et al., 2019). Griffon et al. investigated amplification DoS attacks and found little overlap in amplification DoS attacks between the cloud and a darknet (Griffon et al., 2019). Our work also shows that telescopes that do not collect payloads lead researchers to mistakenly assume that scans only target IANA-assigned protocols. Further, we demonstrate that telescopes that collect payloads but reside in networks that do not emulate real services (e.g., (Safra et al., 2018)) are avoided by scanners.

**Honeypots in the Cloud.** Several recent studies have measured Internet activity using cloud-hosted honeypots. For example, Kelly et al., Bove et al., and Brown et al. study network differences amongst honeypot traffic, but only within the US (Kelly et al., 2019; Datta et al., 2019) or when aggregating different geographic regions across different networks (Kelly et al., 2019). We find that there are several surprising confounds that affect the traffic a cloud honeypot receives, and that without statistically rigorous hypothesis testing, it is easy to draw incorrect conclusions. For example, our work shows that some reported prior results, such as network preferences (Kelly et al., 2019), are not statistically significant. Most prior works (Kelly et al., 2019; Datta et al., 2019) do not perform statistical tests in their analysis, making it unclear to what extent their observed differences are statistically significant or due to chance, and how their results can direct future work.

**Internet Scanning.** While prior work (Kelly et al., 2019; Datta et al., 2019) has shown that the vast majority of Internet scanners target a small sub-sample of the IPv4 address space, to the best of our knowledge, no work has investigated how attackers tailor their targeting to specific service histories within the cloud. Most closely related, Irwin (Irwin, 2019) finds that scanners targeting port 445 are less likely to scan broadcast addresses in telescope networks, and Moura et al. (Moura et al., 2019) find that neighboring IPs in ISPs are more likely to engage in spam and phishing attacks.
Similarly, Bodenheim (Bodenheim, 2011) measures the impact of the Shodan service search engine on IoT devices, and Raftopoulos et al. (Raftopoulos et al., 2019) show that Internet scanning can lead to compromised hosts. The variety of scanning traffic targeting neighboring services requires statistically rigorous comparisons. Francois et al. (Francois et al., 2019) propose a parametric method for detecting significant changes in telescope networks using a threshold that must be manually determined for each distribution type. Instead, we present a non-parametric method tailored towards small sample sizes, which cloud vantage points often provide. Last, our study is motivated in part by recent findings on real-world service deployment. Bano et al. (Bano et al., 2019) noted that protocols oftentimes run on unassigned ports. Izhikevich et al. (Ralick et al., 2019) found that services on unassigned ports are more likely to be vulnerable. Ours is the first work to analyze how attackers scan such unexpected services. We show that prior studies that rely only on popular honeypot frameworks (Bono et al., 2019; Datta et al., 2019) or non-reactive telescopes--and therefore assume that scans target the IANA-assigned protocol--miss at least 15% of scan traffic.

## 3. Methodology

To understand the differences in network attacks seen within cloud environments, we analyze traffic seen by honeypots in different networks, geographic regions, and with different service configurations. In this section, we describe our primary data sources, how we differentiate benign from malicious scanning traffic, how we minimize the risk of harm during our experiments, the statistical tests we use to compare scanning traffic, and how we validate the temporal stability of our results.

### Vantage Points

To obtain a diverse set of vantage points, we use existing honeypots from GreyNoise, deploy our own honeypots, and use the Orion Network Telescope. We summarize all our data sources, including their geographic location and size, in Table 1. We publicly release our data at: [https://scans.io/study/cloud_watching](https://scans.io/study/cloud_watching).

**GreyNoise honeypots.** GreyNoise deploys honeypots across multiple cloud providers and geographic regions. GreyNoise honeypots are assigned public IPv4 addresses, which are not publicly known.1 GreyNoise uses Cowrie (Cowrie, 2019), an interactive honeypot, to collect SSH (ports 22, 2222) and Telnet (23, 2323) attempted login credentials. For all other ports, GreyNoise completes the TCP or TLS handshake and records only the first received payload. Each GreyNoise honeypot hosts public, vulnerable-looking, protocol-assigned services on at least seven popular ports.

Footnote 1: The data we release contains honeypot IP addresses that are no longer in use.

GreyNoise deploys a variable number of honeypots across different regions and protocols. To maximize the number of honeypots per region while also maintaining consistency, we limit our analysis to regions that contain at least 4 SSH honeypots, 4 Telnet honeypots, and 2 honeypots for all other protocols (non-SSH and non-Telnet protocols nearly always have only 2 honeypots per region). We filter to include only geographic regions and networks that collect data in both 2020 and 2021 for cross-validation purposes. After filtering, there remain honeypots across 5 cloud providers--AWS, Google, Azure, Linode, and Hurricane Electric--and 23 countries across North America, Europe, and Asia Pacific.
**Honeytrap honeypots.** To understand how attackers that target clouds also target other networks with legitimate services, we use two existing /26 IPv4 networks of honeypots at two educational institutions: Stanford and Merit. The honeypots use the Honeytrap (Honeytrap, 2021) framework for traffic collection and configure it to collect the first UDP payload, or the first TCP payload after completing a TCP handshake. To eliminate biases when directly comparing the education and cloud honeypots, we deploy an additional 64 IPv4 Honeytrap honeypots in a Google geographic region located near Stanford, 64 IPv4 honeypots in an AWS geographic region located near Stanford, and 2 IPv4 honeypots in a Google geographic region near Merit. We do not compare traffic between GreyNoise and Honeytrap honeypots given their different software configurations.

**Orion network telescope.** To understand how attackers that target clouds also target other networks without legitimate services, we analyze scanning activity targeting a network telescope. Network telescopes/darknets typically do not host any services, receive traffic on all ports and IP addresses, and only record the first packet of a connection (i.e., they do not complete the layer-4 TCP handshake). To compare the scanning activity of a telescope with scanning activity targeting networks that host real services (e.g., educational networks or cloud providers), we use the Orion Network Telescope, which spans 475K IPv4 addresses (i.e., 1,856 /24 networks). We discuss limitations of our vantage points in Section 7.

**Ethics.** To minimize harm when deploying honeypots, we configure the honeypots to not expose services that are historically prone to being abused for amplification attacks (e.g., DNS open resolvers). Furthermore, our honeypots do not respond to UDP messages, ensuring that no UDP-based DDoS amplification attacks occur. The honeypots are also configured to be low-interaction, thereby limiting the size of responses and minimizing the chances of arbitrary code execution triggering a harmful zero-day amplification attack.

### Identifying Malicious Traffic

Not all network scanning is malicious. Multiple motivations exist behind unsolicited network scans: organizations collecting datasets (Srivastava et al., 2017; Zhang et al., 2018), academic groups conducting research (Srivastava et al., 2017) or performing vulnerability notifications (Srivastava et al., 2017; Zhang et al., 2018), malicious actors performing reconnaissance with the intent of later exploitation (Srivastava et al., 2017), or malicious actors actively exploiting a service (Zhang et al., 2018). Understanding the true intent behind a network scan is challenging: GreyNoise's mission is to identify scanning actors, yet 78% of the scanning IPs that GreyNoise encountered in 2022 were classified as "unknown" (Bartos et al., 2021). When possible, we classify whether a scan is malicious based on whether the scan attempts to (1) log in or bypass authentication, or (2) alter the state of the service (e.g., run a shell command). Our definition does not account for reconnaissance scanning that may have delayed malicious intent. Throughout our analysis, we refer to "scanners" as those for whom the scanning intent is unknown (e.g., any scanner that targets a telescope that does not collect payloads) and "attackers" as those for whom malicious intent has been verified (e.g., a scanner that sends a malicious payload). While an attacker is also a scanner, we make the distinction to maintain precision in our claims.
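Applied mechanically, this definition reduces to a small decision rule. The sketch below shows how a captured event could be labeled; the event fields are our own illustrative naming, and `suricata_alerts` stands in for the rule-based detection described next:

```python
def is_malicious(event: dict, suricata_alerts: set) -> bool:
    """Label a captured scan per our definition: malicious if it
    (1) attempts to log in / bypass authentication, or
    (2) tries to alter service state (e.g., runs a shell command)."""
    # Authentication-based protocols: any credential attempt counts.
    if event["protocol"] in ("ssh", "telnet") and event.get("credentials"):
        return True
    # Non-authentication protocols: defer to the curated IDS rule set.
    return event["payload_hash"] in suricata_alerts

print(is_malicious({"protocol": "ssh", "credentials": ("root", "admin"),
                    "payload_hash": None}, suricata_alerts=set()))  # True
```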
While detecting malicious behavior is easy for protocols that request authentication (e.g., SSH, Telnet), non-authentication-based protocols (e.g., HTTP) pose a challenge. For example, while the HTTP protocol is commonly used for sending benign GET requests (Srivastava et al., 2017), many exploits are also delivered over HTTP, including the critical Log4Shell (CVE-2021-44228) vulnerability (Zhang et al., 2018). To detect malicious payloads that attempt to bypass authentication or alter the state of a service for non-authentication-based protocols, we use Suricata (Srivastava et al., 2017), an open-source network intrusion detection system providing 32K detection rules. Following Suricata documentation recommendations (Srivastava et al., 2017), we manually filter for rules that limit false positives (e.g., rules that do not rely on a static set of block-listed IPs or ports). To eliminate false positives, we (1) manually inspect the subset of rules that trigger alerts on payloads and (2) only keep rules that are triggered when the corresponding payload is verified as bypassing authentication or altering the state of a service. Our final rule set belongs to the following Suricata class types: trojan-activity, web-application-attack, protocol-command-decode, attempted-user, attempted-admin, attempted-recon, bad-unknown, misc-activity. The Suricata rules used are found on Pastebin (Pastebin, 2017). Suricata labels 6% (10.2K) of distinct HTTP payloads in our dataset as malicious. Overall, we identify that 34% of traffic does not attempt to bypass authentication when targeting 23/Telnet, 24% does not attempt to bypass authentication when targeting 22/SSH, and 75% of payloads do not send exploits to HTTP/80.

Table 1. Vantage points—We analyze scanning traffic targeting 8 unique networks (5 cloud, 2 education, and 1 telescope), spanning 23 countries across North America, Europe, and Asia Pacific. We use three different scanning-traffic collection methods, described in Section 3.1. We report the number of unique IPs and ASes that scan each vantage point between July 1–7, 2021.

| Network | # Geo Regions | Geographic Region Country (State) Codes | Vantage Points (IPs) per Region | Collection Method | # Unique Scan IPs | # Unique Scan ASes |
| --- | --- | --- | --- | --- | --- | --- |
| Hurricane Electric | 1 | US (OH) | 256 | GreyNoise | 130,103 | 8,278 |
| AWS | 16 | US (OR), US (CA), US (GA), BR, BH, FR, IE, DE, CA, AU, SG, IN, KR, JP | 4, or 2 (HTTP) | GreyNoise | 99,566 | 7,142 |
| Azure | 3 | US (TX), SG, IN | 4, or 2 (HTTP) | GreyNoise | 19,893 | 2,477 |
| Google | 21 | US (NV), US (UT), US (CA), US (OR), US (UA), QC, CH, NL, DE, GB, BE, FI, AU, ID, SG, KR, JP, HK, TW | 4, or 2 (HTTP) | GreyNoise | 103,067 | 7,481 |
| Linode | 7 | US (CA), US (NY), UK, DE, IN, AU, SG | 4, or 2 (HTTP) | GreyNoise | 72,235 | 5,984 |
| Stanford | 1 | US (West) | 64 | Honeytrap | 105,045 | 6,177 |
| AWS | 1 | US (West) | 64 | Honeytrap | 99,445 | 7,603 |
| Google | 1 | US (West) | 64 | Honeytrap | 93,119 | 7,947 |
| Merit | 1 | US (East) | 64 | Honeytrap | 106,988 | 6,315 |
| Google | 1 | US (East) | 2 | Honeytrap | 18,064 | 1,802 |
| Orion | 1 | US (East) | 475K | Telescope | 5,147,050 | 24,835 |
Thus, prior works (Sandam, 2017) whose methodology assumes that all traffic destined towards commonly vulnerable ports (e.g., Telnet/23) is malicious, and all traffic destined towards commonly benign ports (e.g., HTTP/80) is benign, likely misclassify at least a quarter of traffic.

### Comparing Vantage Points

As we will show in the next section, there are confounding biases in how scanners treat neighboring targets, making the use of statistical tests necessary when comparing attacker activity across vantage points. To find significant differences between the traffic that targets different honeypots, we perform the non-parametric chi-squared statistical test (Sandam, 2017). To identify statistically significant differences, we use a p-value of 0.05 and apply Bonferroni correction to accommodate the comparisons across all vantage points. Often, Bonferroni correction shrinks p-values by several orders of magnitude. Since the p-value is only a measure of statistical certainty, we use Cramer's V (Cramer, 2017) to calculate the effect size (denoted by \(\phi\)), which indicates the strength of the statistical difference: the larger the effect size, the more different the distributions. The magnitudes of effect sizes do not have pre-defined limits (e.g., not all \(\phi<0.3\) represents a small effect). Rather, magnitudes are derived using the chi-statistic and the degrees of freedom within the chi-test, both of which depend upon the number of unique values being compared. Thus, identical \(\phi\) values can represent different effect sizes if the degrees of freedom between two tests are different. To promote understanding, for each test we report the effect size alongside its magnitude. The chi-squared test expects a minimal number of variables with an expected frequency of zero, so that it does not inaccurately mark distributions as significantly different due to a small skew in the long tail of near-zero frequencies. As there is a long tail of scanning actors (e.g., on average, the top 3 ASes that send the most traffic out of all 680 ASes account for 37% of all traffic sent to each GreyNoise honeypot), we limit the degrees of freedom and ensure the expected frequency of a variable is larger than zero (an important requirement for chi-squared tests). Concretely, we always choose the most popular 3 values for each characteristic (e.g., top 3 payloads, top 3 scanning ASes) for each vantage point and perform the chi-squared test on the union of all unique top-3 characteristics across vantage points. Studying the top 3 values decreases bias towards small distributional differences.2

Footnote 2: The long tail of ASes/payloads that scan each honeypot restricts the number of top popular values we can compare at a time. For example, the top-3 ASes account for 37% of all scanning traffic, the top-5 account for 42%, and the top-100 account for 70%. Thus, expanding evaluation to even the top-5 ASes increases the number of near-zero-frequency variables by over 200%, significantly increasing bias towards small distributional differences; studying the top 3 decreases bias.

Our analysis includes many dimensions of comparisons. To simplify, we focus on 3 popular assigned protocols: Telnet (the most popular protocol used by botnets (Sandam, 2017)), and HTTP and SSH (the two protocols responsible for over 90% of ASCII payloads sent by network scanners (Sandam, 2017)).
We also consider the possibility of scanner behavior varying across non-IANA-assigned ports, and report HTTP results3 independent of port number (i.e., "HTTP/All Ports").

Footnote 3: We only analyze HTTP across all ports, since malicious HTTP packets can be fingerprinted on all ports without needing application-layer-specific interaction.

Across vantage points, we use the chi-squared test to compare scanning traffic using the following axes: _who_ (i.e., which ASes are scanning), _what_ (i.e., what are the top usernames/passwords/payloads being attempted), and _why_ (i.e., the maliciousness of traffic). When comparing who is scanning, we often identify scanning actors by their autonomous system, as opposed to IP address, to account for scanning campaigns that rely on multiple source IP addresses (e.g., Censys (Censys, 2018)). When comparing payloads, we directly compare usernames and passwords for SSH and Telnet, and directly compare the full payload after removing ephemeral values (i.e., Date, Host, and Content-Length fields) for HTTP.
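The comparison machinery itself is off-the-shelf; a sketch of how one such test could be run with SciPy, using the top-3 construction described above (the contingency counts here are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

def compare_vantage_points(counts, n_comparisons, alpha=0.05):
    """Chi-squared test plus Cramer's V on a (vantage point x top-3-union)
    contingency table, with a Bonferroni-corrected significance threshold."""
    counts = np.asarray(counts, dtype=float)
    chi2, p, dof, _ = chi2_contingency(counts)
    n = counts.sum()
    k = min(counts.shape) - 1
    effect_size = np.sqrt(chi2 / (n * k)) if k > 0 else 0.0  # Cramer's V
    significant = p < alpha / n_comparisons                  # Bonferroni
    return significant, effect_size

# Toy example: traffic counts over the union of two honeypots' top-3 source ASes.
table = [[120, 80, 30],   # honeypot A: AS1, AS2, AS3
         [40, 95, 70]]    # honeypot B
print(compare_vantage_points(table, n_comparisons=10))
```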
### Temporal Stability

We compare scanning traffic across all three sources of vantage points (cloud, educational, and network telescopes) using data collected during the first week of July 2021. To verify that our results are consistent across time, we repeat our experiments using data from the first week of July 2020 or July 2022 (depending on the availability of vantage points at that time) and provide the results in Appendix C. Across the 3 years, the IP addresses of our honeypots remain consistent, while those of the GreyNoise honeypots change. We supplement the results throughout the paper with a discussion of temporal similarities and differences.

## 4. Impact of IP Address Assignment

Services hosted in the cloud live in a randomly-assigned and recycled IP space. Cloud services acquire neighbors with a range of security postures, and they occupy IP addresses that have previously housed services with a range of reputations. In this section, we explore if and how a service's IP address and history influence what scanners target. We find that, indeed, attackers target neighboring4 identical services differently, such as sending a varying number of malicious payloads, usernames, and passwords. We explore what factors influence the services scanners target, and find that scanners predict network structures to filter for targets and mine Internet-service search engines to find exploitable services.

Footnote 4: We define neighboring services to be services that reside in the same geographic region and network (i.e., from the same cloud provider, educational network, or network telescope), but do not necessarily share contiguously neighboring IP addresses.

### Variation Across Neighboring Hosts

Neighboring services in the cloud are scanned and attacked by significantly different groups of scanners and payloads. In Table 2, we compute the percentage of neighborhoods in the clouds that receive significantly different traffic, using data from GreyNoise vantage points, for the following traffic characteristics: the top 3 ASes that send traffic (malicious or not), the fraction of malicious traffic, the top 3 username and password attempts for SSH and Telnet, and the top 3 payloads across all traffic for HTTP. A significantly different set of ASes target neighboring services (large effect size, \(\phi\)=0.43). For example, one of four identical services in the Linode network Singapore geographic region is targeted by three orders of magnitude more unique scanning IPs from Axtel Networks (ASN 6503) compared to the other services (large \(\phi\)=0.82). Thousands of scanner IP addresses belonging to the Tsunami botnet (Tsurami et al., 2017) only target a single IP address in the Hurricane Electric /24 honeypot network. Across neighboring services, attackers attempt different payloads when bypassing authentication of services, including different usernames (e.g., large \(\phi\)=0.24 targeting Telnet/23) and different passwords (e.g., large \(\phi\)=0.39 targeting Telnet/23). For example, attackers send an order of magnitude more payloads that attempt an HTTP POST user login request to only one of four identical honeypot services in the Azure network Singapore geographic region (large \(\phi\)=0.61). In the next sections, we explore two reasons that contribute to significant differences amongst neighboring services: IP address structure and Internet service search engines.

### IP Address Structure

Service operators and attackers treat IP addresses differently. While service operators often assign IP addresses to hosts at random (e.g., dynamic host configuration, cloud-assigned virtual machine addresses), scanners and attackers use the IP address to predict the presence of targets. We identify which IP address structures scanners are most likely to target in the cloud by (1) using the network telescope to identify scanning patterns (given its substantially larger sample size) and (2) validating the existence of the same pattern in the cloud. Scanners avoid IP addresses that are believed to not host services in both the telescope and cloud. We compare the number of scanners across neighboring IP addresses in the telescope, which we plot in Appendix B. We observe that scanners are 3.5 times less likely to target an IP address structure that is likely reserved for broadcasting purposes (i.e., ending in ".255") compared to other IP addresses, on seven of the top ten most consistently targeted ports. Scanners targeting port 445 in the cloud also exhibit a similar bias: scanners are between 1.2 (Google) and 3.5 times (Linode) less likely to target a ".255" IP address. However, unlike the telescope, we find no significant evidence of ".255" avoidance on other ports in the cloud, perhaps due to the different set of attackers that target clouds and telescopes (Section 5.2). In the telescope, scanners that avoid broadcast-type addresses for one octet are equally likely to avoid an IP address with a "255" in other octets (e.g., x.A.255.0/24). The avoidance is significant: for example, scanners targeting 7574/Oracle are 61 times less likely to target an IP with a "255" octet, and 9 times less likely for 445/SMB. We hypothesize that incorrect filtering of broadcast addresses, in which the position of the "255" octet is not checked, may be responsible for the observed preference. Since none of our cloud honeypots have IP addresses with a "255" octet that does not appear at the end, we leave validating the existence of this pattern in the cloud to future work.
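The hypothesized bug is easy to express in code: a scanner that excludes any address containing a 255 octet, rather than only addresses whose final octet is 255, will skip x.A.255.0/24 networks entirely. A small sketch of the two filters (our illustration, not a recovered scanner implementation):

```python
import ipaddress

def correct_broadcast_filter(ip: str) -> bool:
    """Skip only addresses whose last octet is 255."""
    return ipaddress.IPv4Address(ip).packed[-1] == 255

def buggy_broadcast_filter(ip: str) -> bool:
    """Skip any address containing a 255 octet, regardless of position."""
    return 255 in ipaddress.IPv4Address(ip).packed

print(correct_broadcast_filter("10.1.255.7"))  # False: this address is scanned
print(buggy_broadcast_filter("10.1.255.7"))    # True: the whole /24 is skipped
```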
Botnets exhibit less intuitive, yet still significant, preferences in both the telescope and cloud. For example, when targeting port 22 in the telescope, the Mirai botnet and scanners from the bulletproof hosting provider PonyNet (ASN 53667) are one order of magnitude more likely to choose the first address of a /16 (e.g., x.B.0.0) as their first scanning target compared to any other address. Within our Hurricane Electric /24 honeypot network, the Tsunami botnet (Tsurami et al., 2017) is one order of magnitude more likely to target a single IP address. Thus, random IP address assignment leaves some services unknowingly more vulnerable to botnet attacks than others.

### Internet Service Search Engines

The recycled address space of the cloud assigns services to IPs that previously hosted unrelated services. In this section, we investigate how attackers use the two most popular Internet service search engines, Censys and Shodan, to find services: we deploy groups of honeypots in which one group only allows Censys to discover their SSH/22 service, one group only allows Shodan to discover their SSH/22 service, etc. By systematically "leaking" services to the two most popular Internet service search engines (Krishnan et al., 2017), we test how search engines influence the services that attackers target. When comparing and presenting our results, we exclude scanning traffic from Censys and Shodan so that increases in scanning traffic are not due to the Censys/Shodan scanners themselves. To perform our experiment, we do not deploy honeypots in the cloud, because our experiment requires untainted service histories and we do not control the service history of cloud IPs. Thus, we deploy the honeypots in a network we control: Stanford. While this network is not a cloud network, our results in Section 5.2 show that scanners that target the cloud are similar to scanners that target education networks--roughly 89% of IPs that target the cloud also target the education network, and there is no significant difference in the payloads or fraction of malicious traffic. Thus, our analysis of scanners targeting the Stanford network can likely be extrapolated to also characterize scanners that target the cloud.

**Attackers use Internet-service search engines.** We observe two primary attacker behaviors that target leaked services. First, across protocols, scanners and attackers are significantly5 more likely to target a service that is currently, or has been previously, leaked (Table 3). For example, HTTP/80 services listed on Censys or Shodan are attacked with 7.3 times more malicious traffic per hour compared to non-leaked services. SSH/22 services leaked on Shodan are attacked with 2.8 times more malicious traffic per hour than non-leaked services, and Telnet/23 services found on Censys with 1.6 times more.

Footnote 5: We use a one-sided Mann-Whitney U test to evaluate whether the volume of traffic per hour that targets leaked services is stochastically greater than the volume targeting the control group. We only discuss significant results.

Second, we observe that attackers are significantly6 more likely to increase the number of "spikes" of traffic towards leaked services. In other words, scanners and attackers are more likely to only briefly scan a leaked service, likely after it has been found by the attacker on a search engine. For example, scanners send significantly more spikes of traffic towards Shodan-leaked HTTP/80 and Censys-leaked Telnet/23 services compared to non-leaked services. Spikes of traffic often carry unique brute-force logins; attackers attempt on average 3 times more unique SSH passwords on leaked compared to non-leaked services.

Footnote 6: We use the Kolmogorov-Smirnov test to compare the distributions of the average volume of traffic per hour targeting leaked and non-leaked services. Upon manual verification, we determine that the spikes of traffic are the underlying cause of the difference in distributions.
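Both tests (footnotes 5 and 6) are standard; a sketch of how the leaked-versus-control comparison could be run with SciPy, on invented hourly traffic volumes:

```python
from scipy.stats import ks_2samp, mannwhitneyu

# Hourly packet counts per honeypot group (toy numbers for illustration).
leaked = [14, 3, 2, 41, 5, 2, 38, 4]    # spiky: brief bursts after indexing
control = [3, 4, 2, 3, 4, 3, 2, 4]

# Footnote 5: is traffic to leaked services stochastically greater?
_, p_greater = mannwhitneyu(leaked, control, alternative="greater")

# Footnote 6: do the hourly-volume distributions differ (e.g., via spikes)?
_, p_different = ks_2samp(leaked, control)

print(p_greater, p_different)
```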
A different set of ASes target leaked HTTP/80 services. For example, while three ASes--Avast (ASN 198605), M247 (ASN 9009), and CDN77 (ASN 60068)--conduct nmap (Brands et al., 2017) scans against our non-Censys-leaked HTTP/80 honeypots, they actively _avoid_ all Censys-leaked HTTP/80 honeypots. Interestingly, the nmap scanners also target the previously leaked honeypots, implying that the nmap scanners source only up-to-date information from Censys. We do not find significant differences in the ASes that scan leaked and non-leaked SSH/22 and Telnet/23 services, nor do we find significant differences in the most popular payloads targeting leaked SSH/22 and Telnet/23 services. Attackers targeting a specific set of protocols also exhibit searching preferences (Table 3): attackers targeting HTTP/80 rely more on Censys (4.0 times increase in traffic per hour), while attackers targeting SSH/22 rely more heavily on Shodan (2.8 times increase in traffic per hour). Attackers targeting Telnet/23 use both Censys and Shodan (1.3–1.6 times increase in traffic per hour) but rely on search engines less than attackers targeting SSH and HTTP.

### Discussion and Summary

The vulnerability of services in the cloud depends on their randomly assigned IP address, due to differences in attacker proclivities. Scanners guess network structures, botnets latch on to individual targets, and malicious actors rely on Censys and Shodan to identify targets to brute-force attack. Consequently, neighboring services see significant differences in malicious payloads. Hence, researchers who deploy honeypots in the cloud can also inadvertently observe dramatically different patterns in attacker behavior.

**Temporal consistency.** Over the years, scanners and attackers have consistently exhibited preferences between neighboring targets. In 2013, Irwin (Irwin, 2013) found that scanners targeting port 445 were less likely to scan broadcast addresses in telescope networks, which we confirm is still the case. When analyzing our data from 2020, we observe the same patterns as in 2021 (e.g., scanners and attackers still originate from different ASes and send different payloads towards neighboring services), which we detail in Appendix C.1.

**Filtering attacker preferences.** In the rest of our analysis, we account for attacker preferences for certain IPs and network structures by (1) using multiple honeypots in each region and (2) comparing the median expected values (e.g., the median number of packets sent by an AS within a group of honeypots) across groups. We elect not to compare the intersection of all scanning events within a group of honeypots, since the majority of scanning campaigns conduct sub-sampled Internet-wide scans and are not expected to target all honeypots within a region (Krishnan et al., 2017; Sohn et al., 2017).

## 5. Geographies and Providers

Deploying services across multiple geographic regions and providers is remarkably simple in the cloud.
In this section, we explore how attackers target services across different geographies and networks, after accounting for the biases that scanners exhibit when targeting neighboring services. We find that attackers exhibit significant biases when scanning across continents or within Asia Pacific. However, attackers rarely discriminate amongst different cloud networks within the same geographic region. Further, scanners and attackers that target the cloud are likely to avoid scanning networks that are publicly known to not host services (i.e., telescopes).

Table 3. Impact of Internet service search engines—Attackers are more likely to attack a service that is currently, or has been previously, indexed by Censys or Shodan. Values are fold increases in traffic per hour. Statistically significant increases are marked in bold, and traffic distributions that are significantly different from our control group's traffic distribution (e.g., exhibit spikes of—but not necessarily overall—increased volume) are indicated by \(*\).

| Service | Traffic | Censys Leaked | Shodan Leaked | Previously Leaked |
| --- | --- | --- | --- | --- |
| HTTP/80 | All | **7.7**\* | 15.7\* | 17.2\* |
| | Malicious | **4.0**\* | 5.8 | **7.3** |
| SSH/22 | All | 2.4 | **2.6**\* | **1.5**\* |
| | Malicious | 2.5 | **2.8**\* | **1.7**\* |
| Telnet/23 | All | 72.6\* | **1.06**\* | 201 |
| | Malicious | **1.6**\* | **1.3**\* | **1.8** |

### Discriminating Geographic Regions

We investigate how attackers consider geography when identifying targets in the cloud. Attackers exhibit significant biases across continents and across the Asia Pacific region. However, contrary to prior work's inferences (Santos et al., 2018) and telescope results (Santos et al., 2019), they do not send significantly more or less malicious payloads within the US or EU.

**Methodology.** We compare traffic distributions from the GreyNoise honeypots across geographic regions using the statistical methodology described in Section 3.3. We group continental regions in the same manner that AWS and Google group datacenters (i.e., North America, Europe, Asia Pacific). We exclude Azure and Hurricane Electric due to their lack of geographic diversity in our dataset.

**Attackers discriminate among Asia Pacific.** Scanners and attackers exhibit the most significant preferences when targeting Asia Pacific. In Table 4, we show that, across Asia Pacific, attackers attempt significantly different payloads than in other regions (large \(\phi\) 0.27–0.47), including different usernames (large \(\phi\) 0.47–0.56) and different passwords (large \(\phi\) 0.50–0.52). For example, the top attempted Telnet usernames for most geographic regions are "root", "admin", and "support." However, honeypots within the AWS Australia region see an order of magnitude fewer of those usernames, and are most targeted with "mother" and "e8ehome," a credential often used by the Mirai botnet when targeting Huawei devices (Han et al., 2019). There are also biases within the Asia Pacific region. Across Asia Pacific, scanners and attackers isolate specific sub-regions to avoid or target.
For example, Emirates Internet (ASN 5384) sends HTTP/80 POST requests only towards honeypots located in Mumbai, India--the location closest to the United Arab Emirates in our dataset--while scans from SATNET (ASN 14522) in Ecuador target all geographic regions except for Mumbai. Attacker preferences are widespread throughout the Asia Pacific: 80% of Asia Pacific region pairs are targeted with different distributions of HTTP payloads across all ports. Scanners target significantly different regions of the Asia Pacific across all cloud providers: AWS, Google, and Linode. Attackers attempt significantly more different SSH and Telnet usernames between Asia Pacific geographic regions (large \(\phi\) 0.47–0.56) than amongst neighboring services (Section 4.2, large \(\phi\) 0.22–0.24). However, when comparing top attempted passwords, the fraction of malicious traffic, and scanning ASes, scanners and attackers exhibit a similar magnitude of biases when targeting neighboring and inter-continental services. We do not find any consistent AS-geographic patterns that directly explain why the Asia Pacific biases exist. For example, while attackers are _less_ likely to send malicious traffic in the Asia Pacific Azure and AWS regions (small \(\phi<0.16\)), they are _more_ likely to send malicious traffic in Google's Asia Pacific region (small \(\phi=0.04\)). Grouping too many autonomous cultures/governments within the Asia Pacific (i.e., compared to grouping states and countries within North America)--a common methodology in technology, politics, and commerce (Han et al., 2019)--might contribute to the variation.

**Attackers do not discriminate between sub-regions within the U.S. and Europe.** Scanners exhibit significantly fewer biases when scanning within the US and EU (Table 4). For example, the same set of ASes consistently target regions within the US or EU, and attackers do not send significantly more (or less) malicious payloads to a particular region. While scanners send different payloads across 50% of US and 53% of EU geographic region pairs (Table 5), the effect size is always smaller when compared to differences between Asia Pacific sub-regions. We observe scanners send an increased amount of Telnet payloads to the AWS Paris region, and more Android emulator commands to the AWS Frankfurt region. We find no significant differences in the median scanning traffic volume within or across continents. Our results are consistent with Section 5.2, in which education networks located on opposite coasts of the US see no significant differences in traffic.

### Discriminating Network Types

While attackers discriminate between and amongst certain geographic regions, they are unlikely to discriminate amongst different cloud providers in the same geographic region.7 However, we do find that many attackers that target networks that do have services (i.e., cloud, education) do not scan networks that are publicly known to not have services (i.e., network telescopes). Thus, consistent with prior results, we emphasize that researchers who rely only on telescopes are blind to an important scanning population that only targets and attacks real Internet services.

Footnote 7: Due to the lack of sufficient honeypots in different providers and regions within Asia Pacific, we are only able to verify this result in North America and Europe.

**Methodology.** We compare traffic across networks using the methodology from Section 3.3.
To perform cloud-to-cloud comparisons, we use GreyNoise data and compare only cloud honeypots that are located in the same city or state, to minimize geographic biases (Table 6). To avoid comparing data from different honeypot frameworks, we use the Honeytrap honeypots we deployed in AWS and Google geographically near the Honeytrap honeypots in the EDU networks to compare cloud and EDU networks. We use the Honeytrap honeypots in Stanford and Merit for the EDU-EDU comparison. When comparing education networks and the network telescope, we ensure that all honeypots are located in the US (which Section 5.1 shows minimizes bias).

**Scanners do not discriminate between networks with real services.** Although scanners significantly avoid the telescope network, we demonstrate in Table 7 that scanners targeting assigned services within different cloud networks nearly always originate from the same top 3 ASes (small \(\phi<0.21\)) and attempt the same most common usernames and passwords (small \(\phi<0.06\)). We never see scanning ASes entirely ignore specific cloud regions. Zero cloud honeypots see a difference between the most popular SSH and Telnet passwords within a European or North American region. However, the majority of scanners that target _unassigned_ services (i.e., aggregating across all ports and protocols) originate from different ASes and attempt different payloads (small \(\phi=0.23\)). Nevertheless, the differences are much smaller than those seen across neighboring services (Section 4.2) and those alluded to in prior work studying network telescopes (Santners et al., 2022). We never observe scanners significantly discriminating between education networks, even though the networks are located on opposite coasts of the US. This shows that attacker discrimination of the telescope network is not geography-induced.

Table 4. Geographic regions with the most different traffic patterns—When comparing all geographic regions against each other, Asia Pacific (AP) regions exhibit the largest statistically significant deviations in traffic distributions compared to other geographic regions within the same network. We mark the absence of statistically significant results with a "–". We color effect sizes with their relative magnitude (i.e., blue="small", yellow="medium", red="large"). As discussed in Section 3.3, identical \(\phi\) values can have different effect sizes given the degrees of freedom per experiment.

| Traffic | Protocol | AWS: Most Diff. Region | AWS: Avg. \(\phi\) | Google: Most Diff. Region | Google: Avg. \(\phi\) | Linode: Most Diff. Region | Linode: Avg. \(\phi\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Top 3 AS | SSH/22 | AP-JP | 0.68 | AP-SG | 0.16 | AP-SG | 0.27 |
| | TEL/23 | AP-AU | 0.50 | – | – | – | – |
| | HTTP/80 | AP-ND | 0.53 | AP-ID | 0.47 | – | – |
| | HTTP/All | AP-SG | 0.21 | AP-AU | 0.23 | US-CA | 0.28 |
| Top 3 Username | SSH/22 | AP-JP | 0.47 | – | – | – | – |
| | TEL/23 | AP-AU | 0.56 | – | – | – | – |
| Top 3 Password | TEL/23 | CA-TOR | 0.32 | – | – | AP-SG | 0.30 |
| Top 3 Payload | HTTP/80 | AP-HK | 0.31 | AP-ID | 0.27 | AP-SG | 0.35 |
| | HTTP/All | AP-HK | 0.32 | AP-ID | 0.25 | AP-ND | 0.47 |
| Fraction Malicious | SSH/22 | AP-AU | 0.13 | – | – | – | – |
| | TEL/23 | AP-AU | 0.16 | – | – | – | – |
| | Any/All | – | – | AP-JP | 0.64 | – | – |
Scanners also do not significantly discriminate between cloud and education networks: scanners always attempt the same usernames, passwords, and payloads (small \(\phi<0.15\)), and send the same amount of malicious traffic. There is one exception. In 2021, scanners targeting SSH/22 in clouds were more likely to originate from different ASes than those that targeted education networks (large \(\phi=0.48\)). Six times more scanners from Chinanet (ASN 4134) targeted the SSH/22 service in our education networks compared to our cloud networks, while seven times more scanners from Cogent (ASN 174) targeted the SSH/22 service in our cloud networks compared to our education networks. However, in 2022, we no longer saw a significant difference between the scanners targeting SSH/22 in the cloud and education networks (Appendix C.2).

Table 6. Honeypots in multiple clouds—When comparing scanner activity between networks, we only compare traffic destined towards vantage points located in the same city or state, in order to minimize geographic biases.

| City | AWS | Google | Linode | Azure |
| --- | --- | --- | --- | --- |
| CA, US | + | + | + | |
| GA, US | + | + | | |
| OR, US | + | + | | |
| TX, US | | + | + | |
| VG, US | | + | + | |
| FRA, GE | + | + | + | |

Table 5. Traffic similarities within and between geo-locations—Scanners targeting assigned services in regions within the US or EU nearly always originate from the same top 3 ASes and attempt the same most common payloads. However, geographic regions within Asia Pacific are much more likely to exhibit statistically significant variation in traffic characteristics.

SSH/22 and Telnet/23 (% similar pairs of regions in the same geo-region/network):

| Traffic Characteristic | SSH/22: US (n=31) | SSH/22: EU (n=19) | SSH/22: APAC (n=40) | SSH/22: Intercont. (n=267) | Telnet/23: US (n=31) | Telnet/23: EU (n=19) | Telnet/23: APAC (n=40) | Telnet/23: Intercont. (n=267) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Top 3 AS | 94% | 100% | 63% | 70% | 100% | 100% | 73% | 81% |
| Frac. Malicious | 94% | 100% | 88% | 83% | 100% | 100% | 98% | 99% |
| Top 3 Username | 94% | 100% | 88% | 79% | 100% | 89% | 75% | 76% |
| Top 3 Password | 100% | 100% | 100% | 100% | | 89% | 73% | 75% |

HTTP/80 and HTTP/All Ports (% similar pairs of regions in the same geo-region/network):

| Traffic Characteristic | HTTP/80: US (n=31) | HTTP/80: EU (n=19) | HTTP/80: APAC (n=40) | HTTP/80: Intercont. (n=267) | HTTP/All: US (n=31) | HTTP/All: EU (n=19) | HTTP/All: APAC (n=40) | HTTP/All: Intercont. (n=267) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Top 3 AS | 97% | 100% | 85% | 92% | 91% | 84% | 44% | 39% |
| Frac. Malicious | 100% | 100% | 100% | 100% | 100% | 100% | | 99% |
| Top 3 Payloads | 94% | 100% | 90% | 94% | 50% | 53% | 20% | 11% |

Table 7. Differences across network types—Scanners that target networks with real services are unlikely to prefer a specific cloud (e.g., AWS versus Google), but are more likely to partially avoid education networks. Fields that cannot be calculated due to the lack of payload collection are denoted by an \(\times\). Effect sizes (\(\phi\)) are colored with their relative magnitude (i.e., blue="small", red="large").
The absence of a difference implies that either (1) targeted-SSH events are an anomaly, or (2) targeted-SSH events “spike” (a pattern defined in Section 4.3) and are less likely to appear across all slices of time. The popular presence of SSH/22 in the clouds (e.g., AWS EC2 instances often come pre-configured with SSH/22) might contribute to attracting scanners in spikes.

**Scanners and attackers avoid telescopes.** Across the majority of popular ports, scanners that target networks with real services (i.e., clouds and education networks) are not seen in the network telescope. In Table 8, we compute the fraction of overlap between the IP addresses that target at least one cloud or education honeypot and the telescope; only 13% of IPs that target port 22 on any of our cloud honeypots send at least one packet to port 22 in the telescope. Only 44% of scanners that target port 443 in one of our education honeypots also scan port 443 in the telescope. Scanners that target services hosted in education networks are more likely to target the telescope than those that target services in cloud networks (e.g., 71% vs. 33% on port 7547). We hypothesize this is due to Merit and Orion being located in the same autonomous system. Telnet/23 is the only service targeted by scanners that, for the most part, does not discriminate against telescopes: at least 91% of the IPs that scan clouds and educational networks also scan the telescope. We hypothesize the lack of network preference is due to the prevalence of botnet scanning activity, which historically has not avoided unused IP address space (Zhou et al., 2019; Zhou et al., 2019).

**Attackers targeting SSH-assigned ports also avoid telescopes.** In Table 9, we perform a similar analysis, but filter for scanners that send malicious payloads to cloud or education networks. Less than 10% of attackers that target SSH-assigned ports on the cloud also target the telescope. A significantly different set of ASes scan telescopes; e.g., in Table 10, ASes targeting Telnet/23 in telescopes and clouds differ with a large effect size of 0.82. ASes geo-located in China actively avoid scanning the telescope; 12 times more unique scanners from China Mobile (ASN 56046) and 2.5 times more unique scanners from Chinanet (ASN 4134) target SSH/22 in our cloud and education honeypots compared to the telescope. Researchers studying honeypots located in cloud and education networks, as opposed to network telescopes, are more likely to encounter attackers targeting real services. In Section 8, we discuss the benefits and drawbacks of deploying honeypots across different networks when measuring attacker activity.
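The overlap fractions reported in Tables 8 and 9 reduce to simple set arithmetic over the per-port scanner source addresses; a minimal sketch (the sample IPs are invented for illustration):

```python
def overlap_fraction(target_ips: set[str], reference_ips: set[str]) -> float:
    """Fraction of reference-network scanners also seen at the target,
    e.g. |Telescope-IPs intersect Cloud-IPs| / |Cloud-IPs|."""
    if not reference_ips:
        return 0.0
    return len(target_ips & reference_ips) / len(reference_ips)

# Hypothetical per-port scanner sets.
telescope_port22 = {"192.0.2.1", "192.0.2.7"}
cloud_port22 = {"192.0.2.1", "192.0.2.7", "198.51.100.3", "203.0.113.9"}
print(f"{overlap_fraction(telescope_port22, cloud_port22):.0%}")  # 50%
```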
### Discussion and Summary

Attackers reduce their scanning search space by tailoring their scans towards specific networks and geographic regions. Services hosted in the cloud, especially on SSH-assigned ports, are most likely to be scanned or attacked by a scanner that avoids telescope networks. When filtering for geographic regions, scanners and attackers are most likely to discriminate services hosted in the Asia Pacific--either completely avoiding them or only targeting them. Researchers should be wary of data from only network telescopes, but can use cloud resources to better understand real-world attacks.

**Temporal consistency.** When repeating our experiments in July 2020 and July 2022 (Appendix C.2), scanners targeting services hosted in cloud and education networks continue to significantly avoid telescope networks. As in 2021, scanners exhibit less significant preferences when differentiating between cloud and education networks than between different cloud networks. Geographic preferences also remain similar (Appendix C.3): scanners and attackers are most likely to discriminate services hosted in the Asia Pacific. The only different pattern that we see in 2020 is that scanners and attackers targeting SSH/22 are more likely to discriminate amongst geographic regions within the US and EU in 2020 compared to 2021. Nevertheless, discrimination between SSH/22 services hosted in the US and EU is weaker than within the Asia Pacific. We describe the similarities and differences in temporal patterns in more detail in Appendix C.2 and C.3.

\begin{table}
\begin{tabular}{c c|c c|c c}
\hline \hline
 & & \multicolumn{2}{c|}{Telescope–EDU} & \multicolumn{2}{c}{Telescope–Cloud} \\
Traffic & Protocol & \# diff. regions (n=2) & Avg. \(\phi\) & \# diff. regions (n=3) & Avg. \(\phi\) \\
\hline
Top 3 & SSH/22 & 2 & 0.41 & 3 & 0.71 \\
AS & Telnet/23 & 2 & 0.68 & 3 & 0.82 \\
 & HTTP/80 & 0 & - & 2 & 0.40 \\
 & HTTP/All & 2 & 0.20 & 3 & 0.30 \\
\hline \hline
\end{tabular}
\end{table} Table 10. Different scanners target telescopes—A significantly different set of ASes target telescopes, compared to clouds and education networks. We color the relative magnitude (blue=“small”, red=“large”) of all effect sizes (\(\phi\)).

\begin{table}
\begin{tabular}{c|c|c|c}
\hline \hline
Port & \(\frac{|\text{Tel}\,\cap\,\text{Cloud}|}{|\text{Cloud}|}\) & \(\frac{|\text{Tel}\,\cap\,\text{EDU}|}{|\text{EDU}|}\) & \(\frac{|\text{Cloud}\,\cap\,\text{EDU}|}{|\text{Cloud}|}\) \\
\hline
23 & 91\% & 96\% & 88\% \\
2323 & 53\% & 94\% & 83\% \\
80 & 73\% & 86\% & 82\% \\
8080 & 80\% & 85\% & 90\% \\
21 & 29\% & 82\% & 94\% \\
2222 & 9\% & 82\% & 94\% \\
25 & 19\% & 79\% & 84\% \\
7547 & 33\% & 71\% & 97\% \\
22 & 13\% & 60\% & 94\% \\
443 & 30\% & 44\% & 81\% \\
\hline \hline
\end{tabular}
\end{table} Table 8. Scanners avoid telescopes—Scanners that target the majority of popular ports at least once across any of our 440 cloud vantage points avoid scanning any of the 475K IPs in the telescope on the same port. However, the vast majority of scanners that target the cloud also target EDU networks.

\begin{table}
\begin{tabular}{c|c|c}
\hline \hline
Port & \(\frac{|\text{Tel-IPs}\,\cap\,\text{Mal. Cloud-IPs}|}{|\text{Mal. Cloud-IPs}|}\) & \(\frac{|\text{Tel-IPs}\,\cap\,\text{Mal. EDU-IPs}|}{|\text{Mal. EDU-IPs}|}\) \\
\hline
23 & 94\% & \(\times\) \\
2323 & 88\% & \(\times\) \\
80 & 84\% & 96\% \\
8080 & 84\% & 97\% \\
2222 & 3.6\% & \(\times\) \\
22 & 7.5\% & \(\times\) \\
\hline \hline
\end{tabular}
\end{table} Table 9. Attackers targeting SSH-assigned ports in the cloud avoid telescopes—A maximum of 7.5% of attacker IPs that target SSH-assigned ports at least once across any of our 440 cloud vantage points also scan any of the 475K IPs in the telescope on the same port. The majority of attacker IPs that target the education honeypots also target the telescope. Not every field can be calculated due to the manner in which payloads are or are not collected (Section 3.1), denoted by an \(\times\). We do not perform this analysis between cloud and education networks, due to the small sample size of malicious scans that target the set of cloud honeypots located in the same geographic region as the EDU honeypots.

## 6. Targeted ports and protocols

Building upon our investigation of how attackers source targets, we investigate what protocols they target after having identified an open port. Researchers recently discovered that the majority of services live on unassigned ports, especially in cloud networks (Hanan et al., 2020). In this section, we show that attackers target a different set of protocols than what operators and researchers monitor and analyze. Attackers target unexpected protocols (e.g., TLS and Telnet) on IANA-assigned ports (e.g., port 80). The targeting of unexpected services, which prior work has also found are often more vulnerably configured (Han et al., 2020), causes popular honeypot frameworks and telescopes monitoring HTTP to miss at least 15% of scanning traffic because they are not engineered to capture unexpected protocol handshakes.

**Methodology.** We analyze the traffic destined towards our three /26 networks of honeypots located in the Google, AWS, and Stanford networks,8 all of which are in the same geographic region. We omit the GreyNoise honeypots, as they only collect assigned protocol payloads destined towards ports 22, 2222, 23, and 2323 (Section 3). Since our non-GreyNoise honeypots do not speak any protocols, our study is limited to only client-first protocols (i.e., only HTTP) to guarantee that a client sends the intended payload immediately after the TCP handshake. Thus, our results serve as a lower bound, since we are unable to capture unexpected data from a scanner who is waiting for our honeypot to speak a server-first protocol.

Footnote 8: To increase our sample size, and since Section 5.2 shows that nearly the same set of attackers target both education and cloud networks, we combine data from both education and cloud networks.

We use the open-source scanner LZR (Han et al., 2020) to fingerprint unexpected services for 13 of the most popular TCP scanning protocols: HTTP, TLS, SSH, Telnet, SMB, RTSP, SIP, NTP, RDP, ADB, FOX, Redis, and SQL. We use the GreyNoise API (Brands et al., 2020) to label benign and malicious scanning actors. The API labels actors as malicious if the scanning IP was seen actively exploiting services, and benign if the owners of the scanning IPs have undergone a rigorous vetting process (Han et al., 2020). For scanners that GreyNoise does not see or label, we consider the reputation as unknown. We report our results in Table 11.

**Scanners and attackers target unexpected protocols.** At least 15% of scanners that target ports 80 and 8080 do not target the HTTP protocol. Rather, 7% of scanners target TLS, Telnet (0.5%), SQL (0.4%), RTSP (0.3%), SMB (0.3%), etc. Both scanners and attackers target unexpected protocols. Across HTTP-assigned ports 80 and 8080, no matter the protocol targeted, at least half of scanners are malicious. Malicious attackers constitute the majority of scanners that target non-TLS alternative protocols (i.e., Telnet, SMB, etc.).
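As a toy illustration of client-first fingerprinting in the spirit of LZR, one can classify the first bytes a scanner sends after the TCP handshake; real fingerprinting covers many more protocols and edge cases than this sketch:

```python
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

def fingerprint_payload(payload: bytes) -> str:
    """Very rough client-first protocol fingerprint of a raw payload."""
    if payload.startswith(HTTP_METHODS):
        return "HTTP"
    # TLS records begin with content type 0x16 (handshake) and version 0x03 0x0X,
    # so a ClientHello sent to port 80 is still recognizable.
    if len(payload) >= 3 and payload[0] == 0x16 and payload[1] == 0x03:
        return "TLS"
    return "other"

print(fingerprint_payload(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # HTTP
print(fingerprint_payload(bytes([0x16, 0x03, 0x01, 0x00, 0xF4])))          # TLS
```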
Scanners from Censys (Steiner et al., 2019) are the leading benign organization finding unexpected services. Scanners from various ASes geolocated in China (e.g., ASN 4134, ASN 9808) are the leading malicious scanners responsible for exploring unexpected services.

**Attackers targeting unexpected protocols bypass honeypots and telescopes.** Popular honeypot frameworks such as Cowrie (Cowrie, 2019), T-Pot (Kippo, 2019), and Kippo (Kippo, 2020) by default only perform protocol-assigned handshakes on protocol-assigned ports. Telescopes that do not collect payloads rely on the destination port to derive the target protocol. However, by only performing the assigned handshake or relying on the destination port to fingerprint the protocol, honeypots and telescopes miss at least 15% of incoming traffic on ports 80 and 8080. When possible, honeypots should collect all handshakes across all ports to prevent the underestimation of attacker traffic.

\begin{table}
\begin{tabular}{l|c|c|c}
\hline \hline
Protocol/Port & Breakdown & \% Benign & \% Malicious \\
\hline
HTTP/80 & 85\% & 42\% & 55\% \\
\(\sim\)HTTP/80 & 15\% & 42\% & 51\% \\
\hline
HTTP/8080 & 85\% & 22\% & 77\% \\
\(\sim\)HTTP/8080 & 16\% & 35\% & 49\% \\
\hline \hline
\end{tabular}
\end{table} Table 11. Scanner-targeted protocols—Malicious scanners target unexpected/unassigned protocols across ports. We define \(\sim\)Protocol-A/XX to be all protocols that are not Protocol-A that target port XX. Note, the \% of benign and malicious scanners may not sum to 100\% due to a fraction of scanners having unknown status.

## 7. Limitations and future work

Our vantage points provide an IPv4 server’s perspective on scanner behavior, which has several limitations that serve as a foundation for future work:

**Firewalls.** While none of our honeypots have firewalls, it is possible that a network could transparently drop malicious traffic before it reaches our honeypots (Han et al., 2020). To mitigate confounding factors, we validate observed patterns across multiple independently-operated networks or geographic regions, which are targeted by tens of thousands of unique IPs and thousands of unique ASes (Table 1). Additionally, we use statistical tests, described in Section 3.3, to report on the statistical significance of the observed patterns. Future work should measure the prevalence and impact of firewalls across networks.

**Honeypot Fingerprinting.** Scanners occasionally fingerprint honeypots to avoid detection. However, the majority of honeypot fingerprinting requires a scanner to log into the system (Brands et al., 2020), which Gamma honeypots prohibit. A prior exploit that fingerprints Cowrie without logging in (Kippo, 2019) was patched before our data collection. Nevertheless, other fingerprinting techniques could bias results against sophisticated attackers. Future work should investigate the prevalence of honeypot fingerprinting across the cloud.

**IPv6.** Unfortunately, we could not study IPv6 scanning patterns, as neither Gamma nor Omega collects or provides IPv6 traffic. Future work should analyze IPv6 scanning patterns in the cloud, since the sparse IPv6 address space (Pavon et al., 2020) will likely surface different scanning patterns.

**Protocol Diversity.** Our analysis focuses on scanning campaigns that target popular protocols over TCP on the cloud. Scanning campaigns that target unpopular TCP protocols (e.g., SMB, RDP), UDP protocols (e.g., DNS, SNMP), or specialized cloud services (e.g., cloud storage) may target different vulnerabilities and use
specialized scanning tools with unique scanning patterns (Kang et al., 2017; Wang et al., 2018), which future work should research.

**Temporal Validity.** The scanning patterns our work surfaces arise from a set of 1-week data collection periods between 2020-2022. Future work should analyze scanning patterns across longer data collection periods, as that may surface different scanning campaigns and new temporal patterns.

## 8. Recommendations and Discussion

Our results show that scanners--including known malicious actors--are selective when identifying IPs to scan. Unfortunately, many measurement tools that we use today have made assumptions about scanning that may obstruct our understanding of attacker behavior, particularly when trying to understand how attacks target cloud services and other enterprise networks where vulnerable services are most likely to reside. In this section, we discuss methodological considerations for researchers and service operators attempting to understand and protect against malicious Internet scanning.

**Collect scan traffic from networks that host services.** While telescopes have been tremendously useful in understanding some types of attacker behavior, they fail to accurately capture cloud-focused attacks for several reasons: (1) scanners that target services in cloud and education networks frequently avoid telescopes (Section 5.2); and (2) most telescopes do not collect payloads, which prevents identifying malicious intent (Section 3.2) or the targeted protocol (Section 6). Nevertheless, telescopes do provide the benefit of encompassing large portions of the IP address space and, therefore, a significant sample size. Some attacker patterns are visible in telescopes but not cloud services. For example, identifying scanner address structure preferences (Section 4.2) would not have been possible using a limited number of cloud honeypots. However, researchers must not assume that the scanning activity a telescope sees is representative of the scanning activity that targets cloud services. Instead, researchers should consider deploying honeypots in networks that house real services and, when using telescopes, validate results with honeypots deployed in such networks.

**Consider an IP address' service history.** Researchers and service operators are often faced with the decision of where to deploy services. The service history and search-engine presence of an IP address can increase the likelihood of it being attacked, particularly for SSH. While it is likely not tractable for operators to base their security on an IP's history, researchers need to consider how past activity will affect the research results they collect. Researchers can use search engines (e.g., Censys (Sandan, 2017) and Shodan (Sho et al., 2017)) to obtain a history of an IP address.

**Consider that attackers scan unexpected protocols.** A significant fraction of services run on unassigned ports. Open-source tools for finding unexpected services (Sandan, 2017; Wang et al., 2018) are now available, and search engines have already begun to detect protocols on unassigned ports (Sandan, 2017). Operators should not assume that hiding services on unexpected ports prevents attacker discovery, and researchers should configure honeypots to capture attacker traffic on unexpected ports.

**Account for differences amongst neighboring IPs.** Researchers who rely on cloud deployments often do not have large slices of IP address space to devote to honeypots.
Consequently, researchers may be tempted to only deploy one honeypot per region (Kang et al., 2017; Wang et al., 2018; Wang et al., 2018). However, our results show that researchers must (1) use more than one honeypot when comparing regions to understand the source of differences; (2) use statistical tests when comparing regions. The majority of scanning activity targets only a subset of the IP address space; it is important to highlight which differences are statistically significant across all honeypots. **Deploy honeypots across geographies, network operators, and IP addresses.** To maximize attacker traffic (e.g., to populate blocklists or understand scanning behavior), researchers should recognize that significant variation exists even amongst neighboring IP addresses. The IP address itself (e.g., its structure, reputation) should be diversified when deploying honeypots. Across geographic regions, there is more benefit to deploying a honeypot in a unique geographic region in the Asia Pacific compared to within the US or EU. Across networks, there is more benefit to deploying a honeypot in a different network type (i.e., cloud vs. educational) than within the same network type (i.e., AWS vs. Google). **Consider biases when deploying blocklists.** Companies and operators often share previously seen malicious IP addresses (e.g., blocklists) and payloads (e.g., payload filters) to help others protect their services. Sharing blocklists and payload-filters assumes that the same attackers attack services across geographic locations and networks. However, our results show that scanners and payloads differ across continents, especially within the Asia Pacific. We leave to future work comparing the efficacy of blocklists that source information from different regions. **Track attacker trends and update methodologies to protect services accordingly.** As the Internet and attackers continue to evolve, researchers should reassess the approaches they use to understand network attacks. While our results show that attacker preferences remain relatively stable across years, behavioral shifts do occur. For example, deploying honeypots in public clouds may one day become obsolete if the majority of services migrate elsewhere. Further, as the research community develops new tools and data sets to study the Internet, researchers and operators should build protections that can withstand the expectation that attackers will use and abuse the same resources. ## 9. Conclusion In this paper, we showed that Internet-scanning behavior targeting the cloud is nuanced; scanners discriminate between specific IP address structures, regions, and networks. Additionally, attackers have altered their behavior in response to new deployment patterns and public resources, by targeting services on non-standard ports and using Internet search engines to uncover vulnerable services. Many of our standard measurement techniques, including using telescopes or only collecting assigned handshakes, have caused us to underestimate and potentially mis-characterize scanner and attacker behavior targeting the cloud. Our work illustrates the importance of reevaluating our measurement instruments and assumptions as the Internet ecosystem and attackers continue to evolve. 
## Acknowledgements We thank Hans Hanley, Katherine Izhikevich, Tatyana Izhikevich, Kimberly Ruth, Deepak Kumar, Eric Pauley, Patrick McDaniel, members of the Stanford University security and networking groups, our shepherd, Vasileios Giotsas, and the anonymous reviewers for insightful discussion and comments. We also thank Daniel Grant, Matt Lehman, Andrew Morris, and the entire GreyNoise team for their invaluable data and support. This work was supported in part by the National Science Foundation under awards CNS-1823192 and CNS-2120400, as well as Google Inc., the NSF Graduate Fellowship DGE-1656518, and a Stanford Graduate Fellowship.
2303.17894
Writing Tools: Looking Back to Look Ahead
Research on writing tools started with the increased availability of computers in the 1970s. After a first phase addressing the needs of programmers and data scientists, research in the late 1980s started to focus on writing-specific needs. Several projects aimed at supporting writers and letting them concentrate on the creative aspects of writing by having the writing tool take care of the mundane aspects using NLP techniques. Due to technical limitations at that time the projects failed and research in this area stopped. However, today's computing power and NLP resources make the ideas from these projects technically feasible; in fact, we see projects explicitly continuing from where abandoned projects stopped, and we see new applications integrating NLP resources without making references to those old projects. To design intelligent writing assistants with the possibilities offered by today's technology, we should re-examine the goals and lessons learned from previous projects to define the important dimensions to be considered.
Cerstin Mahlow
2023-03-31T08:55:19Z
http://arxiv.org/abs/2303.17894v1
# Writing Tools: Looking Back to Look Ahead

###### Abstract

Research on writing tools started with the increased availability of computers in the 1970s. After a first phase addressing the needs of programmers and data scientists, research in the late 1980s started to focus on writing-specific needs. Several projects aimed at supporting writers and letting them concentrate on the creative aspects of writing by having the writing tool take care of the mundane aspects using NLP techniques. Due to technical limitations at that time the projects failed and research in this area stopped. However, today's computing power and NLP resources make the ideas from these projects technically feasible; in fact, we see projects explicitly continuing from where abandoned projects stopped, and we see new applications integrating NLP resources without making references to those old projects. To design intelligent writing assistants with the possibilities offered by today's technology, we should re-examine the goals and lessons learned from previous projects to define the important dimensions to be considered.

writing technology, natural language processing, intelligent writing tools, interactive editing

The format of text files produced with MS Word became the default file format expected and demanded for submissions of academic theses and the like, for interchange between writers when writing collaboratively, and for further processing in publishing houses. The first effect led to the general assumption that any other new writing facility, e.g., in the first learning management systems that appeared in the early 2000s, should be designed to resemble the look and feel of MS Word and include its main features to provide a familiar user experience. This also applies to the first versions of Google's web-based word processor Google Docs, which became available in a beta version in early 2006. For experimental projects such as the British Telecom-funded Editor's Assistant, no progress was in sight in the 1990s to overcome the technological obstacles (computing power, quality of NLP) that would have justified further investment. The integration of NLP technology into word processors beyond checkers for spelling and grammar has been a research topic since the 1980s [e.g., 31, 32], but did not result in commercial products either. To overcome the challenges for parsers arising from what Van De Vanter [51] calls "the three I's": _ill-formedness_, _incompleteness_, and _inconsistency_ of sentences during writing, experimental word processors attempted to incorporate _syntax orientation_ as derived from _syntax-oriented_ text editors such as EMILY [24], Cornell [50], PEN [1], JANUS [8], PARSE [9], Mentor [22, 33], PAN [3, 53], or CodeProcessor [52]. These editors handled documents as tree structures and were implementations of programming principles like _stepwise refinement_ and _structured programming_ [17, 63].
However, similar to programmers, writers objected to always producing complete, well-formed sentences, as this was not compatible with their writing habits. It also does not reflect the writing process as observed in various studies: authors often start revising a sentence _before_ a complete first version of this sentence is finished [see 34, 41, 56]. Dale [12] predicted in 1997:

The major developments in the next five to ten years are likely to be of an augmentative nature, with increasingly sophisticated systems that have people and machines doing what they each do best. The key here is to add intelligence and sophistication to provide _language sensitivity_, enabling the software to see a text not just as a sequence of characters, but as words and sentences combined in particular structures for particular semantic and pragmatic effect. [12, p. 235, emphasis in original]

No such systems were available for the general public in the 2000s, though. At that time, Mahlow and Piotrowski [38] proposed language-aware functionality, but only developed a proof of concept as an extension to Emacs [39]. In writing research, the influence of the writing tool and medium is only occasionally acknowledged [7, 36, 45]; the field concentrates on cognitive aspects and writing strategies.

## 2. Where we are now

Starting in the 2010s, the emphasis on writing experience, personalization of tools, and the growing diversity of input devices (and methods) and displays prompted the development of "new writing tools." Their functionalities are often working implementations of methods and concepts originally described and developed in the 1960s and 1970s that used to be considered failures--but had actually only failed due to the limitations of computers at that time. Only now do we see the inverse development, back to ideas and applications of the 1960s, when projects like NLS (oN-Line System) [19] were "pushed aside in favor of computer systems more oriented toward print practices" [54]. NLS already combined functionality to write text, messages similar to what later became known as e-mail, and "computer conferencing" for allowing collaborative simultaneous editing of documents [6]. There was no fixed final document format--e.g., a printed page--the focus was on facilitating online text production by implementing text editing capabilities of later word processors, including word wrap, search and replace, and scrolling, and the use of a mouse to select text to be cut and pasted between documents. Indeed, Engelbart's system was much more complex than most subsequent word processing systems [23, p. 21]. One type of functionality that Mahlow and Piotrowski [38] suggest is information functions that use NLP techniques to highlight certain aspects of the evolving text, commonly referred to as "syntax highlighting". Since 2013, iA Writer has offered such a feature in a commercial product to specifically highlight nouns, verbs, adjectives, etc., advertising it as "using parts of speech to improve your writing" and explicitly stating that writers deserve the same professional support as programmers.1 The use of NLP has been feasible for some time now, both in terms of quality and the computing power required.

Footnote 1: [https://ia.net/writer/support/writing-tip/parts-of-speech](https://ia.net/writer/support/writing-tip/parts-of-speech)
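Such information functions are straightforward to prototype with today's NLP libraries. A minimal sketch using spaCy that marks nouns, verbs, and adjectives in a draft; the color mapping is our own choice, not iA Writer's implementation:

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

COLORS = {"NOUN": "\033[94m", "VERB": "\033[91m", "ADJ": "\033[92m"}
RESET = "\033[0m"

def highlight_pos(text: str) -> str:
    """Wrap nouns, verbs, and adjectives in ANSI color codes."""
    doc = nlp(text)
    out = []
    for token in doc:
        color = COLORS.get(token.pos_)
        out.append(f"{color}{token.text}{RESET}" if color else token.text)
    return " ".join(out)

print(highlight_pos("The quick brown fox jumps over the lazy dog."))
```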
Williams [59] stated that professional writers, including academics and journalists, seemed to be satisfied with the tools available in the early 1990s. They had adapted to these tools and did not seem to be aware of other options. In the early 2000s, only writers who had used WordPerfect or other word processors "back in the days" sometimes complained about missing functionality in MS Word. Today, users are willing to try out new interfaces and new writing experiences. The implementation of applications with appealing user interfaces is easier than ever: current programming languages and toolkits allow for fast development and roll-out of responsive applications. At the same time, the assumption that any writing tool must resemble MS Word is fading, which is also driven by developments in creating, sharing, and accessing documents beyond the paper-based structure [35, 37]. For some time now, new writing applications--as stand-alone tools or integrated into other services, e.g., learning management systems or blogging software--have been developed. As with the first wave of writing tools, we again see tools originally intended for writing code being adopted for writing all kinds of texts. The shift of academic writing to include dynamic aspects of "text," e.g., code (snippets), data plots, and other visualizations, clearly supports the use of these affordances.

## 3. Where we should be going

The failed projects from the late 1980s addressed issues that can be considered general considerations for the design and implementation of writing technology:

1. user-friendly interfaces and carefully designed functionality instead of ad-hoc hacks (Sendle et al., 2016);
2. support not only for writing, but also for teaching and learning how to write (Sendle et al., 2017), including sophisticated feedback on various levels to stimulate reflection on the writing and decisions for revising and editing (Sendle et al., 2016);
3. help for interpreting system messages and feedback (Sendle et al., 2018);
4. easy extension of features based on user needs (Sendle et al., 2016);
5. real interaction with the system that enables writers to stay in control of edits (Sendle et al., 2018);
6. application of NLP technology users can trust (Sendle et al., 2018);
7. various views on the evolving text (rhetorical, linguistic, typographical, graphemic) to stimulate creativity (Sendle et al., 2019).

Most of these requirements are generic requirements for software development and emphasize the need for input from real users, both for functionality and for the user interface. Strong collaboration should be established between designers/developers of writing tools and writing researchers modeling human writing processes at multiple levels (e.g., the cognitive or the linguistic level). We are already seeing experimental applications that use recent technological possibilities to finally approach writing in ways that previous experiments could not realize: one such example is Tilio, which tried to implement the ideas proposed by Sharples (Sharples, 2017) by understanding writing as design and incorporating aspects and techniques today known as "design thinking." While this endeavor was halted by the COVID-19 situation in 2020, the technical feasibility was demonstrated in an alpha version, so we may see another attempt later.
Similarly, the combination of different features and services in an application like Scrivener--for seamless integration of idea creation, management of sources and references, connection to data tracing, and communication channels (chat and messaging)--can be seen as a functional implementation of Engelbart's ideas (Engelbart, 2018), even if the developers do not explicitly refer to them. In the late 1990s, projects like Intelligent Workstation, intended as an instance of the "fifth generation of text-editing programs" (Sendle et al., 2019), and Integrated Language Tools for Writing and Document Handling from KTH Stockholm suffered from insufficient NLP resources. Today's NLP tools make it worthwhile to reconsider the underlying ideas of those projects. They are also of interest for document creation processes, as they already abstract from the print-oriented document, which is in line with current developments: the creation of texts for documents that can be rendered according to need and display device. The Web generally allows for _dynamic_ documents with respect to form and content. Linking documents as hypertext challenges authors during writing, but can be supported using recommender functionality based on artificial intelligence (AI). The understanding of "text" changed at the turn of the century to include "interactive, hypertextual documents--many of which reside on the Internet--[which] use color, sound, images, video, words, and icons to express their messages" (Sendle et al., 2016, p. 282). This clearly requires tools that allow writers to create and edit such documents; here again, writers could be supported by powerful AI-based components. Taking into account that communication takes place on various channels with specific and complex formats emphasizes the need for structure within texts. This allows the display of the content/text according to the features of devices and tailored to the needs of readers. Writing in these scenarios used to be challenging and required knowledge of specific markup for rendering. Abandoning WYSIWYG and its focus on printed paper documents, together with the development of truly augmented and responsive writing tools based on generative AI, could actually free writers to "fully embracing the new opportunities offered by digital media" (Belle et al., 2017). Dale and Viethen (Dale and Viethen, 2017) analyze the "automated writing assistance landscape in 2021." GPT-3 was already available at that time and was integrated into several tools aimed at supporting writers as co-authors. These applications addressed specific genres like blog posts and poetry, and specific writing tasks like expanding, rewriting, and shortening texts (Dale and Viethen, 2017). Some months later, they were included as writing aids in experimental editors (e.g., 16; Dale and Viethen, 2017). However, they were not widely used and did not trigger the same discussions that we see now. We also see reimplementations of popular applications with integrated access to LLMs. One such example is Lex, intended as a "Google Docs style editor" (Dale and Viethen, 2017). It has access to GPT-3 and GPT-4 so that writers can invoke the language model to produce plausible continuations of the text, taking into account everything before the current cursor position, and to rewrite and summarize paragraphs. However, research from a writing research perspective on how humans and AI-based language models produce text through _co-creation_ is still pending at this point.
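The core interaction pattern of such an editor is a thin wrapper around an LLM API. A hedged sketch using OpenAI's Python client, with a placeholder model name and prompt; Lex's actual implementation is not public:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def continue_text(draft: str, max_tokens: int = 120) -> str:
    """Ask the model for a plausible continuation of everything
    before the current cursor position."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system",
             "content": "Continue the user's draft in the same style and register."},
            {"role": "user", "content": draft},
        ],
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content

print(continue_text("Research on writing tools started with"))
```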
In contrast to Dale's 1997 prediction of augmented language-sensitive editors within 10 years, Dale's 2021 prediction seems feasible and even close to reality, given the current pace of development in both machine-learning-based NLP and writing application implementations:

But the big shift is the transition from tools that help with editing to tools that help with authoring. It's conceivable that, in 5 years' time, no automated writing assistance tool will be considered complete without a functionality that finishes your sentences, writes you out of tight corners and fills in background paragraphs while you doze at the keyboard. And given Microsoft's exclusive licencing deal with OpenAI in regard to GPT-3, it won't be a surprise if, before too long, we see some of these capabilities as yet another item on Microsoft Word's ribbon menu. (Dale and Viethen, 2017, p. 518)

Note that this prediction does not cover language-sensitive or language-aware functionality supporting authors during production and revision with respect to semantic and pragmatic aspects.

## 4. Conclusion

To design and implement writing tools effectively and efficiently, HCI researchers must work closely with writing researchers to both foster the development of operationalizable writing models and base the implementation of writing tools on the latest insights into writing processes. Many ideas for designing writing tools that actually address the needs of writers can be gleaned from earlier projects by exploring the technical feasibility of the underlying concepts. In this way, the development of writing tools would finally respond to the demands and predictions made in the 1990s by Holt and Williams (Holt and Williams, 2018) and Dale (Dale and Viethen, 2018).
2309.14478
Incorporating Ensemble and Transfer Learning For An End-To-End Auto-Colorized Image Detection Model
Image colorization is the process of colorizing grayscale images or recoloring an already-color image. This image manipulation can be used for grayscale satellite, medical and historical images making them more expressive. With the help of the increasing computation power of deep learning techniques, the colorization algorithms results are becoming more realistic in such a way that human eyes cannot differentiate between natural and colorized images. However, this poses a potential security concern, as forged or illegally manipulated images can be used illegally. There is a growing need for effective detection methods to distinguish between natural color and computer-colorized images. This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements while proposing a model to classify natural color and computer-colorized images. The proposed model uses pre-trained branches VGG16 and Resnet50, along with Mobile Net v2 or Efficientnet feature vectors. The proposed model showed promising results, with accuracy ranging from 94.55% to 99.13% and very low Half Total Error Rate values. The proposed model outperformed existing state-of-the-art models regarding classification performance and generalization capabilities.
Ahmed Samir Ragab, Shereen Aly Taie, Howida Youssry Abdelnaby
2023-09-25T19:22:57Z
http://arxiv.org/abs/2309.14478v1
# Incorporating Ensemble and Transfer Learning for an End-to-End Auto-Colorized Image Detection Model

###### Abstract

Image colorization is the process of colorizing grayscale images or recoloring an already-color image. This image manipulation can be used for grayscale satellite, medical and historical images making them more expressive. With the help of the increasing computation power of deep learning techniques, the colorization algorithm's results are becoming more realistic in such a way that human eyes cannot differentiate between natural and colorized images. However, this poses a potential security concern, as forged or illegally manipulated images can be used illegally. There is a growing need for effective detection methods to distinguish between natural color and computer-colorized images. This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements while proposing a model to classify natural color and computer-colorized images. The proposed model uses pre-trained branches VGG16 and Resnet50, along with Mobile Net v2 or Efficientnet feature vectors. The proposed model showed promising results, with accuracy ranging from 94.55% to 99.13% and very low Half Total Error Rate values. The proposed model outperformed existing state-of-the-art models regarding classification performance and generalization capabilities.

Image Colorization, Ensemble Learning, Transfer Learning, Image Forensics, Colorization Detection.

## 1 Introduction

Image colorization is the process of adding color to a grayscale image to obtain a realistic color image. Doing this manually consumes effort and time; thanks to machine learning techniques, the colorization process can be as simple as a button click. There are three main colorization methods: scribble-based, example-based (reference-based), and fully automatic approaches. Scribble-based methods [1]-[5] are a supervised technique in which the user begins by assigning colors to pixels in the grayscale image; the system then assumes that neighboring pixels with similar intensities should have similar colors, and the result is a color image. This method is usually accompanied by trial and error to obtain satisfactory results; it relies on and is limited by the user's experience, requires a large number of experiments to achieve good performance, and is thus a time-consuming process. Example-based (reference-based) algorithms [6]-[8] are also supervised techniques that require the user to supervise the system by providing reference color images semantically similar to the grayscale image. The system then transfers the colors in the reference color image to the target grayscale image by searching for similar patterns/objects. The performance thus depends on the quality of the reference image, and selecting a suitable reference image may be difficult. In contrast to the above methods, fully automatic colorization methods [9]-[12] are unsupervised techniques that do not need user interaction or an example image; they train a neural network to predict per-pixel color histograms utilizing low-level and semantic representations.

Figure 1: (a) is the Computer Colorized Image (CCI) generated by the colorization method proposed in [9] from the grayscale version of (b); (b) is a Natural Color Image (NCI) picked from ImageNet [24].
Feeding a grayscale image to a trained, fully automatic colorization neural network outputs a color image without any interaction. The results of these methods are plausible enough that the human eye cannot reliably detect them as Computer Colorized Images (CCI). The last method (automatic colorization) receives the most research interest, and the purpose of the proposed models is to detect the colorized images resulting from automatic colorization approaches. Computer-colorized images can be used for entertainment and other low-stakes fields without affecting legal or security decisions. This includes colorizing old grayscale photos and recoloring personal images to vibrant colors other than the original colors. Legal or security usage of color images, however, requires a technique to detect whether an image is computer-colorized or raw. Using a computer-colorized image as a raw image in the healthcare, criminal identification, and urban planning fields could cause a wrong decision. Color image usage in the legal and security fields is becoming tremendous, making colorized image detection a vital step towards making the right decision in such fields. Image colorization can be categorized as a passive, pixel-based image tampering technique that can be detected using statistical analysis and the semantics of the image properties and features. Other tampering techniques, such as camera-based and physical-based ones, can be detected using different methods.

**The contributions of this paper are:**

* An end-to-end framework to classify computer-colorized and natural color images with high accuracy and best-known generalization performance.
* A new training and testing approach that uses a 1:3 ratio, one natural color image to three computer-colorized images, for training and testing the proposed model.

The performance of the proposed model is tested and compared with the state-of-the-art methods. The rest of the paper is organized as follows. Section 2 presents the related work. Section 3 introduces the proposed model. Section 4 shows the experiments and results. Section 5 discusses and compares the results to the other models' performance. Finally, Section 6 concludes the paper and discusses future work.

## 2 Related Work

In 2018, Y. Guo et al. [13] were the first to publish a detection method for colorized images, followed by other researchers who built on Y. Guo et al.'s methods (histogram-based and feature-encoding-based) using algorithms other than the support vector machine. Newly published algorithms then modified some hyper-parameters or model structures to enhance detection performance and accomplish better results than Y. Guo's, with some drawbacks such as increased training time or decreased generalization performance. Since all related papers used the Ctest10K [10] dataset and the same accuracy metric (HTER), the results are combined into one table at the end of this section, giving a summarized overview of all related work. The detection approaches are categorized into three categories: 1) hand-crafted features approaches, 2) learned features approaches, and 3) approaches merging both (handcrafted and learned features).

**Hand-Crafted Features Approaches** Y. Guo et al. [13] were the first to propose a fake colorized image detection technique; they used two approaches to detect colorized images (FCID-HIST, histogram-based, and FCID-FE, feature-encoding-based).
They used a support vector machine (SVM) classifier to capture the statistical differences between the properties of natural and auto-colorized images (hue, saturation, dark, and bright channels). As these are handcrafted methods, detection is built on prior knowledge observed from the data, which leads to a performance drop when training and testing images are produced by different colorization methods. The FCID-FE method performs better than the FCID-HIST method; FCID-FE can be modified using Fisher vectors and encoding methods other than GMM. With a more in-depth study of the common characteristics of the SOTA automatic colorization methods, FCID-FE could generalize better. Saurabh Agarwal et al. [20] used the local binary pattern (LBP) operator, often used in feature extraction, together with the Linear Discriminant Analysis (LDA) classifier for binary classification to detect CCIs. Saurabh Agarwal et al. found that their LBP-based method performs better, with lower computational cost, than [13], using mainly the ImageNet test10K dataset of [9]. Yangxin Yu et al. [23] used the lateral chromatic aberration (LCA) feature, present in natural images taken by digital cameras because of the physical imperfections of camera lenses, to differentiate NCIs from CCIs that have been colorized using computer algorithms. Their proposed method performs better than [13] and moderately better than [16], which used the low convolutional layers of a deep learning model, while [17] used a deep learning model with higher convolutional layers that performed better than the LCA feature in detecting CCIs. Zhang et al. [40] found that colorization techniques leave some degree of damage to the texture of the original image, since an image reconstruction problem arises during the encoding and decoding of the colorized image features. They proposed an SVM to classify colorized and natural images using the LBP operator. They analyzed the LBP operator in the RGB, HSV, and YCbCr spaces to find in which color space the texture artifacts are most discriminative for classifying colorized images. Their analysis shows that the texture information of the chrominance channels is the most discriminative for colorized images. They used the ImageNet validation dataset [24] and the COCO validation dataset [41] for their experiments; their proposed method achieves better accuracy than [23] and [32], as they used chrominance features that are more representative of the common features of colorized images from different sources.
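To make this family of hand-crafted pipelines concrete, the following sketch extracts uniform LBP histograms from the chrominance channels and trains an LDA classifier, roughly in the spirit of [20] and [40]; the neighborhood parameters and the hypothetical loader are our own choices, not those of the cited papers.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

P, R = 8, 1      # LBP neighborhood: 8 samples at radius 1
N_BINS = P + 2   # number of patterns for method="uniform"

def chrominance_lbp_features(rgb_image: np.ndarray) -> np.ndarray:
    """Concatenated uniform-LBP histograms of the Cb and Cr channels."""
    ycbcr = rgb2ycbcr(rgb_image)
    feats = []
    for channel in (ycbcr[..., 1], ycbcr[..., 2]):  # Cb, Cr
        lbp = local_binary_pattern(channel, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# X: stacked feature vectors; y: 0 = natural (NCI), 1 = colorized (CCI).
# images, labels = load_dataset(...)  # hypothetical loader
# X = np.stack([chrominance_lbp_features(img) for img in images])
# clf = LinearDiscriminantAnalysis().fit(X, labels)
```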
**Learned Features Approaches** Long Zhuo et al. [14] used steganalysis algorithms to detect colorized images. They assumed that when fully automatic colorization methods reconstruct the red, green, and blue channels from a single grayscale template, artifacts are inevitably introduced in the inherent statistical properties among the RGB channels of the colorized image. The task of true-color image steganalysis is to expose the artifacts hidden among the RGB channels of stego images; detecting fake colorized images is similar, so they apply a true-color image steganalyzer to detect fake colorized images generated by fully automatic colorization methods. Long Zhuo used a TensorFlow network named WISERNet (Wider Separate-then-reunion Network), a deep-learning-based, data-driven color image steganalyzer [15], and the dataset used by Y. Guo [13]; the performance was better than that of the handcrafted feature methods FCID-FE and FCID-HIST proposed by Y. Guo. Weize Quan et al. [16] tried to improve on Y. Guo et al.'s [13] results; Weize Quan used an end-to-end framework based on a convolutional neural network built on a modified BaseNet [28] architecture to automatically learn informative and generic characteristics distinguishing Natural Color Images (NCIs) from Computer Colorized Images (CCIs) created by [9]-[11]. To improve performance, they added a generalization capability to Y. Guo's work; Weize Quan's model then outperforms Y. Guo's results. Weize Quan used a proposed generalization approach of inserting negative samples, automatically constructed from the available training samples, to help enhance the network-training phase. They also added a new branch to the network architecture, borrowed from ensemble learning, to combine the predictions of a set of individually trained classifiers and extract more features. Weize Quan et al. [17] then improved the generalization capability of the detection by modifying their training phase and WISERNet; they constructed negative samples through linear interpolation of paired natural and colorized images, then progressively inserted these negative samples into the original training dataset and continued training. This enhanced training technique significantly improved the generalization performance of different CNN models used in the detection process, at the cost of a slight decrease in classification accuracy. Weize Quan et al. [19] found that the data preparation phase affects the generalization performance of the detection process, and that JPEG compression of CCIs badly affects the generalization performance of both Y. Guo et al. [13] and Long Zhuo et al. [14]. Using WISERNet's first layer, they improve classification accuracy when CCIs from the same colorization algorithm are used in training and testing, and generalization performance when CCIs from different colorization algorithms are used in testing and training. The CCIs they used are from the three colorization algorithms [9], [10], [11], with their corresponding NCIs from the ImageNet test10K of [10]. They found that, to improve detection accuracy and generalization performance, all training and testing images with JPEG compression should be opted out of the dataset used. Ulloa C. et al. [21] proposed a custom NN model to detect CCIs. This model uses images resulting from automatic and manual colorization algorithms. They compared their model with VGG16 [22]: VGG16 outperforms their model in accuracy, but takes three times longer to train and twelve times longer in inference, so their custom model is a better solution for high-volume image classification. They also found that the transfer-learning-based model VGG16 outperforms all previously proposed models that use WISERNet [15].
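At its core, the negative-sample construction of [17] is a per-pixel linear blend of a paired natural and colorized image. A minimal numpy sketch (the blend weights are illustrative):

```python
import numpy as np

def make_negative_samples(nci: np.ndarray, cci: np.ndarray,
                          alphas=(0.25, 0.5, 0.75)) -> list[np.ndarray]:
    """Linearly interpolate a paired natural (NCI) and colorized (CCI)
    image to create negative samples for training, as described in [17]."""
    nci = nci.astype(np.float32)
    cci = cci.astype(np.float32)
    return [(a * nci + (1.0 - a) * cci).astype(np.uint8) for a in alphas]

# During training, these blends are progressively inserted into the
# original dataset and training continues.
```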
**Merging handcrafted and learned features** Yuze Li et al. [18] analyzed the statistical differences between CCIs and their corresponding NCIs to detect the significant differences; they also used cosine similarity to measure the overall similarity of the normalized histogram distributions of various channels of natural and colorized images to extract features for detection. They studied the statistical differences in the color distribution between NCIs and CCIs and found that the RGB color space carries redundant information that leads to insufficient feature differentiation, so they used RGB (red, green, blue) together with HSV (hue, saturation, value) representations to extract more features. Yuze Li et al. used a modified DenseNet [27] called ColorDet-NN with the same test10K dataset [10] commonly used by the other approaches. The feature analysis they performed on the output images of the automatic colorization methods determined the handcrafted features that commonly enable detecting CCIs; the most significant feature they found is the distribution of the color saturation channel between NCIs and CCIs. Their model performs better than Y. Guo's [13]. However, performance drops when the training and testing image sets are drawn from different datasets, a drawback in terms of the generalization performance of their algorithm. Bonthala Swathi et al. [37] proposed a deep convolutional neural network (CNN) whose inputs are the HSV color image, the inter-channel-correlation difference image together with the computed RGB version of the same image as one input, and, as the third input, the illumination mapping of the RGB input image. The inter-channel correlation is the relationship between the channels (H, S, V) of the input image. Because these channels are not independent, recoloring an image can leave an artifact that serves as a feature to identify the colorized image. Illumination mapping is used to maintain the consistency of illuminant colors in an image; this consistency cannot be maintained for a colorized image. Bonthala Swathi et al. used inter-channel correlation and illumination mapping for detecting recolored images, as the correlation may be disrupted or altered after a recoloring process, and the illumination mapping of a recolored image is inconsistent; both are fused with the original input image to train a convolutional neural network to classify colorized and normal images. They used a MATLAB CNN with 16 layers on VOC PASCAL 2012 [38], and their model's accuracy was 100%, without reporting the sizes of the training, validation, and testing subsets. The generalization performance of their model across different colorization algorithms is also not discussed, and their experimental results do not state the number of images used in training and testing, which prevents comparison with other methods. Phutke et al. [39] observed from their proposed channel difference maps of fake-to-fake and real-to-real images that fake colorized images have blurred edges and fewer color shades. They proposed an auto-encoder based on difference image regeneration followed by a fake colorized image detection framework; their architecture first concatenates the channel differences and feeds them to the classification framework (dense module) to correlate the color and edge information from each channel for effective image regeneration, and a decoder module then regenerates the input image. After training this regeneration network, they used a transfer learning approach, taking the trained encoder weights as initial weights for the proposed classification network, denoted DCDNet.
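The channel difference maps of [39] can be computed directly; a small sketch in which the channel pairing is our reading of the description, not the authors' exact definition:

```python
import numpy as np

def channel_difference_maps(rgb_image: np.ndarray) -> np.ndarray:
    """Stack pairwise channel differences (R-G, G-B, R-B), which are
    then concatenated and fed to the regeneration/classification network."""
    img = rgb_image.astype(np.float32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.stack([r - g, g - b, r - b], axis=-1)
```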
Shashikala S. et al. [42] used three corresponding 2D scalograms, one for each HSV channel of an image; three separate modified DenseNet [27] classifiers are then trained to classify each 2D scalogram image as NCI or CCI. The results of the three classifiers are combined by an ensemble learning approach to calculate the probability of the image being an NCI or a CCI. They used 5,000 NCIs from ImageNet [24] and 5,000 CCIs from Ctest10K [10]. The previously proposed methods that used the same metrics and dataset are compared in Table 1, each with its contribution, dataset, and detection algorithm. In Table 1, each paper is listed by its reference index together with the corresponding dataset, detection model/algorithm, and a summary note about its accuracy, which gives quick guidance when reading any of these papers from their source. Table 1 lists all previous work, which mostly used the ImageNet Ctest10K dataset, with some modifications such as the JPEG compression used in [19]. Others [18], [20], [21], [23] added further datasets (Oxford Buildings [25], CG-1050 [26], etc.) to be detected. The above diagram shows the related-work timeline according to publishing date. The proposed models of the related-work papers fall into three categories. 1) Handcrafted features approaches use the CCIs and NCIs to statistically extract the features that can manually differentiate CCIs from NCIs by providing them to the classification model. 2) Learned features approaches use features automatically learned and extracted by the neural network

\begin{table}
\begin{tabular}{|c|l|l|l|}
\hline
Paper & Detection Model/Algorithm & Dataset & Research Findings \\
\hline
[13] & SVM & ImageNet test10K [10] & FCID-FE is better than FCID-Hist. \\
\hline
[14] & WISERNet [15] & ImageNet test10K, Oxford Buildings DS [25] & Accuracy better than [13] \\
\hline
[16] & DCNet & ImageNet test10K (removing gray and CMYK images, 900 images), CCIs from [10] \& [11], Oxford Buildings DS & Outperforms [13] \\
\hline
[17] & AutoNet: WISERNet [15] & ImageNet test10K [10] & Negative sample insertion improved detection accuracy \\
\hline
[18] & ColorDet: DenseNet [27] with four hidden layers & ImageNet test10K, CCIs from [10] \& [11] & Accuracy \(>\)88\% (train \& test using same colorization technique); accuracy \(>\)73\% (train \& test using different colorization techniques); overall accuracy better than [13] \\
\hline
[19] & Modified WISERNet [15]: 30 5\(\times\)5 SRM residual filters, 1st layer trainable & ImageNet test10K [10] & JPEG compression affects the detection in [13] and [14] \\
\hline
[20] & Linear Discriminant Analysis (LDA) classifier & ImageNet test10K [10], 1519 images from [12], 5150 images from [29], 1338 images from [30], 10000 images from [31] & Better performance with lower computational cost than [13] \\
\hline
[21] & Custom model: 3 conv. layers; Transfer learning model: VGG-16 (13 conv. layers) & CG-1050 [26], Ctest10K [10] & VGG16 [22] has higher accuracy but higher training (3\(\times\)) and inference (12\(\times\)) time \\
\hline
[23] & Lateral chromatic aberration (LCA) feature & ImageNet test10K, Oxford Buildings DS & Higher performance than [13]; moderately better than the low CNN of [16]; falls behind AutoNet \\
\hline
[37] & CNN, 16 layers & VOC PASCAL 2012 [38] & Sizes of the training, validation, and testing subsets not determined; generalization performance needs assessment. \\
\\ \hline [39] & Channel Difference Map- & ImageNet test10K[10] & Accuracy and generalization \\ & based Auto-Encoder, & Oxford building DS[25] & performance better than [13] \& [14] \\ & DCDNet & & \\ \hline [40] & SVM & COCO validation dataset [41] & Accuracy better than [23] \\ \hline [42] & Modified Densenet [27] & ImageNet test10K[10] & Accuracy better than [21] by 2.2\% \\ \hline \end{tabular} \end{table} Table 1: Shows a detailed comparison of related work detection approaches with the used datasets, the model proposed, and a note on the resultant accuracy and performance. models named in front of each paper in table (1) with its architecture or the base network it has been built on with some layer modifications. 3) Mixed handcrafted and automatic learned features approaches used the statistical analysis of both CCI and NCIs to get the most features found to differentiate, then feeding those features to their proposed model ColorDet that has been built on DenseNet [27] with four hidden layers, which provides a good technique to get the most benefits from the two previous categories. Despite their model performance's acceptable accuracy, the generalization performance degrades when the colorized images are from another colorization algorithm that their model trained on. The notes on the accuracy comparison between the related work papers in Table (1) are based on the results stated in each. The accuracy of the [13]'s method FCID-FE was first found to be better than FCID-Hist, then when others as [14], used a neural network instead of a support vector machine and with some modification in the training phase as a step of generalizing the detection accuracy found to be better than FCID-FE method of [13]. Weize Quan et al. [19] model's accuracy and generalization capability decrease _Table 2, Compares SOTA models used to detect fake colorized images. Results are in half-total error rate (HTER), where lower is better._ In Table (3), Ulloa, C et al. [21] compared their transfer learning VGG-16-based model with other state-of-the-art models using the difference between internal and external validation HTER (Half Total Error Rate); the [21] results show higher classification performance with the best generalization accuracy for VGG-16-based model. 
_Table 3 shows that the VGG-16-based model of [21] achieves the best detection accuracy with the lowest HTER difference._

\begin{tabular}{|l|c|c|c|} \hline Algorithm & Dataset & Internal Validation HTER & External Validation HTER & HTER Difference (External\(-\)Internal) \\ \hline WISERNet [14] & [9]+[10]+[11] & 0.95 & 22.5 & +21.55 \\ \hline WISERNet-i [17] & [9]+[10]+[11] & 0.89 & 4.7 & +3.81 \\ \hline Custom model [21] & [9]+[26] & 9.00 & 16.0 & +7.0 \\ \hline VGG-16-based model [21] & [9]+[26] & 2.60 & 2.9 & +0.3 \\ \hline \end{tabular}

The results above guide the choice of the best approach for building the proposed models with the best classification and generalization accuracy. This study covers the gaps in the related work through the evaluation above, answering the research questions and objectives. It shows the potential of ensemble learning and transfer learning approaches for detecting auto-colorized images with higher precision and generalization capability than all previously published methods.

## 3 Proposed Model:

The intuition behind the proposed models is the combined use of transfer and ensemble learning; exploiting their benefits gives the models high accuracy with less training time and fewer resources. The proposed models, as illustrated in Figures 1, 2, and 3, have two branches.
The first branch is the pre-trained branch, either the VGG16 feature vector or the ResNet50 feature vector, used to extract the basic features of the training datasets; the second branch is the MobileNetV2 or EfficientNet feature vector, trained to extract features from colorized and natural color images. The features of both branches are then concatenated as input to the dense top layer of the model, whose two neurons are trained to classify the two image classes based on the features extracted by the preceding branches, yielding the final decision of whether the image is colorized or natural. The proposed models' architecture comprises three phases (preprocessing, feature extraction, and classification), as illustrated in Figures 1, 2, and 3; each phase is discussed in detail in the following subsections.

Figure 1: VGG16-based Model (1) architecture

### Preprocessing Phase

In this phase, the datasets used for training and testing are preprocessed in the same way by resizing the images, normalizing pixel values, and shuffling. The preprocessing steps are as follows:

**Step 1: Image resizing.** To ensure uniform processing and optimize resource utilization during training and testing, the 10,000 natural images and their corresponding 30,000 colorized images in the three datasets DS1, DS2, and DS3 were resized from their original sizes to 224 x 224 pixels.

**Step 2: Image normalization.** To improve the model's robustness to variations in lighting, color, and other features, Min-Max normalization was employed to scale the pixel values of all images in the dataset to the range 0-1. This was achieved by a normalization layer with a scaling factor of 1.0/255, applied to the entire dataset to ensure consistency in the normalization process.

**Step 3: Image shuffling.** To prevent overfitting and ensure that the model is exposed to diverse examples of each class, the images in the dataset were shuffled to distribute the classes randomly. This enhances the model's ability to learn from various examples and improves its generalizability.

### Feature Extraction Phase

In the feature extraction phase, deep neural networks are employed to extract features relevant to the images' output labels; this approach was chosen over handcrafted features. Given the large number of images used for training and testing, drawn in part from the ImageNet dataset of about one million images, transfer learning and fine-tuning techniques were used to reuse the features and weights of models pre-trained on ImageNet. The feature vector of a model is the base component used in the ensemble learning architectures.

Figure 2: ResNet50-based Model (2) architecture

Figure 3: VGG16-EfficientNet-based Model (3) architecture

In a two-level deep learning model, the feature vector is referred to as the level-0 layer of the model. The feature vector is obtained by removing the top layer of the model; it is then used to extract the fundamental characteristics, or features, of the dataset images. These extracted features are fed to the subsequent level, a custom level-1 layer sized to the number of classes to be classified; this custom level-1 layer is trained to classify the two classes (colorized or original). Using feature vectors allows efficient feature extraction and improves classification performance [34].
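To make the two-branch construction concrete, the following is a minimal TensorFlow/Keras sketch of such a composite model, with a frozen pre-trained branch and a trainable branch whose pooled outputs are concatenated before a two-neuron dense classifier. The pooling choices, optimizer, and loss are illustrative assumptions, not the exact configuration used in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, MobileNetV2

IMG_SHAPE = (224, 224, 3)  # images are resized to 224x224 in the preprocessing phase

# Branch 1: frozen pre-trained feature vector (VGG16 here; ResNet50 in model 2)
vgg = VGG16(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
vgg.trainable = False

# Branch 2: trainable feature extractor (MobileNetV2 here; EfficientNet in model 3)
mob = MobileNetV2(include_top=False, weights="imagenet",
                  input_shape=IMG_SHAPE, pooling="avg")
mob.trainable = True

inputs = layers.Input(shape=IMG_SHAPE)
x = layers.Rescaling(1.0 / 255)(inputs)   # Min-Max normalization to [0, 1]
vgg_features = layers.Flatten()(vgg(x))   # level-0 feature vector of branch 1
mob_features = mob(x)                     # pooled level-0 feature vector of branch 2

# Level-1 layer: concatenate both feature vectors and classify the two classes
features = layers.Concatenate(axis=-1)([vgg_features, mob_features])
outputs = layers.Dense(2, activation="softmax")(features)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping the frozen branch for ResNet50 or the trainable branch for EfficientNet yields the sketches of models 2 and 3, respectively.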
Specifically, the pre-trained models (VGG16, ResNet50), trained to classify the 1000 image categories of the ImageNet dataset, are used as the base of the proposed models. The prior knowledge of these pre-trained models provides the basic features, which are concatenated with the output of the proposed model's second branch (MobileNetV2 or EfficientNet), whose layers are trainable so that it learns the features of the natural and computer-colorized image datasets. This demonstrates stacked ensemble learning, employing the two deep neural network models on the same dataset: the outputs of these models are concatenated and fed as input to the level-1 layer of the classification phase. This approach allows for exploiting the strengths of the two models. The whole framework for the proposed model 1 algorithm is shown below.

```
Input: Preprocessed datasets containing colorized and original color images
Output: Detection label (Colorized or Original)
1: Tr_DS, Val_DS = DS.split
2: Create a normalization rescaling layer (1.0/255)
3: Map the normalization layer onto Tr_DS and Val_DS
4: Resize images to 224x224
5: Shuffle
6: Cache and prefetch Tr_DS & Val_DS
7: Vgg_model = load_vgg16_model with initial ImageNet weights
8: Set all vgg_model layers' trainable property to False
9: Mob_model = load_mobilenet_v2_model with initial ImageNet weights
10: Set all mob_model layers' trainable property to True
11: inputs = create_input_layer
12: Vgg_output = flatten_vgg_output(vgg_model, inputs)
13: Mob_output = get_mobilenet_v2_output(mob_model, inputs)
14: x = concatenate_features(vgg_output, mob_output), concatenated along the last axis
15: outputs = Dense layer with two neurons taking the features (x) as input
16: Proposed_model_1 = create_composite_model(inputs, outputs)
17: Train proposed model 1 using the preprocessed Tr_DS and Val_DS
```

For proposed model 2, VGG16 is replaced with ResNet50; for proposed model 3, MobileNetV2 is replaced with EfficientNet.

## 4 Experiment and Results

Using the dataset of Y. Guo et al. [13], the dataset most commonly used for evaluation, is the best approach for comparing the performance of our three proposed models with other models that classify natural and auto-colorized images. This dataset contains DS1, DS2, and DS3. DS1 and DS2 each contain approximately 40,000 images, distributed as follows: 1. Ten thousand (10,000) natural images selected from the 50,000-image ImageNet [24] validation dataset. 2. Thirty thousand (30,000) images auto-colorized from the grayscale versions of the 10,000 images, one set for each of the three state-of-the-art colorization methods [9]-[11]. Since DS1 and DS2 each comprise 40,000 images, the two datasets together contain almost 80,000 images. DS3 comprises the 5063 images of the Oxford Buildings dataset [35] and their three colorized versions from the automatic colorization approaches [9]-[11], for a total of 20,252 images.

### Experimental Results

In this study, a rigorous experimental design was employed to ensure the validity of the results. The training and testing samples were carefully selected to avoid overlap, which is crucial for avoiding overfitting and ensuring that the model generalizes well to unseen data. The TensorFlow pipelining technique was used to prefetch a part of the DS1 dataset, 11,997 images, to train the three proposed models. This subset is almost a third of the total DS1 size.
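A minimal sketch of such a cached and prefetched input pipeline follows; the directory layout, batch size, and split parameters are illustrative assumptions, not the study's actual settings.

```python
import tensorflow as tf

# Hypothetical layout: one subfolder per class (natural / colorized)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "DS1_subset", image_size=(224, 224), batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "DS1_subset", image_size=(224, 224), batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

# Cache decoded images and prefetch batches to overlap I/O with training
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(AUTOTUNE)
val_ds = val_ds.cache().prefetch(AUTOTUNE)

# model.fit(train_ds, validation_data=val_ds, epochs=10)
```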
Shuffling the dataset files gave a training sample distribution of 3106 natural color images and 8898 images from the three colorization methods, a natural-to-colorized ratio of 0.349. The validation dataset's natural-to-colorized ratio is 0.243. Proposed model 1 achieves a training accuracy of 99.74% and a validation accuracy of 98.49%. Proposed model 1 was tested using the \(\sim\)40,000 images of DS2 (another 10,000 ImageNet images with their three auto-colorized versions from the three colorization methods), keeping the same class distribution as DS1 (three colorized images to one natural image); the resulting accuracy was 96.52%. Testing proposed model 1 on the DS3 dataset gave an accuracy of 97.16%. Proposed model 1 was also tested using the remaining 27,991 DS1 images (the 40,000 images excluding the \(\sim\)12,000 previously used for training), then using the \(\sim\)40,000 DS2 images, and then using DS3; all datasets have the same class distribution (three colorized images to one natural). The best accuracy, 98.93%, was obtained on the DS1 test. With proposed model 2, which uses a ResNet50 pre-trained feature-vector branch instead of VGG16, keeping all other hyper-parameters unchanged and repeating the same training steps on the same datasets as proposed model 1, the training accuracy was 99.68% and the validation accuracy 97.49%. Proposed model 3 (with the EfficientNet branch) used the same training and testing procedures. The proposed models were evaluated using accuracy (higher is better) and HTER (lower is better). HTER is calculated as

\[\mathrm{HTER}=\frac{1}{2}\left(\frac{FP}{TN+FP}+\frac{FN}{TP+FN}\right),\]

where TP, TN, FP, and FN are the true positive, true negative, false positive, and false negative counts, respectively. The proposed models used a new training approach, with sets of one natural image to three colorized images produced by the three different colorization methods [9]-[11]. After training, the proposed models can classify colorized images produced by state-of-the-art auto-colorization methods with high accuracy. The models are built on pre-trained networks with initial ImageNet weights and fine-tuned using part of DS1, which comprises ImageNet images and three colorized versions of each image. Testing on images from the Oxford Buildings dataset [25] and their colorized versions (produced with the same colorization algorithms used for ImageNet) shows high classification performance. The results of testing the three proposed models are listed in Table (4); bold results indicate the best performance.

## 5 Discussions:

Of the three proposed models, proposed model 1 had both the best classification accuracy and the best generalization performance; accordingly, it is used for comparison with the other models. The results of proposed model 1 from Table (4) and Table (3) are compared using the internal validation HTER (testing the model on DS1) and the external validation HTER (testing the model on DS2).
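For reference in the comparisons that follow, a small illustrative helper (not code from the paper) that computes HTER from confusion-matrix counts:

```python
def hter(tp: int, tn: int, fp: int, fn: int) -> float:
    """Half total error rate: the mean of the false positive and false negative rates."""
    fpr = fp / (tn + fp)   # false positive rate
    fnr = fn / (tp + fn)   # false negative rate
    return 0.5 * (fpr + fnr)

# Example: 10 false positives among 1000 negatives, 20 false negatives among 1000 positives
print(hter(tp=980, tn=990, fp=10, fn=20))  # 0.015
```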
Figure 4: (a) Model 1 DS1 test confusion matrix (C.M.), (b) Model 1 DS2 test C.M., (c) Model 1 DS3 test C.M., (d) Model 2 DS1 test C.M., (e) Model 2 DS2 test C.M., (f) Model 2 DS3 test C.M., (g) Model 3 DS1 test C.M., (h) Model 3 DS3 test C.M.

Shashikala S. et al. [42] compared their model with the state-of-the-art models used to detect auto-colorized images; in Table (5), their results and this research's proposed model 3 are compared in terms of accuracy and HTER. Proposed model 3 was selected because it is the best of the three proposed models in terms of classification accuracy and HTER.

\begin{tabular}{|c|c|c|} \hline Model & Accuracy & HTER \\ \hline Shashikala S. et al. [42] & 0.949 & 18.3 \\ \hline \end{tabular}

Proposed model 1 achieves an HTER of 0.017 when tested on unseen data from the same image dataset, colorized using the same colorization methods, and an HTER of 0.058 when tested on unseen data from a different image dataset colorized using the same methods. Proposed model 1 thus outperforms all the related-work models, with excellent generalization performance. Although proposed model 1 is trained using only one third of the DS1 dataset images, it achieves a lower HTER, indicating higher accuracy. It should be noted that most of the related-work papers evaluated their models using 2000 natural images with their corresponding 2000 colorized images; the proposed models were evaluated using datasets more than ten times larger than the subsets used by most related methods, supporting the reliability of their superior performance. In a critical assessment of the proposed models, the potential impact of dataset variations on performance and generalization capability must be considered. Some of the related-work models used a cross-validation methodology in which the dataset is divided into subsets for training and testing, which introduces potential differences from the subset distributions used for this research's models. These variations in dataset division and distribution challenge direct comparisons between their results and the proposed models. Nevertheless, our proposed models demonstrate exceptional accuracy and generalization capability when tested using the same dataset with a higher number of images and the same distribution as that used for training.

## 6 Conclusion and Future Work:

This study uses ensemble and transfer learning techniques to develop end-to-end models for detecting auto-colorized images. The proposed models achieve the best overall accuracy and generalization performance compared to previously published work in this area. The datasets commonly used for training and testing colorized-image detection models were used, enabling better comparison with the proposed models. Proposed model 1, utilizing VGG16 and MobileNetV2 branches, achieves a slightly lower detection accuracy than proposed model 3 but shows exceptional generalization performance, with an accuracy of 97%, whereas proposed model 3 achieves the highest classification accuracy of 99.13% when the same dataset is used for training and testing. The evaluation metrics employed in this study are accuracy and half total error rate (HTER).
The findings of this research show the effectiveness of combining ensemble learning and transfer learning approaches. The proposed models achieve high accuracy in detecting auto-colorized images produced by state-of-the-art colorization methods. In future work, the plan is to evaluate the proposed models' performance in detecting example-based and scribble-based colorization methods and to test them against new fake colorized image datasets. This assessment will provide valuable insights into the generalization capabilities of our models and their potential to address various colorization techniques. Further investigation into the impact of dataset variations is also needed to better understand the limitations of our proposed models.
2309.13317
Class Attendance System in Education with Deep Learning Method
With the advancing technology, the hardware gain of computers and the increase in the processing capacity of processors have facilitated the processing of instantaneous and real-time images. Face recognition processes are also studied in the field of image processing. Facial recognition processes are frequently used in security applications and commercial applications. Especially in the last 20 years, the high performance of artificial intelligence (AI) studies has contributed to the spread of these studies in many different fields. Education is one of them. The potential and advantages of using AI in education can be grouped under three headings: student, teacher, and institution. One of the institutional studies may be the security of educational environments and the contribution of automation to education and training processes. From this point of view, deep learning methods, one of the sub-branches of AI, were used in this study. Using image processing algorithms for object detection from images, a pioneering study has been designed and successfully implemented to keep records of students' entrance to the educational institution and to perform class attendance with images taken from the camera. The application of the study to real-life problems will be carried out in a school determined in the 2022-2023 academic year.
Hüdaverdi Demir, Serkan Savaş
2023-09-23T09:22:58Z
http://arxiv.org/abs/2309.13317v1
###### Abstract

With the advancing technology, the hardware gain of computers and the increase in the processing capacity of processors have facilitated the processing of instantaneous and real-time images. Face recognition processes are also studied in the field of image processing. Facial recognition processes are frequently used in security applications and commercial applications. Especially in the last 20 years, the high performance of artificial intelligence (AI) studies has contributed to the spread of these studies in many different fields. Education is one of them. The potential and advantages of using AI in education can be grouped under three headings: student, teacher, and institution. One of the institutional studies may be the security of educational environments and the contribution of automation to education and training processes. From this point of view, deep learning methods, one of the sub-branches of AI, were used in this study. Using image processing algorithms for object detection from images, a pioneering study has been designed and successfully implemented to keep records of students' entrance to the educational institution and to perform class attendance with images taken from the camera. The application of the study to real-life problems will be carried out in a school determined in the 2022-2023 academic year.

Keywords: Face recognition, deep learning, artificial intelligence in education, HOG

## Introduction

With the advancing technology, the hardware gain of computers and the increase in the processing capacity of processors have facilitated the processing of instantaneous and real-time images. In this way, studies in the field of image processing have increased rapidly. The images taken from cameras are processed on hardware-powered computers, and researchers have done many studies on topics such as (Ucar, 2019):

* Recognition of the visible object,
* Face detection and recognition,
* Tracking the face,
* Recognizing human emotions,
* Human gender and age determination,
* Sign language detection.

The application of the face recognition technique is divided into two main parts: security applications and commercial applications. Facial recognition technology has primarily been used in security applications, especially in photo albums for criminal records and in video surveillance (real-time matching with video footage sequences). Commercial applications range from the static matching of photos on credit cards, ATM cards, passports, driver's licenses, and photo IDs to real-time matching with still images or video image sequences for access control. Each application has different restrictions in terms of processing information and obtaining results (Tolba et al., 2006). In educational institutions, systems are designed with student safety in mind. Many technological methods, such as Radio Frequency Identification (RFID), wireless communication, fingerprint, iris, and advanced face recognition, are tested and developed in security systems. Most of these methods have high system installation costs and have some advantages and disadvantages. Considering that school budgets are not too high, an automatic attendance system was designed using the existing technological infrastructure without the cost of extra equipment.
Especially in the last 20 years, the high performance of artificial intelligence (AI) studies has contributed to the spread of these studies in many different fields. Education is one of them. The potential and advantages of using AI in education can be grouped under three headings: students, teachers, and institutions (Savas, 2021). One of the institutional studies may be the security of educational environments and the contribution of automation to education and training processes. From this point of view, deep learning (DL) methods, one of the sub-branches of AI, were used in this study. In the study, image processing algorithms were used for object detection from images, student entrance records were kept with the images taken from the camera, and a pioneering study was designed to perform class attendance.

### Literature Review

Eldem and Palali (2017) used the Open Source Computer Vision Library (OpenCV) and image processing libraries in their study. The OpenCVSharp component, developed for the C# programming language to work in harmony with OpenCV, was used. In the system, images of people were taken using the camera and the facial regions were marked with the Haar cascade structure. The faces registered in the database and the faces from the camera were compared using the template matching method. In this study, a success rate of 79% was achieved in face recognition (Eldem, Eldem, & Palali, 2017). Kaplan (2018) used the Haar cascade classifier to determine whether there is a face in any image. In the study, the AForge.NET software library was used to speed up image processing, and the EmguCV software library, which uses timer logic, was used to prevent lag in the interface. As a result, in this thesis study, face detection in images was successfully performed in real time to meet the needs of the system (Kaplan, 2018). In his thesis, Ucar (2019) determined the distraction rates of students in a classroom environment in which their faces are recognized in real time, their head directions are tracked, and their head movements are interpreted. He used the OpenCV and Dlib image processing and machine learning libraries in his system. In the developed application, different head directions and facial expressions of the students were photographed and recorded. Using the Local Binary Patterns method on this training dataset, the students' face recognition model was created. Photos labeled as "Careful" and "Careless" were detected using the support vector machine algorithm. As a result of the tests, the success rate of the system was determined as 72.4% (Ucar, 2019). The study by Savas et al. (2017) aimed to count the human faces in photographs taken by a mobile device and to calculate the occupancy capacity of the environment from this number. In the study, the haarcascade_frontalface_alt and haarcascade_mcs_eyepair_big algorithms were used to detect faces and eyes in photographs taken with smart devices. As a result, the haarcascade_frontalface_alt algorithm worked more efficiently in the study (Savas, Becerikli, & Ilkin, 2017).

### Method

The architecture of the study is shown in Figure 1. As seen in Figure 1, the images of the students are taken with the camera at the entrance of the classroom, and the face of the student is detected using DL-based image processing algorithms. The image data obtained are compared with the class database and the student's attendance information is recorded.
The steps of this process are as follows:

* Creating the dataset from student images (by class)
* Obtaining the lecture attendance image via the camera
* Face detection using Convolutional Neural Network (CNN) and Histogram of Oriented Gradients (HOG)
* Face identification
* Registration of attendance records as a result of face recognition

Figure 1: Architecture of the study

### Dataset

Before applying the designed study to real-world problems, design and development work was carried out. A dataset was created from web photos of real users and of different popular names; it is easy to access during the development phase and provides the opportunity to act according to the situation. In the study, the images taken from the camera were compared against this dataset. The images added to the dataset were recorded with an ID and the person's name and surname. To prevent more than one person from entering the camera angle, the resolution was initially set to 150x150 pixels.

### Face Detection Technique

Face detection finds the coordinates and size of face objects in an image or video sent to the system. In face detection, some problems are usually encountered in images obtained from uncontrolled environments. These problems can be listed as [14]:

* Pose variation: The ideal scenario for face detection is one that only includes frontal images, but under general uncontrolled conditions this is unlikely. Moreover, the performance of face detection algorithms degrades drastically when there are large pose variations. A change in pose may also result from the person's movements or the angle of the camera.
* Attribute occlusion: The presence of elements such as a beard, glasses, or a hat introduces high variability. Faces can also be partially covered by objects or other faces.
* Facial expression: Facial features also vary greatly due to different facial movements.
* Viewing conditions: Different cameras and environmental conditions can affect the appearance of a face and the quality of the image.

To maximize the accuracy of the face recognition system, the face detection algorithm used must itself have the highest possible accuracy. If faces are not detected correctly, problems may occur during attendance taking, the system may malfunction, or it may have to be restarted.

Figure 2: Sample images of the dataset

In the system designed here for educational institutions to use on real-world problems, these problems are generally kept under control, since the student photos to be included in the dataset will be taken by the institution. Beyond the problems overcome while creating the dataset, students will also be instructed so that the camera can get a good view during the attendance process. The core information processing of the real-time attendance system was implemented in the Python programming language using the face-recognition library. This library is built on the face recognition features of the Dlib library and uses DL. The accuracy of the model reached 99.38% in different studies (Geitgey, 2020).
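As an illustration, a minimal sketch of how the face-recognition library is typically used for this kind of matching; the file paths are hypothetical, and the tolerance value shown is the library default discussed later:

```python
import face_recognition

# Encode a known student photo from the class dataset (one 128-number encoding per face)
known_image = face_recognition.load_image_file("dataset/01_student.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Detect and encode faces in a frame captured at the classroom entrance
frame = face_recognition.load_image_file("frame.jpg")
locations = face_recognition.face_locations(frame, model="hog")  # HOG detector, as in this study
encodings = face_recognition.face_encodings(frame, known_face_locations=locations)

for encoding in encodings:
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print("match" if match else "unknown", f"(distance = {distance:.3f})")
```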
Dlib includes machine learning (ML) and DL algorithms (Recursive Least Squares, Support Vector Machine, K-Means, CNN, Deep Neural Network, Artificial Neural Network, Sequential Minimal Optimization) and tools (Speeded Up Robust Features, HOG, Fast HOG, Color Space Conversions) for building complex C++ software to solve real-world problems. It is an open-source library used in both industry and academia in a wide variety of fields, including robotics, embedded devices, mobile phones, and large high-performance computing environments (Dlib, 2022; Piskin, 2018). The Open Source Computer Vision Library (OpenCV), which is used in the designed architecture, is an open-source computer vision and ML software library. The library has more than 2500 optimized algorithms, comprising a comprehensive set of computer vision and ML algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, monitor camera movements, track moving objects, extract 3D models of objects, generate 3D point clouds from stereo cameras, and combine images to achieve high resolution (OpenCV, 2022). There are two different face detection models in the face-recognition library used in the designed architecture: HOG and a CNN-based model used for DL face detection. In this study, face detection was performed using HOG. CNN, on the other hand, is widely used in video processing applications where there are many video frames to be processed, and it can be up to 3x faster for batch processing if large numbers of images are to be processed and a graphics processing unit (GPU) with the Compute Unified Device Architecture (CUDA) is used. However, the HOG model, which is faster, was preferred in this study both for hardware cost reasons and because less processing is required in educational institutions.

#### Face Detection with HOG

Face detection is one of the most challenging problems in ML. The HOG used in this study is a feature descriptor used in machine vision for processing digital images to detect objects; it is also widely used for detecting moving objects (Aditya et al., 2022). Feature extraction using HOG with the gradient direction and gradient magnitude is shown in Figure 3. The direction and magnitude calculation formulas are given in Equation (1) and Equation (2), respectively (Aditya et al., 2022).

\[\text{Direction:}\quad\theta=\tan^{-1}\!\left(\frac{\partial f/\partial y}{\partial f/\partial x}\right) \tag{1}\]

\[\text{Magnitude:}\quad\|\nabla f\|=\sqrt{\left(\frac{\partial f}{\partial x}\right)^{2}+\left(\frac{\partial f}{\partial y}\right)^{2}} \tag{2}\]

After successfully calculating the gradient direction and magnitude for each pixel using Equation (1) and Equation (2), features are extracted, as shown for example in Figure 4.

Figure 3: Feature extraction using HOG (Aditya et al., 2022)

Figure 4: Example of image detection using HOG (Aditya et al., 2022)

#### Face Recognition

In the face recognition phase, the important measurements of the face area of all the photographs in the dataset were recorded with the face recognition algorithm. These measurements are a set of Red/Green/Blue (RGB) values that the algorithm learned only from the data samples provided to it. The algorithm used for face recognition notes some important metrics on the face, such as the color, size, and slope of the eyes and the gap between the eyebrows. All of this together defines the face encoding (the information derived from the image) used to identify the particular face. The face encoding consists of 128 numbers.
Each of these numbers represents an orthogonal component of the face encoding. Figure 5 shows sample values obtained from faces in the dataset.

Figure 5: Sample face encoding (Solegaonkar, 2019)

In the next step, the similarities between faces are determined: each component of the compared faces is checked to see whether it stays within the tolerance limits. The two arrays seen in Figure 6 indicate the similarity of the given image (the second parameter) to each of the known face encodings in the provided list (the first parameter). The first array in the figure shows much higher similarity, indicating that it accurately describes the person (Solegaonkar, 2019).

Figure 6: Code similarities of faces (Solegaonkar, 2019)

If multiple matches are reported for the same person, people in the dataset may look very similar to each other in the photos. In such a case, a lower tolerance value is required to make face comparisons more stringent; more accurate identifications can be made by changing the value of the tolerance parameter. The default tolerance value is 0.6, and lower values make face comparisons stricter. As a result of the comparisons, the entry records are stored in a .csv file for later use (Face Recognition, 2017).

## Results and Discussion

The performances of the two face detection models (HOG and CNN) in the face-recognition library used in the study were also compared. In the experiments, the HOG model captured images from the camera faster and sustained a better frame rate (frames per second, FPS). Image capture and FPS rates are shown in Table 1.

\begin{table} \begin{tabular}{|c|c|c|} \hline Model Name & Image Capture Speed (sec) & FPS Value \\ \hline HOG & 0.0322 & 15 \\ \hline CNN & 0.3262 & 5 \\ \hline \end{tabular} \end{table} Table 1: Image capture performance results

When an image is obtained with the designed system, all the photos in the dataset are encoded and the facial features are kept in a list. When the camera compares a detected face with the features of the faces in the dataset and finds a match, the person's ID and name-surname are shown on the screen; if no match is found, a message that the face is not recognized is displayed. Examples of identified (matched) and unidentified (unmatched) faces are shown in Figure 7.

Figure 7: (a) Identified and (b) unidentified face image

In the study, student attendance information is recorded in a .csv file. This information is compared with the class list previously entered into the database, and attendance-absence records are kept. As seen in the sample attendance list in Figure 8, the teacher can check each individual's ID, name-surname, time, date, and status. Although face recognition itself is not a new topic, this designed system sets an example in terms of its adaptability to class attendance processes in education and training environments. Beyond class attendance, it is also important for security purposes in field-based education institutions, such as Vocational and Technical Anatolian High Schools, where students enter and exit the workshops and laboratories of their own fields. Automating time-consuming attendance, especially in crowded classrooms, will be an important innovation for teachers.
In addition, it will contribute to more efficient use of the FATIH hardware infrastructure established in all schools in Turkey.

## Conclusions

AI technologies, which are used in many different disciplines, have now also begun to be used in educational environments, and equipping these environments with automated systems is one such use. In this study, a face recognition and attendance system was developed and tested to set an example for, and pave the way toward, automatic class attendance systems in educational institutions. The system established in the study successfully detected the faces in the dataset and recorded the absenteeism of the names on the class list into the database. This study helped in understanding and applying the presented face recognition system, taking it one step further with a real-time application. The application of the study to real-life problems will be carried out in a school selected for the 2022-2023 academic year. With the results obtained and the feedback received at this conference, an article will also be prepared and the results disseminated. In addition, steps will be taken toward the use of innovative technologies in education.
2310.18321
Self-interaction correction schemes for non-collinear spin-density-functional theory
We extend some of the well established self-interaction correction (SIC) schemes of density-functional theory to the case of systems with noncollinear magnetism. Our proposed SIC schemes are tested on a set of molecules and metallic clusters in combination with the widely used local spin-density approximation. As expected from the collinear SIC, we show that the averaged-density SIC works well for improving ionization energies but fails to improve more subtle quantities like the dipole moments of polar molecules. We investigate the exchange-correlation magnetic field produced by our extension of the Perdew-Zunger SIC, showing that it is not aligned with the local total magnetization, thus producing an exchange-correlation torque.
Nicolas Tancogne-Dejean, Martin Lüders, Carsten A. Ullrich
2023-09-29T14:05:00Z
http://arxiv.org/abs/2310.18321v1
# Self-interaction correction schemes for non-collinear spin-density-functional theory ###### Abstract We extend some of the well established self-interaction correction (SIC) schemes of density-functional theory to the case of systems with noncollinear magnetism. Our proposed SIC schemes are tested on a set of molecules and metallic clusters in combination with the widely used local spin-density approximation. As expected from the collinear SIC, we show that the averaged-density SIC works well for improving ionization energies but fails to improve more subtle quantities like the dipole moments of polar molecules. We investigate the exchange-correlation magnetic field produced by our extension of the Perdew-Zunger SIC, showing that it is not aligned with the local total magnetization, thus producing an exchange-correlation torque. ## I Introduction In practical (spin) density-functional theory (DFT) calculations, one needs to select an approximate functional of the density to compute the exchange-correlation energy and the corresponding potential.[1] Most of the commonly employed approximations are known to suffer from the so-called self-interaction error,[2] an error that implies that the electron can interact with itself via the total electronic density. The self-interaction error can lead to problems in the prediction of the electronic properties of molecules and materials. For example, it can cause an underestimation of the bandgap of insulators and semiconductors, and an underestimation of the ionization potential and electron affinity of molecules. Thus, correcting for the self-interaction error is important for obtaining reliable DFT predictions of the electronic properties of molecules and materials.[3] The search for schemes correcting the self-interaction error, known as self-interaction correction (SIC), has been pioneered by Perdew and Zunger.[2] Their proposed method, now referred to as the Perdew-Zunger self-interaction correction (PZ-SIC), leads to an exchange-correlation energy functional that is an explicit functional of the orbitals and, hence, an implicit density functional. Implementations of the PZ-SIC approach are often done in a generalized Kohn-Sham sense,[4] where the exchange-correlation potential depends on the orbital on which it acts. 
Alternatively, and in the spirit of the original Kohn-Sham DFT, a local multiplicative exchange-correlation potential can be constructed from PZ-SIC using the optimized effective potential (OEP) technique.[5] The so-constructed exchange-correlation potentials have the correct asymptotic behavior and exhibit discontinuities as a function of particle number.[6; 3] It is possible to solve the OEP equations exactly,[7; 8] but this is known to be numerically challenging, and one often resorts to the scheme introduced by Krieger, Li, and Iafrate (KLI) to approximate the full solution of the OEP equations.[9] A further simplification of the KLI approach is the Slater approximation, which neglects the orbital-dependent part in the OEP equations and replaces it by an orbital-averaged term.[9] The so-called globally averaged method (GAM) is defined in a similar spirit.[10; 11] An even more drastic approximation consists in replacing the orbital-dependent part of the PZ-SIC directly by a value averaged over all orbitals, leading to the average-density SIC (AD-SIC).[12] More recently, Perdew and coworkers proposed new schemes, like the local-scaling SIC,[13] which are intended to fix some of the known deficiencies of the original PZ-SIC. To our knowledge, all of these methods have so far only been proposed and employed in the context of collinear spin DFT (SDFT). However, there exist many electronic systems in which noncollinear magnetism, spin-orbit coupling (SOC), and other relativistic effects are relevant, and DFT practitioners are often left with no other option than to use the local spin density approximation (LSDA), which suffers from self-interaction error. It is the goal of this paper to explore how to extend the applicability of the above-mentioned SIC schemes to the realm of noncollinear magnetism.[14; 15; 16] This allows one to include effects stemming from the noncollinear magnetism and at the same time improve upon the LSDA. Extending the existing SIC schemes to treat noncollinear magnetism requires care: important fundamental conditions are the local SU(2) gauge invariance of the exchange-correlation energy and the requirement that the method properly reduces to the collinear limit. Moreover, an important question is whether the exchange-correlation magnetic field produced by a noncollinear SIC can exert a local torque on the magnetization.[17; 18] If such a torque exists, it must satisfy the condition that the system cannot exert a global torque on itself (this is known as the zero-torque theorem of SDFT).[19] It is the goal of this work to discuss these points. The paper is organized as follows. In Sec. II, we present the motivation underlying our proposed SIC and extend the collinear formulation of PZ-SIC and AD-SIC to the noncollinear case. Then, in Sec. III we report numerical results obtained for several isolated systems, for which we analyze the effect of the SIC on the electronic and magnetic properties of atoms, small molecules, and clusters. We also discuss its effect on the local texture of the exchange-correlation torque. Finally, we draw our conclusions in Sec. IV.

## II Theory

We begin by defining the concept of self-interaction for the general case of noncollinear spin systems. Self-interaction is usually introduced separately for exchange and correlation.
Thus, let us first consider the exact exchange energy of a system of \(N\) electrons,[20] \[E_{\rm x}[n,{\bf m}]=-\frac{1}{2}\int\int\frac{d{\bf r}d{\bf r}^{\prime}}{|{ \bf r}-{\bf r}^{\prime}|}{\rm Tr}\Bigl{[}\underline{\underline{\gamma}}({\bf r },{\bf r}^{\prime})\underline{\underline{\gamma}}({\bf r}^{\prime},{\bf r}) \Bigr{]}\,, \tag{1}\] where \({\rm Tr}\) is the trace over spin indices of the one-particle spin density matrix \(\gamma_{\sigma\tau}({\bf r},{\bf r}^{\prime})=\sum_{j}^{N}\psi_{j\sigma}({\bf r })\psi_{j\tau}^{*}({\bf r}^{\prime})\), constructed from two-component spinor Kohn-Sham orbitals, where \(\sigma=\uparrow,\downarrow\) and likewise for \(\tau\). Here, the double underline in \(\underline{\underline{\gamma}}\) represents a \(2\times 2\) matrix in spin space.[16] The Hartree energy is given by \[E_{\rm H}[n]=\frac{1}{2}\int\int d{\bf r}d{\bf r}^{\prime}\frac{n({\bf r})n( {\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}\,, \tag{2}\] where \(n({\bf r})={\rm Tr}[\underline{\underline{\gamma}}({\bf r},{\bf r})]\) is the total charge density of the system. From the above definitions of \(E_{\rm x}\) and \(E_{\rm H}\), it is straightforward to show that in the one-electron case we have \[E_{\rm x}[n_{i},{\bf m}_{i}]+E_{\rm H}[n_{i}]=0\,, \tag{3}\] where \(n_{i}\) and \({\bf m}_{i}\) are the single orbital charge and magnetization densities. This is the generalization of the result shown in Ref. [2] for the collinear case, and forms the basis of the self-interaction corrections that we are proposing below. More generally, for a single orbital there is no correlation energy, so we can write that the exchange-correlation energy should fulfill the constraint \[E_{\rm xc}[n_{i},{\bf m}_{i}]+E_{\rm H}[n_{i}]=0\,. \tag{4}\] Importantly, we remark here that both the exchange energy, Eq. (1), and the Hartree energy, Eq. (2), are invariant under local SU(2) rotations of the spin. We thus obtain from Eq. (4) that the property remains true if we rotate the orbitals such that their magnetization aligns with the \(z\) direction: \[E_{\rm xc}[n_{i},\hat{R}_{z}{\bf m}_{i}]+E_{\rm H}[n_{i}]=0\,, \tag{5}\] where \(\hat{R}_{z}{\bf m}_{i}\) is a symbolic operator notation for performing a rotation on the spin parts of all orbitals such that they are reckoned with respect to a given global \(z\)-axis, and then constructing the resulting orbital magnetizations. This allows us to make the link with the collinear result, see Eq. (30) of Ref. [2]. Of course, when starting from the noncollinear formulation of SDFT, one needs to break some symmetries to reduce the four-component noncollinear theory based on the variables \((n,{\bf m})\) into a two-component collinear theory based on the variables \((n,m_{z})\). This can be achieved for instance using a uniform magnetic field of small magnitude along the \(z\)-axis, which causes the orbitals to align their magnetization along this direction. In other words, the system needs to be told to choose the \(z\)-axis as its spin quantization axis. From this, we obtain a set of necessary conditions to be able to employ Eq. (3) to build a SIC. The first condition is that the approximate exchange-correlation functional must be locally SU(2) gauge invariant, i.e., it produces the same exchange-correlation energy independently of the orientation of the orbital magnetization. The second condition is that the noncollinear and collinear functionals should produce the same energy for the same density, for a magnetization along the \(z\) direction. 
In other words, \(E_{\rm xc}^{\rm noncol.}[n_{i},m_{i,z}\hat{\bf e}_{z}]=E_{\rm xc}^{\rm coll.}[n_{i\sigma},0]\), where it is stipulated that \(m_{i,z}=n_{i,\uparrow\uparrow}-n_{i,\downarrow\downarrow}\) and \(n_{i,\downarrow\uparrow}=n_{i,\uparrow\downarrow}=0\) (and hence \(m_{i,x}=m_{i,y}=0\)). The collinear functional \(E_{\rm xc}^{\rm coll.}[n_{i\sigma},0]\) appears in the definition of PZ-SIC, see below. These conditions are naturally fulfilled by the LSDA when using the method proposed originally by Kübler _et al._[21] The first condition is also fulfilled by the more recently proposed noncollinear exchange meta-GGA,[20; 22] which also properly recovers the result of the Becke-Roussel collinear exchange functional[23] for closed-shell systems.

### Noncollinear Perdew-Zunger SIC

Based on Eq. (3), we can propose a generalization of the PZ-SIC to the noncollinear case. Let us start by briefly reviewing the collinear case. The idea behind the PZ-SIC consists in removing all the single-electron self-interaction errors for a given density functional approximation. This leads to the energy functional

\[E_{\rm xc}^{\rm SIC}=E_{\rm xc}^{\rm DFT}[n_{\uparrow},n_{\downarrow}]-\sum_{\sigma=\{\uparrow,\downarrow\}}\sum_{i}f_{i,\sigma}\left(E_{\rm H}[n_{i\sigma}]+E_{\rm xc}^{\rm DFT}[n_{i\sigma},0]\right). \tag{6}\]

In this expression, \(n_{\uparrow}\) and \(n_{\downarrow}\) refer respectively to the up and down channels of the total electronic density, and the \(f_{i,\sigma}\) are occupation numbers. For each Kohn-Sham orbital \(\varphi_{i}\) one needs to compute the corresponding Hartree and exchange-correlation energy from its individual density \(n_{i\sigma}\) and subtract it from the energy computed from the total density. The above expression is intrinsically limited to the collinear case, but can be easily generalized to the noncollinear case. Indeed, in the latter case the exchange-correlation functional is not a functional of the density in the two spin channels (\(E_{\rm xc}[n_{\uparrow},n_{\downarrow}]\)) but a functional of the total density \(n\) and the local magnetization \(\mathbf{m}\). This immediately suggests generalizing Eq. (6) to the noncollinear case as

\[E_{\rm xc}^{\rm SIC}=E_{\rm xc}^{\rm DFT}[n,\mathbf{m}]-\sum_{i}f_{i}\left(E_{\rm H}[n_{i}]+E_{\rm xc}^{\rm DFT}[n_{i},\mathbf{m}_{i}]\right)\,. \tag{7}\]

This correction removes the self-interaction of each orbital \(\varphi_{i}\), as in the collinear case. In practice, the noncollinear PZ-SIC scheme can be challenging to implement. First of all, it requires finding the local effective potential originating from this orbital-dependent scheme, unless one resorts to a generalized Kohn-Sham scheme that allows for orbital-dependent potentials.[4] Finding this local multiplicative potential is usually achieved by solving the OEP equation,[5; 7] or some simplified version of it like the KLI approximation.[9] A more subtle complexity comes from the fact that different orbitals can produce the same density. For a typical density functional approximation like the LSDA, this is not a problem. However, it becomes a well-known problem with PZ-SIC, whose results depend on the orbitals and hence vary under a unitary transformation of the orbitals,[24; 25; 26; 27; 28] unless one minimizes explicitly over all possible unitary transformations[7; 29] or uses specific orbitals that make the SIC a true density functional.[30] We will briefly discuss this point below with numerical examples.
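As a sketch of why this orbital dependence arises, consider a unitary mixing of the occupied orbitals, \(\tilde{\varphi}_{i}=\sum_{j}U_{ij}\varphi_{j}\) (the same transformation employed in the numerical tests below). The total density is invariant, since

\[\sum_{i}\tilde{n}_{i}({\bf r})=\sum_{j,k}\Big(\sum_{i}U_{ij}^{*}U_{ik}\Big)\varphi_{j}^{\dagger}({\bf r})\varphi_{k}({\bf r})=\sum_{j}|\varphi_{j}({\bf r})|^{2}=n({\bf r})\,,\]

using \(\sum_{i}U_{ij}^{*}U_{ik}=\delta_{jk}\), so the first term of Eq. (7) is unchanged. The subtracted orbital terms, however, transform as \(\sum_{i}E_{\rm H}[\tilde{n}_{i}]\neq\sum_{i}E_{\rm H}[n_{i}]\) in general, because \(E_{\rm H}\) is quadratic rather than linear in its density argument, and similarly for the orbital exchange-correlation terms.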
Finally, let us comment on an important difference between the collinear case and the noncollinear case, which concerns the practical solution of the KLI equations to get to an approximate solution to the full OEP equation. When solving these equations, the potential is defined up to a constant, which is fixed by imposing for isolated systems that \(v_{{\rm xc},\sigma}\to 0\) for \(r\) going to infinity.[9] This leads to a different constant for the up and down potentials in the collinear case. However, in the noncollinear case, we end up with a single constant, as we have a \(2\times 2\) matrix in spin space for the potential. As a consequence, for an open-shell system without SOC, for which we can compare directly the collinear and noncollinear results, the potentials for the majority spin are very similar, but in the minority spin channel they may be different. ### Noncollinear averaged density SIC While the PZ-SIC is known to produce very good results, it is also known to be numerically expensive to evaluate, as one needs to solve one Poisson equation and compute the exchange-correlation energy for each occupied Kohn-Sham state, and one further needs to solve the OEP equations to obtain a local multiplicative potential needed to perform Kohn-Sham SDFT calculations. This is why several simplified methods have been proposed. Among them, the most effective method is probably the AD-SIC, which, a bit surprisingly given its simplicity, can produce excellent results for atoms compared to PZ-SIC. The motivation of this method is that if all orbitals have a similar localization, we can replace their density in Eq. (6) by their averaged density.[7] This is particularly suited for calculations with identical atoms and pseudopotential-based simulations as orbitals are similar in these cases. However, AD-SIC suffers from a size-consistency problem as it is explicitly based on the number of electrons,[7] which makes it unsuitable for extended systems. In this section, we show how to generalize the AD-SIC to the noncollinear case. In the collinear case, the AD-SIC is obtained by replacing in Eq. (6) the orbital and spin-resolved density \(n_{i\sigma}\) by the average spin-resolved density \(n_{\sigma}/N_{\sigma}\), where \(N_{\sigma}=\int d\mathbf{r}n_{\sigma}(\mathbf{r})\) is the number of electrons in the spin channel \(\sigma\). This directly leads to the collinear AD-SIC energy functional: \[E_{\rm xc}^{\rm AD-SIC} = E_{\rm xc}^{\rm DFT}[n_{\uparrow},n_{\downarrow}]-\sum_{\sigma= \{\uparrow,\downarrow\}}N_{\sigma}\Big{(}E_{\rm H}[n_{\sigma}/N_{\sigma}] \tag{8}\] \[+ E_{\rm xc}^{\rm DFT}[n_{\sigma}/N_{\sigma},0]\Big{)}\,.\] Following this logic, one could be tempted to average not the up and down densities of collinear SDFT, but the full spin-density matrix of non-collinear SDFT, or equivalently the local charge and magnetization densities. Inserting this into Eq. (7), one would directly obtain \[E_{\rm xc}^{\rm AD-SIC} = E_{\rm xc}^{\rm DFT}[n,\mathbf{m}]-N\Big{(}E_{\rm H}[n/N] \tag{9}\] \[+ E_{\rm xc}^{\rm DFT}[n/N,\mathbf{m}/N]\Big{)}\,.\] However, this choice does not produce the correct collinear limit. In order to illustrate this, let us consider a Li atom in a uniform magnetic field aligned along the \(z\) direction. In this case, the system has three electrons, two residing in the 1s level, and one in the 2s level. 
It is straightforward to see that the one electron in the 1s level (spin channel \(\alpha\)) and the one in the 2s level have their orbital magnetization antialigned with the external magnetic field, while the second 1s electron (spin channel \(\beta\)) has its orbital magnetization aligned with the external magnetic field. The densities corresponding to these states are denoted \(n_{1s,\alpha}\), \(n_{2s,\alpha}\), and \(n_{1s,\beta}\). Assuming that the approximate functional that we want to correct with AD-SIC fulfills the requirements mentioned in the introduction [SU(2) gauge invariance, and the same energy for a single orbital density with \(m_{z}>0\) in the noncollinear case as for the same density in the up channel of the collinear functional], we can treat the same Li atom as a collinear electronic system with a static magnetic field along the \(z\) axis. Let us now compute the collinear and noncollinear AD-SIC corrections for this Li atom. The AD-SIC for the collinear-spin case, Eq. (8), is

\[\Delta E^{\rm AD-SIC-coll.}=-2E_{\rm H}\Big{[}\frac{n_{1s,\alpha}+n_{2s,\alpha}}{2}\Big{]}-E_{\rm H}[n_{1s,\beta}]-2E_{\rm xc}\Big{[}\frac{n_{1s,\alpha}+n_{2s,\alpha}}{2},0\Big{]}-E_{\rm xc}[n_{1s,\beta},0]\,. \tag{10}\]

If we instead use the proposed averaged density SIC of Eq. (9), we get

\[\Delta E_{\rm xc}^{\rm AD-SIC}=-3\Big{(}E_{\rm H}\Big{[}\frac{n_{1s,\alpha}+n_{2s,\alpha}+n_{1s,\beta}}{3}\Big{]}+E_{\rm xc}^{\rm DFT}\Big{[}\frac{n_{1s,\alpha}+n_{2s,\alpha}+n_{1s,\beta}}{3},\frac{{\bf m}_{1s,\alpha}+{\bf m}_{2s,\alpha}+{\bf m}_{1s,\beta}}{3}\Big{]}\Big{)}\,. \tag{11}\]

Clearly, this expression does not lead to the desired collinear limit, as seen directly from the Hartree term. However, it is possible to recover the collinear limit using the same logic proposed originally by Kübler _et al._[21] for treating the LSDA with noncollinear spin. By first diagonalizing the spin-density matrix, we obtain two densities, \(n_{\uparrow}\) and \(n_{\downarrow}\), which we can average after normalizing them by their integrals (thus defining the numbers of "up" and "down" electrons in the frame defined by the local magnetization). As in the LSDA case, the potential is computed in the local frame, independently for the up and down channels, and then rotated back to the global frame using the total magnetization. This procedure produces the collinear limit expected in the above Li atom example. The direct consequence of this procedure is that both the LSDA energy/potential and the AD-SIC corrections are evaluated in the same frame, which makes this approach consistent and invariant under local and global SU(2) rotations. However, the price to pay is that the exchange-correlation magnetic field originating from the AD-SIC correction term is aligned with the local magnetization, meaning that no exchange-correlation torque is produced by the correction scheme.
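To illustrate the local-frame construction just described, the following is a minimal Python sketch (ours, with the grid layout and the host-code callables as assumptions). At each point in space, the \(2\times 2\) spin-density matrix has eigenvalues \((n\pm|\mathbf{m}|)/2\), which play the role of \(n_{\uparrow}\) and \(n_{\downarrow}\) in the frame defined by the local magnetization.

```
import numpy as np

def local_frame_densities(n, m):
    """Eigenvalues of the 2x2 spin-density matrix at each grid point.

    n : array (ngrid,)    -- total density n(r)
    m : array (3, ngrid)  -- magnetization density m(r)
    Returns the locally 'up' and 'down' densities (n +/- |m|)/2.
    """
    m_norm = np.linalg.norm(m, axis=0)
    return 0.5 * (n + m_norm), 0.5 * (n - m_norm)

def adsic_correction(n, m, weights, hartree_energy, exc_dft):
    """Noncollinear AD-SIC correction evaluated in the local frame.

    weights                 -- grid integration weights
    hartree_energy, exc_dft -- host-code callables (placeholders here)
    """
    correction = 0.0
    for n_sigma in local_frame_densities(n, m):
        n_elec = np.sum(weights * n_sigma)  # N_sigma, the channel occupation
        avg = n_sigma / n_elec              # averaged one-electron density
        correction -= n_elec * (hartree_energy(avg) + exc_dft(avg, 0.0))
    return correction
```

Since the corresponding potential is built per channel in this local frame and rotated back along \(\mathbf{m}(\mathbf{r})\), the resulting correction field is parallel to the magnetization and, as noted above, exerts no torque.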
## III Numerical results

We have implemented the above equations in the real-space code Octopus [31] in order to perform tests. For the case of PZ-SIC, we only computed the solution of the OEP equations at the KLI level, using the explicit solution for noncollinear spin proposed in our recent work (see the supplementary information of Ref. [20]).

### Isolated Xe atom

In order to investigate the interplay between SIC and SOC, as well as numerical and theoretical issues related to the various schemes, we first consider the case of an isolated Xe atom. We use a grid spacing of 0.30 Bohr, employing norm-conserving fully relativistic Hartwigsen-Goedecker-Hutter (HGH) pseudopotentials.[32] The simulation box is a sphere of radius 12 Bohr centered at the atomic center. In Fig. 1 we show the splitting of the 5p electronic levels into 5p\({}_{1/2}\) and 5p\({}_{3/2}\) levels for LSDA, LSDA+AD-SIC, and LSDA+PZ-SIC. We found that the inclusion of the SIC does not change how SOC affects the energy levels, and the degeneracy of the energy levels is properly described by our corrections. Moreover, in all cases the collinear limit is correctly recovered for both PZ-SIC and AD-SIC, as indicated by the symbols in Fig. 1. We also checked that in the case of vanishing SOC strength, applying a small magnetic field along the \(x\), \(y\), or \(z\) direction produces identical results, as expected from the SU(2) invariance of our proposed formulation. However, we note that for more complicated molecules the collinear limit is not always recovered, see below.

Figure 1: Splitting of 5p levels of Xe due to SOC versus the spin-orbit strength computed for LDA (blue curves), LDA+AD-SIC (orange curves), and LDA+PZ-SIC (red curves). The symbols (square, circle, and triangle) indicate the results obtained for the corresponding spin-unpolarized calculations.

Let us now comment on the dependence on the unitary transformation of the orbitals used in the evaluation of Eq. (6) and Eq. (7). To reveal this, we define a new set of orbitals \(\{\tilde{\varphi}_{i}\}\) such that

\[\tilde{\varphi}_{i}({\bf r})=\sum_{j}U_{ij}\varphi_{j}({\bf r})\,, \tag{12}\]

where \(U\) is a unitary matrix. The two sets of orbitals, \(\{\varphi_{j}\}\) and \(\{\tilde{\varphi}_{i}\}\), have the same total density, but their contributions to the PZ-SIC energy differ. To illustrate this we consider three cases: i) the "minimizing" orbitals obtained directly from the solution of the Kohn-Sham equations; ii) the result of the so-called subspace diagonalization procedure, in which the unitary matrix is found by diagonalizing the Hamiltonian matrix in the subspace of the "minimizing" orbitals; iii) the localization method known as SCDM,[33] which produces Wannier functions. In Table 1 we report the total energy and ionization potential of Xe for the first and the last approach, for both the collinear and the noncollinear case. We find no difference between the "minimizing" orbitals and the ones obtained by subspace diagonalization. As expected, more localized orbitals produce a lower total energy and a higher ionization potential. Overall, it is apparent from these results that our noncollinear functional suffers from the same problems as the collinear formulation. One solution would be to implement a minimization of the PZ-SIC energy correction with respect to the unitary transformation \(U\), which we defer to future work. In the following, unless specified explicitly, orbitals from the subspace diagonalization are always employed.
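The orbital dependence of Eqs. (6) and (7) is easy to demonstrate numerically. The following is a small, self-contained Python example of ours (a one-dimensional toy model with a soft-Coulomb Hartree kernel, not an Octopus calculation) showing that the unitary mixing of Eq. (12) leaves the total density unchanged while changing the orbital Hartree self-energies entering the SIC.

```
import numpy as np

x = np.linspace(-8.0, 8.0, 400)
dx = x[1] - x[0]

# Two orthonormal model orbitals (even and odd Gaussian-type functions).
phi1 = np.exp(-0.5 * x**2)
phi2 = x * np.exp(-0.5 * x**2)
phi1 /= np.sqrt(np.sum(phi1**2) * dx)
phi2 /= np.sqrt(np.sum(phi2**2) * dx)

# Toy Hartree energy with a 1D soft-Coulomb kernel.
kernel = 1.0 / np.sqrt(1.0 + (x[:, None] - x[None, :])**2)
def hartree(n):
    return 0.5 * np.sum(n[:, None] * n[None, :] * kernel) * dx * dx

def sic_terms(theta):
    # Real orthogonal (hence unitary) mixing of the orbitals, Eq. (12).
    a, b = np.cos(theta), np.sin(theta)
    psi1, psi2 = a * phi1 + b * phi2, -b * phi1 + a * phi2
    return hartree(psi1**2) + hartree(psi2**2), psi1**2 + psi2**2

e0, n0 = sic_terms(0.0)
e1, n1 = sic_terms(np.pi / 4)
print(np.allclose(n0, n1))  # True: the total density is invariant...
print(e0, e1)               # ...but the orbital SIC terms are not.
```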
### Diatomic closed-shell systems

We continue by analyzing the effect of our proposed functional on small closed-shell molecules for which SOC is known to be important for the electronic structure. It is known that SOC plays an important role in the bond length of closed-shell dimers, as well as in their harmonic frequency and dissociation energy.[34] However, the choice of the functional is also important for these properties,[34] and we expect the SIC to be relevant for improving the theoretical modelling of these molecules.

We start by considering the Bi\({}_{2}\) molecule. We performed calculations at the experimental bond length[35] of 2.661 Å for LSDA, LSDA+AD-SIC, and LSDA+PZ-SIC. We used a grid spacing of 0.30 Bohr, employing norm-conserving fully relativistic Hartwigsen-Goedecker-Hutter (HGH) pseudopotentials.[32] The simulation box was the union of two spheres of radius 12 Bohr centered on each atom. As shown in Fig. 2, the inclusion of the SIC does not change how SOC affects the energy levels of the molecule, and the degeneracy of the energy levels is properly described by our corrections. As in the case of Xe, the AD-SIC properly recovers the collinear limit, while we found that the PZ-SIC becomes unstable when the SOC strength is set to zero. Indeed, in this case Bi\({}_{2}\) is non-magnetic, and hence any local SU(2) rotation of the spins associated with a given orbital leaves the energy unchanged but changes the potential. In order to obtain a converged ground state in the absence of SOC, we apply a tiny magnetic field. Unlike the case of Xe, we found here two possible solutions. Aligning the external magnetic field along the molecular axis, we obtain the limit of vanishing SOC strength. Aligning the magnetic field perpendicular to the molecular axis, we obtain the same eigenvalues as in the collinear calculation. This is analyzed in more detail in Appendix A.

We also performed similar simulations for other diatomic molecules using their experimental geometries, see Table 2. For all molecules we employ a grid spacing of 0.3 Bohr and a radius of 12 Bohr for the atom-centered spheres, except for Au, for which we included semi-core states and used a grid spacing of 0.25 Bohr. Overall, we find that the inclusion of the SIC drastically improves the agreement with experiment for the ionization potential, as expected from the vast literature on collinear SIC.

We also investigated the polar diatomic molecules HI, IF, PbO, and TlF at their experimental geometries and compared the dipole moments for different levels of description with the experimental values. Consistent with the collinear case,[43] we found that the dipole moment on average deviates more from the experimental value when using SIC than when simply using noncollinear LSDA. Importantly, the limitations of the averaged-density approximation underlying AD-SIC appear more clearly in the dipole moments than in the ionization energies. We also performed geometry relaxations. As found in the collinear case,[44; 24] we obtain that including SIC shortens the bonds, resulting here in underestimated bond lengths compared to LSDA, the latter being in better agreement with the experimental values.

\begin{table} \begin{tabular}{l c c c c} & \multicolumn{2}{c}{Collinear case} & \multicolumn{2}{c}{Noncollinear case} \\ & Minimizing & SCDM & Minimizing & SCDM \\ \hline \(E_{\text{tot}}\) & -15.5938 & -15.6492 & -15.5938 & -15.6499 \\ \(I_{p}\) & 0.4449 & 0.4674 & 0.4449 & 0.4695 \\ \end{tabular} \end{table} Table 1: Total energy \(E_{\text{tot}}\) and ionization potential \(I_{p}\), in Hartree, of Xe for the collinear and noncollinear cases using different orbitals, as explained in the main text.
Figure 2: Eigenvalues of the highest bonding (\(\sigma_{1/2g}\), \(\pi_{1/2u}\), and \(\pi_{3/2u}\)) and lowest antibonding (\(\pi_{1/2g}\) and \(\pi_{3/2g}\)) molecular orbitals of the bismuth dimer as a function of the SOC strength computed for LDA (blue curves), LDA+AD-SIC (orange curves), and LDA+PZ-SIC (red curves). The dots indicate the results obtained for spin-unpolarized calculations.

### Magnetic cluster

We now investigate the effect of the SIC on the properties of small magnetic clusters by specifically considering the iron dimer, Fe\({}_{2}\).[45] Clusters of this type have been widely studied by means of the LSDA, including SOC (see for instance Ref. [46] and references therein). Unless stated otherwise, SOC is included throughout. In all calculations we employ a grid spacing of 0.15 Bohr and a radius of 12 Bohr for the atom-centered spheres, and we include the semi-core states of the Fe atoms. A small Fermi-Dirac smearing of 10 meV for the occupations was also used. The Fe-Fe distance was taken to be the experimental one of 2.02 Å.[47]

In all cases that included both LSDA exchange and correlation, we found a total magnetic moment of \(6\mu_{B}\), in agreement with prior works. We note that our LSDA value matches well the atomic magnetic moment reported in the pioneering work of Oda _et al._[48] The fact that the atomic magnetic moments computed on a sphere around the atoms decrease indicates that for Fe\({}_{2}\) the SIC tends to push the magnetization away from the atomic centers, while the increase of the ionization potential is consistent with an increased localization of the orbitals. This points toward a non-negligible contribution of itinerant electrons to the magnetic properties of this cluster. We also computed the values for exchange-only LSDA, together with the SIC corrections. The total magnetic moments are not properly predicted in these cases, demonstrating the key importance of correlation for obtaining reliable magnetic structures.

We finally turn our attention to the exchange-correlation torque \(\tau(\mathbf{r})\), defined as

\[\tau(\mathbf{r})=\mathbf{m}(\mathbf{r})\times\mathbf{B}_{\mathrm{xc}}(\mathbf{r})\,, \tag{13}\]

where \(\mathbf{m}\) is the local magnetization density and \(\mathbf{B}_{\mathrm{xc}}\) is the exchange-correlation magnetic field. We computed this quantity using LSDA and LSDAx with PZ-SIC, and also with the Slater potential. As a reference, we consider the Slater potential, which was shown to give reasonable results compared to the exact-exchange potential computed at the KLI level.[20] From our results (see Figs. 3a and d), the Slater potential produces a small exchange-correlation torque around the atoms, in which the symmetries of the system are clearly apparent. Our results for PZ-SIC (Figs. 3b, c, e, and f) show that PZ-SIC also produces a non-vanishing torque around the atoms. While it shows, as required by the zero-torque theorem, alternating positive and negative patterns that are also in accordance with the symmetries of the system, its overall shape and magnitude strongly differ from what is obtained with the Slater potential. Importantly, we stress that, like the energy, the torque obtained from PZ-SIC depends upon the unitary transformation of the orbitals.
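Given \(\mathbf{m}\) and \(\mathbf{B}_{\rm xc}\) on a real-space grid, evaluating Eq. (13) and checking the zero-torque theorem is straightforward; the following is a minimal numpy sketch of ours, with the array layout as an assumption.

```
import numpy as np

def xc_torque(m, b_xc):
    """Local torque tau(r) = m(r) x B_xc(r), Eq. (13).

    m, b_xc : arrays (ngrid, 3) -- magnetization and xc magnetic field
    """
    return np.cross(m, b_xc)

def total_torque(m, b_xc, weights):
    # The zero-torque theorem requires this integral to vanish.
    return np.sum(weights[:, None] * xc_torque(m, b_xc), axis=0)
```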
The torque therefore needs to be analyzed with great care, and we aim to implement in the future a minimization over unitary transformations in order to eliminate this ambiguity, similarly to prior efforts.[29]

\begin{table} \begin{tabular}{l c c c c c c c} & Bi\({}_{2}\) & Au\({}_{2}\) & I\({}_{2}\) & HI & IF & PbO & TlF \\ \hline Exp. & 7.31 & 9.52 & 10.38 & 10.38 & 10.54 & 9.43 & 10.52 \\ LSDA & 4.898 & 6.104 & 6.062 & 6.627 & 6.549 & 6.373 & 5.959 \\ LSDA+AD-SIC & 7.773 & 9.481 & 8.651 & 10.294 & 10.015 & 10.190 & 10.614 \\ LSDA+PZ-SIC & 7.120 & 9.324 & 8.252 & 10.028 & 9.481 & 10.372 & 11.778 \\ \end{tabular} \end{table} Table 2: Ionization potentials, in eV, of diatomic systems at their experimental geometries, including SOC, for different energy functionals.

\begin{table} \begin{tabular}{l c c c c} & HI & IF & PbO & TlF \\ \hline Exp. & 0.45 & 1.95 & 4.64 & 4.23 \\ LSDA & 0.451 & 1.371 & 4.310 & 4.457 \\ LSDA+AD-SIC & 0.534 & 2.512 & 6.235 & 6.590 \\ LSDA+PZ-SIC & 0.372 & 1.483 & 5.569 & 4.612 \\ \end{tabular} \end{table} Table 3: Dipole moments, in Debye, of diatomic systems at their experimental geometries, including SOC, for different energy functionals.

\begin{table} \begin{tabular}{l c c c} & \(I_{p}\) & \(M\) & \(|\mathbf{m}|\) \\ \hline LSDA & 3.327 & 6.00 & 2.71 \\ LSDA+AD-SIC & 7.854 & 6.00 & 2.69 \\ LSDA+PZ-SIC & 6.843 & 6.00 & 2.59 \\ \hline LSDAx & 3.453 & 8.00 & 3.29 \\ LSDAx+AD-SIC & 7.464 & 7.00 & 2.97 \\ LSDAx+PZ-SIC & 5.995 & 7.50 & 3.10 \\ Slater & 6.760 & 6.00 & 2.96 \\ \end{tabular} \end{table} Table 4: Electronic and magnetic properties of Fe\({}_{2}\) for different energy functionals. The ionization potential (\(I_{p}\)) is given in eV; the total (\(M\)) and local (\(|\mathbf{m}|\)) magnetic moments are given in \(\mu_{B}\) and are obtained by integrating the density over a sphere of radius 1.909 Bohr around the atoms. Exchange-only LSDA (LSDAx) results are also reported.

## IV Conclusions

To summarize, we presented how to extend some of the existing SIC approaches to the case of noncollinear spins. We then analyzed numerically how these noncollinear SIC schemes behave for various closed-shell and magnetic systems. Overall, we found that our noncollinear schemes exhibit advantages and deficiencies similar to those of the collinear ones. The ionization energies are improved, but bond lengths are found to be worse than those obtained with the LSDA. When the localization of the individual orbitals is important, the AD-SIC performs poorly for observables that depend on the local orbitals, like dipole moments or magnetic moments. We further demonstrated that the PZ-SIC for noncollinear spin can produce a non-negligible exchange-correlation torque around the magnetic atoms, but we found large differences in the magnitude and texture of the exchange-correlation torque compared to the result of the Slater potential. Overall, our work opens the door to a better description of the electronic and magnetic properties of systems in which noncollinear effects are important, but we note that further work, including the computation of accurate benchmarks, is needed in order to obtain reliable results for the collinear and noncollinear PZ-SIC schemes. Once such SIC schemes are fully established, we expect them to become a useful tool for the description of materials with noncollinear magnetism.

###### Acknowledgements.

C.A.U. is supported by DOE Grant No. DE-SC0019109.

## Appendix A Vanishing SOC limit in Bi\({}_{2}\)

In this appendix we investigate in more detail the case of Bi\({}_{2}\) without SOC using PZ-SIC, with a tiny magnetic field included.
As explained in the main text, the dependence on the orbitals leads to different results for a magnetic field aligned with the molecular axis as opposed to one aligned perpendicular to it. In Fig. 4 we report the square modulus of the four highest occupied states of Bi\({}_{2}\) computed with PZ-SIC, corresponding to the \(\pi_{1/2u}\) and \(\pi_{3/2u}\) bonding orbitals, without SOC, and with a magnetic field aligned with the molecular axis or perpendicular to it. While these orbitals produce the same charge density when summed over, their individual contributions to the PZ-SIC energy and potential are different, leading to a different ground state. While these results might be surprising at first glance, the reported shapes are in fact a direct consequence of the symmetries of the system. When a magnetic field is aligned with the molecular axis, the system is clearly invariant under any rotation about this axis. It is therefore not surprising to find radially symmetric wavefunctions in panels a)-d). On the contrary, when a tiny magnetic field is applied perpendicular to the molecular axis, the radial symmetry is broken, resulting in the splitting of the orbitals into two sets, one aligned with the magnetic field (panels e and g) and one perpendicular to it (panels f and h). The obtained wavefunctions therefore respect the symmetries of the system in the presence of a tiny magnetic field, and it is thus expected that taking these orbitals directly to build the PZ-SIC energy (and the potential following from it) leads to differences, even if the magnetic field itself has a negligible effect on the charge density. Importantly, the change in the orbitals leads to a large change of 35 mH in the total energy (the magnetic field aligned with the molecular axis giving the lowest energy), while the magnetic field itself only causes a splitting of these four energy levels by 15 \(\mu\)H.
2309.17171
Comparative Analysis of Named Entity Recognition in the Dungeons and Dragons Domain
Many NLP tasks, although well-resolved for general English, face challenges in specific domains like fantasy literature. This is evident in Named Entity Recognition (NER), which detects and categorizes entities in text. We analyzed 10 NER models on 7 Dungeons and Dragons (D&D) adventure books to assess domain-specific performance. Using open-source Large Language Models, we annotated named entities in these books and evaluated each model's precision. Our findings indicate that, without modifications, Flair, Trankit, and Spacy outperform others in identifying named entities in the D&D context.
Gayashan Weerasundara, Nisansa de Silva
2023-09-29T12:09:36Z
http://arxiv.org/abs/2309.17171v1
# Comparative Analysis of Named Entity Recognition in the Dungeons and Dragons Domain

###### Abstract

Many NLP tasks, although well-resolved for general English, face challenges in specific domains like fantasy literature. This is evident in Named Entity Recognition (NER), which detects and categorizes entities in text. We analyzed 10 NER models on 7 Dungeons and Dragons (D&D) adventure books to assess domain-specific performance. Using open-source Large Language Models, we annotated named entities in these books and evaluated each model's precision. Our findings indicate that, without modifications, Flair, Trankit, and Spacy outperform others in identifying named entities in the D&D context.

## 1 Introduction

Named Entity Recognition (NER) targets the identification and classification of textual entities, such as names and locations. In the diverse and intricate vocabulary of fantasy literature, like that of Dungeons and Dragons (D&D), NER becomes challenging (Zagal and Deterding, 2018). D&D, a prominent fantasy literature domain, spans content for its namesake tabletop game (Peiris and de Silva, 2022, 2023; Zhou et al., 2022). These narratives inhabit fictional realms like the Forgotten Realms and Dragonlance, bursting with characters, locations, and objects (Gygax and Arneson, 1974).

NER's utility in fantasy literature is vast: from extracting information and summarizing text to character analysis and plot creation. However, conventional NER models, primarily trained on standard datasets like CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) or OntoNotes 5.0 (Weischedel et al., 2013), might falter on fantasy texts due to their unique linguistic attributes. Recognizing the need for domain-specific adaptation, other specialized areas such as law (Sugathadasa et al., 2017), medicine (de Silva et al., 2017), and the dynamic landscape of social media (de Silva and Dou, 2021) have already seen research emphasizing it. Large models, as Yao et al. (2021) point out, can face domain adaptation challenges, stressing the need for evaluating NER models specifically on fantasy content.

Fantasy NER has potential, especially with advancements in image generation. A notable application might involve an image generation model leveraging NER tags to derive prompts and subsequently produce contextually relevant images.

Our study contrasts 10 NER models across seven D&D books, each averaging 118,000 words. Manual annotations of entities were made and juxtaposed against model outputs. Through precision assessments and named entity distribution analyses, we glean insights into model performance in the fantasy domain. Our key contributions include:

* A pioneering, comprehensive NER model evaluation on fantasy content.
* An annotated D&D book dataset for NER studies.
* A deep dive into varied NER models' strengths and pitfalls in the fantasy realm.
* Discussions on NER's role and prospects in fantasy literature.

Following this, Section 2 delves into related NER and fantasy literature works. Section 3 details our data and annotation process, while Section 4 unveils our methods and findings. Sections 5 and 6 respectively discuss insights and conclude our research, and Section 7 outlines potential future endeavors.

## 2 Related Works

NER has seen the development of various models, including rule-based systems, statistical models, neural networks, and transformer-based models (Seo et al., 2021; Liu et al., 2022; Krasnov et al., 2021).
Although these models have been trained on standard datasets, such datasets do not encompass the complexities found in domains like fantasy literature, which poses challenges due to invented names, variable spellings, entity ambiguity, and limited resources. We introduce a novel annotated dataset of D&D books for NER and evaluate 10 NER models, including XLM-RoBERTa (Conneau et al., 2019), StanfordDeID (Chambon et al., 2023), ELECTRA (Clark et al., 2020), and others.

Other studies have compared the performance of NER models on different types of texts and languages. For example, Wang et al. (2021) compared Spacy, Flair, m-BERT, and camemBERT on anonymizing French commercial legal cases. They found that camemBERT performed the best overall, followed by Flair and m-BERT; SpaCy had the lowest scores but also the fastest prediction time. Benesty (2019) compared spaCy, Flair, and Stanford Core NLP on anonymizing English court decisions, finding that Flair had the highest scores, followed by Stanford Core NLP and spaCy. Shelar et al. (2020) compared rule-based, CRF-based, and BERT-based techniques for NER on text data, finding that the BERT-based technique had the highest accuracy and recall, followed by the CRF-based and rule-based techniques. Naseer et al. (2021) compared NLTK, spaCy, Stanford Core NLP, and BERT (Devlin et al., 2018) on extracting information from resumes, finding that BERT had the highest accuracy and F-measure, followed by spaCy, Stanford Core NLP, and NLTK. These studies suggest that different NER models may have different strengths and weaknesses depending on the type, language, and domain of the text data. Our study aims to contribute to this understanding by providing the first systematic comparison of NER models on fantasy texts and analyzing their performance and characteristics.

## 3 Data Collection and Annotation

This section details the data sources and annotation process utilized for our named entity recognition (NER) task, a subtask of information extraction that classifies named entities in unstructured text into categories such as persons, organizations, and locations (Mohit, 2014). We examined seven adventure books from the Dungeons and Dragons (D&D) realm, listed in Table 1. These books, primarily adventure-centric, were sourced from the official DnDBeyond site, the main publication hub for D&D by Wizards of the Coast. Through a comprehensive analysis of these texts, we used their rich narratives and character dynamics to benchmark and assess various NER models in this intricate domain. Each of our chosen books averages 118,000 words. The selection was driven by our familiarity with these tales and the broader D&D universe. Additionally, they span multiple genres, themes, and settings in the fantasy realm, offering a vast array of named entities for NER.

The source books were transformed into text and organized hierarchically into chapters, topics, and paragraphs. An example from "The Wild Beyond the Witchlight" is displayed in Table 2. We first manually perused the source books, marking named entities hierarchically by chapter, topic, and paragraph, recording only entity counts. Subsequently, we employed three state-of-the-art large language models, Bloom (Scao et al., 2022), OpenLLaMA (Geng and Liu, 2023), and Dolly (Databricks, 2023), to detect named entities in each book chapter. These models, trained on vast conversational data, can craft natural language responses, making them apt for the intricate language patterns in D&D texts, such as neologisms and metaphors.
After eliminating duplicates and pinpointing unique entities, we verified these results against our initial counts. The named entities identified by the three LLMs underwent a manual review for accuracy and consistency, adding crucial missed entities. Table 3 contrasts the named entity counts from each LLM, with recall metrics based on entities common across all models.

\begin{table} \begin{tabular}{l c c} \hline \hline **Book** & \multicolumn{2}{c}{**Counts**} \\ \cline{2-3} & **Words** & **Topics** \\ \hline Lost Mine of Phandelver (Baker and Perkins, 2014) & 45947 & 29 \\ Hoard of the Dragon Queen (Baur et al., 2014) & 74243 & 45 \\ Rise of Tiamat (Baur et al., 2014) & 80065 & 48 \\ Curse of Strahd (Perkins et al., 2016) & 154519 & 62 \\ Tomb of Annihilation (Perkins et al., 2017) & 148605 & 35 \\ Candlekeep Mysteries (Perkins et al., 2021) & 141104 & 106 \\ The Wild Beyond the Witchlight (Allan et al., 2021) & 184135 & 60 \\ \hline \hline \end{tabular} \end{table} Table 1: D&D adventure books

When annotating the resultant named entities, we followed a set of annotation guidelines that define the entity types and the annotation rules for our NER task. The entity types used are:

* Person: any named character or creature that can act as an agent, such as heroes, villains, allies, enemies, etc.
* Organization: any named group or faction that has a common goal or identity, such as guilds, cults, clans, etc.
* Location: any named place or region that has a geographical or spatial dimension, such as cities, dungeons, forests, etc.
* Misc: any named entity that does not belong to the above-mentioned categories. (This category contains important information such as spells, artifacts, potions, etc.)

The annotation process is done through a script, where a paragraph segment is taken iteratively and fed into the LLMs with a template prompt. Algorithm 1 below gives the pseudo-code for the process of identifying named entities:

```
Input: Books
Output: Named entities
foreach book do
    segments ← divideIntoSegments(book)
    foreach segment in segments do
        paragraphs ← divideIntoParagraphs(segment)
        foreach paragraph in paragraphs do
            foreach LLM in LLMs do
                prompt ← createPrompt(paragraph)
                namedEntities ← LLM(prompt)
                processNamedEntities(namedEntities)
            end foreach
        end foreach
    end foreach
end foreach
removeDuplicates(namedEntities)
```
**Algorithm 1** Named Entity Recognition using Multiple LLMs

As shown in the pseudo-code above, Algorithm 1 takes a set of books as input and outputs the named entities identified by the LLMs. The algorithm iterates over each book and divides it into segments. Each segment is further divided into paragraphs, and each paragraph is iteratively fed into each of the LLMs with a prompt to identify named entities. The named entities identified by each LLM are then processed and saved. Finally, all named entities are checked for duplicates, and those duplicates are removed.

After the named entities were recognized, they were mapped into JSON objects for storage, as shown in Figure 1. Nesting of objects is done according to the hierarchy described in Table 2. Each of the named entities is nested in an array of entities as entity objects with their corresponding attributes.
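For concreteness, the following is a minimal Python sketch of Algorithm 1 together with the JSON mapping. It is our illustration, not the authors' script: the `llm_annotators` callables, the segmentation helpers, and the prompt template are assumptions standing in for the Bloom, OpenLLaMA, and Dolly calls and the book-specific preprocessing.

```
import json

PROMPT = ("Please identify and list all named entities in the following "
          "text, with their types (Person, Organization, Location, Misc):\n{}")

def extract_entities(books, llm_annotators, split_segments, split_paragraphs):
    """books: {title: text}; llm_annotators: callables prompt -> [(name, label)]."""
    results = {}
    for title, text in books.items():
        entities = []
        for segment in split_segments(text):
            for paragraph in split_paragraphs(segment):
                for llm in llm_annotators:
                    # Each LLM sees the same templated prompt per paragraph.
                    entities.extend(llm(PROMPT.format(paragraph)))
        # Remove duplicates while keeping the (name, label) pairs.
        results[title] = sorted(set(entities))
    return results

def to_json(results):
    # Map the entities into nested JSON objects for storage (cf. Figure 1).
    return json.dumps(
        {title: [{"name": name, "label": label} for name, label in entities]
         for title, entities in results.items()},
        indent=2)
```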
## 4 Experimental Setup and Results

The experiment was conducted to identify how effective NER models are when used off the shelf to identify named entities in a fantasy domain for which no corpora are available for fine-tuning. For testing we used 10 different contemporary NER models. Table 5 shows the identified counts of named entities for each category in the adventure book Candlekeep Mysteries.

\begin{table} \begin{tabular}{l l l c} \hline **Chapter** & **Topic** & **Paragraph** & **Word Count** \\ \hline ... & Adventure Summary & The main antagonists of this story are three hags... & ... \\ & & One of the many novelties of this adventure is that... & ... \\ & & The characters are drawn into the adventure by one of two adventure hooks. You choose... & ... \\ & & Chapter 1 describes the Witchlight Carnival... & 40 \\ & ... & ... & ... \\ & Adventure... & The Monster Manual contains stat blocks for most of the creatures encountered in this... & ... \\ & & Spells and equipment mentioned in the adventure are described in the Players Handbook... & 31 \\ \end{tabular} \end{table} Table 2: Content hierarchy in a book

The testing approach for the NER models mirrors Algorithm 1. Here, paragraphs of input text are fed into the models without specific prompts. The resultant output is refined by filtering out corrupted values (e.g., "Strahd Von Zarovich" might be mistakenly split into two distinct names) and redundant entries, before being transformed into the JSON structure showcased in Figure 1.

During initial processing, NER models often produce numerous erroneous outputs. These arise from factors like incomplete word detection, missegmentation of terms, or misinterpretation of special characters. Such discrepancies can be mitigated using string manipulations and by cross-referencing outputs with a pre-curated list of named entities. Figure 2 displays entries that encountered corruption. These highlight instances where NER models incorrectly processed and extracted entities from the source material. In the example shown in Figure 2, the name "Fembris Larlancer" is erroneously divided into two distinct words, "Fembris L#" and "rancer", as a result of corruption during the NER processing stage. This example underscores the challenges faced during the entity extraction process and the need for robust post-processing to ensure the accuracy and quality of the extracted entities.
\begin{table} \begin{tabular}{|l|} \hline Please identify and list all named entities ... (beginning-inside-outside) scheme: \\ \hline "The traveling extravaganza known as the Witchlight Carnival visits your world once every eight years. You have a dim memory of sneaking into the carnival as a child without paying... ...pair of elves named Mister Witch and Mister Light were decidedly unhelpful." \\ \hline B-Organization: Witchlight Carnival \\ I-Person: Mister Witch \\ I-Person: Mister Light \\ \hline \end{tabular} \end{table} Table 4: Process of Annotation

Figure 1: Sample format of the JSON output

\begin{table} \begin{tabular}{|l|r r|r r|r r|r|} \hline **Book** & \multicolumn{2}{c|}{**Bloom**} & \multicolumn{2}{c|}{**Dolly**} & \multicolumn{2}{c|}{**OpenLLaMA**} & **Total Unique Entities** \\ \cline{2-7} & Count & Recall & Count & Recall & Count & Recall & \\ \hline Lost Mine of Phandelver & 21 & 0.47 & 32 & 0.73 & 40 & 0.91 & 44 \\ Hoard of the Dragon Queen & 58 & 0.89 & 62 & 0.95 & 60 & 0.92 & 65 \\ Rise of Tiamat & 54 & 0.88 & 57 & 0.93 & 53 & 0.87 & 61 \\ Curse of Strahd & 92 & 0.90 & 96 & 0.94 & 101 & 0.99 & 102 \\ Tomb of Annihilation & 101 & 0.80 & 99 & 0.79 & 112 & 0.89 & 126 \\ Candlekeep Mysteries & 60 & 0.87 & 61 & 0.88 & 64 & 0.93 & 69 \\ The Wild Beyond the Witchlight & 66 & 0.84 & 67 & 0.85 & 71 & 0.89 & 79 \\ \hline \end{tabular} \end{table} Table 3: Result comparison between LLMs

After removing corrupted and ineligible named entities, duplicate entries must be removed to allow a proper comparison of performance between different models. For this, tuples of words in adjacent positions were generated and compared. For example, Mayor Lei Duvezin, Mayor Duvezin, Lei Duvezin, and Duvezin all refer to the same entity with the label Person. In such cases, the tuple with the most similarity matches is retained as the named entity, and the duplicates are removed.

To visualize the raw named entity identification potential of each model, a density plot of the counts of named entities identified by each NER model was produced. Figure 3 shows the density of named entities recognized by each NER model; the hue represents the overlapping count ranges of named entities identified in each source book. Without training, NLP frameworks like Trankit (Nguyen et al., 2021), Flair (Akbik et al., 2018), and Spacy (Honnibal and Montani, 2017) show a strong baseline in entity recognition.

Model precision is key in performance evaluation. It is gauged by comparing the true positive entities with the actual named entities, and this comparison can be visually represented for each model across source books. For a comprehensive model assessment across books, Kernel Density Estimation (KDE) is used. It is a non-parametric method for estimating the probability density function (Terrell and Scott, 1992):

\[f(x)=\frac{1}{nh}\sum_{i=1}^{n}K\left(\frac{x-x_{i}}{h}\right)\]

where:

* \(x_{i}\) are the data points,
* \(K\) is the kernel function, typically a Gaussian or a uniform function,
* \(h\) is the bandwidth, which determines the width of the kernel function and controls the smoothness of the estimate,
* \(n\) is the number of data points.

KDE calculates \(f(x)\) through a summed kernel function \(K(u)\) anchored at the data points \(x_{i}\). Figure 4 illustrates the models' efficacy over the seven source books; a gradient near 1 signifies optimal performance.
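As a concrete illustration of the estimator above, here is a minimal Python sketch of ours (the Gaussian kernel and the bandwidth value are arbitrary choices for the example, not the paper's settings):

```
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(x, data, h):
    """f(x) = 1/(n*h) * sum_i K((x - x_i) / h)."""
    n = len(data)
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (n * h)

# Example: density of per-book unique entity counts (Table 3, last column).
counts = np.array([44, 65, 61, 102, 126, 69, 79])
grid = np.linspace(0.0, 150.0, 300)
density = kde(grid, counts, h=15.0)
```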
Figure 2: Sample format of a corrupted JSON output

Figure 3: Density plot for each model

In D&D, named characters, with their elaborate backstories, are central to the narratives, so assessing a model's inclination to identify these characters over other entities is vital. This inclination can be gauged by juxtaposing character counts with the total identified entities and contrasting them with the true counts; by scrutinizing the named entity counts from diverse NER models and comparing them to the true values, one can infer model behavior and efficacy. Figure 5 shows the frequency of character identification for the different models across all source books, with Figure 4(a) and Figure 4(b) charting the distributions with respect to models and books, respectively.

From Figure 4(b), we observe a consistent ratio between characters and other named entities across books. This consistency allows us to downplay book variability and focus on the insights from Figure 4(a). Notably, NLP frameworks such as Spacy (Honnibal and Montani, 2017) and Flair (Akbik et al., 2018) exhibit more balanced frequency distributions, indicating a higher character identification ratio. Although this might be unfavorable in certain contexts, in this domain, aligning character identifications closely with the overall named entity counts signals optimal performance. This suggests that Spacy and Flair perform exceptionally well in an off-the-shelf setting.

Figure 6 showcases precision and recall metrics for each NER model. To determine recall, we derived the true positive count from the average unique named entity counts, while the true count originated from the LLMs, as outlined in Table 3. For precision, false positives were ascertained from the misidentified unique named entities on average. The precision and recall values were averaged for each model across source books and plotted to offer a concise visualization of each NER model's performance. Evidently, Flair and Spacy outshine the other NER models in precision and overall performance, while Trankit (Nguyen et al., 2021) excels in recall relative to its precision.

## 5 Discussion

We undertook a Named Entity Recognition (NER) task on seven adventure books from the esteemed Dungeons and Dragons (D&D) series. Our methodology involved manual entity annotations in these books, which were subsequently verified against outputs from three leading language models: Bloom, OpenLLaMA, and Dolly. Our annotation guidelines delineated entity types into the categories person, organization, location, and misc. Ten NER models were then employed to gauge their efficacy in recognizing named entities within D&D. Among these, Flair, Trankit, and Spacy emerged superior, mirroring findings from past NER-centric studies. Conversely, StanfordDeID (Chambon et al., 2023) and RoBERTaNER (Baptiste, 2022) lagged in performance. A precision-centric analysis further reiterated the dominance of Flair, Trankit, and Spacy over their counterparts.

The findings imply that while generic models can decently handle NER tasks in specialized domains like D&D, performance inconsistencies exist across models.
Employing annotation guidelines bolsters consistency in entity recognition across varied books and contexts. Moreover, incorporating large language models for automated annotation can significantly mitigate the manual effort needed to build comprehensive datasets, particularly in intricate domains such as D&D.

However, our study bears certain caveats. We refrained from fine-tuning the NER models specifically for D&D, so our findings are indicative of generic model capabilities and might not capture the full potential of domain-specific optimization. Our dataset, comprising just seven books, might not encompass the depth and breadth of D&D narratives. The exclusive focus on Wizards of the Coast publications could also inadvertently introduce stylistic biases. Finally, while our study zeroes in on D&D as a fantasy subset, our insights might not seamlessly extend to other literary domains with their unique nuances.

## 6 Conclusion

Our exploration illuminates the remarkable potential of harnessing off-the-shelf models for NER tasks within the D&D universe's nuanced realm. Some models showcase an impressive baseline in entity recognition for this domain without extensive fine-tuning. However, there is a compelling need for continued research and refinement to tailor these models optimally for D&D's unique intricacies. Additionally, our research serves as a foundational resource for future inquiries: the dataset we have curated and our annotation guidelines stand as a benchmark for gauging the efficiency of future NER models and techniques. Consequently, our work not only reveals the current prowess of NER models within the D&D context but also sets the stage for continued innovation at the confluence of fantasy literature and artificial intelligence.

## 7 Future Works

Based on our findings and limitations, we suggest some directions for future research. One direction is to fine-tune NER models on the D&D dataset and compare their performance with off-the-shelf models; other techniques such as transfer learning or domain adaptation could also be explored to improve the performance of NER models in the D&D domain. Another direction is to use different data sources for NER in D&D, such as novels, comics, podcasts, or video games. A third direction is to apply different evaluation metrics for NER in D&D, such as F1-score, recall, accuracy, or error analysis. Finally, other aspects of NER in D&D could also be explored, such as entity linking, coreference resolution, relation extraction, or sentiment analysis.

Figure 4: Distribution plot for each model

Figure 5: Frequency plots with respect to models and adventure sourcebooks

Figure 6: Precision graph for different NER models
2309.16938
"I'd Like to Have an Argument, Please": Argumentative Reasoning in Large Language Models
We evaluate two large language models (LLMs) ability to perform argumentative reasoning. We experiment with argument mining (AM) and argument pair extraction (APE), and evaluate the LLMs' ability to recognize arguments under progressively more abstract input and output (I/O) representations (e.g., arbitrary label sets, graphs, etc.). Unlike the well-known evaluation of prompt phrasings, abstraction evaluation retains the prompt's phrasing but tests reasoning capabilities. We find that scoring-wise the LLMs match or surpass the SOTA in AM and APE, and under certain I/O abstractions LLMs perform well, even beating chain-of-thought--we call this symbolic prompting. However, statistical analysis on the LLMs outputs when subject to small, yet still human-readable, alterations in the I/O representations (e.g., asking for BIO tags as opposed to line numbers) showed that the models are not performing reasoning. This suggests that LLM applications to some tasks, such as data labelling and paper reviewing, must be done with care.
Adrian de Wynter, Tangming Yuan
2023-09-29T02:41:38Z
http://arxiv.org/abs/2309.16938v2
# I Wish to Have an Argument: Argumentative Reasoning in Large Language Models

###### Abstract

We evaluate the ability of contemporary large language models (LLMs) to perform argumentative reasoning. We frame our experiments in terms of the argument mining (AM) and argument pair extraction (APE) tasks, and evaluate their ability to perform reasoning at increasing levels of abstraction in the input and output representations (e.g., arbitrary label sets, semantic graphs). We find that, although LLMs are able to match or surpass the state-of-the-art in AM and APE, their argumentative reasoning performance is very dependent on the input and output representation. We also find an "exemplar effect", where too many exemplars increasingly become detrimental for task performance, with about \(4-5\) being the optimal amount. Neither result extends to chain-of-thought (CoT) prompting: we find the exemplar effect to be nullified, and our results suggest that CoT allows for better performance under ill-conditioned problems. We hope that the work reported contributes to the improvement of argumentative reasoning in LLMs.

## 1 Introduction

Large language models (LLMs) such as GPT-4 (Open AI, 2023) are able to solve a significant number of natural language processing tasks to a spectacular degree of accuracy. Lately, there has been a push to use these models to automate more complex tasks reliant on a deeper comprehension of language, such as data labelling (Cheng, Li, and Bing, 2023; Liu et al., 2023), model evaluation (Zheng et al., 2023), and even paper reviews (Liu and Shah, 2023).

Argumentative reasoning is central to all of these efforts. This type of reasoning encompasses formal and informal logic, and has as a prerequisite a deep understanding of language and context. Plenty of work has been carried out to evaluate LLM performance in formal logic scenarios; but, in human-centered applications, such as chatbots, formal logic issues are overlooked in favor of defeasible reasoning (Walton, 2008). Hence, to accurately decide whether LLMs are reliable for more complex tasks, such as data annotation and human interaction, we must also evaluate their capabilities in the context of argumentative reasoning.

In this paper we perform one such evaluation for GPT-3 (Brown et al., 2020) and GPT-4. We explore these models' argumentative reasoning capabilities by altering the input and output representations and subsequently observing their performance in two argumentation-related tasks. We frame our experiments from an argument mining (AM) and argument pair extraction (APE; Cheng et al., 2020) perspective. AM requires the model to be able to recognize spans of a text as belonging to an argument, and APE tasks the model with matching arguments from two texts as argument-rebuttal pairs.

For the input and output representation evaluation, we test progressively more abstract reasoning settings: from the _concrete_ setting, where the model is tasked to return matches from a passage (e.g., in AM it would return the spans that correspond to an argument), to the purely _symbolic_ setting, where we require the model to return the answer with arbitrary symbols (e.g., extract the spans, label them based on predefined rules, and only return the labels), or where the input is in an abstract format such as abstract meaning representation (AMR; Banarescu et al., 2013) graphs. AMR graphs are known to aid in argument comparison and interpretation (Opitz et al., 2021).
We also factor in the number of exemplars, to test whether the models' reasoning capabilities depend on their context length, and chain-of-thought (CoT; Wei et al., 2022b) prompting, a prompting paradigm that conditions the model to work step-by-step by following a templatized process ("Let's think about this step-by-step..."). It is known to provide good results in multiple reasoning tasks (Wei et al., 2022; Kojima et al., 2022; Suzgun et al., 2023). The diagram below displays our evaluation.

### Findings

In terms of raw scoring, we find that GPT-4 is able to reach SOTA performance in APE, and near-SOTA in AM. However, our analysis also shows the following:

1. Both LLMs fall short when the input and output representations are not carefully conditioned. Concrete reasoning does well, but low-abstraction (i.e., with "hints" in the input representation) symbolic approaches outperform it.
2. The performance with respect to the number of exemplars behaves consistently regardless of setting, representation, or task. The LLMs perform best when observing between 4 and 5 exemplars. More exemplars (up to 40 in some cases) cause a linear decrease in performance.
3. Performance drops noticeably, but remains above random, when the input representations are too abstract (e.g., AMR graphs). This does not hold for purely symbolic output representations (e.g., matrices), where the performance drops to near-zero levels.
4. The results above do not extend to CoT. We find that, in line with the literature, CoT outperforms analogous non-CoT approaches. But we find that the distributions output by CoT are similar to one another, which suggests that the effect of the input and output representation changes is mitigated by this technique. The "exemplar effect" mentioned above does not extend to CoT either.

## 2 Methodology

### Data

Throughout this paper, we utilize the Review-Rebuttal Submission-v2 (RRv2) dataset from Cheng et al. (2020). It comprises \(4,764\) pairs of review and rebuttal passages related to scientific article submissions. The test set size is \(474\) pairs, and the data was human-annotated by \(5\) annotators following a specific guideline. Each passage is sentence-separated and normally includes multiple arguments. RRv2 is a comprehensive corpus that focuses on long-distance relationships between statements, in addition to having two tasks using the same data (AM and APE), both of which are actively researched in argumentative reasoning.

For AM, each sentence is labelled with a BIO scheme, and the model must retrieve (label) each sentence from the review and rebuttal entries.1 In this task, the distinction between review and rebuttal is not relevant, and each entry is treated as a separate point in the corpus.

Footnote 1: In the RRv2 corpus, the BIO tags correspond to the Beginning, Inner, and Outer parts of an argument.

For APE, the arguments within each review-rebuttal pair must be aligned: every argument made by a reviewer must be mapped, when applicable, to a refutation from the rebuttal. This is normally represented as a binary matrix with overlaps (Bao et al., 2022; Cheng et al., 2021). See Figure 1 for examples of the tasks and their labelling scheme. Prior to use, we clean the text of tag and sentence delimiters.

### Models Evaluated

We evaluate GPT-4 and the text-davinci-003 variant of GPT-3.
The latter is similar to the original GPT-3 model, but further pretrained with reinforcement learning from human feedback (Ouyang et al., 2022; Christiano et al., 2017) and instruction pretraining (Wei et al., 2022; Sanh et al., 2022; Ouyang et al., 2022). It is sometimes referred to as GPT-3.5, and has a context length of \(4,097\) tokens.2 Few details have been released about the architecture, model size, or pretraining procedure of GPT-4. Its authors consider it marginally better than GPT-3 at some tasks, and better at more complex tasks. They evaluate it on benchmarks specific to GPT-4 that do not explicitly include AM or APE. However, some of these benchmarks have an argumentation component, such as the essay portion of the SAT.3 The variant of GPT-4 we utilized has a context length of \(32,768\) tokens. Throughout our experiments, for both models we set the temperature to \(0.8\), set the maximum return tokens based on the task, and left all other parameters at their defaults. To account for randomness, we report the average of five calls.

Footnote 2: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)

Footnote 3: A test for entrance to universities in the United States.

### Prompting

The models are well-known to be sensitive to the prompt's wording (Lu et al., 2022), so prior to starting the work, we carried out some small experiments to find out which phrasing would be best. We always include exemplars of the desired output format, and, when the prompts used \(n\) exemplars, we used the first \(n\) points from the development set. In the case of AM, we also prompted GPT-4 with and without CoT, as CoT has been shown to improve models' reasoning capabilities in various tasks. We wrote all our prompts following the structure from Open AI (2023), as we observed it produced more reliable outputs. For CoT we followed the standard phrasing ("Let's read line-by-line and solve this step by step") and indicated which sentence was being read, as well as the rationale. For example, "We now read {SENTENCE}. It follows the previous argument, and hence it is labelled with an 'I'." Sample prompts can be found in Figures 2 and 3.4 We used the Open AI API for all our experiments, and a consumer-grade laptop for the data processing.

Footnote 4: The full set of the prompts, scoring code, and model responses will be available at [https://github.com/adewynter/argumentation-llms](https://github.com/adewynter/argumentation-llms)

### Scoring

Cheng et al. (2020) utilize two F\({}_{1}\) measures to evaluate their models: binary F\({}_{1}\) (F\({}_{01}\)) for APE, and micro-F\({}_{1}\) (F\({}_{\mu}\)) for AM. We follow the same approach. The LLMs are instructed to return their output in a parseable format. We parse the answer by allocating an array of all "O" labels (or a zero-valued matrix in the case of APE) and parsing the response line-by-line. Our prompts specify a return format that signals the beginning of parsing via a token (|begin response|).
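To illustrate this parsing step, the following is a minimal Python sketch; it is ours rather than the authors' released code, and the per-line response format matched by the regular expression is an assumption consistent with the description above.

```
import re

def parse_am_response(response, num_sentences):
    """Parse an LLM response into per-sentence BIO labels.

    Pre-allocate all-'O' labels, then fill them in line-by-line from
    the text that follows the |begin response| token.
    """
    labels = ["O"] * num_sentences
    body = response.split("|begin response|", 1)[-1]
    for line in body.splitlines():
        match = re.match(r"\s*(\d+)\s*:\s*([BIO])", line)  # assumed "idx: tag"
        if match:
            idx, tag = int(match.group(1)), match.group(2)
            if 0 <= idx < num_sentences:
                labels[idx] = tag
    return labels
```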
### Baselines

Our baselines are MLMC by Cheng et al. (2021) and MRC-APE by Bao et al. (2022). MLMC models APE as a table-filling problem: passages are related by their pairing on a table, and it relies on a specially designed encoding scheme and loss. MRC-APE instead phrases APE as a machine reading comprehension task: first, the model performs AM, and then it pairs the detected arguments. This approach is generally better than MLMC when using BERT (Devlin et al., 2019), and considerably higher-performing when using a Longformer (Beltagy et al., 2020) equipped with a context length of \(4,096\) tokens, which is ideal for long-context tasks such as APE. We also consider random baselines as sanity checks: a random guesser would achieve around 33% F\({}_{\mu}\) in AM and 14% F\({}_{01}\) in APE.

Figure 1: Examples of the (raw) data entries in RRv2. The BIO tag is in parentheses, with the "B" and "I" labels corresponding to an argument highlighted in orange and blue. In AM, review and rebuttal (response) passages are considered independent, and the task is to label all the arguments from each passage. In APE, the task is to match arguments from the review to their corresponding refutations. In the above, the first argument is unmatched, and the second ("BERT produces...") pairs with the first argument from the response.

Figure 2: Sample AM prompt for the concrete setting. The model is requested to perform AM, and to mark every new argument with a special token ("|START|") for identification. Exemplars, if any, are included in the prompt and follow the output representation. Otherwise (in the case of zero-shot), we only leave the specified output representation. More complex prompts, such as the one for APE, simply ask for the pairing (e.g., "return all arguments from the response that match those of the review").

### Settings

We have named our settings (i.e., representations) _concrete_ and _symbolic_, to broadly distinguish the approach taken towards representing the task. The former returns full sentences, while the latter encompasses a variety of input and output symbols. This is solely for practical purposes: in reality, symbolic approaches cover a variety of abstractions, some of which may be considered "easier" than the concrete approach, and, strictly speaking, the concrete approach _is_ a symbolic approach (Brachman and Levesque, 2004). Below we describe these settings, and we discuss our distinctions further in Section 3.3.

#### 2.6.1 Concrete

We instruct the model to return a portion of the text based on the instructions given: in AM, the lines corresponding to an argument, and in APE, the argument pairs themselves. To distinguish the "B" and "I" labels, we enforce a specific return format that works with our parsing code via a special token (|START|). For scoring concrete answers, we always expect an exact match. See Figure 2 for a basic AM concrete prompt.

#### 2.6.2 Symbolic

The model is instructed to return symbols (labels) based on its judgment of the text. This requires more reasoning steps than the concrete setting, given that the model is solving AM and labelling the spans with a pre-specified, arbitrary label set from the instructions. See Figure 3 for an example. In the AM symbolic setting we evaluated two types of labels: the BIO tags and the line indices. For APE we evaluated the line indices and the full matrix representation. For AMR, we used amrlib5 and a pre-trained BART-based model (Lewis et al., 2020) included in the library. The AMR graphs were created at the sentence level.

Footnote 5: [https://github.com/bjascob/amrlib](https://github.com/bjascob/amrlib)
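As a sketch of how such sentence-level AMR graphs can be produced with amrlib (our illustration; which checkpoint `load_stog_model` picks up depends on the locally installed model):

```
import amrlib

# Load a sentence-to-graph (StoG) parser; amrlib wraps pre-trained
# checkpoints such as the BART-based model mentioned above.
stog = amrlib.load_stog_model()

sentences = ["The proposed method outperforms the baseline on RRv2."]
graphs = stog.parse_sents(sentences)  # one PENMAN-format AMR string per sentence
print(graphs[0])
```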
## 3 Experiments and Results ### AM/APE: Symbolic and Concrete Reasoning In our first experiment we evaluated the performance of the models in AM and APE in both the concrete setting and various iterations of the symbolic setting. The results for the best-performing prompts and settings are in Table 1, and a description of every symbolic setting in Table 2. In spite of our relatively high temperature, the variance observed was low, \(<1.0\%\). In AM the LLMs performed well, but were unable to beat the existing SOTA. In APE, however, GPT-4 consistently beat the best-performing models (\(+14\%\) F\({}_{01}\)). In AM, the best-performing symbolic setting had line indices included in the input representation, and requested the indices of each argument as the output representation. To handle the conversion to a BIO label set, we instructed the model to return the "B" labels as indices enclosed in parentheses. Not including the indices in the input did not lead to an equally good performance. In APE, requesting a full binary matrix output led to extremely poor performance (\(9.88\%\) F\({}_{01}\), well below the random baseline, and closer to guessing all zeros every time). However, both symbolic and concrete approaches performed well with respect to the existing non-LLM-based baselines. We tested other approaches, such as first extracting the arguments and then matching them, but they did not yield sufficiently good results. Due to token-length and budget limitations, we were unable to test CoT and AMR in APE. CoT approaches generally had better performance than their non-CoT counterparts, even though they use fewer exemplars due to their higher token usage. The only exception to this observation was the best-performing approach (AM: indices inline and indices as output), where the difference was \(3\%\) points, still above the concrete setting. Figure 3: Sample symbolic AM prompt. The model is requested to parse each line and return BIO tags. Exemplars, if any, are included in the prompt, and follow the specified output representation. For zero-shot, we only specify the output representation. In other output representations the model is requested to mark the B-label (e.g., "(15) 16 17"). In APE we request the index-based response with matching arguments (e.g., "15, 16, 17: 9, 10, 18, 19") and convert it into the binary matrix representation for scoring. We also experimented with requesting the binary matrix outright. ### Performance and Number of Exemplars In our second experiment we evaluated the performance of the LLMs in AM and APE as a function of the number of exemplars seen by the LLMs. The number of exemplars was \(\{0,4,8,16,\tau\}\), where \(\tau\) is the maximum number allowed by the model's token limit. In the case of GPT-3, \(\tau\) tended to be around \(4\); for GPT-4 it was a variable number depending on the task and setting, with an average of \(44\) for symbolic, non-CoT approaches in AM (\(15\) CoT, \(9\) AMR) and \(29\) concrete; and \(22\) for both symbolic and concrete in APE. The total size of our collected data, including all exemplars, tasks, settings, and five calls, is about 3.4GB. The results are in Figures 4 and 5. The models showed on average a peak in performance when the number of exemplars was \(4\), and their performance steadily decreased as this number increased. This peak was independent of the task and the setting, since all curves were similar to one another. In the case of CoT, we did not observe such a trend.
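As a rough illustration of how the \(n\)-shot prompts in these experiments can be assembled (following Section 2.2), consider the sketch below; the literals are placeholders rather than our verbatim prompts:

```python
COT_PREFIX = "Let's read line-by-line and solve this step by step."

def build_prompt(instructions, exemplars, n, query, use_cot=False):
    # exemplars: (passage, labelled_output) pairs; we take the first n
    # development-set points, per Section 2.2.
    parts = [instructions] + ([COT_PREFIX] if use_cot else [])
    for passage, labelled in exemplars[:n]:
        parts.append(f"Input:\n{passage}\n|begin response|\n{labelled}")
    parts.append(f"Input:\n{query}\n|begin response|")
    return "\n\n".join(parts)
```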
### Performance and Input Representation In our final experiment we explored whether representing the input text in progressively more abstract ways led to a change in the symbolic performance of the models. In this section we focused solely on GPT-4 and AM, since the AMR graphs tended to be too big for the models to handle (for GPT-3 in both AM and APE; for GPT-4 only in APE). We tested three representations (text with indices inline, plain text, and with and without an AMR graph). There is no rigorous way to quantify the level of abstraction of these approaches. However, we consider the concrete approach to be the least abstract, "hints" (indices inline, indices in output) to be slightly more abstract, and purely symbolic input representations (AMR graphs) to be the most abstract. In between these categories, we ranked them based on the complexity of the output representation: BIO tags are more abstract than indices, since they have rules for matching, and both are more complex than concrete approaches, since outputting the matching sentence is simpler than mapping to an arbitrary symbol and returning it. The full list of experiments is in Table 2, and our results in Figure 6. In non-CoT approaches, we observed noticeable improvements when the abstraction level was sufficiently low (indices inline and indices in output; concrete), and as the abstraction increased, the model started to perform poorly, but retained above-random performance. \begin{table} \begin{tabular}{|c||c|c|} \hline Model & AM F\({}_{\mu}\) & APE F\({}_{01}\) \\ \hline GPT-3 (c) & \(39.86\pm 0.51\) & \(18.58\pm 0.70\) \\ \hline GPT-4 (c) & \(64.51\pm 0.53\) & \(53.84\pm 0.73\) \\ \hline \hline GPT-3 (s) & \(62.00\pm 0.32\) & \(20.15\pm 0.91\) \\ \hline GPT-4 (s) & \(70.63\pm 0.21\) & \(49.85\pm 0.96\) \\ \hline \hline MRC-APE & \(72.43\) & \(39.92\) \\ \hline MLMC & \(71.35\) & \(32.81\) \\ \hline \end{tabular} \end{table} Table 1: Results for the AM and APE tasks in the symbolic (s) and concrete (c) settings. We report the best-performing prompt for an average of five calls: in the case of symbolic reasoning, the best performing approach in both tasks had indices inline and indices as the output. We also report MLMC and MRC-APE, the two best-performing, non-LLM-based approaches for RRv2. Overall, GPT-4 had good performance, almost matching the existing baselines in AM and outperforming them in APE. GPT-3 had much lower performance. Figure 4: Effect of the number of exemplars in AM. Thick blue line is the concrete approach, thickest red line is the average symbolic performance. Last entry in all plots is the average maximum number of exemplars supported. On average, the LLMs' performance peaked at \(4\) exemplars, and steadily decreased as the exemplars became more numerous. The exception to this observation was CoT prompting (solid red lines), which consistently outperformed concrete prompting approaches and all but one of the symbolic approaches (dashed red lines), and did not show this decreasing trend. GPT-3's performance was quickly outclassed by GPT-4, even under the same number of exemplars. Figure 5: Effect of the number of exemplars in APE. Thick blue line is the concrete approach, thickest red line is the average symbolic performance. Last entry on each plot is the average maximum number of exemplars supported. Similar to AM, on average model performance peaked at \(4\) exemplars, and steadily decreased as they became more numerous. We were unable to test CoT and AMR under this approach due to limitations on token length and budget. Dotted lines are the various symbolic representations.
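The statistical comparisons reported next rely on Welch's \(t\)-test over prediction arrays; a minimal sketch with stand-in arrays is shown below (scipy's ttest_ind with equal_var=False is precisely Welch's unequal-variance test):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
preds_cot = rng.integers(0, 3, size=200)    # stand-in per-item predictions (CoT)
preds_plain = rng.integers(0, 3, size=200)  # stand-in predictions (non-CoT)
t_stat, p_value = ttest_ind(preds_cot, preds_plain, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```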
These results are significant under a Welch's \(t\)-test carried out on the prediction arrays: when the input text is unaltered, requesting line numbers or BIO tags as the output provides no difference in predictions with CoT (\(p\approx 0.77\); large \(p\)-values imply the distributions have identical expected values), but does provide a noticeable difference without it (\(p<0.05\)). A Welch's \(t\)-test shows that \(t<-0.86\) and \(p<0.39\) for the inline-indices setting when comparing it with its non-CoT counterpart, thus suggesting that we cannot conclude that the two distributions are distinct. Since the performance in the non-CoT part was better on average (67% vs. 70% F\({}_{\mu}\)), it is possible that CoT may harm performance in outliers. We also compared CoT in other abstract scenarios, and we also observed large \(p\)-values when comparing with only indices in the output (\(<0.98\)), and when comparing the latter with BIO tags (\(p<0.77\)). This suggests that CoT is effective in highly-abstract or hard-to-evaluate scenarios (such as requesting BIO tags). The results noted only hold for specific numbers of exemplars, compared one-to-one in both settings, and are \(p<0.05\) otherwise. ## 4 Discussion ### Relationship of F-scores and Reasoning The models, and GPT-4 in particular, performed well in both AM and APE, to the point of beating, or almost matching, the existing SOTA models. This alone is not sufficient to claim that the models are able to perform argumentative reasoning, let alone understand it. When altering the input and output representation with conceptually minor changes, such as adding line numbers or requesting a BIO set as opposed to integers, the models had noticeably different performances. We argue that this is indicative of a potential false positive: the models are unable to reason in an argumentative setting, but give an excellent appearance of being able to do so. It is worth noting that, although GPT-4's performance in APE is SOTA, it is still well below its equivalent performance in AM. Although the tasks are slightly different, another plausible explanation is that the models failed to generalize due to the length of the task, which is known to cause problems in transformer-based models (Anil et al., 2022). ### Impact of CoT in Argumentative Reasoning That said, we noticed that this performance difference did not extend to the inclusion of CoT. These prompts had on average better performance than their non-CoT counterparts, and we observed that CoT yielded better results in ill-posed (here, overly abstract) problems. However, our analysis showed that the output distributions for all CoT approaches were rather similar. This is perhaps indicative that CoT allows the models to return the desired output regardless of representation. One possible explanation for the similar output distributions is the templatized nature of CoT: it usually starts with a common phrase (in our case, "Let's read line-by-line and solve this step by step") and follows very specific steps that represent (in our prompt) the reasoning required. While we don't discard this possible explanation, it is worth noting that the input and output representations are completely different in all approaches. That said, it is clear that CoT mitigates the effect of input and output representations on the execution of the task. \begin{table} \begin{tabular}{|c|c|c|} \hline Task & Input & Output (label set) \\ \hline \hline AM* & Text & Text and |START| \\ \hline AM* & Text with indices & Indices \\ \hline AM* & Text & Indices \\ \hline AM & Text with AMR graph & BIO tags \\ \hline AM & AMR graph & BIO tags \\ \hline \end{tabular} \end{table} Table 2: Input representations tested, in roughly increasing order of abstraction. Tasks marked with an asterisk (*) were tested with and without CoT. The first row (input text, output text requested) corresponds to our concrete reasoning setting. Our ranking of abstraction is arbitrary: we consider the text with indices marked inline less abstract than a text without them, since the former provides a "hint" of what the label set is supposed to be like. Output representations with BIO tags as the output are more abstract, since they require some rule matching in order to determine which symbol corresponds to which part of the problem. AMR graphs are considered the most abstract. Figure 6: Effect of abstraction level in the input/output representation with respect to output performance in AM. For non-CoT approaches, the more abstract (under our criterion) the input representation, the more difficult it is for the model to solve the task. For example, if the input is conditioned to have "hints" about the desired output (for example, requesting indices and adding line numbers to the input text) the performance increases significantly. That said, even the most abstract representations (AMR graphs) noticeably outperform the random baseline. CoT approaches (at the top, marked with an asterisk) consistently outperformed their non-CoT counterparts, even when the maximum number of exemplars supported is much lower, and regardless of abstraction level. We did not test CoT with AMR in AM due to token limitations. ### The Exemplar Effect Perhaps the most surprising aspect of our evaluation was the effect of the number of exemplars on the model's downstream performance. It was not a linear relationship; instead there was a clear "sweet spot" at around \(4\) samples. Originally, Brown et al. (2020) had indicated that more exemplars in GPT-3 would yield better performance, but that this number must be finetuned ahead of time. We observed the same thing, but these results did not hold for GPT-4. What this suggests, from our experiment results, is that (assuming that more exemplars imply better understanding of the task) the models are not accurately performing inference from the data provided. That said, this "exemplar effect" did not extend to CoT settings, since the runs using the maximum number of exemplars had slightly higher performance than the other approaches. However, we do not discard the possibility that models with longer token limits could also show a similar exemplar effect in CoT. ## 5 Limitations ### Reasoning Evaluation It could be argued that our results are not complete in terms of evaluating argumentative reasoning capabilities. We agree: _recognizing_ an argument is not the same as _deciding_ whether it is sound, valid, or neither. However, without the ability of the model to show that it is able to recognize arguments and identify relations between them, any potentially generated argument or result evaluating the model's performance in these tasks could be deemed untrustworthy.
Note that issues arising from the model not understanding the prompt (Webson and Pavlick, 2022) are not present in our evaluation, since the results reported here all perform better than the random baselines, and our analysis was comparative in nature. ### Data contamination We factored out, to an extent, potential data contamination, which is known to impact downstream model performance (Carlini et al., 2023; Liang et al., 2022; Lee et al., 2023; De Wynter et al., 2023), by tasking the model to recognize arguments from the passage. However, we are unable to guarantee that the models have not been trained with this data, and therefore have at least some bias towards these results. ### Model Setup We only evaluated two models, so our results may not extend to other LLMs. We did not fine-tune the models, and opted instead to treat them as generalists performing in-context learning, in line with their standard, contemporary usage. However, it is likely that fine-tuning or specialized pre-training could yield more insights into our results, and probably a more uniform performance distribution across representations: after all, MRC-APE and MLMC are specialist models and remain the SOTA in AM for RRv2. ## 6 Related Work ### Informal Reasoning and Argumentation In this section we limit ourselves to discussing the informal reasoning and argumentation evaluation of LLMs. For an introduction to AM covering other modelling approaches, see the survey by Lawrence and Reed (2020); for a broader introduction to the field and open problems, see Lauscher et al. (2022). There has been work on using LLMs for AM (Zhao et al., 2023), as well as on determining argument quality (van der Meer et al., 2022; Holtermann et al., 2022). More recently, Han et al. (2022) showed that GPT-3 is unable to mimic human-like inductive reasoning. While non-LLM-based approaches to APE do exist (Cheng et al., 2020; Bao et al., 2022), our work is the first to evaluate LLMs for this particular task. A subset of informal reasoning, commonsense reasoning, is a very active area of research. It is not strictly the same as argumentative reasoning, but it allows for the evaluation of some types of defeasible arguments. While not the direct subject of this work, there are reports that GPT-3's performance is rather low in commonsense reasoning (Cong, 2022) and figurative language (Chakrabarty et al., 2022). In addition, there are techniques to improve the model's performance in this area, such as generating explanations for out-of-distribution observations (Brahman et al., 2021), corpora and benchmarks (Forbes et al., 2020; Rudinger et al., 2020; Pyatkin et al., 2023), and decoding (Cohen and Mooney, 2023) and prompting (Liu et al., 2022; Jin et al., 2022; Štefánik and Kadlčík, 2023; Lu et al., 2022) strategies. Of note, Webson and Pavlick (2022) explored whether GPT-3 and other LLMs were able to _understand_ the prompts. They found evidence to the contrary, and although they provided two alternate interpretations of their findings (the task was too hard, or the model refused to answer), their main conclusion is that the models do not always follow, or understand, the instructions the way a human being would. ### Formal Reasoning The scientific community has long been interested in the evaluation of the reasoning capabilities of statistical models, and LLMs are no exception (Zhang et al., 2023; Dziri et al., 2023; Dan, Bastani, and Roth, 2022; Liu et al., 2022, to name a few).
Benchmarks like BIG-Bench Hard (Suzgun et al., 2023) have subsets of data that are meant to evaluate formal reasoning. Saparov and He (2023) performed a formal analysis of the reasoning capabilities of LLMs. They built an ontology and tested whether GPT-3 was able to perform modus ponens given a specific premise. They found that the models are unable to do so consistently, with a much lower performance in fictional worlds. This has been taken beyond formal logic and into real-world reasoning problems (e.g., maths problems), applying GPT-3 and other LLMs with much success (Open AI, 2023; Lewkowycz et al., 2022; Zong and Krishnamachari, 2023). That said, there exists evidence to the contrary on some tasks, such as arithmetic (Dziri et al., 2023). The latter reference also indicates that there is strong evidence that tasks with significant reasoning depths will cause these models to fail, a result also shared by Anil et al. (2022). ## 7 Conclusion In this paper we evaluated the argumentative reasoning capabilities of GPT-3 and GPT-4. We altered the input and output representations of the data to try to understand their behavior via their performance in AM and APE. We found that the models present good performance in these tasks, but this was not sufficient to conclude that they were indeed performing any type of reasoning. On the contrary, our experiments showed that the input and output representations had a significant impact on their downstream performance. This sensitivity suggests that applying the models to critical uses such as data labelling and paper reviewing needs to be done with care. However, when the input and output representations included CoT, our results became more surprising: the "exemplar effect" (too many exemplars being detrimental to performance) was no longer present, and the output distributions were very similar regardless of the input and output representations. We were unable to conclude that CoT helps argumentative reasoning in LLMs, but we did note that it helped mitigate potential issues associated with ill-conditioned problems. As mentioned in the Limitations section, our work may be expanded by exploring the ability of the models to work in depth with argumentation. Our work did not evaluate their ability to judge an argument's strength, or to provide reasonable rebuttals. Moreover, due to token-length limitations, we were unable to evaluate AMR, the most abstract setting we tested, with CoT. We believe that the performance difference between non-CoT AMR and CoT AMR would provide valuable insights into the extent to which these models are able to discern abstract input representations successfully. All of these are well-known problems in the computational argumentation field and the scientific community at large. We plan to explore these in further work as they relate to LLMs (GPT-4 specifically, given its performance compared to GPT-3), as an in-depth evaluation of these capabilities is important for downstream applications. Finally, although the goal of our work is to study LLM argumentative reasoning capabilities, we believe that this type of representation-based analysis could be extended to other models and non-argument-related tasks, such as natural language inference. ## Acknowledgements The authors wish to thank Liying Cheng for answering questions about the RRv2 dataset and its methodologies.
2309.09818
Grasp-Anything: Large-scale Grasp Dataset from Foundation Models
Foundation models such as ChatGPT have made significant strides in robotic tasks due to their universal representation of real-world domains. In this paper, we leverage foundation models to tackle grasp detection, a persistent challenge in robotics with broad industrial applications. Despite numerous grasp datasets, their object diversity remains limited compared to real-world figures. Fortunately, foundation models possess an extensive repository of real-world knowledge, including objects we encounter in our daily lives. As a consequence, a promising solution to the limited representation in previous grasp datasets is to harness the universal knowledge embedded in these foundation models. We present Grasp-Anything, a new large-scale grasp dataset synthesized from foundation models to implement this solution. Grasp-Anything excels in diversity and magnitude, boasting 1M samples with text descriptions and more than 3M objects, surpassing prior datasets. Empirically, we show that Grasp-Anything successfully facilitates zero-shot grasp detection on vision-based tasks and real-world robotic experiments. Our dataset and code are available at https://grasp-anything-2023.github.io.
An Dinh Vuong, Minh Nhat Vu, Hieu Le, Baoru Huang, Binh Huynh, Thieu Vo, Andreas Kugi, Anh Nguyen
2023-09-18T14:39:26Z
http://arxiv.org/abs/2309.09818v1
# Grasp-Anything: Large-scale Grasp Dataset from Foundation Models ###### Abstract Foundation models such as ChatGPT have made significant strides in robotic tasks due to their universal representation of real-world domains. In this paper, we leverage foundation models to tackle grasp detection, a persistent challenge in robotics with broad industrial applications. Despite numerous grasp datasets, their object diversity remains limited compared to real-world figures. Fortunately, foundation models possess an extensive repository of real-world knowledge, including objects we encounter in our daily lives. As a consequence, a promising solution to the limited representation in previous grasp datasets is to harness the universal knowledge embedded in these foundation models. We present Grasp-Anything, a new large-scale grasp dataset synthesized from foundation models to implement this solution. Grasp-Anything excels in diversity and magnitude, boasting 1M samples with text descriptions and more than 3M objects, surpassing prior datasets. Empirically, we show that Grasp-Anything successfully facilitates zero-shot grasp detection on vision-based tasks and real-world robotic experiments. Our dataset and code are available at [https://grasp-anything-2023.github.io](https://grasp-anything-2023.github.io). ## I Introduction Grasp detection is a fundamental and long-standing research topic in robotics [1]. Establishing principles and techniques for grasp detection has enabled multiple applications such as manufacturing, logistics, and warehouse automation [2]. Recent advances in deep learning have introduced effective avenues for the development of data-driven systems for robotic grasping [3]. Numerous deep learning approaches have been proposed to address robotic grasping [4, 5, 6, 7, 8]; however, they primarily focus on improving the neural network (i.e., the _model-centric_ approach). While often excelling during training, these model-centric strategies demonstrate unstable outcomes across different datasets, especially with real-world data [9]. Furthermore, Platt _et al._[10] contend that the outcomes obtained on physical robots strongly depend on the training data, underscoring the pivotal role of grasp data. Consequently, our paper explores the _data-centric_ approach, which aims to improve the quality of grasp data to achieve more robust generalization in grasp detection. Over the years, many grasp datasets have been proposed (Table I). Nevertheless, current grasping datasets share common limitations. First, grasp datasets have been constrained by the limited number of objects [11]. The restriction of the number of objects leads to dissimilarities among existing datasets and may introduce inconsistencies when transferring to real-life robotic applications [9, 12]. Second, most of the current grasp datasets do not consider natural language descriptions for each scene arrangement, limiting human-robot interactions [13]. Finally, previous works make assumptions about scene arrangements resembling bin-like configurations [14] or lab-controlled environments [3], which diverges significantly from the complexity of natural settings [9]. To overcome these challenges, we aim to establish a new large-scale _language-driven grasp dataset_ that ideally covers unlimited scene arrangements in our daily lives.
Recently, we have witnessed the application of foundation models, such as large language models (LLMs) [30] and text-to-image (T2I) models [31], across various domains of robotics research [32]. Foundation models have demonstrated remarkable promise in various tasks [33], for example, task and motion planning [34], manipulation [35], visual-and-language navigation [36, 37], and scene understanding [38]. The utilization of large-scale foundation models has facilitated the integration of omniscient knowledge into robotic systems [39], overcoming the challenges faced by traditional methods in robustly modeling unstructured and novel environments [40]. Inspired by these developments, we hypothesize that it is possible to apply panoptic knowledge from large foundation models to ideally generate an unlimited number of objects and, therefore, serve as a foundation to synthesize a grasp dataset that universally covers possible objects and arrangements that come into existence. We introduce **Grasp-Anything**, a new large-scale dataset for grasp detection. Unlike existing grasp datasets [1, 3, 5, 6, 12, 15, 21, 22, 23, 24, 26, 27, 28, 41] that are limited to a predefined set of objects and arrangements, our dataset offers an extensive range of objects and closely replicates real-world scenarios in natural environments, thus alleviating the generalization issues [42]. We empirically demonstrate that Grasp-Anything facilitates zero-shot learning in both the computer vision and robotic aspects of grasp detection. In the vision aspect, our findings confirm that the performance of grasp detection baselines trained on Grasp-Anything improves significantly over related datasets. In the robotic aspect, we demonstrate that our large-scale dataset can be directly applied to real robot systems to improve the grasping task. Fig. 1: We introduce Grasp-Anything, a new large-scale language-driven grasp dataset that universally covers objects in our daily lives by using knowledge from foundation models. In summary, our contributions are as follows: * We leverage knowledge from foundation models to introduce Grasp-Anything, a new large-scale dataset with 1M (one million) samples and 3M objects, substantially surpassing prior datasets in diversity and magnitude. * We benchmark zero-shot grasp detection in various settings, including real-world robot experiments. The results indicate that Grasp-Anything effectively supports zero-shot grasp detection in light of its comprehensive representation of real-world scene arrangements. ## II Related Work **Grasp Datasets.** Several grasp datasets have been introduced recently [43] (see Table I). Many factors can be considered when designing a grasp dataset, such as data representation (RGB-D or 3D point clouds), grasp labels (rectangle-based or 6-DoF), and quantity [11]. Notably, a key distinction between our Grasp-Anything dataset and its counterparts lies in its _universality_. While a limited selection of objects constrains existing benchmarks, our dataset is designed to encompass a diverse spectrum of objects observed in our natural lives. In addition, our dataset incorporates natural settings for object arrangements, thereby distinguishing itself from prior works where object configurations are more strictly controlled [10]. Grasp-Anything outperforms other benchmarks in both the number of objects and the number of samples. **Grasp Detection.** Deep learning techniques have demonstrated notable achievements in grasp detection.
Lenz _et al._[44] present one of the first works utilizing deep learning to detect grasp poses. Subsequently, learning-based approaches [45, 46, 19, 47, 6, 19, 4] have gained prominence as the most widely utilized solution for grasp detection. Despite the extensive research on deep learning methods for robotic grasping, it remains challenging to apply them to real-world grasping applications [10]. This is primarily attributed to the limited size and diversity of existing datasets for robotic grasping [14]. On the other hand, zero-shot grasp detection is a promising approach for utilizing large-scale datasets. Liu _et al._[48] consider the zero-shot grasp detection problem. However, they mainly focus on sim2real applications rather than benchmarking the zero-shot feasibility of existing datasets. Therefore, we are motivated to develop a large-scale benchmark for grasp detection, enabling the successful grasp of the universal set of objects we encounter in real-life scenarios. **Foundation Models for Robotic Applications.** Different attempts to incorporate foundation models into robotic applications have been proposed [49, 36]. For instance, Kapelyukh _et al._[35] introduce a framework based on DALL-E [50] to solve practical rearrangement tasks. Pretrained image in-painting models have also been utilized in [51] for data augmentation. Although applying foundation models to robotic systems has become an inevitable trend [33], it still bears several uncertainties [39]. First, these models still lack functional competence [39], which impedes their ability to solve novel planning problems that require an understanding of how the world operates [52]. In addition, prior approaches often exhibit drawbacks in terms of scope, limited functionalities, or an open-loop nature that does not allow for fluid interactions and corrections based on user feedback [33]. Recognizing these limitations, we design a data-centric approach based on LLM and T2I models, focusing on generating large-scale data for robotic grasping within this paper. ## III The Grasp-Anything Dataset Fig. 2 shows an overview of the procedure used to generate our Grasp-Anything dataset. We first perform prompt engineering to generate scene descriptions and utilize foundation models to generate images from these text prompts. The grasp poses are then automatically generated and evaluated. We represent grasp poses as 2D rectangles [15], as in many previous works, due to their simplicity and compatibility with real-world parallel plate grippers [12]. ### _Scene Generation_ **Prompt Engineering.** To tackle the challenge of generating a universal set of objects, we utilize ChatGPT [30] and perform a prompt engineering technique [53] to guide ChatGPT to generate diverse scene descriptions. _i) Directives Initialization._ The concept of 'directives' refers to the configuration of the goal for ChatGPT to match the user preference [36]. In our context, the goal for ChatGPT is to generate a diverse set of scene descriptions that cover a large proportion of objects. Therefore, we initiate directives for ChatGPT using the following conversation **Q:** "Imagine you are helping me to generate a corpus of scene descriptions, each condensed to a single sentence. My goal is to generate as many diverse graspable objects as possible. Each sentence must be distinct and should contain at least two objects." **A:** "Sure, I'd be happy to help you generate a large corpus of scene descriptions with diverse objects." We then set up the output template for ChatGPT as follows **Q:** "The template for each sentence contains two parts. The first part is the sentence with the structure as <Obj_1><Obj_2>...<Verb><Container>. The second part is the list of extracted objects from the sentence [<Obj_1><Obj_2>...]." **A:** "Understood! Let's generate the first sentence..." This template guarantees that each generated prompt has two components: the text describing the scene arrangement and a list indicating the graspable objects in the text. _ii) Context Augmentation._ Directives guide ChatGPT at a high level for scene descriptions but do not guarantee long-term quality due to hallucination [33]. To ensure consistent quality over a long time horizon, we employ context augmentation by creating a self-reinforcing loop for ChatGPT, sketched in the code below. We initialize a prompt buffer to store generated prompts, with the first 50 samples _manually_ assigned. We sample 10-15 scene descriptions from the prompt buffer each time and input them to ChatGPT. An example of this process is given below **Q:** "Based on sample scene descriptions, generate a new scene description with a similar structure." **A:** "Certainly! Here's a new scene description: A spoon and a plate on a desk. [spoon, plate]" The new sample is then appended to the prompt buffer. We repeat the process until 1M scene descriptions are generated.
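The loop can be sketched as follows; query_llm is a placeholder for a ChatGPT API request, and the toy buffer sizes stand in for the 50 manual seeds and the 1M-sample target described above:

```python
import random

def query_llm(prompt):
    # Placeholder for a ChatGPT API call; returns a canned reply so the
    # sketch runs offline.
    return "A spoon and a plate on a desk. [spoon, plate]"

prompt_buffer = ["A knife and a fork beside a cup. [knife, fork, cup]"] * 50
while len(prompt_buffer) < 60:  # the real pipeline runs until 1M samples
    examples = random.sample(prompt_buffer, k=random.randint(10, 15))
    prompt = ("Based on sample scene descriptions, generate a new scene "
              "description with a similar structure.\n" + "\n".join(examples))
    prompt_buffer.append(query_llm(prompt))
```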
**Image Synthesis.** Given the scene descriptions generated by ChatGPT, we use Stable Diffusion 2.1 [31] to generate images that align with the scene descriptions. We then gather instance segmentation masks for every object that appears in the grasp list using state-of-the-art visual grounding and instance segmentation models (OFA [13] and Segment-Anything [54]). At the end of the image synthesis stage, we obtain a grounding mask for each referenced object. ### _Grasp Pose Annotation_ To annotate grasp poses for each object in the grasp list of the scene description, we employ a pretrained model RAGT-3/3 [6]. Since the candidate poses may not always be accurate, we use a traditional method by Kamon _et al._[55] to further evaluate grasp poses. More specifically, we determine the grasp quality of each pose by calculating the net torque, denoted as \(\mathcal{T}\), associated with the grasp as follows \[\mathcal{T}=\underbrace{(\tau_{1}+\tau_{2})}_{\text{Resistance}}-\underbrace{RMg}_{\text{Torque}}. \tag{1}\] The resistance at each contact point, denoted as \(\tau_{i}\), can be computed by \(\tau_{i}=K\mu_{s}F\cos\alpha_{i},\forall i\in\{1,2\}\). In Equation (1), the terms \(M,g,K,\mu_{s},F\) correspond to the object's mass, gravitational acceleration, geometrical characteristics of the contact area, coefficient of static friction, and the applied force, respectively. These parameters are assumed to be constant across all grasps. Consequently, each grasp pose is characterized by three variables: \(R,\alpha_{1},\alpha_{2}\), as depicted in Fig. 3(b). Fig. 2: **Dataset creation pipeline.** Fig. 3: **Grasp pose evaluation.** We construct a convex hull for each object segmentation mask and use the theory of [55] to determine feasible grasps. Due to the impracticality of explicitly determining the physical terms for each object [56], the computation of \(\mathcal{T}\) becomes infeasible without knowing the physical parameters \(M,K,\mu_{s}\). We employ the following concept to address the challenges posed by physical constraints \[\tilde{\mathcal{T}}=\frac{\cos\alpha_{1}+\cos\alpha_{2}}{R}\,. \tag{2}\] The term \(\tilde{\mathcal{T}}\) indicates the ratio between the resistance and the torque. By examining Equations (1) and (2), we verify that the original net torque \(\mathcal{T}\) is correlated with \(\tilde{\mathcal{T}}\). In cases where a grasp results in fewer than two contact points, we assign each missing term \(\cos\alpha_{i}\) a value of \(-1\). Antipodal grasps tend to yield greater positive values of \(\tilde{\mathcal{T}}\) compared to non-antipodal grasps [57]. Thus, grasp qualities can be ranked based on \(\tilde{\mathcal{T}}\). We define grasps with positive \(\tilde{\mathcal{T}}\) values as positive grasps, while the remaining grasps are considered negative.
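A minimal sketch of the analytical score in Eq. (2) is given below (our own reconstruction; the contact angles and moment arm \(R\) are assumed to come from the convex-hull analysis of Fig. 3):

```python
import math

def grasp_quality(alpha1, alpha2, R):
    # Each missing contact point contributes -1 in place of cos(alpha_i).
    c1 = math.cos(alpha1) if alpha1 is not None else -1.0
    c2 = math.cos(alpha2) if alpha2 is not None else -1.0
    return (c1 + c2) / R

# Near-antipodal contacts give a positive score, so the grasp is kept.
print(grasp_quality(0.1, 0.15, R=0.5) > 0)   # True
print(grasp_quality(None, 0.15, R=0.5) > 0)  # False: only one contact point
```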
We remark that our grasp evaluation procedure can only be applied when the segmentation mask for each object is available to compute its convex hull. Some examples of our dataset can be found in Fig. 5. ### _Grasp-Anything Statistics_ **Number of Categories.** To assess the diversity of object categories in our dataset, we follow the same methodology as in [59]. We leverage 300 categories of the LVIS [60] dataset and identify 300 candidate objects from Grasp-Anything for each category using a pretrained model [61]. Subsequently, we select a subset of 90,000 objects and filter out objects not semantically aligned with their assigned categories. Each category is considered significant if it has more than 40 objects. The outcomes of this analysis are shown in Fig. 4(a). With this setup, we observe that our Grasp-Anything spreads over 236 categories of the LVIS dataset. We apply the same procedure to other datasets and show the comparison in Fig. 4(b). It is noteworthy that the VMRD [18] dataset is the runner-up and only has 115 LVIS categories, compared to 236 categories in our dataset. **Number of Objects.** We visualize the number of objects (in log-scale) of Grasp-Anything and other grasp datasets in Fig. 4(b). From this figure, we can see that our Grasp-Anything dataset has a significantly larger number of objects than other datasets. **Number of POS tags.** To categorize words in a text according to their grammatical roles and syntactic functions, we extract the POS tags [62] in our dataset and visualize them in Fig. 4(c). Our scene description corpus utilizes a wide range of words to describe scene arrangements. We provide about 1.5M POS tags, with 35% being nouns, 20% being adjectives, 7% being verbs, and the remainder being other POS tags such as prepositions. **Object Shape Distribution.** Fig. 6 compares object shape distributions between Grasp-Anything and Jacquard. We randomly select \(5000\) objects from each dataset and extract the \((x,y)\) coordinates associated with pixels lying in the interior of each object. We then aggregate all coordinates and combine them to create heatmaps. These heatmaps are normalized by each dataset's image resolution. We can see that objects in Grasp-Anything span a greater area than Jacquard's, indicating a greater degree of shape diversity. Fig. 4: **Dataset Statistics.** We provide statistics on (a) the number of categories, (b) the number of POS tags, and (c) a comparison of the number of objects. Fig. 5: **Samples from Grasp-Anything.** For simplicity, we only display the grasp with the highest \(\tilde{\mathcal{T}}\) for each graspable object (marked in bold). Fig. 6: **Object shape heatmap visualization.** ### _How will Grasp-Anything be helpful to the community?_ Given the large-scale nature and its multi-modalities, such as text prompts, images, and segmented masks, we hope that our new dataset can drive future research in these topics:
* _Grasp Detection_: Although many grasp datasets have been established [1, 3, 10, 12, 14, 15], we demonstrate that Grasp-Anything accommodates a broader range of objects and more natural scene settings; therefore, our dataset can advance more grasp detection research, especially on zero-shot grasp detection and domain adaptation. * _Language-driven Grasping_: Language-driven grasp detection is a promising research area with industrial applications [49, 63, 64]. Grasp-Anything supports diverse scene descriptions, enabling large-scale training to align natural language with grasp detection. Furthermore, we believe Grasp-Anything can be useful in related tasks such as sim2real grasping [65], human-robot interaction [66], or language-driven mobile manipulation [42]. ## IV Experiments We conduct experiments to validate the usefulness of our Grasp-Anything dataset and answer _two questions_: _i)_ Can Grasp-Anything serve as a challenging dataset for grasp detection? and _ii)_ Since our dataset is generated by foundation models, will it work in real robot experiments? ### _Zero-shot Grasp Detection_ **Setup.** We train three deep-learning grasp networks: GR-ConvNet [4], Det-Seg-Refine [19], and GG-CNN [58] on five datasets: Grasp-Anything, Jacquard [12], Cornell [15], VMRD [18], and OCID-grasp [19]. The primary metric is the success rate, defined similarly to [4], requiring an IoU score above \(0.25\) with the ground truth grasp and an offset angle less than \(30^{\circ}\). For zero-shot learning's base and new labels [67], we utilize the classified LVIS labels from Section III-C. Initially, we merge LVIS labels from all five datasets and then identify the top \(70\%\) labels by occurrence. These labels form the 'Base' classes, while the remaining \(30\%\) become 'New' classes. We use the harmonic mean ('H') to measure the overall zero-shot success rates as in [67]. **Base-to-New Generalization.** We report the base-to-new grasp detection results in Table II. There are two central observations from the results. First, the three baselines GR-ConvNet, Det-Seg-Refine, and GG-CNN exhibit satisfactory performance across all five datasets, implying there is less room for model-centric approaches to improve the grasp detection results on each separate dataset. Second, Grasp-Anything is more challenging to train on, as our detection results are lower than on related datasets using the same approaches, due to the greater coverage of unseen objects in the testing phase. **Cross-dataset Transfer Learning.** Table III presents the results of training GR-ConvNet on a dataset (row) and testing on another dataset (column). For instance, when a GR-ConvNet is trained on Jacquard and tested on Cornell, an accuracy of 0.51 is achieved. Our dataset yields improvements of about \(9-29\%\) over other datasets. Particularly, when testing on Jacquard, the performance of GR-ConvNet trained on Grasp-Anything is four times better than with other datasets. ### _Grasp Detection in the Wild_ We visualize grasp detection on a daily office arrangement image from GR-ConvNet trained on different datasets in Fig. 8. This figure shows that Grasp-Anything can improve the grasp detection quality over related datasets when the same baseline network is used. Fig.
9 showcases grasp detection examples utilizing a GR-ConvNet pretrained on the Grasp-Anything dataset, applied to random images from the internet and from different datasets. We can see that the detected grasp poses are adequate in quality and quantity. ### _Discussion_ Through the zero-shot experiment (Section IV-A), we can answer our first question and conclude that our Grasp-Anything can be used as a challenging dataset for grasp detection, as networks trained on our dataset achieve lower accuracy compared to other datasets (Table II). Furthermore, the cross-dataset experiment in Table III implies that training on our dataset brings significant improvements when testing on other datasets. Also, the robotic experiments (Section IV-B) demonstrate that the model trained on our synthesized dataset outperforms the same model trained on different datasets with real-world images. This answers our second question and validates the usefulness of our dataset in real-world robotic experiments. While promising results have been achieved with our dataset, we see several important points for improvement. First, we remark that our dataset's creation is time-consuming and relies on access to the commercial ChatGPT API. Specifically, it took approximately three months to generate and process 1M scene descriptions on a cluster of three NVIDIA Quadro 8000. Fortunately, future research can reuse our provided assets (images, prompts, etc.) without starting from scratch. Second, our dataset currently lacks 3D point clouds. This is primarily because text-to-point-cloud or image-to-point-cloud foundation models have yet to achieve convincing results [70]. Therefore, creating point clouds from our prompts and images would make our dataset more useful in robotic tasks. Finally, since our dataset includes text prompts, we believe it will foster interesting research directions, such as language-driven grasping and human-robot interaction. ## V Conclusions We have presented Grasp-Anything, a new large-scale language-driven dataset for robotic grasp detection. Our analyses demonstrate that Grasp-Anything encompasses many objects and natural scene arrangements. The experiments on different networks and datasets with the real robot reveal that our Grasp-Anything dataset improves significantly over related datasets. By incorporating natural scene descriptions, we hope our dataset can serve as a foundation dataset for language-driven grasp detection. Fig. 8: **Qualitative results.** We use GR-ConvNet across different datasets. Fig. 7: **Overview of the robotic experiment setup.** Fig. 9: **In the wild grasp detection with a model trained on our dataset.** The top row shows images from other datasets: NBMOD [6], YCB-Video [21], GraspNet [22], while the bottom row includes internet images.
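For reference, the rectangle-metric success criterion of Sec. IV-A can be sketched as follows (our paraphrase of the standard metric of [4]; corner coordinates and angles are assumed to be given in image space):

```python
from shapely.geometry import Polygon

def is_success(pred_corners, pred_angle, gt_corners, gt_angle):
    p, g = Polygon(pred_corners), Polygon(gt_corners)
    union = p.union(g).area
    iou = p.intersection(g).area / union if union > 0 else 0.0
    off = abs(pred_angle - gt_angle) % 180
    off = min(off, 180 - off)  # grasp angles are symmetric modulo 180 degrees
    return iou > 0.25 and off < 30.0
```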
2302.14285
Model for the propagation of fermions in a Bose-Einstein condensate
We consider the dispersion relations of fermions that propagate in the background of a scalar Bose-Einstein condensate. Some illustrative examples are discussed using simple Yukawa-type coupling models between the fermions and the scalar fields. The dispersion relations are determined explicitly in those cases, to the lowest order. The method also allows to determine the corrections to the dispersion relations due to the interactions with the excitations of the Bose-Einstein condensate. Possible applications of the results to the case of neutrinos are indicated.
José F. Nieves, Sarira Sahu
2023-02-28T03:36:54Z
http://arxiv.org/abs/2302.14285v2
# Model for the propagation of fermions in a Bose-Einstein condensate ###### Abstract We consider the dispersion relations of fermions that propagate in the background of a scalar Bose-Einstein condensate. Some illustrative examples are discussed using simple Yukawa-type coupling models between the fermions and the scalar fields. The dispersion relations are determined explicitly in those cases, to the lowest order. The method also allows to determine the corrections to the dispersion relations due to the interactions with the excitations of the Bose-Einstein condensate. Possible applications of the results to the case of neutrinos are indicated. ## 1 Introduction and motivation In several models and extensions of the standard electroweak theory the neutrinos interact with a scalar (\(\phi\)) and fermion (\(f\)) via a coupling of the form \(\bar{f}_{R}\nu_{L}\phi\), or just with neutrinos themselves \(\bar{\nu}_{R}^{c}\nu_{L}\phi\). Couplings of the form \(\bar{f}_{R}\nu_{L}\phi\) produce additional contributions to the neutrino effective potential when the neutrino propagates in a background of \(\phi\) and \(f\) particles and their possible effects have been considered in various contexts, such as collective oscillations in supernova (see for example Refs. [1] and [2] and the works cited therein), the hot plasma of the Early-Universe[3, 4], cosmological observations such as cosmic microwave background and big bang nucleosynthesis data[5], and in particular Dark Matter-neutrino interactions[6, 7, 8, 9, 10, 11]. Motivated by these developments, we have carried out in previous works a systematic calculation of the neutrino dispersion relation in such models, including the damping and decoherence effects (see Ref. [12] and references therein). These works have been based on the calculation of the neutrino thermal self-energy using thermal field theory (TFT) methods[13]. Analytic formulas for the various quantities of interest have been obtained by considering various different cases of the \(f\) and \(\phi\) background, such as the non-relativistic or ultra-relativistic gases, and in particular the case in which the \(f\) background is a completely degenerate Fermi-gas. To complement that previous work, our goal is to determine the corresponding quantities (e.g, effective potential and/or dispersion relation and damping) of a neutrino that propagates in a thermal background that contains a scalar Bose-Einstein (BE) condensate. The hypothesis that the dark matter (DM) can be self-interacting is intriguing, and a DM background of scalar particles is a candidate for such environments[14, 15, 16, 17]. In that context, the interest is the application to the case of a neutrino propagating in such a background. The problem of fermions propagating in such backgrounds can be relevant in other contexts as well. For example, the possibility of BE condensation of pions and/or kaons in the interior of a neutron star, or kaon condensation in heavy ion collisions[18, 19, 20, 21]. Our purpose here is to propose an efficient and consistent method to treat the propagation of a fermion in the background of the BE condensate, in particular the calculation of the effective potential and dispersion relation, in a general way and not tied to any specific application. To model the fermion propagation in such an environment, we assume some simple Yukawa-type interactions between the fermions and the scalar. We consider three generic, but specific, models of the fermion-scalar interaction: 1. 
Model I: Two massless chiral fermions, \(f_{L}\) and \(f_{R}\), with a coupling to the scalar particle \(\phi\) of the form \(\bar{f}_{R}f_{L}\phi\). 2. Model II: A massless chiral fermion \(f_{L}\) with coupling \(\bar{f}_{R}^{c}f_{L}\phi\). 3. Model III: One massive Dirac fermion \(f\) with a coupling \(\bar{f}^{c}f\phi\). As we will see, the symmetry breaking process produces a Dirac fermion, a Majorana fermion and a pseudo-Dirac fermion in Model I, II and III, respectively[22]. The field theoretical method we use to treat the BE condensate has been discussed by various authors[23, 24, 25]. For completeness we first discuss those aspects and details of the method that are relevant for our purposes. We then present the extension we propose to treat the fermion propagation in the BE condensate, in the context of the three models mentioned above for concreteness and illustrative purposes. Although one of our motivations is the possible application in neutrino physics contexts, the method we propose for the propagation of fermions in a BE condensate has never been used before, and most importantly, is general and paves the way for applications to problems in other systems, for example condensed matter, or nuclear matter systems and heavy-ion collisions as already mentioned. The plan of the paper is as follows. In Section 2 we review the model we use to describe the BE condensate. There we focus on the essential elements of the symmetry breaking mechanism that we need in the next sections. In Section 3 we consider in detail the method we use for calculating the dispersion relations of the propagating fermions in the BE condensate, in the context of Model I above. The method is further illustrated by applying it to Models II and III in Sections 4 and 5, respectively. With a view to possible interest and/or future work, we summarize in an appendix the details related to the scalar modes that have a definite dispersion relation, which are useful for the calculation of the thermal corrections to the fermion dispersion relations due to the thermal excitations of the BE condensate. Our concluding remarks and outlook are given in Section 6. ## 2 Model for the BE condensate To describe the BE condensate, the proposal is to start with the complex scalar field \(\phi\) that has a standard \(\phi^{4}\) Lagrangian \[L^{(\phi)}=(\partial^{\mu}\phi)^{*}(\partial_{\mu}\phi)-V_{0}\,, \tag{2.1}\] where \[V_{0}(\phi)=m_{\phi}^{2}\phi^{*}\phi+\lambda_{\phi}(\phi^{*}\phi)^{2}\,. \tag{2.2}\] In the context of thermal field theory (TFT), denoting the temperature by \(T\) and the chemical potential of \(\phi\) by \(\mu_{\phi}\), the procedure is to calculate the _effective potential_ of \(\phi\), call it \(V_{\rm eff}^{(\phi)}(T,\mu_{\phi})\), and then see under what conditions \(V_{\rm eff}^{(\phi)}\) has a minimum at \(\phi=0\) or some other value. In the latter case, there has been a phase transition, and \[\langle\phi\rangle\neq 0\,, \tag{2.3}\] indicative of the symmetry breaking. The alternative approach that we use, which is particularly useful for treating the symmetry breaking associated with the transition to the BE condensate, is to consider the field \(\phi^{\prime}\) defined by[23, 24, 25] \[\phi^{\prime}\equiv e^{i\mu_{\phi}t}\phi\,. \tag{2.4}\] The recipe is to substitute \(\phi=e^{-i\mu_{\phi}t}\phi^{\prime}\) in \(L^{(\phi)}\) to obtain the Lagrangian for the field \(\phi^{\prime}\), which we denote by \(L^{(\phi^{\prime})}\).
To express \(L^{(\phi^{\prime})}\) in a convenient form we write \[\mu_{\phi}t=\mu_{\phi}(u\cdot x)\,, \tag{2.5}\] where \[u^{\mu}=(1,\vec{0})\,, \tag{2.6}\] and define \[D_{\mu}\equiv\partial_{\mu}-iv_{\mu}\,, \tag{2.7}\] with \[v_{\mu}=\mu_{\phi}u_{\mu}\,. \tag{2.8}\] Then using \[\partial_{\mu}\phi=\partial_{\mu}(e^{-i\mu_{\phi}t}\phi^{\prime})=e^{-i\mu_{\phi}t}D_{\mu}\phi^{\prime}\,, \tag{2.9}\] it follows that \[L^{(\phi^{\prime})}=(D^{\mu}\phi^{\prime})^{*}(D_{\mu}\phi^{\prime})-V_{0}(\phi^{\prime})\,. \tag{2.10}\] Expanding the \(D\) term in Eq. (2.10), \[L^{(\phi^{\prime})}=(\partial^{\mu}\phi^{\prime})^{*}(\partial_{\mu}\phi^{\prime})+i[\phi^{\prime}{}^{*}(v\cdot\partial\phi^{\prime})-(v\cdot\partial\phi^{\prime})^{*}\phi^{\prime}]-U(\phi^{\prime})\,, \tag{2.11}\] where \[U=-(\mu_{\phi}^{2}-m_{\phi}^{2})\phi^{\prime}{}^{*}\phi^{\prime}+\lambda_{\phi}(\phi^{\prime}{}^{*}\phi^{\prime})^{2}\,. \tag{2.12}\] Now comes the key observation. If \(m_{\phi}^{2}>\mu_{\phi}^{2}\), this \(U\) corresponds to a standard complex scalar with mass squared \(m_{\phi}^{2}-\mu_{\phi}^{2}\). On the other hand, if \(\mu_{\phi}^{2}>m_{\phi}^{2}\), the minimum of the potential is not at \(\phi=0\), and therefore \(\phi\) develops a non-zero expectation value and the \(U(1)\) symmetry is broken. We assume the second option, \[\mu_{\phi}^{2}>m_{\phi}^{2}\,, \tag{2.13}\] and proceed accordingly. Namely, we put \[\phi^{\prime}=\frac{1}{\sqrt{2}}\left(\phi_{0}+\phi_{1}+i\phi_{2}\right), \tag{2.14}\] where \[\langle\phi^{\prime}\rangle\equiv\frac{1}{\sqrt{2}}\phi_{0}\,, \tag{2.15}\] is chosen to be the minimum of \[U_{0}=-\frac{1}{2}(\mu_{\phi}^{2}-m_{\phi}^{2})\phi_{0}^{2}+\frac{1}{4}\lambda_{\phi}\phi_{0}^{4}\,. \tag{2.16}\] Thus, \[\phi_{0}^{2}=\frac{\mu_{\phi}^{2}-m_{\phi}^{2}}{\lambda_{\phi}}\,. \tag{2.17}\] Substituting Eqs. (2.14) and (2.17) in Eq. (2.1) we obtain the Lagrangian for \(\phi_{1,2}\). \(\phi_{1}\) and \(\phi_{2}\) are mixed by the \(v^{\mu}\) term. The central result that we invoke now is that the calculation of the effective potential \(V_{\rm eff}^{(\phi)}(T,\mu_{\phi})\) can be carried out in TFT using \(\mu_{\phi}=0\) in the partition (and/or distribution) function, but using the \(\mu_{\phi}\)-dependent Lagrangian \(L^{(\phi^{\prime})}\) given in Eq. (2.11) [26]. Therefore, the next step would be to find the propagator matrix of the \(\phi_{1,2}\) system, determine the modes that have a definite dispersion relation, and then define the thermal propagators of the modes. However, for our purposes in what follows, it is sufficient to observe that, neglecting the \(T\)-dependent terms (that is, at zero temperature), \(V_{\rm eff}^{(\phi)}(0,\mu_{\phi})\) is simply the \(U\) potential given in Eq. (2.12), and the zero-temperature expectation value of \(\phi^{\prime}\) is given by Eqs. (2.15) and (2.17). As we will see, this strategy will allow us to determine the contribution to the effective potential of fermions propagating in the BE condensate. The thermal propagators of the \(\phi_{1,2}\) modes would allow us to calculate the corresponding corrections due to the thermal excitations. While we do not pursue here the calculation of those thermal corrections, for completeness and possible relevance in future work we give in Appendix A some details about the propagator matrix of the \(\phi_{1,2}\) complex, the modes that have a definite dispersion relation, and the corresponding propagators of the modes.
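Before moving on to the fermion couplings, the zero-temperature condensate described above can be checked numerically; the parameter values below are illustrative only:

```python
import math

mu_phi, m_phi, lam_phi = 1.0, 0.6, 0.1  # assumed values, arbitrary units

assert mu_phi**2 > m_phi**2, "Eq. (2.13) fails: no BE condensate in this regime"
phi0 = math.sqrt((mu_phi**2 - m_phi**2) / lam_phi)  # Eq. (2.17)
print("phi0 =", phi0, " <phi'> =", phi0 / math.sqrt(2.0))  # Eq. (2.15)
```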
## 3 Model I ### Formulation We consider two chiral fermions \(f_{L}\) and \(f_{R}\), with an interaction \[L_{\rm int}=-\lambda\phi\bar{f}_{R}f_{L}+h.c\,. \tag{3.1}\] There are two conserved charges, which we will label as \(Q_{1,2}\). The assignments must satisfy \[Q_{i}(\phi)+Q_{i}(f_{L})-Q_{i}(f_{R})=0\,. \tag{3.2}\] We can take \[Q_{1}(f_{L})=Q_{1}(f_{R})=1, Q_{1}(\phi)=0\,,\] \[Q_{2}(\phi)=Q_{2}(f_{R})=1, Q_{2}(f_{L})=0\,. \tag{3.3}\] Remembering how the \(Q_{i}\) enter in the partition function operator, namely \[Z=e^{-\beta{\cal H}+\sum_{i}\alpha_{i}Q_{i}}\,, \tag{3.4}\] the assignments in Eq. (3.2) imply that the chemical potentials satisfy \[\mu_{\phi}+\mu_{L}-\mu_{R}=0\,, \tag{3.5}\] where we are denoting by \(\mu_{L}\) and \(\mu_{R}\) the chemical potential of \(f_{L}\) and \(f_{R}\), respectively. From our discussion of the BE condensate model in Section 2 we take that we should rewrite the Lagrangian in terms of the field \(\phi^{\prime}\) defined in Eq. (2.4). The generalization that we propose here is that every field with non-zero \(Q_{i}\) must be transformed accordingly. Therefore, a generalization of the transformation considered in Section 2 is to put \[\phi = e^{-i\mu_{\phi}t}\phi^{\prime}\,,\] \[f_{L} = e^{-i\mu_{L}t}f_{L}^{\prime}\,,\] \[f_{R} = e^{-i\mu_{R}t}f_{R}^{\prime}\,. \tag{3.6}\] Our assumption here is that the prime fields, \(f_{R}^{\prime}\) and \(f_{L}^{\prime}\), are the convenient ones to use to determine the fermion modes in the BE condensate. With the condition in Eq. (3.5), the interaction coupling keeps the same form, namely \[L_{\rm int}=-\lambda\phi^{\prime}\bar{f}_{R}^{\prime}f_{L}^{\prime}+h.c\,. \tag{3.7}\] However, the kinetic part of the Lagrangian changes. For \(\phi^{\prime}\) we will borrow what we did in Section 2. But now we have to do something analogous for the fermion fields. The kinetic part of the fermion Lagrangian, \[L_{f}=i\bar{f}_{L}\partial\!\!\!/f_{L}+i\bar{f}_{R}\partial\!\!\!/f_{R}\,, \tag{3.8}\] in terms of \(f^{\prime}_{R}\) and \(f^{\prime}_{L}\) is \[L_{f}=i\bar{f}^{\prime}_{L}\not{\partial}f^{\prime}_{L}+i\bar{f}^{\prime}_{R} \not{\partial}f^{\prime}_{R}+\mu_{L}\bar{f}^{\prime}_{L}\not{n}f^{\prime}_{L}+ \mu_{R}\bar{f}^{\prime}_{R}\not{n}f^{\prime}_{R}\,. \tag{3.9}\] As discussed in Section 2, we assume a symmetry breaking by the mechanism implemented around Eq. (2.13). Therefore, we put \[\langle\phi^{\prime}\rangle\equiv\frac{1}{\sqrt{2}}\phi_{0}\,, \tag{3.10}\] where \(\phi_{0}\) is given in Eq. (2.17). As a result \(Q_{2}\) is broken, but \(Q_{1}\) remains unbroken. This produces a mass term in Eq. (3.7) of the form \[-m\bar{f}^{\prime}_{R}f^{\prime}_{L}+h.c\,. \tag{3.11}\] with \[m = \frac{\lambda\phi_{0}}{\sqrt{2}}\,, \tag{3.12}\] \[= \frac{\lambda}{\sqrt{2}}\left(\frac{\mu_{\phi}^{2}-m_{\phi}^{2}} {\lambda_{\phi}}\right)^{1/2}\,,\] where in the second equality we have used Eq. (2.17). The total Lagrangian is then \[L=L^{(\phi^{\prime})}+L_{0}+L^{\prime}_{\rm int}\,, \tag{3.13}\] where \(L^{(\phi^{\prime})}\) is given in Eq. (2.11), \[L_{0}=\bar{f}^{\prime}_{L}i\not{\partial}f^{\prime}_{L}+\bar{f}^{\prime}_{R}i \not{\partial}f^{\prime}_{R}+\mu_{L}\bar{f}^{\prime}_{L}\not{n}f^{\prime}_{L}+ \mu_{R}\bar{f}^{\prime}_{R}\not{n}f^{\prime}_{R}-(m\bar{f}^{\prime}_{R}f^{ \prime}_{L}+h.c.)\,, \tag{3.14}\] and \[L^{\prime}_{\rm int}=-\frac{\lambda}{\sqrt{2}}(\phi_{1}+i\phi_{2})\bar{f}^{ \prime}_{R}f^{\prime}_{L}+h.c\,. 
Defining \[f=f^{\prime}_{L}+f^{\prime}_{R}\,, \tag{3.16}\] in momentum space \(L_{0}\) is given by \[L_{0}(k)=\bar{f}(k)(\not{k}-\Sigma(k))f(k)\,, \tag{3.17}\] where \[\Sigma=mL+m^{*}R-\mu_{L}\not{u}L-\mu_{R}\not{u}R\,. \tag{3.18}\] The two chiral fermions form a Dirac particle, in which the left and right components have different dispersion relations. The next step is to find the propagating modes (dispersion relations and wave functions) at the tree-level. This is most conveniently done using the Weyl representation of the \(\gamma\) matrices.

### Dispersion relations

The field equation in momentum space is \[(\not{k}-\Sigma)f=0\,, \tag{3.19}\] or, in terms of the left- and right-hand components of \(f\), \[\not{A}_{L}f^{\prime}_{L}-m^{*}f^{\prime}_{R} = 0\,,\] \[\not{A}_{R}f^{\prime}_{R}-mf^{\prime}_{L} = 0\,, \tag{3.20}\] where \[A_{L\mu} = k_{\mu}+\mu_{L}u_{\mu}\,,\] \[A_{R\mu} = k_{\mu}+\mu_{R}u_{\mu}\,. \tag{3.21}\] In the one-generation case we are considering, the phase of \(m\) is irrelevant, since it can be absorbed by a field redefinition, so that we could take \(m^{*}=m\). However, since in more general cases such field redefinitions cannot be done independently, we keep \(m\) arbitrary. We use the Weyl representation of the gamma matrices and put \[f^{\prime}_{L} = \left(\begin{array}{c}0\\ \eta\end{array}\right)\,,\] \[f^{\prime}_{R} = \left(\begin{array}{c}\xi\\ 0\end{array}\right)\,. \tag{3.22}\] The equations to be solved then become \[\left(A_{L}^{0}+\vec{\sigma}\cdot\vec{\kappa}\right)\eta-m^{*}\xi = 0\,,\] \[\left(A_{R}^{0}-\vec{\sigma}\cdot\vec{\kappa}\right)\xi-m\eta = 0\,, \tag{3.23}\] where \[A_{L0} = \omega+\mu_{L}\,,\] \[A_{R0} = \omega+\mu_{R}\,, \tag{3.24}\] and we have used \(\vec{A}_{L}=\vec{A}_{R}=\vec{\kappa}\). In general, leaving out the case \(\mu_{R}=\mu_{L}\) (i.e., assuming \(\mu_{\phi}\neq 0\)), these equations have non-trivial solutions only if \(\xi\) and \(\eta\) are proportional to the same eigenvector of \(\vec{\sigma}\cdot\vec{\kappa}\). This can be seen in various ways. For example, using the second equation of Eq. (3.23) to eliminate \(\eta\) in the first equation gives \[\left[A_{L}^{0}A_{R}^{0}-\kappa^{2}-|m|^{2}+\vec{\sigma}\cdot\vec{\kappa}(A_{R}^{0}-A_{L}^{0})\right]\xi=0\,, \tag{3.25}\] which implies that \(\xi\) is an eigenvector of \(\vec{\sigma}\cdot\vec{\kappa}\); the second equation then implies that \(\eta\) is proportional to \(\xi\). Therefore, we write the solution in the form \[\eta = x\chi_{s}\,,\] \[\xi = y\chi_{s}\,, \tag{3.26}\] where \(\chi_{s}\) is the spinor with definite helicity, defined by \[\left(\vec{\sigma}\cdot\hat{\kappa}\right)\chi_{s}=s\chi_{s}\,, \tag{3.27}\] with \(s=\pm 1\). For a given helicity \(s\), the equations for \(x\) and \(y\) are \[(\omega+s\kappa+\mu_{L})x-m^{*}y = 0\,,\] \[(\omega-s\kappa+\mu_{R})y-mx = 0\,, \tag{3.28}\] which imply that \(\omega\) must satisfy \[(\omega+s\kappa+\mu_{L})(\omega-s\kappa+\mu_{R})-|m|^{2}=0\,. \tag{3.29}\] Expressing \(\mu_{R}\) and \(\mu_{L}\) in terms of their sum and their difference \(\mu_{R}\pm\mu_{L}\), this equation can be written in the form \[\left[\omega+\frac{1}{2}(\mu_{R}+\mu_{L})\right]^{2}-\left[s\kappa-\frac{1}{2}(\mu_{R}-\mu_{L})\right]^{2}-|m|^{2}=0\,. \tag{3.30}\] For each \(s\), we have two solutions, one with positive \(\omega\) and another with a negative \(\omega\). They correspond, respectively, to the particle and anti-particle states of the Dirac fermion associated with the unbroken \(Q_{1}\).
We label the two solutions for each \(s\) as \(\omega_{s}^{(\pm)}\). With this notation the solutions are \[\omega_{s}^{(\pm)}(\vec{\kappa})=\pm\left\{\left[\kappa-\frac{s}{2}(\mu_{R}-\mu_{L})\right]^{2}+|m|^{2}\right\}^{1/2}-\frac{1}{2}(\mu_{R}+\mu_{L})\,. \tag{3.31}\] Denoting the particle and anti-particle dispersion relations by \(\omega_{s}\) and \(\bar{\omega}_{s}\), respectively, they are to be identified according to \[\omega_{s}(\vec{\kappa}) = \omega_{s}^{(+)}(\vec{\kappa}) = \left\{\left[\kappa-\frac{s}{2}\mu_{\phi}\right]^{2}+|m|^{2}\right\}^{1/2}-\frac{1}{2}\mu_{RL}\,,\] \[\bar{\omega}_{s}(\vec{\kappa}) = -\omega_{s}^{(-)}(-\vec{\kappa}) = \left\{\left[\kappa-\frac{s}{2}\mu_{\phi}\right]^{2}+|m|^{2}\right\}^{1/2}+\frac{1}{2}\mu_{RL}\,, \tag{3.32}\] where we have used Eq. (3.5), and defined \[\mu_{RL}=\mu_{R}+\mu_{L}\,. \tag{3.33}\] It should be kept in mind that, apart from the explicit dependence on \(\mu_{\phi}\) in Eq. (3.32), \(m\) also depends on \(\mu_{\phi}\) [see Eq. (3.12)].

### Discussion

To gain some insight into the solution we can consider some particular cases. For example, while the particle and anti-particle dispersion relations are different in general, they are approximately equal in the limit of small \(\mu_{RL}\). We also note that in the limit \(\kappa\gg|\mu_{\phi}|\), the dispersion relations are approximately independent of \(s\). They are strictly independent of \(s\) at \(\kappa=0\), \[\omega_{s}(0) = \left\{\frac{1}{4}\mu_{\phi}^{2}+|m|^{2}\right\}^{1/2}-\frac{1}{2}\mu_{RL}\,,\] \[\bar{\omega}_{s}(0) = \left\{\frac{1}{4}\mu_{\phi}^{2}+|m|^{2}\right\}^{1/2}+\frac{1}{2}\mu_{RL}\,, \tag{3.34}\] which can be interpreted as the effective masses of the particle and anti-particle. On top of these effects, the dispersion relations will also get corrections due to the interactions with the background excitations. In the context of thermal field theory such corrections can be determined by calculating the one-loop self-energy diagrams. As we have already indicated, those calculations are not within the scope of the present work.
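Before moving on, the dispersion relations (3.32) can be checked numerically against the condition (3.29); the sketch below (with toy parameter values of our choosing, and the constraint (3.5) imposed by construction) confirms that the quadratic condition is satisfied to floating-point accuracy.

```python
import jax.numpy as jnp

# Toy values (ours); mu_R = mu_L + mu_phi enforces the constraint (3.5).
mu_L, mu_phi, m_abs = 0.3, 0.8, 1.0
mu_R = mu_L + mu_phi
kappa = 1.7

for s in (+1, -1):
    # Particle dispersion relation, first line of Eq. (3.32).
    omega = jnp.sqrt((kappa - 0.5 * s * mu_phi)**2 + m_abs**2) - 0.5 * (mu_R + mu_L)
    # It must satisfy the quadratic condition (3.29): prints ~0 for each s.
    print((omega + s * kappa + mu_L) * (omega - s * kappa + mu_R) - m_abs**2)
```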
## 4 Model II

We consider a massless chiral fermion \(f_{L}\) with an interaction \[L_{\rm int}=-\frac{\lambda}{2}\phi\bar{f}_{R}^{c}f_{L}+h.c\,. \tag{4.1}\] In this case there is one conserved charge, with \[Q(f_{L})=1\,,\qquad Q(\phi)=-2\,, \tag{4.2}\] and the chemical potentials satisfy \[\mu_{\phi}+2\mu_{L}=0\,, \tag{4.3}\] where we are denoting by \(\mu_{L}\) the chemical potential of \(f_{L}\). Proceeding as in Section 3, the total Lagrangian is given as in Eq. (3.13), but in the present case \[L_{0}=\bar{f}_{L}^{\prime}i\not{\partial}f_{L}^{\prime}+\mu_{L}\bar{f}_{L}^{\prime}\not{u}f_{L}^{\prime}-\left(\frac{m}{2}\bar{f}_{R}^{c\,\prime}f_{L}^{\prime}+h.c.\right)\,, \tag{4.4}\] \[L^{\prime}_{\rm int}=-\frac{\lambda}{2\sqrt{2}}(\phi_{1}+i\phi_{2})\bar{f}_{R}^{c\,\prime}f_{L}^{\prime}+h.c\,, \tag{4.5}\] with \(m\) given in Eq. (3.12). In this case the conserved charge is broken by \(\langle\phi^{\prime}\rangle\), and the propagating mode that results from the symmetry breaking is a Majorana fermion.

## 5 Model III

We consider a massive Dirac fermion \(f\), with mass \(M\), and an interaction of the same form as in Eq. (4.1), \[L_{\rm int}=-\frac{\lambda}{2}\phi\bar{f}^{c}f+h.c\,. \tag{5.1}\] As in Section 4, there is one conserved charge, with \(Q(f)=1\) and \(Q(\phi)=-2\), so that the chemical potential of \(f\), which we denote by \(\mu_{f}\), satisfies \[\mu_{\phi}+2\mu_{f}=0\,. \tag{5.2}\] Putting once more \[\phi = e^{-i\mu_{\phi}t}\phi^{\prime}\,,\] \[f = e^{-i\mu_{f}t}f^{\prime}\,, \tag{5.3}\] instead of Eqs. (4.4) and (4.5) in this case we have \[L_{0} = \bar{f}^{\prime}i\not{\partial}f^{\prime}+\mu_{f}\bar{f}^{\prime}\not{u}f^{\prime}-M\bar{f}^{\prime}f^{\prime}-\left(\frac{m}{2}\,\bar{f}^{\prime\,c}f^{\prime}+h.c.\right)\,, \tag{5.5}\] \[L^{\prime}_{\rm int} = -\frac{\lambda}{2\sqrt{2}}(\phi_{1}+i\phi_{2})\bar{f}^{\prime\,c}f^{\prime}+h.c\,, \tag{5.6}\] where \(m\) is given in Eq. (3.12). The mass term \(\frac{m}{2}\bar{f}^{\prime\,c}f^{\prime}\) breaks the degeneracy between the two Majorana components of what would otherwise be a Dirac fermion. \(L_{0}\) in Eq. (5.5) resembles the kinetic part of the Lagrangian of the pseudo-Dirac neutrino model [22], but here it has the additional term involving the chemical potential. We take \(m\) to be complex in general, and denote its phase by \(\theta\), i.e., \[m=|m|e^{i\theta}\,. \tag{5.7}\] To proceed we introduce the Majorana fields \[f_{1} = \frac{1}{\sqrt{2}}\left(e^{i\theta/2}f^{\prime}+e^{-i\theta/2}{f^{\prime}}^{c}\right)\,,\] \[f_{2} = \frac{1}{i\sqrt{2}}\left(e^{i\theta/2}f^{\prime}-e^{-i\theta/2}{f^{\prime}}^{c}\right)\,, \tag{5.8}\] and therefore \[f^{\prime}=\frac{e^{-i\theta/2}}{\sqrt{2}}(f_{1}+if_{2})\,. \tag{5.9}\] In terms of the Majorana fields \(f_{1,2}\), \(L_{0}\) becomes \[L_{0}=\frac{1}{2}(\bar{f}_{1}i\not{\partial}f_{1}+\bar{f}_{2}i\not{\partial}f_{2})+\frac{i\mu_{f}}{2}(\bar{f}_{1}\not{u}f_{2}-\bar{f}_{2}\not{u}f_{1})-\frac{M}{2}(\bar{f}_{1}f_{1}+\bar{f}_{2}f_{2})-\frac{|m|}{2}(\bar{f}_{1}f_{1}-\bar{f}_{2}f_{2})\,. \tag{5.10}\] Therefore, in the absence of the \(\mu_{f}\) term, \(f_{1}\) and \(f_{2}\) are uncoupled in \(L_{0}\), with masses \(M\pm|m|\), respectively. In the presence of the \(\mu_{f}\) term, \(f_{1}\) and \(f_{2}\) are mixed. Our purpose now is to obtain the proper combinations that have a definite dispersion relation in the presence of the \(\mu_{f}\) term.
### Dispersion relations

To restate the problem in a more compact algebraic form we introduce the notation \[f_{M}=\left(\begin{array}{c}f_{1}\\ f_{2}\end{array}\right)\,. \tag{5.11}\] In momentum space, \(L_{0}\) is then \[L_{0}=\frac{1}{2}\bar{f}_{M}\left(\not{k}+\hat{\mu}_{f}\not{u}-\hat{M}\right)f_{M}\,, \tag{5.12}\] where \[\hat{\mu}_{f}=\mu_{f}\left(\begin{array}{cc}0&i\\ -i&0\end{array}\right)\,, \tag{5.13}\] and \[\hat{M}=\left(\begin{array}{cc}M_{+}&0\\ 0&M_{-}\end{array}\right)\,, \tag{5.14}\] where we have defined \[M_{\pm}=M\pm|m|\,. \tag{5.15}\] The equation for the dispersion relations and the corresponding eigenspinors is \[(\not{k}+\hat{\mu}_{f}\not{u}-\hat{M})f_{M}=0\,. \tag{5.16}\] As in the previous cases, we use the Weyl representation of the gamma matrices, and decompose \[f_{i}=\left(\begin{array}{c}x_{i}\chi_{s}\\ y_{i}\chi_{s}\end{array}\right)\qquad(i=1,2)\,, \tag{5.17}\] using the helicity spinors \(\chi_{s}\) (defined in Eq. (3.27)) as a basis. The equations for the coefficients \(x_{i}\) and \(y_{i}\) then become \[(\omega+s\kappa+\hat{\mu}_{f})x-\hat{M}y = 0\,,\] \[(\omega-s\kappa+\hat{\mu}_{f})y-\hat{M}x = 0\,, \tag{5.18}\] where \(x,y\) are two-dimensional spinors in the \(f_{1,2}\) _flavor_ space, \[x = \left(\begin{array}{c}x_{1}\\ x_{2}\end{array}\right)\,,\] \[y = \left(\begin{array}{c}y_{1}\\ y_{2}\end{array}\right)\,. \tag{5.19}\] Again, if the \(\mu_{f}\) term is dropped, we get back two uncoupled pairs of equations, in the Weyl representation and the helicity basis, for two massive fermions with dispersion relations \(\omega=\sqrt{\kappa^{2}+(M\pm|m|)^{2}}\). We now seek the solutions in the presence of the \(\mu_{f}\) term. Using the first equation to write \[y=\frac{1}{\hat{M}}(\omega+s\kappa+\hat{\mu}_{f})x\,, \tag{5.20}\] and substituting in the second one, we get the equation for \(x\), \[\left[(\omega-s\kappa+\hat{\mu}_{f})\frac{1}{\hat{M}}(\omega+s\kappa+\hat{\mu}_{f})-\hat{M}\right]x=0\,. \tag{5.21}\] By straightforward algebra, we obtain \[(\omega-s\kappa+\hat{\mu}_{f})\frac{1}{\hat{M}}(\omega+s\kappa+\hat{\mu}_{f})=\frac{1}{\hat{M}}\hat{A}\,, \tag{5.22}\] where \[\hat{A}=\left(\begin{array}{cc}\omega^{2}-\kappa^{2}+r\mu_{f}^{2}&i\mu_{f}(\omega-s\kappa)+i\mu_{f}r(\omega+s\kappa)\\ -i\mu_{f}(\omega-s\kappa)-i\frac{\mu_{f}}{r}(\omega+s\kappa)&\omega^{2}-\kappa^{2}+\frac{\mu_{f}^{2}}{r}\end{array}\right)\,, \tag{5.23}\] with \[r=\frac{M_{+}}{M_{-}}\,. \tag{5.24}\] Substituting Eq. (5.22) in Eq. (5.21) and multiplying by \(\hat{M}\), the equation for \(x\) is \[(\hat{A}-\hat{M}^{2})x=0\,, \tag{5.25}\] where \(\hat{M}\) and \(\hat{A}\) are given in Eqs. (5.14) and (5.23), respectively. The dispersion relations are obtained by solving the equation \[(A_{11}-M_{+}^{2})(A_{22}-M_{-}^{2})-A_{12}A_{21}=0\,, \tag{5.26}\] where \(A_{ij}\) are the elements of the matrix \(\hat{A}\) defined in Eq. (5.23). It follows by inspection of Eq. (5.23) that the products of the \(A_{ij}\) that appear in Eq. (5.26) have the form \[(A_{11}-M_{+}^{2})(A_{22}-M_{-}^{2}) = \omega^{4}+A_{1}\omega^{2}+A_{0}\,,\] \[A_{12}A_{21} = A_{1}^{\prime}\omega^{2}+A_{0}^{\prime}\,, \tag{5.27}\] where \(A_{0,1}\) and \(A_{0,1}^{\prime}\) are independent of \(\omega\). Eq. (5.26) then leads to the following equation for the dispersion relation, \[\omega^{4}-2b\omega^{2}+c=0\,, \tag{5.28}\] where \[b = -\frac{1}{2}(A_{1}-A_{1}^{\prime})\,,\] \[c = A_{0}-A_{0}^{\prime}\,. \tag{5.29}\]
By straightforward algebra, after some simplifications, we find \[A_{1}^{\prime} = \frac{\mu_{f}^{2}}{r}(1+r)^{2}\,,\] \[A_{0}^{\prime} = -\frac{\mu_{f}^{2}}{r}(1-r)^{2}\kappa^{2}\,,\] \[A_{0} = (\kappa^{2}+M_{+}^{2}-r\mu_{f}^{2})\left(\kappa^{2}+M_{-}^{2}-\frac{\mu_{f}^{2}}{r}\right)\,,\] \[A_{1} = -\left[2(\kappa^{2}+M^{2}+|m|^{2}+\mu_{f}^{2})-\frac{\mu_{f}^{2}}{r}(1+r)^{2}\right]\,. \tag{5.30}\] Then from Eq. (5.29), \[b = \kappa^{2}+M^{2}+|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}\,,\] \[c = \kappa^{4}+2\kappa^{2}\left(M^{2}+|m|^{2}-\frac{1}{4}\mu_{\phi}^{2}\right)+\left(M_{+}^{2}-\frac{r\mu_{\phi}^{2}}{4}\right)\left(M_{-}^{2}-\frac{\mu_{\phi}^{2}}{4r}\right)\,. \tag{5.31}\] The dispersion relations are given by \[\omega_{\pm}^{2}=b\pm\sqrt{d}\,, \tag{5.32}\] with \[d=b^{2}-c\,, \tag{5.33}\] where, from Eq. (5.31), \[d=4M^{2}|m|^{2}+\mu_{\phi}^{2}\kappa^{2}+\frac{\mu_{\phi}^{2}}{4r}\left[(1+r)^{2}(M^{2}+|m|^{2})+2(1-r^{2})M|m|\right]\,. \tag{5.34}\] Once again we recall that \(m\) is given in Eq. (3.12).
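The algebra leading to Eqs. (5.31) and (5.34) is somewhat lengthy, so a numerical cross-check is useful; the sketch below (with toy parameter values of our choosing) verifies that \(\omega_{\pm}^{2}=b\pm\sqrt{d}\) are indeed roots of the quartic (5.28).

```python
import jax.numpy as jnp

# Toy values for the Model III parameters (ours, for illustration only).
M, m_abs, mu_phi, kappa = 1.0, 0.4, 0.6, 1.2
Mp, Mm = M + m_abs, M - m_abs
r = Mp / Mm

# b and c of Eq. (5.31), and d of Eq. (5.34).
b = kappa**2 + M**2 + m_abs**2 + 0.25 * mu_phi**2
c = (kappa**4 + 2 * kappa**2 * (M**2 + m_abs**2 - 0.25 * mu_phi**2)
     + (Mp**2 - r * mu_phi**2 / 4) * (Mm**2 - mu_phi**2 / (4 * r)))
d = (4 * M**2 * m_abs**2 + mu_phi**2 * kappa**2
     + (mu_phi**2 / (4 * r)) * ((1 + r)**2 * (M**2 + m_abs**2)
                                + 2 * (1 - r**2) * M * m_abs))

for w2 in (b + jnp.sqrt(d), b - jnp.sqrt(d)):   # Eq. (5.32)
    print(w2**2 - 2 * b * w2 + c)               # ~0: roots of Eq. (5.28)
```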
### Discussion

To gain some insight we can consider various limiting cases.

**Pseudo-Dirac limit.** If the situation is such that the term \(\mu_{\phi}^{2}\kappa^{2}\) in Eq. (5.34) can be dropped (sufficiently small \(\mu_{\phi}\) and/or \(\kappa\)), then the dispersion relations are given by \[\omega_{\pm}^{2}=\kappa^{2}+M_{\pm}^{\prime\,2}\,, \tag{5.35}\] where \[M^{\prime\,2}_{\pm}=M^{2}+|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}\pm\left\{4M^{2}|m|^{2}+\frac{\mu_{\phi}^{2}}{4r}\left[(1+r)^{2}(M^{2}+|m|^{2})+2(1-r^{2})M|m|\right]\right\}^{1/2}\,, \tag{5.36}\] which are the dispersion relations for two fermions with effective masses \(M^{\prime}_{\pm}\). Further, in the special case that \(\mu_{\phi}\) is sufficiently small that the explicit \(\mu_{\phi}\) terms can be dropped in Eq. (5.36) (while \(|m|\) is kept), the dispersion relations reduce to \[\omega_{\pm}^{2}=\kappa^{2}+M_{\pm}^{2}\,, \tag{5.37}\] which resemble the dispersion relations in vacuum for two fermions with masses \(M_{\pm}\), as already anticipated above. In the neutrino context Eq. (5.37) is the familiar pseudo-Dirac neutrino model [22]. However, it must be kept in mind that in the more general case, in which the term \(\mu_{\phi}^{2}\kappa^{2}\) in Eq. (5.34) cannot be dropped, the \(\kappa\) dependence of the dispersion relations does not have the canonical form of Eqs. (5.36) and (5.37).

**\(|m|\ll M\) limit.** In this limit, the \(d\) term in Eq. (5.34) can be approximated by \[d=4M^{2}|m|^{2}+\mu_{\phi}^{2}\kappa^{2}+\mu_{\phi}^{2}M^{2}\,, \tag{5.38}\] so that the dispersion relations reduce to \[\omega_{\pm}^{2}(\kappa)=\kappa^{2}+M^{2}+|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}\pm 2\sqrt{M^{2}|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}\kappa^{2}+\frac{1}{4}\mu_{\phi}^{2}M^{2}}\,. \tag{5.39}\] Further, taking the \(\kappa\to 0\) limit, \[\omega_{\pm}^{2}(0)=\left(M\pm\sqrt{|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}}\right)^{2}\,, \tag{5.40}\] which can be interpreted as the effective masses of the Majorana modes in the \(|m|\ll M\) limit. But again, the \(\kappa\) dependence of the dispersion relation is different than the one given in Eqs. (5.36) and (5.37). In the case that \(|m|\) can be neglected relative to \(\mu_{\phi}\) (for example, if \(\mu_{\phi}\) is sufficiently close to \(m_{\phi}\)), then Eq. (5.39) can be approximated by \[\omega_{\pm}(\kappa)=\sqrt{\kappa^{2}+M^{2}}\pm\frac{1}{2}\mu_{\phi}\,, \tag{5.41}\] which resemble the dispersion relations of a neutrino propagating in a matter background with a Wolfenstein-like potential \(V_{\rm eff}=\frac{1}{2}\mu_{\phi}\).

**Small \(M\) limit.** For sufficiently small values of \(M\), the dispersion relations are approximated by \[\omega_{\pm}^{2}=\kappa^{2}+|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}\pm\mu_{\phi}\kappa\,. \tag{5.42}\] Therefore, the two modes have the same effective mass \[\omega(0)=\sqrt{|m|^{2}+\frac{1}{4}\mu_{\phi}^{2}}\,, \tag{5.43}\] but different dispersion relations away from \(\kappa=0\). A plot of Eq. (5.42) is shown in Fig. 1.

Figure 1: Plot of the dispersion relations of the Majorana modes in the case of negligible \(M\), given in Eq. (5.42). For the plot we have taken \(|m|^{2}\sim\mu_{\phi}^{2}\). For reference, the plot of the dispersion relation \(\omega_{0}=\kappa\) is superimposed.

## 6 Conclusions and outlook

In previous works we have carried out a systematic calculation of the neutrino dispersion relation, as well as the damping and decoherence effects, when the neutrino propagates in a thermal background of fermions and scalars, with a Yukawa-type interaction between the neutrino and the background particles [see Ref. [12] and references therein]. As a complement to that work, the motivation of the present work is to determine the corresponding quantities for the case in which the scalar background consists of a Bose-Einstein condensate. To this end, here we have proposed an efficient and consistent method to treat the propagation of generic fermions in the background of a BE condensate. With an outlook to possible applications in other contexts, we have illustrated and implemented the method in a general way, not tied to any specific application. In the present work we have focused exclusively on the calculation of the dispersion relations. To model the propagation of the fermions in such an environment, we assumed some simple Yukawa-type interactions between the fermions and the scalar. As mentioned in the Introduction, the method we use to treat the BE condensate has been discussed by various authors [23, 24, 25]. In Section 2 we reviewed those aspects and details of the method that are relevant for our purposes. In the following three sections we presented the extension we propose of that method to treat the propagation of fermions in the BE condensate, in the context of three generic, but specific, models of the fermion-scalar interaction. Specifically, in Section 3 we considered two massless chiral fermions, \(f_{L}\) and \(f_{R}\), with a coupling to the scalar particle \(\phi\) of the form \(\bar{f}_{R}f_{L}\phi\) (Model I). In Section 4 we considered a massless chiral fermion \(f_{L}\) with coupling \(\bar{f}_{R}^{c}f_{L}\phi\) (Model II). Finally, in Section 5 we considered one massive Dirac fermion \(f\) with a coupling \(\bar{f}^{c}f\phi\) (Model III). In each case we determined the fermion modes and corresponding dispersion relations and pointed out some of their particular characteristics. For example, as a result of the symmetry breaking the propagating mode is a Dirac fermion and a Majorana fermion in Models I and II, respectively. In Model III the symmetry breaking produces two non-degenerate Majorana modes of what otherwise would form a Dirac fermion field in the unbroken phase. In the latter case, various particular features of the dispersion relations of the Majorana modes were illustrated by considering particular limiting cases of the parameters of the model.
For example, one interesting observation is that, while in general the two Majorana modes have different effective masses (the value of the dispersion relation at zero momentum), in some limits the two modes have the same effective mass although the dispersion relations at non-zero momentum are different. The method we propose for the propagation of fermions in a BE condensate has never been used before, and can be applicable in various contexts, for example neutrino physics, condensed-matter or nuclear systems, and heavy-ion collisions. In addition, the work lays the ground for considering the case of various fermion flavors, as would be required for the application to neutrinos, or the corrections to the dispersion relations due to the thermal effects of the background excitations, which could be required for particular applications.

The work of S. S. is partially supported by DGAPA-UNAM (Mexico) PAPIIT project No. IN103522.

## Appendix A Scalar modes of the BE condensate

In this appendix we complete the discussion of the model presented in Section 2 with regard to the \(\phi_{1,2}\) excitation modes of the BE condensate. To simplify the notation, here we omit the \(\phi\) subscript in the chemical potential, mass, and quartic coupling, and denote them simply by \(\mu\), \(m\), \(\lambda\), respectively.

### Lagrangian for the scalar modes

As already mentioned in Section 2, the starting point is to substitute Eqs. (2.14) and (2.17) in Eq. (2.11) to obtain the Lagrangian for the scalar excitations \(\phi_{1,2}\). Proceeding piece by piece, \[(\partial^{\mu}\phi^{\prime})^{*}(\partial_{\mu}\phi^{\prime}) = \frac{1}{2}\left[(\partial^{\mu}\phi_{1})^{2}+(\partial^{\mu}\phi_{2})^{2}\right]\,,\] \[i[\phi^{\prime\,*}(v\cdot\partial\phi^{\prime})-(v\cdot\partial\phi^{\prime})^{*}\phi^{\prime}] = 2\,\mbox{Re}\left(i\phi^{\prime\,*}v\cdot\partial\phi^{\prime}\right)\] \[= \phi_{2}v\cdot\partial(\phi_{1}+\phi_{0})-(\phi_{1}+\phi_{0})v\cdot\partial\phi_{2}\] \[= \phi_{2}v\cdot\partial\phi_{1}-\phi_{1}v\cdot\partial\phi_{2}-\phi_{0}v\cdot\partial\phi_{2}\,. \tag{A.1}\] The last term is a total derivative and therefore does not contribute to the action or the equations of motion and can be dropped. Finally, \[U(\phi^{\prime}) = -\frac{1}{2}(\mu^{2}-m^{2})[(\phi_{0}+\phi_{1})^{2}+\phi_{2}^{2}]+\frac{1}{4}\lambda[(\phi_{0}+\phi_{1})^{2}+\phi_{2}^{2}]^{2}\] \[= U_{0}+U_{1}+U_{2}+U_{3}+U_{4}\,, \tag{A.2}\] where \(U_{0}\) has been defined in Eq. (2.16), and \[U_{1} = \frac{\partial U_{0}}{\partial\phi_{0}}\phi_{1}\,,\] \[U_{2} = -\frac{1}{2}(\mu^{2}-m^{2})[\phi_{1}^{2}+\phi_{2}^{2}]+\frac{1}{2}\lambda\phi_{0}^{2}\phi_{2}^{2}+\frac{3}{2}\lambda\phi_{0}^{2}\phi_{1}^{2}\,,\] \[U_{3} = \lambda\phi_{0}(\phi_{1}^{3}+\phi_{1}\phi_{2}^{2})\,,\] \[U_{4} = \frac{\lambda}{4}(\phi_{1}^{4}+\phi_{2}^{4}+2\phi_{1}^{2}\phi_{2}^{2})\,. \tag{A.3}\] The terms \(U_{3,4}\) give the self-interactions between \(\phi_{1,2}\), which we are not interested in at the moment, \(U_{0}\) is an irrelevant constant, and \(U_{1}=0\) when Eq. (2.17) is used. The quadratic part, using Eq. (2.17), is \[U_{2}=\frac{1}{2}m_{1}^{2}\phi_{1}^{2}+\frac{1}{2}m_{2}^{2}\phi_{2}^{2}\,, \tag{A.4}\] where \[m_{1}^{2} = -(\mu^{2}-m^{2})+3\lambda\phi_{0}^{2} = 2(\mu^{2}-m^{2})\,,\] \[m_{2}^{2} = -(\mu^{2}-m^{2})+\lambda\phi_{0}^{2} = 0\,. \tag{A.5}\] In the second equality of each line we have used Eq. (2.17).
Therefore, the quadratic part of the \(\phi_{1,2}\) Lagrangian is \[L_{\phi}^{(2)}=\frac{1}{2}\left[(\partial^{\mu}\phi_{1})^{2}+(\partial^{\mu}\phi_{2})^{2}\right]+\phi_{2}v\cdot\partial\phi_{1}-\phi_{1}v\cdot\partial\phi_{2}-\frac{1}{2}m_{1}^{2}\phi_{1}^{2}\,.\] (A.6) Thus \(\phi_{1}\) and \(\phi_{2}\) are mixed by the \(v^{\mu}\) term. The next step is to find the propagator matrix of the \(\phi_{1,2}\) complex and determine the modes that have a definite dispersion relation.

### Dispersion relations for the scalar modes

Using matrix notation, \[\hat{\phi}=\left(\begin{array}{c}\phi_{1}\\ \phi_{2}\end{array}\right)\,,\] (A.7) the Lagrangian, in momentum space, is \[L^{(2)}_{\phi}(k)=\frac{1}{2}\hat{\phi}^{*}(k)\Delta_{\phi}^{-1}(k)\hat{\phi}(k)\,,\] (A.8) where \[\Delta_{\phi}^{-1}(k)=\left(\begin{array}{cc}k^{2}-m_{1}^{2}&2iv\cdot k\\ -2iv\cdot k&k^{2}\end{array}\right)\,.\] (A.9) The classical equations of motion are then \[\Delta_{\phi}^{-1}(k)\hat{\phi}=0\,.\] (A.10) The dispersion relations of the eigenmodes are given by the solutions of \[D=0\,,\] (A.11) where \(D\) is the determinant of \(\Delta_{\phi}^{-1}\), \[D\equiv k^{2}(k^{2}-m_{1}^{2})-4(v\cdot k)^{2}\,,\] (A.12) or, \[D=(\omega^{2}-\kappa^{2})(\omega^{2}-\kappa^{2}-m_{1}^{2})-\alpha^{2}\omega^{2}\,,\] (A.13) where we have defined \[\alpha=2\mu\,.\] (A.14) The dispersion relations are determined by solving \[(\omega^{2}-\kappa^{2})(\omega^{2}-\kappa^{2}-m_{1}^{2})-\alpha^{2}\omega^{2}=0\,,\] (A.15) which we write in the form \[(\omega^{2}-\kappa^{2})(\omega^{2}-\kappa^{2}-m_{1}^{2})-\alpha^{2}(\omega^{2}-\kappa^{2})-\alpha^{2}\kappa^{2} = 0\,,\] \[(\omega^{2}-\kappa^{2})^{2}-(\omega^{2}-\kappa^{2})(m_{1}^{2}+\alpha^{2})-\alpha^{2}\kappa^{2} = 0\,.\] (A.16) This is a quadratic equation for \((\omega^{2}-\kappa^{2})\) with solutions \[\omega_{\pm}^{2}(\kappa)=\kappa^{2}+\frac{1}{2}(m_{1}^{2}+\alpha^{2})\pm\sqrt{\frac{1}{4}(m_{1}^{2}+\alpha^{2})^{2}+\alpha^{2}\kappa^{2}}\,,\] (A.17) and obviously, \[D=(\omega^{2}-\omega_{+}^{2})(\omega^{2}-\omega_{-}^{2})\,.\] (A.18) Thus, the masses of the propagating modes are \[m_{+}^{2} = m_{1}^{2}+\alpha^{2}\,,\] \[m_{-}^{2} = 0\,.\] (A.19) The zero-mass mode is the realization of the Goldstone mode associated with the breaking of the global \(U(1)\) symmetry. The corresponding eigenvectors satisfy \[\Delta_{\phi}^{-1}(k)\Big{|}_{\omega=\omega_{s}}\hat{\phi}_{s}(\kappa)=0\,,\] (A.20) where \(s=\pm\). Writing \[\hat{\phi}_{s}(\kappa)=\left(\begin{array}{c}a_{s}\\ b_{s}\end{array}\right)\,,\] (A.21) the equations for the components are \[(\omega_{s}^{2}-\kappa^{2}-m_{1}^{2})a_{s}+i\alpha\omega_{s}b_{s} = 0\,,\] \[-i\alpha\omega_{s}a_{s}+(\omega_{s}^{2}-\kappa^{2})b_{s} = 0\,.\] (A.22) We write the solutions in the form \[\hat{\phi}_{+}(\kappa) = \frac{1}{\sqrt{N_{+}}}\left(\begin{array}{c}\omega_{+}^{2}-\kappa^{2}\\ i\alpha\omega_{+}\end{array}\right)\,,\] \[\hat{\phi}_{-}(\kappa) = \frac{1}{\sqrt{N_{-}}}\left(\begin{array}{c}-i\alpha\omega_{-}\\ \omega_{-}^{2}-\kappa^{2}-m_{1}^{2}\end{array}\right)\,.\] (A.23) The normalization factors \(N_{\pm}\) are determined by requiring that the one-particle contribution to the propagator from the eigenmodes coincide with the form of the propagator near the dispersion relations (\(\omega\rightarrow\omega_{s}\)). The procedure is the following.
Instead of expressing \(\hat{\phi}\) in terms of the \(1,2\) modes, \[\hat{\phi}(k)=\left(\begin{array}{c}\phi_{1}(k)\\ \phi_{2}(k)\end{array}\right)\,,\] (A.24) it is expressed in terms of the modes that have a definite dispersion relation, \[\hat{\phi}(k)=\sum_{s=\pm}c_{s}(k)\hat{\phi}_{s}(\kappa)\,,\] (A.25) where the \(\hat{\phi}_{s}\) are the eigenvectors found above. The _free-field_ \(\hat{\phi}(x)\) is then expanded in the usual form, \[\hat{\phi}(x)=\sum_{s=\pm}\int\frac{d^{3}\kappa}{(2\pi)^{3}2\omega_{s}(\kappa)}\left[e^{-ik_{s}\cdot x}a_{s}(\vec{\kappa})\hat{\phi}_{s}(\kappa)+e^{ik_{s}\cdot x}a_{s}^{*}(\vec{\kappa})\hat{\phi}_{s}^{*}(\kappa)\right]\,,\] (A.26) with \[[a_{s}(\vec{\kappa}),a_{s}^{*}(\vec{\kappa}^{\prime})]=(2\pi)^{3}2\omega_{s}(\kappa)\delta(\vec{\kappa}-\vec{\kappa}^{\prime})\,,\] (A.27) and \[k_{s}^{\mu}=(\omega_{s},\vec{\kappa})\,.\] (A.28) The one-particle contribution to the propagator from a given mode is then \[\Delta_{\phi}\big{|}_{\rm one\mbox{-}particle,s}=\frac{\hat{\phi}_{s}\hat{\phi}_{s}^{\dagger}}{2\omega_{s}(\omega-\omega_{s})}\,.\] (A.29) For reference and example, we give explicitly the formula for \(s=+\), \[\hat{\phi}_{+}\hat{\phi}_{+}^{\dagger}=\frac{1}{N_{+}}\left(\begin{array}{cc}(\omega_{+}^{2}-\kappa^{2})(\omega_{+}^{2}-\kappa^{2})&(\omega_{+}^{2}-\kappa^{2})(-i\alpha\omega_{+})\\ i\alpha\omega_{+}(\omega_{+}^{2}-\kappa^{2})&(i\alpha\omega_{+})(-i\alpha\omega_{+})\end{array}\right)\,.\] (A.30) On the other hand, by inverting Eq. (A.9), we obtain the propagator of the \(\phi_{1,2}\) complex \[\Delta_{\phi}(k)=\frac{1}{D}\left(\begin{array}{cc}k^{2}&-i\alpha\omega\\ i\alpha\omega&k^{2}-m_{1}^{2}\end{array}\right)\,,\] (A.31) where \(D\) is given in Eq. (A.13).

Footnote 1: To leading order in \(\mu\), \[\Delta_{\phi}(k)\simeq\Delta_{\phi}^{(1)}(k)\equiv\left(\begin{array}{cc}\frac{1}{k^{2}-m_{1}^{2}}&\frac{-i\alpha\omega}{d}\\ \frac{i\alpha\omega}{d}&\frac{1}{k^{2}}\end{array}\right)\,,\] (A.32) where \[d=k^{2}(k^{2}-m_{1}^{2})\,.\] (A.33)

The propagator has poles at the dispersion relations given in Eq. (A.17). Using Eq. (A.18), near the \(\omega=\omega_{+}\) pole, Eq. (A.31) gives \[\Delta_{\phi}(k)\big{|}_{\omega\rightarrow\omega_{+}}=\frac{1}{2\omega_{+}(\omega-\omega_{+})(\omega_{+}^{2}-\omega_{-}^{2})}\left(\begin{array}{cc}\omega_{+}^{2}-\kappa^{2}&-i\alpha\omega_{+}\\ i\alpha\omega_{+}&\omega_{+}^{2}-\kappa^{2}-m_{1}^{2}\end{array}\right)\,.\] (A.34) The normalization factor \(N_{+}\) is determined by requiring that \[\left.\Delta_{\phi}\right|_{\omega\rightarrow\omega_{+}}=\left.\Delta_{\phi}\right|_{\text{one-particle},+}\,.\] (A.35) Comparing Eqs. (A.29) and (A.34), the normalization factor is then determined by requiring that \[\hat{\phi}_{+}\hat{\phi}_{+}^{\dagger}=\frac{1}{\omega_{+}^{2}-\omega_{-}^{2}}\left(\begin{array}{cc}\omega_{+}^{2}-\kappa^{2}&-i\alpha\omega_{+}\\ i\alpha\omega_{+}&\omega_{+}^{2}-\kappa^{2}-m_{1}^{2}\end{array}\right)\,,\] (A.36) and using Eq. (A.30) (and remembering Eq. (A.15) for the 22 element) we then obtain the wave function renormalization factor \[\frac{1}{N_{+}}=\frac{1}{\omega_{+}^{2}-\omega_{-}^{2}}\frac{1}{\omega_{+}^{2}-\kappa^{2}}\,.\] (A.37) Applying similar arguments to \(N_{-}\), \[\frac{1}{N_{-}}=\frac{1}{\omega_{+}^{2}-\omega_{-}^{2}}\frac{1}{\kappa^{2}+m_{1}^{2}-\omega_{-}^{2}}\,.\] (A.38)
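As a closing cross-check of this appendix (with toy values of \(\mu\) and \(m\) chosen by us), one can verify numerically that the roots (A.17) satisfy Eq. (A.15) and that the \(\omega_{-}\) branch is massless at \(\kappa=0\), as stated in Eq. (A.19).

```python
import jax.numpy as jnp

# Toy condensate parameters (ours) with mu^2 > m^2 (broken phase).
mu, m = 1.5, 0.5
m1_sq = 2 * (mu**2 - m**2)   # Eq. (A.5)
alpha = 2 * mu               # Eq. (A.14)

def omega2(kappa, sign):
    # Squared dispersion relations, Eq. (A.17).
    half = 0.5 * (m1_sq + alpha**2)
    return kappa**2 + half + sign * jnp.sqrt(half**2 + alpha**2 * kappa**2)

for kappa in (0.0, 0.7, 2.0):
    for sign in (+1, -1):
        w2 = omega2(kappa, sign)
        # Each root must satisfy Eq. (A.15): prints ~0 in every case.
        print((w2 - kappa**2) * (w2 - kappa**2 - m1_sq) - alpha**2 * w2)
print(omega2(0.0, -1))  # 0: the Goldstone mode of Eq. (A.19)
```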
2309.07835
Learning to Warm-Start Fixed-Point Optimization Algorithms
We introduce a machine-learning framework to warm-start fixed-point optimization algorithms. Our architecture consists of a neural network mapping problem parameters to warm starts, followed by a predefined number of fixed-point iterations. We propose two loss functions designed to either minimize the fixed-point residual or the distance to a ground truth solution. In this way, the neural network predicts warm starts with the end-to-end goal of minimizing the downstream loss. An important feature of our architecture is its flexibility, in that it can predict a warm start for fixed-point algorithms run for any number of steps, without being limited to the number of steps it has been trained on. We provide PAC-Bayes generalization bounds on unseen data for common classes of fixed-point operators: contractive, linearly convergent, and averaged. Applying this framework to well-known applications in control, statistics, and signal processing, we observe a significant reduction in the number of iterations and solution time required to solve these problems, through learned warm starts.
Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato
2023-09-14T16:22:14Z
http://arxiv.org/abs/2309.07835v1
# Learning to Warm-Start Fixed-Point Optimization Algorithms

###### Abstract

We introduce a machine-learning framework to warm-start fixed-point optimization algorithms. Our architecture consists of a neural network mapping problem parameters to warm starts, followed by a predefined number of fixed-point iterations. We propose two loss functions designed to either minimize the fixed-point residual or the distance to a ground truth solution. In this way, the neural network predicts warm starts with the end-to-end goal of minimizing the downstream loss. An important feature of our architecture is its flexibility, in that it can predict a warm start for fixed-point algorithms run for any number of steps, without being limited to the number of steps it has been trained on. We provide PAC-Bayes generalization bounds on unseen data for common classes of fixed-point operators: contractive, linearly convergent, and averaged. Applying this framework to well-known applications in control, statistics, and signal processing, we observe a significant reduction in the number of iterations and solution time required to solve these problems, through learned warm starts.

## 1 Introduction

We consider _parametric fixed-point problems_ of the form \[\mbox{find}\;z\quad\mbox{ such that }\quad z=T_{\theta}(z), \tag{1}\] where \(z\in\mathbf{R}^{p}\) is the decision variable and \(\theta\in\Theta\subseteq\mathbf{R}^{d}\) is the _problem parameter_ defining each instance of (1) via the _fixed-point operator_ \(T_{\theta}\). We assume that \(\theta\) is drawn from an unknown distribution \(Q\), accessible only via samples, and that for every \(\theta\in\Theta\), problem (1) is a solvable (_i.e._, \(T_{\theta}\) admits a fixed-point) convex optimization problem. Almost all convex optimization problems can be cast as finding a fixed-point of an operator (Ryu and Yin, 2022), often representing the optimality conditions (Garstka et al., 2019; O'Donoghue et al., 2019; Stellato et al., 2020). To solve problem (1), we repeatedly apply the operator \(T_{\theta}\), obtaining the iterations \[z^{i+1}=T_{\theta}(z^{i}). \tag{2}\] We assume that the iterations (2) converge to a fixed-point, that is, \(\lim_{i\to\infty}\|z^{i}-z^{\star}(\theta)\|=0\), where \(z^{\star}(\theta)\) is a fixed-point of \(T_{\theta}\). In practice, it is common to return an \(\epsilon\)-approximate solution, corresponding to a vector \(z^{i}\) for which the _fixed-point residual_ \(\|T_{\theta}(z^{i})-z^{i}\|_{2}\) is below \(\epsilon\). Many optimization algorithms correspond to fixed-point iterations of the form (2); see Table 1 for some examples.

Applications. _Parametric fixed-point problems_ arise in several applications in machine learning, operations research, and engineering, where we repeatedly solve a problem of the form (1) with varying parameter \(\theta\). For example, in optimal control, we update the inputs (_e.g._, propeller thrusts) as sensor signals (_e.g._, system state) and goals (_e.g._, desired trajectory) vary (Borrelli et al., 2017, Section 7.1). Other examples include backtesting financial models (Boyd et al., 2017), power flow optimization (Hentenryck, 2021; Zamzam and Baker, 2020), and image restoration (Elad and Aharon, 2006). In non-convex optimization, finding a stationary point can also be cast as a fixed-point problem (Wang et al., 2019; Hong et al., 2016). In game theory, finding the Nash equilibrium of a multi-player game can be formulated as a fixed-point problem under some mild assumptions on the utility functions of each player (Briceno-Arias and Combettes, 2013; Ryu and Boyd, 2015). Finding fixed points is also important in other areas, such as finding the optimal policy of Markov decision processes (Bellman, 1957) and solving variational inequality problems (Rockafellar and Wets, 1998; Bauschke and Combettes, 2011).
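To make the iteration (2) and the \(\epsilon\)-stopping rule concrete, here is a minimal sketch (ours, not code from the paper); the operator, step size, and tolerance are illustrative choices, with gradient descent on a toy quadratic playing the role of \(T_{\theta}\).

```python
import jax.numpy as jnp

def fixed_point_iterate(T, z0, eps=1e-8, max_iters=10_000):
    """Apply z <- T(z) until the fixed-point residual ||T(z) - z|| <= eps."""
    z = z0
    for i in range(max_iters):
        z_next = T(z)
        if jnp.linalg.norm(z_next - z) <= eps:
            return z_next, i + 1
        z = z_next
    return z, max_iters

# Example operator: gradient descent on f(z) = 0.5 z^T Q z (cf. Table 1).
Q = jnp.array([[10.0, 0.0], [0.0, 1.0]])
alpha = 2.0 / (10.0 + 1.0)           # step size from the eigenvalue range of Q
T = lambda z: z - alpha * Q @ z
z_star, iters = fixed_point_iterate(T, jnp.array([1.0, 1.0]))
print(z_star, iters)                 # converges to the fixed point at the origin
```

The same loop applies verbatim to any of the operators in Table 1, since only the definition of \(T\) changes.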
\begin{table} \begin{tabular}{l l l l} \hline \hline Algorithm & Problem & Iterates & Fixed-point operator \(T_{\theta}\) \\ \hline Gradient descent & \(\min\;f_{\theta}(z)\) & \(z^{i+1}=z^{i}-\alpha\nabla f_{\theta}(z^{i})\) & \(T_{\theta}(z)=z-\alpha\nabla f_{\theta}(z)\) \\ \hline Proximal gradient descent & \(\min\;f_{\theta}(z)+g_{\theta}(z)\) & \(z^{i+1}=\mathbf{prox}_{\alpha g_{\theta}}(z^{i}-\alpha\nabla f_{\theta}(z^{i}))\) & \(T_{\theta}(z)=\mathbf{prox}_{\alpha g_{\theta}}(z-\alpha\nabla f_{\theta}(z))\) \\ \hline ADMM (Douglas and Rachford, 1956) (Gabay and Mercier, 1976) & \(\min\;f_{\theta}(u)+g_{\theta}(u)\) & \(\tilde{u}^{i+1}=\mathbf{prox}_{g_{\theta}}(z^{i})\), \(u^{i+1}=\mathbf{prox}_{f_{\theta}}(2\tilde{u}^{i+1}-z^{i})\), \(z^{i+1}=z^{i}+u^{i+1}-\tilde{u}^{i+1}\) & \(T_{\theta}(z)=z+\mathbf{prox}_{f_{\theta}}(2\,\mathbf{prox}_{g_{\theta}}(z)-z)-\mathbf{prox}_{g_{\theta}}(z)\) \\ \hline OSQP (Stellato et al., 2020) & \(\min\;(1/2)x^{T}Px+c^{T}x\) s.t. \(l\leq Ax\leq u\) & solve \(Qx^{i+1}=\sigma x^{i}-c+A^{T}(\rho w^{i}-y^{i})\), \(w^{i+1}=\Pi_{[l,u]}(Ax^{i+1}+\rho^{-1}y^{i})\), \(y^{i+1}=y^{i}+\rho(Ax^{i+1}-w^{i+1})\) & \(T_{\theta}(z)\) where \(z=(x,Ax+\rho^{-1}y)\), with \(\theta=(\mathbf{vec}(P),\mathbf{vec}(A),c,l,u)\) and \(Q=P+\sigma I+\rho A^{T}A\) \\ \hline SCS (O'Donoghue, 2021) & \(\min\;(1/2)x^{T}Px+c^{T}x\) s.t. \(Ax+s=b\), \(s\in\mathcal{K}\) & solve \(Q\tilde{u}^{i+1}=z^{i}\), \(u^{i+1}=\Pi_{\mathcal{C}}(2\tilde{u}^{i+1}-z^{i})\), \(z^{i+1}=z^{i}+u^{i+1}-\tilde{u}^{i+1}\) & \(T_{\theta}(z)\) where \(z\) is the dual variable to \(u=(x,y)\), with \(Q=\begin{bmatrix}P+I&A^{T}\\ -A&I\end{bmatrix}\) and \(\mathcal{C}=\mathbf{R}^{n}\times\mathcal{K}\) \\ \hline \hline \end{tabular} We denote \(\mathbf{prox}\) as the proximal operator (Parikh and Boyd, 2014) and \(\mathbf{vec}\) as the vectorization operator stacking the columns of a matrix (see the notation paragraph in Section 1 for formal definitions). See Appendix A for more information on the algorithms in this table. \end{table} Table 1: Many optimization algorithms can be written as fixed-point iterations.

Acceleration. In spite of the widespread use of fixed-point iterative algorithms, they are known to suffer from slow convergence to high-accuracy solutions (Zhang et al., 2020). Acceleration schemes (Zhang et al., 2020; Walker and Ni, 2011; d'Aspremont et al., 2021; Sopasakis et al., 2019) are an active area of research designed to speed up the solving of fixed-point problems. These methods, such as Anderson acceleration (Walker and Ni, 2011; Zhang et al., 2020), combine past iterates to generate the next one in order to improve the convergence behavior. Although acceleration methods are known to work well in certain cases, such as Nesterov acceleration to solve smooth, convex optimization problems, it is still an open research question to design schemes that are robust and versatile.
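As an illustration of how the operators in Table 1 translate into code, the following sketch (our toy instantiation; the choices of \(f\), \(g\), and their proximal operators are ours) implements the ADMM/Douglas-Rachford fixed-point operator for a small problem with closed-form proximal operators.

```python
import jax.numpy as jnp

# Douglas-Rachford / ADMM operator from Table 1 for
#   minimize f(u) + g(u),  f(u) = 0.5*||u - a||^2,  g = indicator of u >= 0.
a = jnp.array([1.0, -2.0, 0.5])
prox_f = lambda v: 0.5 * (v + a)          # closed-form prox of f
prox_g = lambda v: jnp.maximum(v, 0.0)    # projection onto the nonnegative orthant

def T(z):
    u_tilde = prox_g(z)
    u = prox_f(2 * u_tilde - z)
    return z + u - u_tilde

z = jnp.zeros(3)
for _ in range(200):
    z = T(z)
print(prox_g(z))   # recovers the minimizer max(a, 0) = [1.0, 0.0, 0.5]
```

At a fixed point \(z^{\star}\) of \(T\), the solution of the original problem is recovered as \(\mathbf{prox}_{g}(z^{\star})\), which is why the last line prints the projected iterate.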
Learning for optimization. Instead of designing acceleration methods for single problems, recent approaches take advantage of the parametric structure of fixed-point problems encountered in practice to learn efficient solution methods. In particular, they _learn algorithm steps_ using data from past solutions (Amos, 2023; Chen et al., 2021). Despite recent successes in a variety of fields, _e.g._, in sparse coding (Gregor and LeCun, 2010; Liu et al., 2019), convex optimization (Ichnowski et al., 2021; Venkataraman and Amos, 2021), and meta-learning (Li and Malik, 2016; Finn et al., 2017), most of these approaches lack _convergence guarantees_ because they directly alter the algorithm iterations with learned variants (Chen et al., 2021; Amos, 2023). Although some efforts have been made to safeguard the learned iterations (Premont-Schwarz et al., 2022; Heaton et al., 2020; Banert et al., 2021), guaranteeing convergence for general learned optimizers is still a challenge. In addition, most of these approaches do not provide _generalization guarantees_ on unseen data (Chen et al., 2021; Amos, 2023). Another data-driven approach to reduce the number of iterations is to _learn warm starts_ rather than the steps of the algorithm (Chen et al., 2022; Baker, 2019). An advantage to learning warm starts as opposed to algorithm steps is that this approach can be integrated with existing algorithms that provably converge from any starting point. However, existing methods to learn warm starts still lack generalization guarantees. They also decouple the learning procedure from the algorithm behavior after warm-starting, which can lead to suboptimality and infeasibility issues on unseen problem instances.

Our contributions. We present a learning framework that predicts warm starts for iterative algorithms of the form (2), which solve parametric fixed-point problems of the type given in (1). The framework consists of two modules. The first module maps the parameter to a warm start via a neural network, and the second runs a predefined number of steps of the fixed-point algorithm. We propose two loss functions. The first one is the _fixed-point residual_ loss, which directly penalizes the fixed-point residual of the output of the architecture. The second one is the _regression_ loss, which penalizes the distance between the output of the architecture and a given ground truth fixed-point (among possibly many). Compared to existing literature on learning warm starts, we train our architecture by differentiating through the fixed-point iterations. In this way, we construct warm-start predictions that perform well after a specific number of fixed-point iterations in an _end-to-end_ fashion. Furthermore, after training, our architecture allows the flexibility of selecting an arbitrary number of fixed-point iterations to perform and is not limited to the number it was originally trained on. By combining operator theory with the PAC-Bayes framework (McAllester, 1998; Shawe-Taylor and Williamson, 1997), we provide two types of guarantees on the performance of our framework. First, we give bounds on the fixed-point residual when we apply our framework to an arbitrary number of steps, larger than the number used during training. Second, we provide generalization bounds to unseen problems for common classes of operators: contractive, linearly convergent, and averaged.
Finally, we apply our framework to a variety of algorithms including gradient descent, proximal gradient descent, and the alternating direction method of multipliers. In our benchmarks, we show that our learned warm starts lead to a significant reduction in the required number of iterations used to solve the fixed-point problems. We also demonstrate compatibility with state-of-the-art solvers by learning architectures specifically tailored to SCS (O'Donoghue et al., 2019) and OSQP (Stellato et al., 2020), and inputting warm starts into the corresponding C implementations.

Notation. We denote the set of non-negative vectors of length \(n\) as \(\mathbf{R}_{+}^{n}\), and the set of vectors with positive entries of length \(n\) as \(\mathbf{R}_{++}^{n}\). We let the set of \(n\times n\) positive semidefinite and positive definite matrices be \(\mathbf{S}_{+}^{n}\) and \(\mathbf{S}_{++}^{n}\) respectively. We define the set of fixed-points of the operator \(T\), assumed to be non-empty, as \(\mathbf{fix}\,T\). For any closed and convex set \(S\), we denote \(\mathbf{dist}_{S}:\mathbf{R}^{n}\to\mathbf{R}\) to be the distance function, where \(\mathbf{dist}_{S}(x)=\min_{s\in S}\|s-x\|_{2}\). For any set \(S\subset\mathbf{R}^{n}\), we define the indicator function \(\mathcal{I}_{S}:\mathbf{R}^{n}\to\mathbf{R}\cup\{+\infty\}\) where \(\mathcal{I}_{S}(x)=0\) if \(x\in S\) and \(\mathcal{I}_{S}(x)=+\infty\) otherwise. We take \(k\) applications of any single-valued operator \(T\) to be \(T^{k}:\mathbf{R}^{n}\to\mathbf{R}^{n}\). For any matrix \(A\), we denote its spectral norm and Frobenius norm with \(\|A\|_{2}\) and \(\|A\|_{F}\) respectively. For a matrix \(Z\in\mathbf{R}^{m\times n}\), \(\mathbf{vec}(Z)\) is the vector obtained by stacking the columns of \(Z\). For a symmetric matrix \(Y\in\mathbf{S}^{n}\), \(\mathbf{vec}(Y)\) is the vector obtained by taking the upper triangular entries of matrix \(Y\). We let the all-ones vector of length \(n\) be \(\mathbf{1}\in\mathbf{R}^{n}\). The proximal operator, \(\mathbf{prox}_{h}:\mathbf{R}^{n}\to\mathbf{R}^{n}\), of \(h\) is defined as (Parikh and Boyd, 2014) \[\mathbf{prox}_{h}(v)=\underset{x}{\operatorname{argmin}}\Big(h(x)+(1/2)\|x-v\|_{2}^{2}\Big).\]

Outline. We structure the rest of the paper as follows. In Section 2, we review some related work on learned solvers. In Section 3, we present our learning to warm-start framework. In Section 4, we provide generalization guarantees to unseen data for our method. In Section 5, we discuss choosing the right architecture, namely the choice of loss function and the number of steps to train on. Section 6 presents various numerical benchmarks.

## 2 Related work

Learning warm starts. A common approach to reduce the number of iterations of iterative algorithms is to learn a mapping from problem parameters to high-quality initializations. Sambharya et al. (2023) learn warm starts for Douglas-Rachford splitting to solve convex quadratic programs (QPs). While this work conducts end-to-end learning, our work is more general in scope since we consider fixed-point problems rather than QPs. Additionally, we provide generalization guarantees for more cases of operators, and finally, we add a regression loss. In contrast to our approach, most of the techniques to learn warm starts do not consider the downstream algorithm in the warm-start prediction. Baker (2019) and Mak et al. (2023) use machine learning to warm-start the optimal power flow problem.
In the model predictive control (MPC) (Borrelli et al., 2017) paradigm, Chen et al. (2022) use a neural network to accelerate the optimal control law computation by warm-starting an active set method. Other works in MPC use machine learning to predict an approximate optimal solution and, instead of using it to warm-start an algorithm, directly ensure feasibility and optimality. Chen et al. (2018) and Karg and Lucia (2020) use a constrained neural network architecture that guarantees feasibility by projecting its output onto the QP feasible region. Zhang et al. (2019) use a neural network to predict the solution while also certifying suboptimality of the output. Our paper differs from these works in that the training of the neural network we propose is designed to minimize the loss after many fixed-point steps, allowing us to improve solution quality. Our work is also more general in scope since we consider general parametric fixed-point problems. Finally, we provide generalization guarantees to unseen data, which other works lack.

Learning algorithm steps for convex optimization. In the area of learning to optimize (Chen et al., 2021) or amortized optimization (Amos, 2023), a parallel approach to learning warm starts consists in learning the algorithm steps themselves to solve convex optimization problems. Ichnowski et al. (2021) and Jung et al. (2022) use reinforcement learning to solve quadratic programs quickly by learning high-quality hyperparameters of algorithms. Venkataraman and Amos (2021) learn to accelerate the solution of fixed-point problems that correspond to convex problems. One risk of some of these approaches is that convergence may not be guaranteed (Amos, 2023). To solve this problem, some works safeguard learned optimizers to guarantee convergence by reverting to a fallback update if the learned update starts to diverge (Heaton et al., 2020; Premont-Schwarz et al., 2022). Other strategies guarantee convergence by making sure that the learned algorithm does not deviate too much from a known convergent algorithm (Banert et al., 2021) or by providing convergence rate bounds (Tan et al., 2023). In addition to convergence challenges, approaches that learn algorithm steps generally do not have generalization guarantees to unseen data (Amos, 2023; Chen et al., 2021). Lastly, these methods generally cannot interface with existing algorithms that are written in C.

Learning algorithm steps beyond convex optimization. Many works have learned algorithm steps for problems outside of convex optimization. For example, in non-convex optimization, Sjolund and Bankestad (2022) use graph neural networks (Wu et al., 2022) to accelerate algorithms to solve matrix factorization problems, and Bai et al. (2022) learn the acceleration scheme to solve fixed-point problems quickly. The idea of learning algorithm steps has ventured beyond optimization. There has been a surge in recent years to learn algorithm steps to solve _inverse problems_, that is, problems where one wishes to recover a true signal, rather than minimizing an objective (Chen et al., 2021). This is typically done by embedding algorithm steps or reasoning layers (Chen et al., 2020) into a deep neural network and has been applied to various fields such as sparse coding (Gregor and LeCun, 2010; Liu et al., 2019; Wu et al., 2020), image restoration (Diamond et al., 2017; Zhang et al., 2017; Chang et al., 2017), and wireless communication (He et al., 2020; Balatsoukas-Stimming and Studer, 2019).
A widely used technique involves _unrolling_ algorithmic steps (Monga et al., 2021), meaning differentiating through these steps to minimize a performance loss. While we also unroll algorithm steps, our work is different in scope since we aim to solve optimization problems rather than inverse problems, and in method since we learn warm starts rather than algorithm steps. Additionally, generalization and convergence remain issues in the context of learning to solve inverse problems (Chen et al., 2021; Amos, 2023).

Learning surrogate optimization problems. Instead of solving the original parametric problem, several works aim to learn a surrogate model of large optimization problems. Then, an approximate solution can be obtained by solving the simpler or smaller optimization problem. For instance, Wang et al. (2020) learn a mapping to reduce the dimensionality of the decision variables in the surrogate problem. Li et al. (2023) use a neural approximator with reformulation and relaxation steps to solve linearly constrained optimization problems. Other works predict which constraints are active (Misra et al., 2022) and the value of the optimal integer solutions (Bertsimas and Stellato, 2021, 2019). In contrast, our approach refrains from approximating any problem; instead, we warm-start the fixed-point iterations. This allows us to clearly quantify the suboptimality achieved within a set number of fixed-point iterations.

Meta-learning. Meta-learning (Hospedales et al., 2020; Vilalta and Drissi, 2001; Ruder, 2017) or learning to learn overlaps with the learning for optimization literature when the tasks are general machine learning tasks (Chen et al., 2021). A wide array of works learn the update function of gradient-based methods to speed up machine learning tasks with a variety of techniques, including reinforcement learning (Li and Malik, 2016), unrolled gradient steps (Andrychowicz et al., 2016), and evolutionary strategies (Metz et al., 2022). More in the spirit of our work, Finn et al. (2017) learn the initial model weights so that a new task can be learned after only a few gradient updates. While the initialization of the model weights for their method is shared across the tasks, in our method we, instead, predict the warm start from the problem parameter. This tailors our initialization to the specific parametric problem under consideration.

Algorithms with predictions. Another area that uses machine learning to improve algorithm performance is algorithms with predictions (Mitzenmacher and Vassilvitskii, 2020; Kraska et al., 2018; Khodak et al., 2022). Here, algorithms take advantage of a possibly imperfect prediction of some aspect of the problem to improve upon worst-case analysis. This idea has been applied to many problems such as ski-rental (Purohit et al., 2018), caching (Rohatgi, 2020), and bipartite matching (Dinitz et al., 2021). Even though the prediction can be used to improve the warm start for algorithms (Dinitz et al., 2021; Sakaue and Oki, 2022), the task we consider is fundamentally different since we aim to solve parametric problems as quickly as possible rather than to take advantage of a prediction.

Generalization guarantees. The generalization guarantees we provide use a PAC-Bayes framework, which has been used in prior work in the amortized optimization setting (Gupta and Roughgarden, 2017; Bartlett et al., 2022). Chen et al. (2020) provide generalization guarantees for architectures with reasoning layers, using a local Rademacher complexity analysis.
However, to the best of our knowledge, generalization guarantees have not been obtained for methods that aim to solve fixed-point problems quickly. Additionally, the bounds from the works mentioned above are obtained in methods where the algorithm steps are learned rather than the warm start. Unlike Sambharya et al. (2023), which focused on solving QPs, we obtain guarantees in the non-contractive case by using the PAC-Bayes framework rather than Rademacher complexity theory.

## 3 Learning to warm-start framework

We now present our learning framework to learn warm starts to solve the parametric fixed-point problem (1). A key feature of our framework is the inclusion of a predefined number of fixed-point steps within the architecture. In this way, the warm-start predictions are tailored for the downstream algorithm, and we conduct end-to-end learning. The section is organized as follows. In Section 3.1, we provide intuition as to why learning end-to-end can be beneficial through a small illustrative example. In Section 3.2, we describe our architecture, and in Section 3.3 we introduce the two different loss functions we consider. A concise summary of these aspects is depicted in Figure 2.

### An illustrative example

To build intuition, we provide a two-dimensional example that illustrates the importance of tailoring the warm-start prediction to the downstream algorithm. Consider the problem, \[\begin{array}{ll}\text{minimize}&(1/2)z^{T}Qz\\ \text{subject to}&z\geq 0,\end{array} \tag{3}\] where \(Q=\mathbf{diag}(10,1)\). We solve problem (3) using proximal gradient descent (see Table 1) with the iterates \[z^{i+1}=\Pi(z^{i}-\alpha\nabla f(z^{i}))\] where \(\nabla f(z)=Qz\), \(\alpha\in\mathbf{R}_{++}\) is picked to get the fastest worst-case convergence rate (Ryu and Boyd, 2015), and \(\Pi\) is the projection onto the non-negative orthant. The optimal solution for problem (3) is at the origin, and we consider three different warm starts shown in Figure 1. All three are equidistant from the optimal solution, but lead to different convergence behavior. The purple warm start has the fastest convergence since the projection step clips negative values to zero. The orange warm start converges more quickly than the green warm start due to the difference in scaling of the objective function along each axis. This results in faster convergence for the orange warm start compared with the green one, since the orange warm start is closer to the \(z_{1}\) axis. This example shows the necessity of considering the downstream algorithm when choosing a warm start. All three warm starts in this case appear of equal quality as they are equidistant from \(z^{\star}\), but when considering the downstream algorithm, there is a clear hierarchy in terms of convergence speed: purple takes the lead, followed by orange, then green.

Figure 1: The iterates of proximal gradient descent to solve problem (3) with different warm starts. For three different warm starts equidistant from the optimal solution \(z^{\star}\), we plot the first \(5\) iterates on the left. The contour lines of the objective function are in blue and the infeasible region is shaded in pink. We plot the fixed-point residuals for the different warm starts on the right. Depending on the warm start, the convergence to the optimal solution can vary greatly.
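The behavior in Figure 1 is easy to reproduce; the sketch below (our code, with warm starts chosen to mimic the three colors rather than the exact points used in the figure) runs five proximal gradient steps from each warm start and prints the resulting fixed-point residuals, recovering the ordering purple, then orange, then green.

```python
import jax.numpy as jnp

# Proximal gradient for problem (3): minimize 0.5 z^T Q z subject to z >= 0.
Q = jnp.diag(jnp.array([10.0, 1.0]))
alpha = 2.0 / (10.0 + 1.0)                         # fastest worst-case step size
T = lambda z: jnp.maximum(z - alpha * Q @ z, 0.0)  # gradient step, then projection

# Three warm starts (approximately) equidistant from z* = 0 (toy stand-ins).
warm_starts = {
    "purple (infeasible)": jnp.array([-0.6, -0.8]),
    "orange (near z1-axis)": jnp.array([0.98, 0.2]),
    "green (near z2-axis)": jnp.array([0.2, 0.98]),
}
for name, z in warm_starts.items():
    for _ in range(5):
        z = T(z)
    print(name, float(jnp.linalg.norm(T(z) - z)))  # residual after 5 steps
```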
### Learning to warm-start architecture

Our learning architecture consists of two modules, a neural network with \(L\) layers and \(k\) iterations of operator \(T_{\theta}\); see Figure 2. The neural network uses ReLU activation functions defined as \(\phi(z)=\max(0,z)\) element-wise. We let \(w=\{W_{i}\}_{i=1}^{L}\) be the neural network weights for each layer where \(W_{i}\in\mathbf{R}^{m_{i}\times n_{i}}\). Our warm-start prediction is computed as \[h_{w}(\theta)=W_{L}\phi(W_{L-1}\phi(\ldots\phi(W_{1}\theta))). \tag{4}\] While we do not explicitly represent bias terms, we can include them by appending a new column to matrices \(W_{i}\) for \(i=1,\ldots,L\), and a \(1\) to the input vector. The warm-start prediction \(h_{w}(\theta)\in\mathbf{R}^{p}\) feeds into the fixed-point algorithm parametrized by \(\theta\). The second part of our architecture consists of \(k\) applications of the operator \(T_{\theta}\) to the warm start \(h_{w}(\theta)\). The final output is the candidate solution \(T_{\theta}^{k}(h_{w}(\theta))\).

### Loss functions

Training for \(k\) steps. We propose two loss functions to analyze the output of our learning to warm-start architecture, \(T_{\theta}^{k}(h_{w}(\theta))\). The first one is the _fixed-point residual loss_ \[\ell_{\theta}^{\text{fp}}(z)=\|T_{\theta}(z)-z\|_{2}, \tag{5}\] which measures the distance to convergence of the fixed-point algorithm (2) (Ryu and Yin, 2022, Section 2.4). The second one is the _regression loss_ \[\ell_{\theta}^{\text{reg}}(z)=\|z-z^{\star}(\theta)\|_{2}, \tag{6}\] where \(z^{\star}(\theta)\) is a known (possibly non-unique) fixed-point of \(T_{\theta}\). The learning problem is \[\text{minimize}\ \ \ \mathbf{E}_{\theta\sim Q}\ell_{\theta}(T_{\theta}^{k}(h_{w}(\theta))), \tag{7}\] where \(\ell_{\theta}\) is either \(\ell_{\theta}^{\text{fp}}\) or \(\ell_{\theta}^{\text{reg}}\), and \(k\) is the number of fixed-point iterations in our architecture. Note that choosing \(k=0\) decouples the learning procedure from the downstream algorithm, thereby making our architecture no longer end-to-end. It is generally infeasible to evaluate the objective in problem (7) because the distribution \(Q\) is unknown. Instead, we minimize its empirical estimate over training data, hoping to attain generalization to unseen data. We leverage stochastic gradient descent (SGD) methods to efficiently train the neural network weights, by constructing stochastic approximations of the gradient of the empirical risk (Sra et al., 2011).

Figure 2: Illustration of the learning framework. The architecture consists of two modules: a neural network mapping the parameter \(\theta\) to a warm start \(h_{w}(\theta)\), and a second module executing \(k\) fixed-point iterations starting from \(h_{w}(\theta)\) to obtain the candidate solution \(T_{\theta}^{k}(h_{w}(\theta))\). The fixed-point steps in the architecture depend on the parameter \(\theta\), and have no learnable weights. There are two options for the loss function \(\ell_{\theta}\): the fixed-point residual loss \(\ell_{\theta}^{\text{fp}}\), or the regression loss \(\ell_{\theta}^{\text{reg}}\). We backpropagate from the loss through the fixed-point iterates to learn the neural network weights \(w\).

To compute such gradient estimates, we use automatic differentiation (Baydin et al., 2017) techniques to differentiate through the \(k\) fixed-point iterations.
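To make the architecture of Figure 2 and the training problem (7) concrete, here is a minimal end-to-end sketch (ours, not the authors' implementation): a toy parametric problem, a two-layer ReLU network as \(h_{w}\), \(k\) unrolled proximal gradient steps as the fixed-point module, and a smoothed version of the fixed-point residual loss (5), trained by differentiating through the unrolled iterations.

```python
import jax
import jax.numpy as jnp

# Toy parametric problem (ours): minimize 0.5||z - theta||^2 s.t. z >= 0,
# whose proximal gradient operator is T_theta(z) = max(z - alpha(z - theta), 0).
alpha, k, p = 0.5, 5, 4

def T(theta, z):
    return jnp.maximum(z - alpha * (z - theta), 0.0)

def h(w, theta):                          # two-layer ReLU network, Eq. (4)
    W1, W2 = w
    return W2 @ jax.nn.relu(W1 @ theta)

def loss(w, thetas):                      # empirical version of problem (7)
    def per_problem(theta):
        z = h(w, theta)
        for _ in range(k):                # k unrolled fixed-point iterations
            z = T(theta, z)
        r = T(theta, z) - z
        return jnp.sqrt(jnp.sum(r**2) + 1e-12)   # smoothed residual loss (5)
    return jnp.mean(jax.vmap(per_problem)(thetas))

key1, key2, key3 = jax.random.split(jax.random.PRNGKey(0), 3)
w = [0.1 * jax.random.normal(key1, (16, p)), 0.1 * jax.random.normal(key2, (p, 16))]
thetas = jax.random.normal(key3, (64, p))        # sampled training parameters

grad_fn = jax.jit(jax.grad(loss))                # backprop through the iterations
for _ in range(200):                             # plain gradient descent on w
    w = [wi - 0.05 * gi for wi, gi in zip(w, grad_fn(w, thetas))]
print(loss(w, thetas))                           # training residual after k steps
```

Because the fixed-point module has no learnable weights, evaluating the trained network with a different number of steps \(t\neq k\) only requires changing the loop length at test time.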
We note that due to the inclusion of ReLU layers and projection steps in the fixed-point algorithms (_e.g._, the projection step in OSQP), there are non-differentiable mappings in the architecture. At non-differentiable points, SGD uses sub-gradients (Rockafellar and Wets, 1998) to estimate directional derivatives of the loss. By tailoring the warm-start prediction to the downstream fixed-point algorithm, our framework constitutes an end-to-end learning scheme. Testing for \(t\) steps. We now evaluate the learned model with \(t\) fixed-point iterations (\(t\) possibly different from the \(k\) used during training) on an unseen parameter \(\theta\). While we consider two different loss functions for training, we always measure the test performance on unseen problems by the fixed-point residual, since it is a standard measure of progress (Ryu and Yin, 2022, Section 2.4). To analyze the generalization of our architecture, we define the _risk_ as the following function of \(t\): \[R^{t}(w)=\mathbf{E}_{\theta\sim Q}\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(h_{w}(\theta))). \tag{8}\] Since we only access the distribution \(Q\) via \(N\) samples \(\theta_{1},\dots,\theta_{N}\), we define the _empirical risk_ over the training data as \[\hat{R}^{t}(w)=\frac{1}{N}\sum_{i=1}^{N}\ell_{\theta_{i}}^{\text{fp}}(T_{\theta_{i}}^{t}(h_{w}(\theta_{i}))). \tag{9}\] ## 4 PAC-Bayes generalization bounds In this section, we provide generalization bounds for our approach using the PAC-Bayes framework (Shawe-Taylor and Williamson, 1997; McAllester, 1998). More specifically, we provide a generalization guarantee on the risk in Equation (8) after any number of evaluation steps \(t\) (\(t\) need not be equal to the number of fixed-point steps \(k\) taken during training). First, we introduce preliminary results and definitions needed for our proofs in Section 4.1. In particular, we define the _marginal fixed-point residual_, a key ingredient of our proof technique, which measures the maximum fixed-point residual incurred by a warm start when subjected to a bounded perturbation. Then, we derive our main generalization bound result, Theorem 2, in Section 4.2. Finally, in Section 4.3, we specialize Theorem 2 to three different cases of operators: contractive, linearly convergent, and averaged. ### Preliminaries In this subsection, we introduce our marginal fixed-point residual in Equation (10) and McAllester's bound in inequality (11). Marginal fixed-point residual. We define the marginal fixed-point residual to be the worst-case fixed-point residual for a warm start subjected to a bounded perturbation: \[g_{\gamma,\theta}^{t}(z)=\max_{\|\Delta\|_{2}\leq\gamma}\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(z+\Delta)). \tag{10}\] Similarly, we define the marginal risk and marginal empirical risk in the same way as for the non-marginal case from Section 3 with \[R^{t}_{\gamma}(w)=\mathbf{E}_{\theta\sim Q}g^{t}_{\gamma,\theta}(h_{w}(\theta))\quad\text{and}\quad\hat{R}^{t}_{\gamma}(w)=\frac{1}{N}\sum_{i=1}^{N}g^{t}_{\gamma,\theta_{i}}(h_{w}(\theta_{i})).\] Setting \(\gamma=0\) recovers the original fixed-point residual and risk functions, _i.e._, \(g^{t}_{0,\theta}(z)=\ell^{\text{fp}}_{\theta}(T^{t}_{\theta}(z))\), \(\hat{R}^{t}_{0}(w)=\hat{R}^{t}(w)\), and \(R^{t}_{0}(w)=R^{t}(w)\) from Equations (8) and (9). McAllester's bound. The PAC-Bayesian framework provides generalization bounds for randomized predictors, as opposed to a single learned predictor.
Randomized predictors are obtained by sampling from a set of basic predictors according to a specific probability distribution (Alquier, 2023). This is especially useful in our setting because we can manipulate the bounds on the randomized predictors into bounds on our learned predictors. In our case, \(h_{w}\) from Equation (4) corresponds to the fixed warm-start prediction parameterized by the weights of the neural network \(w\in\mathcal{W}\), where \(\mathcal{W}\) is a set of possible weights. We aim to bound \(R^{t}(w)\), the risk after \(t\) fixed-point steps from Equation (8), in terms of empirical quantities. To do so, we consider perturbations of the neural network weights given by the random variable \(u\), whose distribution may also depend on the training data. Now, we have a distribution of predictors \(h_{w+u}\), where \(w\) is fixed and \(u\) is random. Given a prior distribution \(\pi\) over the set of predictors that is independent of the training data, the expected marginal risk of the randomized predictor \(\mathbf{E}_{u}[R^{t}_{\gamma}(w+u)]\) can be bounded as (McAllester, 2003) \[\mathbf{E}_{u}[R^{t}_{\gamma}(w+u)]\leq\mathbf{E}_{u}[\hat{R}^{t}_{\gamma}(w+u)]+2C_{\gamma}(t)\sqrt{\frac{2(\text{KL}(w+u||\pi)+\log(2N/\delta))}{N-1}}, \tag{11}\] with probability at least \(1-\delta\). Here \(\text{KL}(p||\pi)\) is the KL-divergence between the distribution \(p\) of \(w+u\) and \(\pi\), \[\text{KL}(p||\pi)=\int_{-\infty}^{\infty}p(x)\log\biggl{(}\frac{p(x)}{\pi(x)}\biggr{)}\,dx.\] The quantity \(C_{\gamma}(t)\) upper bounds the fixed-point residual after \(t\) steps, _i.e._, \[g^{t}_{\gamma,\theta}(h_{w}(\theta))\leq C_{\gamma}(t),\quad\forall\theta\in\Theta,\;w\in\mathcal{W}.\] Note that the marginal risk and empirical marginal risk lie in the range \([0,C_{\gamma}(t)]\). In order to bound \(C_{\gamma}(t)\), we will consider predictors where the distance from the warm start to the set of fixed-points is upper bounded by \(D\): \[\mathbf{dist}_{\mathbf{fix}\,T_{\theta}}(h_{w}(\theta))\leq D\quad\forall\theta\in\Theta,\;w\in\mathcal{W}. \tag{12}\] In Section 4.3, we bound \(C_{\gamma}(t)\) in terms of \(t\), \(\gamma\), \(D\), and properties of the operator \(T_{\theta}\). ### Generalization bounds In this subsection, we use the marginal fixed-point residual and the McAllester bound from Section 4.1 to bound the generalization gap. We first transform the McAllester bound in (11), which provides a generalization bound on the expected marginal risk of the randomized predictor, to a bound on the risk with the following lemma. **Lemma 1**.: _Let \(h_{w}:\Theta\to\mathbf{R}^{p}\) be any warm-start predictor learned from the training data and parametrized by \(w\), such that \(g_{\gamma/2,\theta}^{t}(h_{w}(\theta))\leq C_{\gamma/2}(t)\;\forall\theta\in\Theta\), and let \(\pi\) be any distribution that is independent of the training data. Then, for any \(\delta,\gamma>0\), with probability at least \(1-\delta\) over a training set of size \(N\) and for any random perturbation \(u\) such that \(\mathbf{P}(\max_{\theta\in\Theta}\|h_{w+u}(\theta)-h_{w}(\theta)\|_{2}\leq\gamma/2)\geq 1/2\), we have_ \[R^{t}(w)\leq\hat{R}^{t}_{\gamma}(w)+4C_{\gamma/2}(t)\sqrt{\frac{\mathrm{KL}(w+u||\pi)+\log(6N/\delta)}{N-1}}.\] See Appendix B.1 for the proof. In the above expression, \(w\) is fixed and \(u\) is a random variable. This lemma bears resemblance to Neyshabur et al. (2018, Lemma 1), and the proof is nearly identical.
Next, we use Lemma 1 to obtain generalization bounds with our main theorem. **Theorem 2**.: _Assume that \(\|\theta\|_{2}\leq B\) for all \(\theta\in\Theta\). Let \(h_{w}:\Theta\to\mathbf{R}^{p}\) be an \(L\)-layer neural network with ReLU activations where \(g_{\gamma/2,\theta}^{t}(h_{w}(\theta))\leq C_{\gamma/2}(t),\;\forall\theta\in\Theta\). Let \(\bar{h}=\max_{i}n_{i}\) be the largest number of output units in any layer and let \(c=B^{2}L^{2}\bar{h}\log(L\bar{h})\Pi_{j=1}^{L}\|W_{j}\|_{2}^{2}\sum_{i=1}^{L}\|W_{i}\|_{F}^{2}/\|W_{i}\|_{2}^{2}\). Then for any \(\delta,\gamma>0\) with probability at least \(1-\delta\) over a training set of size \(N\),_ \[R^{t}(w)\leq\hat{R}^{t}_{\gamma}(w)+\begin{cases}\mathcal{O}\!\left(C_{\gamma/2}(t)\sqrt{\frac{c+\log(\frac{LN}{\delta})}{\gamma^{2}N}}\right)&\text{if }\Pi_{j=1}^{L}\|W_{j}\|_{2}\geq\frac{\gamma}{2B}\\ C_{\gamma/2}(t)\sqrt{\frac{\log(1/\delta)}{2N}}&\text{else.}\end{cases} \tag{13}\] See Appendix B.2 for the proof. With Theorem 2, we bound the risk in terms of the empirical marginal risk and a penalty term. The main case is when the weights are sufficiently large: \(\Pi_{j=1}^{L}\|W_{j}\|_{2}\geq\gamma/(2B)\). In this case, we use the PAC-Bayesian framework to provide the generalization bound. We directly use the perturbation bound from Neyshabur et al. (2018, Lemma 2), which bounds the change in the warm start \(h_{w}(\theta)\) with respect to the change in the neural network weights \(w\). In the other case, if \(\Pi_{j=1}^{L}\|W_{j}\|_{2}\leq\gamma/(2B)\), then the warm start \(h_{w}(\theta)\) is close to the zero vector. Here, we leverage Hoeffding's inequality to get the generalization bound. As \(t\to\infty\), the generalization gap in Theorem 2 approaches zero since \(C_{\gamma/2}(t)\) goes to zero. Intuitively, this happens because the algorithm is run until convergence. On the other hand, as \(N\to\infty\), the second term in each of the cases disappears and the generalization gap becomes the difference between the marginal empirical risk and the risk for a fixed \(\gamma\). Our bound also generalizes to the setting where the warm start is not learned. Setting all of the weights to zero corresponds to warm-starting every problem from the zero vector. In this case, with high probability, \(R^{t}(0)\leq\hat{R}^{t}(0)+C_{\gamma/2}(t)\sqrt{\log(1/\delta)/(2N)}\). ### Bounding the empirical marginal risk Theorem 2 bounds the risk \(R^{t}(w)\) in terms of the empirical marginal risk \(\hat{R}^{t}_{\gamma}(w)\) plus a penalty term. In this subsection, we use operator theory to bound two quantities: i) \(\hat{R}^{t}_{\gamma}(w)\), thus removing the dependency on the marginal component, and ii) \(C_{\gamma/2}(t)\) in terms of the bound \(D\) given by Equation (12). We first assume that the operator \(T_{\theta}\) is non-expansive, which is a common characteristic of operators arising from convex problems (Ryu and Boyd, 2015). **Definition 4.1** (Non-expansive operator).: An operator \(T\) is non-expansive if \[\|Tx-Ty\|_{2}\leq\|x-y\|_{2},\quad\forall x,y\in\mathbf{dom}\,T.\] Since non-expansiveness alone is not enough to guarantee convergence, we break our analysis into three cases of convergent fixed-point operators: contractive in Section 4.3.1, linearly convergent in Section 4.3.2, and averaged in Section 4.3.3. By using the different properties of each, we can bound the marginal fixed-point residual after \(t\) steps, \(g^{t}_{\gamma,\theta}(z)\), defined in (10).
Since the empirical marginal risk is the average of these marginal fixed-point residuals, this lets us remove the dependence of Theorem 2 on the marginal empirical risk. The sets of the three different types of operators are not mutually exclusive, as seen in the set relationships depicted in Figure 3. The contractive case provides the strongest bounds, followed by the linearly convergent case, and then the averaged case. To help in the subsequent analysis, we define the following functions, which give the distance to optimality and the marginal distance to optimality: \[r_{\theta}(z)=\mathbf{dist}_{\mathbf{fix}\,T_{\theta}}(z),\quad f^{t}_{\gamma,\theta}(z)=\max_{\|\Delta\|_{2}\leq\gamma}r_{\theta}(T^{t}_{\theta}(z+\Delta)). \tag{14}\] We give the following lemma to relate the fixed-point residual to the distance to optimality. **Lemma 3**.: _For any non-expansive operator \(T_{\theta}\),_ \[\ell^{\mathrm{fp}}_{\theta}(z)\leq 2r_{\theta}(z).\] See Appendix B.3 for the proof. Figure 3: The set relationship between the different types of operators we consider in this section. #### 4.3.1 Contractive operators We first consider contractive operators, which give the strongest perturbation bounds. **Definition 4.2** (\(\beta\)-contractive operator).: An operator \(T\) is \(\beta\)-contractive for \(\beta\in(0,1)\) if \[\|Tx-Ty\|_{2}\leq\beta\|x-y\|_{2}\quad\forall x,y\in\mathbf{dom}\,T.\] If \(T_{\theta}\) is \(\beta\)-contractive, then \[g_{\gamma,\theta}^{t}(z)\leq\ell_{\theta}^{\mathrm{fp}}(T_{\theta}^{t}(z))+2\beta^{t}\gamma, \tag{15}\] which follows from \(\ell_{\theta}^{\mathrm{fp}}(T_{\theta}^{t}(\cdot))\) being \(2\beta^{t}\)-Lipschitz (Sambharya et al., 2023, Appendix A.1). In the contractive case, we remove the marginal risk dependency with the following corollary. **Corollary 4**.: _We define \(B\) and \(\bar{h}\) as in Theorem 2. Let \(T_{\theta}\) be \(\beta\)-contractive for every \(\theta\in\Theta\). Let \(h_{w}\) be an \(L\)-layer neural network with ReLU activations such that (12) holds with bound \(D\). Let \(c=B^{2}L^{2}\bar{h}\log(L\bar{h})\Pi_{j=1}^{L}\|W_{j}\|_{2}^{2}\sum_{i=1}^{L}\|W_{i}\|_{F}^{2}/\|W_{i}\|_{2}^{2}\). Then for any \(\delta,\gamma>0\) with probability \(\geq 1-\delta\) over a training set of size \(N\),_ \[R^{t}(w)\leq\hat{R}^{t}(w)+2\beta^{t}\gamma+\begin{cases}\mathcal{O}\!\left(\beta^{t}(D+\frac{\gamma}{2})\sqrt{\frac{c+\log(\frac{LN}{\delta})}{\gamma^{2}N}}\right)&\text{if }\Pi_{j=1}^{L}\|W_{j}\|_{2}\geq\frac{\gamma}{2B}\\ 2\beta^{t}(D+\frac{\gamma}{2})\sqrt{\frac{\log(1/\delta)}{2N}}&\text{else}\end{cases}\] Proof.: We remove the marginal dependence by applying inequality (15) to get \[\hat{R}_{\gamma}^{t}(w)=\frac{1}{N}\sum_{i=1}^{N}g_{\gamma,\theta_{i}}^{t}(h_{w}(\theta_{i}))\leq 2\beta^{t}\gamma+\frac{1}{N}\sum_{i=1}^{N}\ell_{\theta_{i}}^{\mathrm{fp}}(T_{\theta_{i}}^{t}(h_{w}(\theta_{i})))=\hat{R}^{t}(w)+2\beta^{t}\gamma.\] We bound the worst-case fixed-point residual as \(C_{\gamma/2}(t)\leq 2\beta^{t}(D+\gamma/2)\), which comes from \(C_{0}(t)\leq 2\beta^{t}D\) (Sambharya et al., 2023, Appendix A.1) and the inequality \[\mathbf{dist}_{\mathbf{fix}\,T_{\theta}}(h_{w}(\theta)+\Delta)\leq\|\Pi_{\mathbf{fix}\,T_{\theta}}(h_{w}(\theta))-(h_{w}(\theta)+\Delta)\|_{2}\leq\mathbf{dist}_{\mathbf{fix}\,T_{\theta}}(h_{w}(\theta))+\|\Delta\|_{2}. \tag{16}\] Here, \(\Pi_{\mathbf{fix}\,T_{\theta}}\) is the projection onto the set \(\mathbf{fix}\,T_{\theta}\).
The first inequality in (16) uses the definition of the distance function \(\mathbf{dist}_{\mathbf{fix}\,T_{\theta}}\), and the second uses the triangle inequality. #### 4.3.2 Linearly convergent operators Now, we consider a broader category of operators: linearly convergent operators. **Definition 4.3** (\(\beta\)-linearly convergent operator).: An operator \(T\) is \(\beta\)-linearly convergent for \(\beta\in[0,1)\) if \[\mathbf{dist}_{\mathbf{fix}\,T}(Tx)\leq\beta\mathbf{dist}_{\mathbf{fix}\,T}(x)\quad\forall x\in\mathbf{dom}\,T.\] If the operator \(T_{\theta}\) is not contractive, then we only get the weaker property that \(\ell_{\theta}^{\mathrm{fp}}(T_{\theta}^{t}(\cdot))\) is \(2\)-Lipschitz. To provide tighter perturbation bounds, we first establish the following lemma. **Lemma 5**.: _For any non-expansive operator \(T_{\theta}\) and for any \(t\geq 0\),_ \[|r_{\theta}(T_{\theta}^{t}(z))-r_{\theta}(T_{\theta}^{t}(w))|\leq 2\|z-w\|_{2}.\] See Appendix B.4 for the proof. We now use Lemma 5 and the linear convergence guarantee, Definition 4.3, to bound \(g_{\gamma,\theta}^{t}\) in terms of empirical quantities. **Lemma 6**.: _Assume that \(T_{\theta}\) is \(\beta\)-linearly convergent where \(\beta\in(0,1)\). Then the following bounds hold for all \(t\geq 0\):_ \[f_{\gamma,\theta}^{t}(z)\leq r_{\theta}(T_{\theta}^{t}(z))+2\gamma,\quad f_{\gamma,\theta}^{t+1}(z)\leq\beta f_{\gamma,\theta}^{t}(z)\] \[g_{\gamma,\theta}^{t}(z)\leq 2f_{\gamma,\theta}^{t}(z).\] Proof.: The inequality \(f_{\gamma,\theta}^{t+1}(z)\leq\beta f_{\gamma,\theta}^{t}(z)\) comes from \[f_{\gamma,\theta}^{t+1}(z)=r_{\theta}(T_{\theta}^{t+1}(z+\Delta^{\star}))\leq\beta r_{\theta}(T_{\theta}^{t}(z+\Delta^{\star}))\leq\beta f_{\gamma,\theta}^{t}(z),\] where \(\|\Delta^{\star}\|_{2}\leq\gamma\) is the maximizer to \(f_{\gamma,\theta}^{t+1}(z)\). The first inequality comes from Definition 4.3 and the second from (14). The first inequality in Lemma 6, \(f_{\gamma,\theta}^{t}(z)\leq r_{\theta}(T_{\theta}^{t}(z))+2\gamma\), follows from Lemma 5 since \(\|\Delta\|_{2}\leq\gamma\). The final inequality in Lemma 6 is derived as follows: \[g_{\gamma,\theta}^{t}(z)=\ell_{\theta}^{\mathrm{fp}}(T_{\theta}^{t}(z+\Delta^{\star}))\leq 2r_{\theta}(T_{\theta}^{t}(z+\Delta^{\star}))\leq 2f_{\gamma,\theta}^{t}(z).\] Here, \(\|\Delta^{\star}\|_{2}\leq\gamma\) is the maximizer for \(g_{\gamma,\theta}^{t}(z)\), and Lemma 3 gives the first inequality. Using Lemma 6, we can bound the marginal empirical risk for the linearly convergent case. For the \(\beta\)-linearly convergent case, \(C_{\gamma/2}(t)\) is bounded by \(2\beta^{t}(D+\gamma/2)\), which uses (16) and \(C_{0}(t)\leq 2r_{\theta}(T_{\theta}^{t}(z))\leq 2\beta^{t}D\). The first inequality comes from Lemma 3 and the second inequality follows from Definition 4.3. #### 4.3.3 Averaged operators Lastly, we consider the averaged operator case, which in general gives sublinear convergence. **Definition 4.4** (\(\alpha\)-averaged operator).: An operator \(T\) is \(\alpha\)-averaged for \(\alpha\in(0,1)\) if there exists a non-expansive operator \(R\) such that \(T=(1-\alpha)I+\alpha R\). **Lemma 7**.: _Let \(T_{\theta}\) be an \(\alpha\)-averaged operator. Then the following bound holds:_ \[g_{\gamma,\theta}^{t}(z)\leq\min_{j=0,\ldots,t}\sqrt{\frac{\alpha}{(1-\alpha)(t-j+1)}}(r_{\theta}(T_{\theta}^{j}(z))+2\gamma)\quad\text{ for }t\geq 0.\] Proof.: Let \(\bar{\alpha}_{t,j}=\sqrt{\frac{\alpha}{(1-\alpha)(t-j+1)}}\).
There exists \(\|\Delta^{\star}\|_{2}\leq\gamma\) such that the equality below holds by definition of the marginal fixed-point residual. \[g^{t}_{\gamma,\theta}(z)=\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(z+\Delta^{\star}))\leq\bar{\alpha}_{t,j}(r_{\theta}(T_{\theta}^{j}(z+\Delta^{\star})))\leq\bar{\alpha}_{t,j}f^{j}_{\gamma,\theta}(z)\leq\bar{\alpha}_{t,j}(r_{\theta}(T_{\theta}^{j}(z))+2\gamma) \tag{17}\] The three inequalities come from Ryu and Yin (2022, Theorem 1), the definition of \(f^{j}_{\gamma,\theta}(z)\) in (14), and Lemma 6, respectively. Equation (17) holds for all \(0\leq j\leq t\). Using Lemma 7, we can bound the marginal empirical risk for the averaged case. We bound the worst-case marginal fixed-point residual with \(C_{\gamma/2}(t)\leq\sqrt{\alpha/((1-\alpha)(t+1))}(D+\gamma)\), which follows from Lemma 7 by letting \(z=h_{w}(\theta)\) and \(j=0\). The inequality then holds for every \(\theta\in\Theta\). ## 5 Choosing the right architecture In this section, we discuss how the number of fixed-point steps \(k\) that the model is trained on and the choice of loss function affect performance. ### Bounds on the fixed-point residual for \(t\) evaluation steps In this subsection, we derive bounds on the fixed-point residual after \(t\) steps, \(\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(z))\), in terms of the loss after \(k\) steps, \(\ell_{\theta}(T_{\theta}^{k}(z))\), where \(k<t\). A summary of these results is given in Table 2, where we provide the bound for each of the two loss functions. The bounds in Table 2 with the fixed-point residual loss in the denominator follow from the definition of contractiveness in the contractive case and from non-expansiveness in the other two cases. To get the bounds in Table 2 with the regression loss in the denominator, we first establish the inequality \[\ell_{\theta}^{\text{fp}}(z)\leq 2\ell_{\theta}^{\text{reg}}(z), \tag{18}\] for any non-expansive operator \(T_{\theta}\). This result follows from Lemma 3 since \(r_{\theta}(z)\leq\ell_{\theta}^{\text{reg}}(z)\). The results in the contractive and linearly convergent cases follow from applying the respective definitions together with inequality (18). In the averaged case, we directly apply Ryu and Yin (2022, Theorem 1). Unless the operator is contractive, the results in Table 2 indicate that stronger bounds can be obtained by using the regression loss. ### Training for the fixed-point residual vs regression loss The fixed-point residual (5) and regression (6) losses align with the main distinction between learning methods mentioned in Amos (2023, Section 2.2), which splits learning strategies into those that penalize suboptimality directly and those that penalize the distance to known ground-truth solutions. The primary advantages of using our fixed-point residual loss are twofold: i) there is no need to compute a ground-truth solution \(z^{\star}(\theta)\) for each problem instance before training, and ii) the loss exactly corresponds to the evaluation metric, the fixed-point residual. On the other hand, there are two main advantages to using the regression loss: i) the regression loss uses the global information of the ground-truth solution \(z^{\star}(\theta)\), while the fixed-point residual loss exploits only local information, and ii) as mentioned in Section 5.1, stronger bounds on future iterations can be obtained when using the regression loss. ## 6 Numerical experiments We now illustrate our method on examples of fixed-point algorithms from Table 1.
We implemented our architecture in the JAX library (Bradbury et al., 2018), using the Adam optimizer (Kingma and Ba, 2015) to train. We use 10000 training problems and evaluate on 1000 test problems for all examples except the first one in Section 6.1. In our examples, we conduct a hyperparameter sweep over learning rates of either \(10^{-3}\) or \(10^{-4}\), and architectures with \(0\), \(1\), or \(2\) hidden layers with \(500\) neurons each. We decay the learning rate by a factor of \(5\) when the training loss fails to decrease over a window of \(10\) epochs. All computations were run on the Princeton HPC Della Cluster, and each example could be trained in under \(5\) hours. The code to reproduce our results is available at [https://github.com/stellatogrp/l2ws_fixed_point](https://github.com/stellatogrp/l2ws_fixed_point). Baselines. We compare our learned warm start, for both the fixed-point residual loss and the regression loss functions, against the following initialization approaches: **Cold start.**: We initialize the fixed-point algorithm for a test problem with parameter \(\theta\) with the prediction \(h_{w_{\text{cs}}}(\theta)\), where \(w_{\text{cs}}\) has been randomly initialized. **Nearest-neighbor warm start.**: The nearest-neighbor warm start initializes a test problem with an optimal solution of the nearest training problem, where distance is measured in terms of the parameter \(\theta\in\mathbf{R}^{d}\). In most of our examples, the parametrized problems are sufficiently far apart that the nearest-neighbor initializations do not significantly improve upon the cold start. \begin{table} \begin{tabular}{l c c} \hline \hline **Operator** & \(\dfrac{\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(z))}{\ell_{\theta}^{\text{fp}}(T_{\theta}^{k}(z))}\) & \(\dfrac{\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(z))}{\ell_{\theta}^{\text{reg}}(T_{\theta}^{k}(z))}\) \\ \hline \(\beta\)-contractive & \(\beta^{t-k}\) & \(2\beta^{t-k}\) \\ \(\beta\)-linearly convergent & \(1\) & \(2\beta^{t-k}\) \\ \(\alpha\)-averaged & \(1\) & \(\sqrt{\frac{\alpha}{(1-\alpha)(t-k+1)}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Bounds for the ratios of testing at \(t\) steps and training at \(k\) steps. Here, we bound the ratio of the fixed-point residual after \(t\) steps and the loss after \(k\) steps, where \(t>k\). The value in the table provides the bound, _e.g._, for a \(\beta\)-contractive operator, \(\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(z))/\ell_{\theta}^{\text{reg}}(T_{\theta}^{k}(z))\leq 2\beta^{t-k}\). In every experiment, we plot the average of the fixed-point residuals of the test problems for varying \(t\), as defined in Section 3.3. Additionally, we plot the average _gain_ of each initialization relative to the cold start across the test problems. This gain for a given parameter \(\theta\) corresponds to the ratio \[\frac{\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(h_{w_{cs}}(\theta)))}{\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(h_{w}(\theta)))},\] where \(h_{w}\) is the initialization technique in question and \(h_{w_{cs}}\) is the cold-start predictor described above. Importantly, we code exact replicas of the OSQP and SCS algorithms in JAX. This allows us to input the learned warm starts into the corresponding C implementations; moreover, we report the solve times in milliseconds to reach various tolerances for the experiments we run with OSQP in Section 6.3 and SCS in Section 6.4.
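As a concrete illustration of this evaluation protocol, the following sketch computes the average fixed-point residual curves and the per-problem gain relative to the cold start; here `T` and `warm_start` are the functions from the sketch in Section 3, while `w_learned`, `w_cs`, and `test_thetas` are hypothetical stand-ins for the trained weights, the randomly initialized cold-start weights, and the test parameters.

```python
# Sketch of the evaluation protocol: average fixed-point residuals and gains
# over test problems. Assumes T(z, theta) and warm_start(w, theta) from the
# earlier sketch, plus w_learned, w_cs, and test_thetas (all assumptions).
import jax
import jax.numpy as jnp

def residual_curve(w, theta, t_max):
    # Fixed-point residuals ||T(z^i) - z^i||_2 for iterates i = 0, ..., t_max.
    def step(z, _):
        z_next = T(z, theta)
        return z_next, jnp.linalg.norm(z_next - z)
    _, res = jax.lax.scan(step, warm_start(w, theta), None, length=t_max + 1)
    return res

t_max = 100
curves = jax.vmap(lambda th: residual_curve(w_learned, th, t_max))(test_thetas)
curves_cs = jax.vmap(lambda th: residual_curve(w_cs, th, t_max))(test_thetas)

mean_residual = jnp.mean(curves, axis=0)          # averaged residual at each t
mean_gain = jnp.mean(curves_cs / curves, axis=0)  # gain > 1: warm start helps
```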
### Gradient descent #### 6.1.1 Unconstrained QP We first consider a stylized example to illustrate why unrolling fixed-point steps can significantly improve over a decoupled approach, where \(k=0\). Consider the problem \[\text{minimize}\ \ (1/2)z^{T}Pz+c^{T}z,\] where \(P\in\mathbf{S}_{++}^{n}\) and \(c\in\mathbf{R}^{n}\) are the problem data and \(z\in\mathbf{R}^{n}\) is the decision variable. The parameter is \(\theta=c\). Numerical example. We consider a small example where \(n=20\). We use a single hidden layer with 10 neurons and 100 training problems. Let \(P\in\mathbf{S}_{++}^{n}\) be a diagonal matrix where the first 10 diagonal entries take the value 100 and the last ten take the value 1. Let \(\theta=c\in\mathbf{R}^{n}\). Each \(\theta_{i}\) is sampled according to the scaled uniform distribution \(\psi_{i}\mathcal{U}[-10,10]\), where \(\psi_{i}=10000\) if \(i\leq 10\) and \(\psi_{i}=1\) otherwise. The idea is that the first 10 indices of the optimal solution \(z^{\star}(\theta)\) vary much more than the last 10, but the first 10 indices of \(z\) converge much faster. Results. Figure 4 and Table 3 show the convergence behavior of our method. The decoupled approaches prioritize minimizing the error in predicting the first 10 indices and fail to improve on the cold start. By unrolling the gradient steps, our learning framework with \(k>0\) is able to adapt the warm start to take advantage of the downstream algorithm. These gains remain constant as the number of evaluation steps increases. ### Proximal gradient descent #### 6.2.1 Lasso We next use proximal gradient descent (see Table 1) to solve the lasso problem \[\text{minimize}\ \ \|Az-b\|_{2}^{2}+\lambda\|z\|_{1},\] where \(A\in\mathbf{R}^{m\times n}\), \(b\in\mathbf{R}^{m}\), and \(\lambda\in\mathbf{R}_{++}\) are problem data and \(z\in\mathbf{R}^{n}\) is the decision variable. The parameter here is \(\theta=b\). Numerical example. We generate \(A\in\mathbf{R}^{500\times 500}\) with i.i.d. standard Gaussian entries and pick \(\lambda=10\). We sample each \(b\) vector from the uniform distribution \(\mathcal{U}[0,30]\). Figure 4: Unconstrained QP results. All of the learned warm starts provide large improvements over the cold start and nearest-neighbor initializations except for the ones learned with \(k=0\). Table 3: Unconstrained QP. Results. Figure 5 and Table 4 show the convergence behavior of our method. While most of the learned warm starts significantly improve upon the baselines, the warm starts learned with \(k=5\) and the regression loss perform the best. ### OSQP In this subsection, we apply our learning framework to the OSQP (Stellato et al., 2020) algorithm from Table 1 to solve convex quadratic programs (QPs). We compare solve times using OSQP code written in C for our learned warm starts against the baselines. Table 5 shows the sizes of the problems we run: model predictive control of a quadcopter in Section 6.3.1 and image deblurring in Section 6.3.2. #### 6.3.1 Model predictive control of a quadcopter In our next example, we use model predictive control (MPC) (Borrelli et al., 2017) to control a quadcopter to follow a reference trajectory. The idea of MPC is to optimize over a finite horizon, but then to only implement the first control before optimizing again. Since this family of problems is sequential in nature, we add an additional baseline called the _previous-solution warm start_, where we shift the solution of the previous problem by one time index to warm-start the current problem. We model the quadcopter as a rigid body controlled by four motors, as in Song and Scaramuzza (2022). The state vector is \(x=(p,v,q)\in\mathbf{R}^{n_{x}}\) where the state size is \(n_{x}=10\).
The position vector \(p=(p_{x},p_{y},p_{z})\in\mathbf{R}^{3}\) and the velocity vector \(v=(v_{x},v_{y},v_{z})\in\mathbf{R}^{3}\) indicate the coordinates and velocities of the center of the quadcopter, respectively. The vector \(q=(q_{w},q_{x},q_{y},q_{z})\in\mathbf{R}^{4}\) is the quaternion vector indicating the orientation of the quadcopter. The inputs are \(u=(c,\omega_{x},\omega_{y},\omega_{z})\in\mathbf{R}^{n_{u}}\) where the input size is \(n_{u}=4\). The first input is the vertical thrust, and the last three are the angular velocities in the body frame. The dynamics are \[\dot{p}=v,\qquad\dot{v}=\begin{bmatrix}2(q_{w}q_{y}+q_{x}q_{z})c\\ 2(q_{y}q_{z}-q_{w}q_{x})c\\ (q_{w}^{2}-q_{x}^{2}-q_{y}^{2}+q_{z}^{2})c-g\end{bmatrix},\qquad\dot{q}=\frac{1}{2}\begin{bmatrix}-\omega_{x}q_{x}-\omega_{y}q_{y}-\omega_{z}q_{z}\\ \omega_{x}q_{w}-\omega_{y}q_{z}+\omega_{z}q_{y}\\ \omega_{x}q_{z}+\omega_{y}q_{w}-\omega_{z}q_{x}\\ -\omega_{x}q_{y}+\omega_{y}q_{x}+\omega_{z}q_{w}\end{bmatrix},\] where \(g\) is the gravitational constant. At each time step, the goal is to track a reference trajectory given by \(x^{\text{ref}}=(x^{\text{ref}}_{1},\ldots,x^{\text{ref}}_{T})\), while satisfying constraints on the states and the controls. \begin{table} \begin{tabular}{l l l} \hline \hline & Quadcopter & Image deblurring \\ \hline constraints \(m\) & 600 & 2102 \\ variables \(n\) & 550 & 802 \\ parameter size \(d\) & 44 & 784 \\ \hline \hline \end{tabular} \end{table} Table 5: Sizes of the QPs that we use OSQP to solve. We give the number of primal constraints (\(m\)), the size of the primal variable (\(n\)), and the parameter size, \(d\). We discretize the system with \(\Delta t\) and solve the QP \[\begin{array}{ll}\text{minimize}&(x_{T}-x_{T}^{\text{ref}})^{T}Q_{T}(x_{T}-x_{T}^{\text{ref}})+\sum_{t=1}^{T-1}\left((x_{t}-x_{t}^{\text{ref}})^{T}Q(x_{t}-x_{t}^{\text{ref}})+u_{t}^{T}Ru_{t}\right)\\ \text{subject to}&x_{t+1}=Ax_{t}+Bu_{t}\quad t=0,\ldots,T-1\\ &u_{\min}\leq u_{t}\leq u_{\max}\quad t=0,\ldots,T-1\\ &x_{\min}\leq x_{t}\leq x_{\max}\quad t=1,\ldots,T\\ &|u_{t+1}-u_{t}|\leq\Delta u\quad t=1,\ldots,T-1.\end{array}\] Here, the decision variables are the states \((x_{1},\ldots,x_{T})\) where \(x_{t}\in\mathbf{R}^{n_{x}}\), and the controls \((u_{1},\ldots,u_{T-1})\) where \(u_{t}\in\mathbf{R}^{n_{u}}\). The dynamics matrices \(A\) and \(B\) are determined by linearizing the dynamics around the current state \(x_{0}\) and the previous control input \(u_{0}\) (Diehl et al., 2009). The matrices \(Q,Q_{T}\in\mathbf{S}_{+}^{n_{x}}\) penalize the distance of the states to the reference trajectory \((x_{1}^{\text{ref}},\ldots,x_{T}^{\text{ref}})\). The matrix \(R\in\mathbf{S}_{++}^{n_{u}}\) regularizes the controls. The parameter is \(\theta=(x_{0},u_{0},x_{1}^{\text{ref}},\ldots,x_{T}^{\text{ref}})\in\mathbf{R}^{(T+1)n_{x}+n_{u}}\). We generate many different trajectories where the simulation length is larger than the time horizon \(T\). Numerical example. We discretize our continuous-time model with \(\Delta t=0.05\) seconds. The gravitational constant is \(g=9.8\). Each trajectory has a length of \(100\), and the horizon we consider at each timestep for each QP is \(T=10\). We use state bounds of \(x_{\max}=-x_{\min}=(1,1,1,10,10,10,1,1,1,1)\). We constrain the controls with \(u_{\max}=(20,6,6,6)\) and \(u_{\min}=(2,-6,-6,-6)\), and set \(\Delta u=(18,6,6,6)\). For each simulation, the quadcopter is initialized at \(p=v=0\) and \(q=(1,0,0,0)\).
We sample \(5\) waypoints for each of the \((x,y,z)\) coordinates for each trajectory from the uniform distribution \(\mathcal{U}[-0.5,0.5]\). Then we use a B-spline (de Boor, 1972) to smoothly interpolate between the waypoints to generate the \(100\) points of the entire trajectory. Since each reference trajectory is made up of \((x,y,z)\) coordinates rather than the full state vector, we shorten the parameter size to \(\theta\in\mathbf{R}^{n_{x}+n_{u}+3T}\). Results. Figure 6 and Table 6 show the convergence behavior of our method. While all of the warm starts learned with the regression loss deliver substantial improvements over the baselines, our method using \(k=60\) with the fixed-point residual loss stands out as the best for a larger number of steps. To simulate a strict latency requirement, we also compare, in Figure 7, various initialization techniques in a closed-loop system where only \(15\) OSQP iterations are allowed per QP. The learned warm start tracks the reference trajectory more accurately than the other two methods. Table 6: Quadcopter. Figure 6: Quadcopter results. Learned warm starts offer substantial improvements over the baselines. In particular, warm starts learned with \(k=60\) and the fixed-point residual loss have the largest gain beyond about 50 evaluation steps. #### 6.3.2 Image deblurring We formulate the image deblurring problem (Beck and Teboulle, 2009; Benvenuto et al., 2010) as \[\begin{array}{ll}\text{minimize}&\|Ax-b\|_{2}^{2}+\lambda\|x\|_{1}\\ \text{subject to}&0\leq x\leq 1.\end{array}\] Here, the matrix \(A\in\mathbf{R}^{n\times n}\) is the blur operator, which represents a two-dimensional convolutional operator. The regularization hyperparameter \(\lambda\in\mathbf{R}_{++}\) weights the \(\ell_{1}\) penalty relative to the fidelity term \(\|Ax-b\|_{2}^{2}\). The \(\ell_{1}\) penalty is used because it is less sensitive to outliers and encourages sparsity (Beck and Teboulle, 2009). The constraints ensure that the deblurred image has pixel values within its domain. Numerical example. We consider handwritten letters from the EMNIST dataset (Cohen et al., 2017). We apply a Gaussian blur of size 8 to each letter and then add i.i.d. Gaussian noise with standard deviation 0.001. The hyperparameter weighting term is \(\lambda=10^{-4}\). Results. Figure 8 and Table 7 show the convergence behavior of our method. Learned warm starts with the regression loss tend to outperform the learned warm starts with the fixed-point residual loss. We show visualizations of our method in Figure 9. Figure 7: Visualizing closed-loop MPC of flying a quadcopter to track a reference trajectory. Each row corresponds to a different unseen reference trajectory. Each column uses a different initialization scheme to track the same unseen black reference trajectory in a closed loop. Each technique is given a budget of 15 OSQP iterations to solve each QP. The learned approach, which is trained on \(k=5\) with the regression loss, tracks the trajectory well compared against the other two. For images that are particularly challenging, the image quality after 50 OSQP steps is significantly better for the learned warm start than for the baseline initializations. ### SCS In this subsection, we apply our learning framework to the SCS (O'Donoghue et al., 2019) algorithm from Table 1 to solve convex conic optimization problems. We compare solve times using SCS code written in C for our learned warm starts against the baselines.
We run our experiments on two second-order cone programs (SOCPs), robust Kalman filtering in Section 6.4.1 and robust non-negative least squares in Section 6.4.2, and on two semidefinite programs (SDPs), phase retrieval in Section 6.4.3 and sparse PCA in Section 6.4.4. #### 6.4.1 Robust Kalman filtering Kalman filtering (Kalman, 1960) is a widely used technique for predicting system states in the presence of noise in dynamic systems. In our first SOCP example, we use robust Kalman filtering (Xie and Soh, 1994), which mitigates the impact of outliers and model misspecifications on the filtering process, to track a moving vehicle from noisy location data, as in Venkataraman and Amos (2021). The dynamical system is modeled by \[x_{t+1}=Ax_{t}+Bw_{t},\quad y_{t}=Cx_{t}+v_{t},\quad\text{for}\quad t=0,1,\dots, \tag{19}\] where \(x_{t}\in\mathbf{R}^{n_{x}}\) is the state, \(y_{t}\in\mathbf{R}^{n_{o}}\) is the observation, \(w_{t}\in\mathbf{R}^{n_{u}}\) is the input, and \(v_{t}\in\mathbf{R}^{n_{o}}\) is a perturbation to the observation. The matrices \(A\in\mathbf{R}^{n_{x}\times n_{x}}\), \(B\in\mathbf{R}^{n_{x}\times n_{u}}\), and \(C\in\mathbf{R}^{n_{o}\times n_{x}}\) give the dynamics of the system. Figure 8: Image deblurring. Warm starts learned with the regression loss provide bigger gains compared with those learned with the fixed-point residual loss. Table 7: Image deblurring. Figure 9: EMNIST image deblurring. Each row corresponds to an unseen sample from the EMNIST dataset. The last three columns depict several different initialization techniques after 50 OSQP steps. In the learned column, we use the regression loss with \(k=5\). To adjust the difficulty of the images displayed, we select images corresponding to different percentiles of distance from the nearest neighbor to the optimal solution. Our goal is to recover the state \(x_{t}\) from the noisy measurements \(y_{t}\). To do so, we solve the problem \[\begin{array}{ll}\text{minimize}&\sum_{t=1}^{T-1}\left(\|w_{t}\|_{2}^{2}+\mu\psi_{\rho}(v_{t})\right)\\ \text{subject to}&x_{t+1}=Ax_{t}+Bw_{t}\quad t=0,\ldots,T-1\\ &y_{t}=Cx_{t}+v_{t}\quad t=0,\ldots,T-1.\end{array}\] Here, the Huber penalty function (Huber, 1964), parametrized by \(\rho\in\mathbf{R}_{++}\), robustifies against outliers and is given by \[\psi_{\rho}(a)=\begin{cases}\|a\|_{2}^{2}&\|a\|_{2}\leq\rho\\ 2\rho\|a\|_{2}-\rho^{2}&\|a\|_{2}\geq\rho,\end{cases}\] and \(\mu\in\mathbf{R}_{++}\) weights this penalty term. The decision variables are the \(x_{t}\)'s, \(w_{t}\)'s, and \(v_{t}\)'s. The parameters are the observed \(y_{t}\)'s, _i.e._, \(\theta=(y_{0},\ldots,y_{T-1})\). In this example, we take advantage of the rotational invariance of the problem. We rotate the noisy trajectory so that \(y_{T}\) is on the x-axis for every problem. After solving the transformed problem (for any initialization), we reverse the rotation to obtain the solution of the original problem. Numerical example. As in Venkataraman and Amos (2021), we set \(n_{x}=4\), \(n_{o}=2\), \(n_{u}=2\), \(\mu=2\), \(\rho=2\), and \(T=50\).
The dynamics matrices are \[A=\begin{bmatrix}1&0&(1-(\gamma/2)\Delta t)\Delta t&0\\ 0&1&0&(1-(\gamma/2)\Delta t)\Delta t\\ 0&0&1-\gamma\Delta t&0\\ 0&0&0&1-\gamma\Delta t\end{bmatrix},B=\begin{bmatrix}1/2\Delta t^{2}&0\\ 0&1/2\Delta t^{2}\\ \Delta t&0\\ 0&\Delta t\end{bmatrix},C=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix},\] where \(\Delta t=0.5\) and \(\gamma=0.05\) are fixed to be the sampling time and the velocity dampening parameter, respectively. \begin{table} \begin{tabular}{l l l l l} \hline \hline & Kalman filter & robust least squares & phase retrieval & sparse PCA \\ \hline constraints \(m\) & 600 & 2102 & 3480 & 4022 \\ variables \(n\) & 550 & 802 & 1600 & 2420 \\ parameter size \(d\) & 100 & 500 & 120 & 55 \\ \hline zero & 600 & 0 & 240 & 1 \\ non-negative & 550 & 800 & 0 & 3201 \\ second-order & \(100\times[3]\) & [801,501] & 0 & 0 \\ positive semidefinite & 0 & 0 & [80] & [40] \\ \hline \hline \end{tabular} \end{table} Table 8: Sizes of the conic programs that we use SCS to solve. We give the number of primal constraints (\(m\)), the size of the primal variable (\(n\)), and the parameter size, \(d\). Then, we provide the sizes of the cones for each conic program. For the second-order and positive semidefinite cones, we supply arrays specifying the lengths of each respective cone. The notation \(100\times[3]\) means that there are 100 second-order cones, each of size 3. We generate the problem instances in the following way. We generate true trajectories \(\{x_{0}^{*},\ldots,x_{T-1}^{*}\}\) of the vehicle by first letting \(x_{0}^{*}=0\). Then we sample the inputs as \(w_{t}\sim\mathcal{N}(0,0.01)\) and \(v_{t}\sim\mathcal{N}(0,0.01)\). The trajectories are then fully defined via the dynamics equations in Equation (19) with the sampled \(w_{t}\)'s and \(v_{t}\)'s. Results. Since this is a control example, we also use the shifted previous solution as a warm start, as in Section 6.3.1. Figure 10 and Table 9 show the convergence behavior of our method. In this example, the learned warm starts do well with the fixed-point residual loss for \(k=5\) and \(k=15\) and with the regression loss for \(k=5\), but hardly improve in the other cases. In all cases, the gains relative to the cold start remain nearly constant throughout the evaluation iterations. Figure 11 illustrates how our learned solution after 5 iterations outperforms the solutions returned after 5 iterations from the baselines. Table 9: Robust Kalman filtering. #### 6.4.2 Robust non-negative least squares We consider the problem of non-negative least squares with matrix uncertainty \[\min_{x\geq 0}\ \max_{\|\Delta A\|\leq\rho}\|(\hat{A}+\Delta A)x-b\|_{2},\] where the right-hand side vector \(b\in\mathbf{R}^{m}\), nominal matrix \(\hat{A}\in\mathbf{R}^{m\times n}\), and maximum perturbation \(\rho\in\mathbf{R}_{++}\) are the problem data. The decision variables of the minimizer and maximizer are \(x\in\mathbf{R}^{n}\) and \(\Delta A\in\mathbf{R}^{m\times n}\), respectively. Here, \(\|\Delta A\|\) denotes the largest singular value of the perturbation matrix \(\Delta A\). El Ghaoui and Lebret (1997) provide an SOCP formulation for this problem: \[\begin{array}{ll}\text{minimize}&u+\rho v\\ \text{subject to}&\|\hat{A}x-b\|_{2}\leq u\\ &\|x\|_{2}\leq v\\ &x\geq 0,\end{array}\] where \(x\in\mathbf{R}^{n}\), \(u\in\mathbf{R}\), and \(v\in\mathbf{R}\) are the decision variables. The parameter is \(\theta=b\).
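Before specifying the data, a quick sanity check of this reformulation may be helpful: since the inner maximum over \(\|\Delta A\|\leq\rho\) equals \(\|\hat{A}x-b\|_{2}+\rho\|x\|_{2}\), the SOCP above and the direct worst-case form should have the same optimal value. The sketch below verifies this on small random data; the use of CVXPY and the problem sizes are illustrative assumptions and are not part of our pipeline.

```python
# Sketch verifying the El Ghaoui-Lebret SOCP reformulation on random data.
# CVXPY and the small sizes here are assumptions chosen for illustration only.
import cvxpy as cp
import numpy as np

m, n, rho = 30, 20, 4.0
rng = np.random.default_rng(0)
A_hat = rng.uniform(-1.0, 1.0, (m, n))
b = rng.uniform(1.0, 2.0, m)

x = cp.Variable(n)
u, v = cp.Variable(), cp.Variable()
socp = cp.Problem(cp.Minimize(u + rho * v),
                  [cp.norm(A_hat @ x - b, 2) <= u, cp.norm(x, 2) <= v, x >= 0])
socp.solve(solver=cp.SCS)

# Direct form of the worst-case objective, min ||A_hat x - b||_2 + rho ||x||_2.
direct = cp.Problem(cp.Minimize(cp.norm(A_hat @ x - b, 2) + rho * cp.norm(x, 2)),
                    [x >= 0])
direct.solve(solver=cp.SCS)
print(socp.value, direct.value)  # the two optimal values should agree
```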
Numerical example. We pick \(\rho=4\) and \(\hat{A}\in\mathbf{R}^{500\times 800}\), where the entries of \(\hat{A}\) are sampled from the uniform distribution \(\mathcal{U}[-1,1]\). We sample \(b\) in an i.i.d. fashion from \(\mathcal{U}[1,2]\). Results. Figure 12 and Table 10 show the convergence behavior of our method. The learned warm starts with positive \(k\) substantially improve upon the baselines for both losses. Figure 12 shows linear convergence of our method; as a result, the gains from the learned warm starts stay roughly constant as the number of evaluation steps increases. Figure 10: Robust Kalman filtering. The learned warm starts that train with \(k=5\) for both losses and with \(k=15\) for the fixed-point residual loss have significant gains over the baselines. Figure 11: Visualizing test problems for robust Kalman filtering. Each plot is a separate test problem. The noisy, observed trajectory is shown as red points, which serve as problem data for the SOCP. The robust Kalman filtering recovery, the optimal solution of the SOCP, is shown as green dots. After 5 iterations, SCS with our learned warm start using the regression loss with \(k=5\) is very close to the optimal solution, while SCS initialized with both the shifted previous solution and the nearest neighbor is still noticeably far from optimality. Figure 12: Robust non-negative least squares. All of the learned warm starts apart from the ones with \(k=0\) substantially improve the gain over the cold start. Table 10: Robust non-negative least squares. #### 6.4.3 Phase retrieval Our first SDP example is the problem of phase retrieval (Fienup, 1982), where the goal is to recover an unknown signal \(x\in\mathbf{C}^{n}\) from observations. This problem has applications in X-ray crystallography (Millane, 1990) and coherent diffractive imaging (Shechtman et al., 2015). Specifically, for known vectors \(a_{i}\in\mathbf{C}^{n}\), we have \(m\) scalar measurements: \(b_{i}=|\langle a_{i},x\rangle|^{2},\quad i=1,\ldots,m\). Since the values are complex, we denote the conjugate transpose of a matrix \(A\) by \(A^{*}\). Noting that \(|\langle a_{i},x\rangle|^{2}=(a_{i}^{*}x)(x^{*}a_{i})\), we introduce a matrix variable \(X\in\mathbf{S}_{+}^{n}\) and matrices \(A_{i}=a_{i}a_{i}^{*}\). The exact phase retrieval problem becomes a feasibility problem over the matrix variable with a rank constraint: \[\begin{array}{ll}\text{find}&X\\ \text{subject to}&\mathbf{tr}(A_{i}X)=b_{i},\quad i=1,\ldots,m\\ &\mathbf{rank}(X)=1,\quad X\succeq 0.\end{array}\] We arrive at the following SDP relaxation by dropping the rank constraint: \[\begin{array}{ll}\text{minimize}&\mathbf{tr}(X)\\ \text{subject to}&\mathbf{tr}(A_{i}X)=b_{i},\quad i=1,\ldots,m\\ &X\succeq 0.\end{array}\] To parameterize each problem, we let \(\theta=b\in\mathbf{R}^{m}\). Numerical example. For the signal, we sample \(x\) from a complex normal distribution, _i.e._, we sample the real and imaginary parts of each component independently from \(\mathcal{N}(\mu,\sigma^{2})\). To construct the constraint matrices, we use the coded diffraction pattern model (Candes et al., 2015). The specific modulating waveforms follow the setup from Yurtsever et al. (2021, Section F.1). For a signal of size \(n\), we generate \(d=3n\) measurements. Specifically, we draw 3 independent modulating waveforms \(\psi_{j}\in\mathbf{C}^{n},\ j=1,\ldots,3\).
Each component of \(\psi_{j}\) is the product of two random variables, with one drawn uniformly from \(\{1,i,-1,-i\}\) and the other drawn from \(\{\sqrt{2}/2,\sqrt{3}\}\) with probabilities 0.8 and 0.2, respectively. Then, each \(a_{i}\) corresponds to computing a single entry of the Fourier transform of \(x\) after being modulated by the waveforms. Letting \(W\) be the \(n\times n\) discrete Fourier transform matrix, the \(a_{i}\)'s can be written explicitly as \(a_{(j-1)n+l}=W_{l}^{T}(\mathbf{diag}(\psi_{j}))^{\star}\), where \(W_{l}^{T}\) is the \(l\)-th row of \(W\). We take \(n=40\), \(\mu=5\), and \(\sigma=1\). Results. Figure 13 and Table 11 show the convergence behavior of our method. In this case, while the decoupled approach with \(k=0\) offers the largest gains over the first few iterations, the gain degrades as \(t\) increases, to the point where its performance becomes worse than that of the nearest-neighbor initialization. The learned warm starts with the regression loss for positive \(k\) tend to sustain their gains for larger values of \(t\) compared with the learned warm starts that use the fixed-point residual loss. #### 6.4.4 Sparse PCA Next, we examine the problem of sparse PCA (Zou et al., 2006). Unlike standard PCA (Jolliffe, 2005), which typically finds principal components that depend on all observed variables, sparse PCA identifies principal components that rely on only a small subset of the variables. The sparse PCA problem is \[\begin{array}{ll}\text{maximize}&x^{T}Ax\\ \text{subject to}&\|x\|_{2}\leq 1,\quad\mathbf{card}(x)\leq c,\end{array} \tag{20}\] where \(x\in\mathbf{R}^{n}\) is the decision variable and \(\mathbf{card}(x)\) is the number of nonzero entries of the vector \(x\). The covariance matrix \(A\in\mathbf{S}_{+}^{n}\) and the desired cardinality \(c\in\mathbf{R}_{+}\) are problem data. We consider an SDP relaxation of the non-convex problem (20), which takes the form \[\begin{array}{ll}\text{maximize}&\mathbf{tr}(AX)\\ \text{subject to}&\mathbf{tr}(X)=1\\ &\mathbf{1}^{T}|X|\mathbf{1}\leq c\\ &X\succeq 0,\end{array} \tag{21}\] where the decision variable is \(X\in\mathbf{S}_{+}^{n}\). We use an \(r\)-factor model (Boyd et al., 2017) and set \(A=F\Sigma F^{T}\), where \(F\in\mathbf{R}^{n\times r}\) is the factor loading matrix and \(\Sigma\in\mathbf{S}_{+}^{r}\) is a matrix that holds the factor scores. The parameter is \(\theta=\mathbf{vec}(\Sigma)\). Figure 13: Phase retrieval results. Other than in the \(k=0\) case, the improvements of the learned warm starts with the regression loss are maintained for many evaluation steps. Numerical example. We run our experiments with a matrix size of \(n=40\), a factor size of \(r=10\), and a cardinality size of \(c=10\). To generate the covariance matrices, we first generate a random nominal matrix \(A_{0}\), whose entries are sampled as i.i.d. standard Gaussians. We then take the singular value decomposition \(A_{0}=U\Sigma_{0}U^{T}\), and let the shared factor loading matrix \(F\in\mathbf{R}^{n\times r}\) be the first \(r\) columns of \(U\). Let \(B_{0}\in\mathbf{S}_{+}^{r}\) be the diagonal matrix found by taking the square root of the first \(r\) singular values of \(A_{0}\). Then, for each problem, we take \(\Sigma=BB^{T}\) where \(B=\Delta+B_{0}\). Here, the elements of \(\Delta\in\mathbf{R}^{r\times r}\) are sampled i.i.d. from the uniform distribution \(\mathcal{U}[-0.1,0.1]\). Results. Figure 14 and Table 12 show the convergence behavior of our method. In this example, both the fixed-point residual loss and the regression loss perform poorly with \(k=0\).
All of the other learned warm starts with the regression loss, and some with the fixed-point residual loss, show good performance. \begin{table} \begin{tabular}{l l l l l l l l l l l l l} \multicolumn{13}{c}{(a) Mean iterations to reach a given fixed-point residual (Fp res.)} \\ \multirow{2}{*}{Fp res.} & Cold & Nearest & Fp & Fp & Fp & Fp & Fp & Reg & Reg & Reg & Reg & Reg \\ & Start & Neighbor & \(k=0\) & \(k=5\) & \(k=15\) & \(k=30\) & \(k=60\) & \(k=0\) & \(k=5\) & \(k=15\) & \(k=30\) & \(k=60\) \\ \hline 0.1 & 26 & 1 & **0** & 2 & 8 & 10 & 10 & **0** & 5 & 8 & 12 & 17 \\ 0.01 & 122 & 62 & 262 & 92 & 70 & **33** & 38 & 262 & **33** & 37 & 44 & 55 \\ 0.001 & 338 & 269 & 491 & 313 & 289 & 169 & 160 & 490 & **145** & 151 & 154 & 172 \\ 0.0001 & 982 & 822 & 1000 & 881 & 935 & 766 & 738 & 1000 & **681** & 698 & 689 & 709 \\ \hline \multicolumn{13}{c}{(b) Mean reduction in iterations from a cold start to a given fixed-point residual (Fp res.)} \\ \multirow{2}{*}{Fp res.} & Cold & Nearest & Fp & Fp & Fp & Fp & Fp & Reg & Reg & Reg & Reg & Reg \\ & Start & Neighbor & \(k=0\) & \(k=5\) & \(k=15\) & \(k=30\) & \(k=60\) & \(k=0\) & \(k=5\) & \(k=15\) & \(k=30\) & \(k=60\) \\ \hline 0.1 & 0 & 0.96 & **1.0** & 0.92 & 0.69 & 0.62 & 0.62 & 1.0 & 0.81 & 0.69 & 0.54 & 0.35 \\ 0.01 & 0 & 0.49 & -1.15 & 0.25 & 0.43 & **0.73** & 0.69 & -1.15 & **0.73** & 0.7 & 0.64 & 0.55 \\ 0.001 & 0 & 0.2 & -0.45 & 0.07 & 0.14 & 0.5 & 0.53 & -0.45 & **0.57** & 0.55 & 0.54 & 0.49 \\ 0.0001 & 0 & 0.16 & -0.02 & 0.1 & 0.05 & 0.22 & 0.25 & -0.02 & **0.31** & 0.29 & 0.3 & 0.28 \\ \hline \multicolumn{13}{c}{(c) Mean solve times (in milliseconds) in SCS with absolute and relative tolerances set to tol.} \\ \multirow{2}{*}{tol.} & Cold & Nearest & Fp & Fp & Fp & Fp & Fp & Reg & Reg & Reg & Reg & Reg \\ & Start & Neighbor & \(k=0\) & \(k=5\) & \(k=15\) & \(k=30\) & \(k=60\) & \(k=0\) & \(k=5\) & \(k=15\) & \(k=30\) & \(k=60\) \\ \hline 0.1 & 9.66 & 0.59 & **0.65** & 10.62 & 8.35 & 8.22 & 8.22 & 0.73 & 10.06 & 8.27 & 8.25 & 8.4 \\ 0.01 & 26.34 & 9.63 & 97.97 & 14.31 & 8.81 & 8.25 & **8.22** & 88.35 & 10.09 & 8.25 & 8.3 & 9.94 \\ 0.001 & 67.88 & 45.91 & 121.94 & 71.17 & 43.23 & 17.39 & 17.23 & 110.4 & 20.09 & **16.92** & 18.56 & 25.31 \\ 0.0001 & 136.21 & 107.61 & 187.43 & 140.64 & 100.76 & 66.36 & 63.95 & 170.05 & 73.65 & **61.11** & 62.77 & 70.46 \\ 1e-05 & 255.52 & 204.37 & 309.51 & 264.94 & 201.32 & 156.11 & 151.28 & 279.27 & 174.63 & **144.53** & 146.15 & 153.85 \\ \hline \end{tabular} \end{table} Table 12: Sparse PCA. ## Appendix A Examples of fixed-point algorithms Gradient descent. Here, \(z\in\mathbf{R}^{n}\) is the decision variable and \(f_{\theta}\) is a convex and \(L\)-smooth function. Recall that \(f:\mathbf{R}^{n}\rightarrow\mathbf{R}\) is \(L\)-smooth if \(\|\nabla f(x)-\nabla f(y)\|_{2}\leq L\|x-y\|_{2}\quad\forall x,y\in\mathbf{R}^{n}\). If \(\alpha\in(0,2/L)\), then the iterates of gradient descent are guaranteed to converge to an optimal solution (Ryu and Boyd, 2015). If \(f_{\theta}\) is strongly convex, then the fixed-point operator is a contraction (Ryu and Boyd, 2015). Proximal gradient descent. Here, \(z\in\mathbf{R}^{n}\) is the decision variable, \(f_{\theta}\) is a convex and \(L\)-smooth function, and \(g_{\theta}\) is a convex but possibly non-smooth function. The iterations of proximal gradient descent converge to a solution if \(\alpha\in(0,2/L)\) (Parikh and Boyd, 2014). Alternating direction method of multipliers (ADMM). Here, \(u\in\mathbf{R}^{n}\) is the decision variable and \(f_{\theta}\) and \(g_{\theta}\) are closed, convex, proper, and possibly non-smooth functions.
The iterations of ADMM generate a sequence of iterates in which both \(\tilde{u}^{i}\) and \(u^{i}\) converge to each other and to a solution of the problem. The variable \(z\in\mathbf{R}^{n}\) serves as the associated dual variable. We use the equivalence of ADMM to Douglas-Rachford splitting (Gabay, 1983) and write the Douglas-Rachford splitting iterations in Table 1. While the fixed-point operator associated with ADMM is averaged (Ryu and Boyd, 2015), ADMM is known to converge linearly under certain conditions (Eckstein, 1989; Giselsson and Boyd, 2017). OSQP. The operator splitting quadratic program (OSQP) solver (Stellato et al., 2020) is based on ADMM. Here, \(P\in\mathbf{S}_{+}^{n}\), \(A\in\mathbf{R}^{m\times n}\), \(c\in\mathbf{R}^{n}\), \(l\in\mathbf{R}^{m}\), and \(u\in\mathbf{R}^{m}\) are problem data, and \(\Pi_{[l,u]}\) is the projection onto the box \([l,u]\). The decision variable is \(x\in\mathbf{R}^{n}\). While the algorithm uses \(x\), \(w\), and \(y\) variables, the fixed-point operator can be represented as an operator on a smaller vector, as shown in Banjac et al. (2019). Figure 14: Sparse PCA. The learned warm starts with positive \(k\) that use the regression loss provide large gains. SCS. The splitting conic solver (SCS) (O'Donoghue, 2021) is also based on ADMM. Here, \(P\in\mathbf{S}_{+}^{n}\), \(A\in\mathbf{R}^{m\times n}\), \(c\in\mathbf{R}^{n}\), and \(b\in\mathbf{R}^{m}\) are problem data, and \(\Pi_{\mathcal{C}}\) is the projection onto the cone \(\mathcal{C}\). The decision variables are \(x\in\mathbf{R}^{n}\) and \(s\in\mathbf{R}^{m}\). For simplicity, Table 1 includes the simplified version of the SCS algorithm without the homogeneous self-dual embedding. The SCS algorithm that we use in the numerical experiments in Section 6.4 is based on the homogeneous self-dual embedding; see O'Donoghue (2021) for the details. As in Venkataraman and Amos (2021), our implementation normalizes the fixed-point residual by the \(\tau\) scaling factor to ensure that the fixed-point residual is not artificially small. ## Appendix B Proofs ### Proof of Lemma 1 Let \(w^{\prime}=w+u\) and let \(S_{w}\) be a set of perturbed weights \(w^{\prime}\) satisfying \[S_{w}\subset\left\{w^{\prime}\mid\max_{\theta\in\Theta}\|h_{w^{\prime}}(\theta)-h_{w}(\theta)\|_{2}\leq\gamma/2\right\}.\] Let \(q\) be the probability density function of \(w^{\prime}\). We construct a new distribution \(\tilde{Q}\) over predictors \(h_{\tilde{w}}\), where \(\tilde{w}\) is restricted to \(S_{w}\), with probability density function \(\tilde{q}(\tilde{w})=(1/Z)q(\tilde{w})\) if \(\tilde{w}\in S_{w}\) and \(\tilde{q}(\tilde{w})=0\) otherwise, where \(Z\) is a normalizing constant. By the assumption of the lemma, \(Z=\mathbf{P}(w^{\prime}\in S_{w})\geq 1/2\). By the definition of \(\tilde{Q}\), we have \[\max_{\theta\in\Theta}\|h_{\tilde{w}}(\theta)-h_{w}(\theta)\|_{2}\leq\gamma/2.\] Therefore, \(\ell_{\theta}^{\text{fp}}(T_{\theta}^{t}(h_{w}(\theta)))\leq g_{\gamma/2,\theta}^{t}(h_{\tilde{w}}(\theta))\leq g_{\gamma,\theta}^{t}(h_{w}(\theta))\) almost surely for every \(\theta\in\Theta\). Hence, for every \(\tilde{w}\) drawn from \(\tilde{Q}\), almost surely, \[R^{t}(w)\leq R_{\gamma/2}^{t}(\tilde{w}),\quad\hat{R}_{\gamma/2}^{t}(\tilde{w})\leq\hat{R}_{\gamma}^{t}(w).
\tag{22}\] Now using these two inequalities and the PAC-Bayes theorem, we get \[R^{t}(w) \leq\mathbf{E}_{\tilde{w}}[R_{\gamma/2}^{t}(\tilde{w})]\] \[\leq\mathbf{E}_{\tilde{w}}[\hat{R}_{\gamma/2}^{t}(\tilde{w})]+2C_{\gamma/2}(t)\sqrt{(2\text{KL}(\tilde{w}||\pi)+\log(2N/\delta))/(N-1)}\] \[\leq\hat{R}_{\gamma}^{t}(w)+2C_{\gamma/2}(t)\sqrt{(2\text{KL}(\tilde{w}||\pi)+\log(2N/\delta))/(N-1)}\] \[\leq\hat{R}_{\gamma}^{t}(w)+4C_{\gamma/2}(t)\sqrt{(2\text{KL}(w^{\prime}||\pi)+\log(6N/\delta))/(N-1)}.\] The first and third inequalities come from (22), and the second inequality follows from (11). The last inequality comes from the following calculation, which we repeat from Neyshabur et al. (2018, Section 4). Let \(S_{w}^{c}\) denote the complement of \(S_{w}\) and \(\tilde{q}^{c}\) denote the density function \(q\) restricted to \(S_{w}^{c}\) and normalized. Then we get \[\text{KL}(q||p)=Z\text{KL}(\tilde{q}||p)+(1-Z)\text{KL}(\tilde{q}^{c}||p)-H(Z),\] where \(H(Z)=-Z\log Z-(1-Z)\log(1-Z)\) is the binary entropy function. Since the KL-divergence is always positive, \[\text{KL}(\tilde{q}||p)=[\text{KL}(q||p)+H(Z)-(1-Z)\text{KL}(\tilde{q}^{c}||p)]/Z\leq 2(\text{KL}(q||p)+1).\] Using the additive properties of logarithms, \(1+\log(2N/\delta)\leq\log(6N/\delta)\). ### Proof of Theorem 2 Our proof follows a similar structure to the proof of Neyshabur et al. (2018, Theorem 1). Let \(\zeta=(\Pi_{i=1}^{L}\|W_{i}\|_{2})^{1/L}\) and consider a neural network with weights \(\tilde{W}_{i}=\zeta W_{i}/\|W_{i}\|_{2}\). Due to the homogeneity of the ReLU, we have \(h_{w}(\theta)=h_{\tilde{w}}(\theta)\) for all \(\theta\in\Theta\) (Neyshabur et al., 2018). Since \((\Pi_{i=1}^{L}\|W_{i}\|_{2})^{1/L}=(\Pi_{i=1}^{L}\|\tilde{W}_{i}\|_{2})^{1/L}\) and \(\|W_{i}\|_{F}/\|W_{i}\|_{2}=\|\tilde{W}_{i}\|_{F}/\|\tilde{W}_{i}\|_{2}\), inequality (13) is the same for \(w\) and \(\tilde{w}\). Therefore, it is sufficient to prove the theorem only for the normalized weights \(\tilde{w}\), and we can assume that the spectral norm of the weight matrices is equal across all layers, _i.e._, \(\|W_{i}\|_{2}=\zeta\). Now, we break our proof into two cases depending on the product of the spectral norms of the weight matrices. The main difference between our proof and the proof of Neyshabur et al. (2018, Theorem 1) is that we introduce a secondary case; the main-case analysis is similar. Main case. In the main case, \(\zeta^{L}\geq\gamma/(2B)\). We choose the prior distribution \(\pi\) to be \(\mathcal{N}(0,\sigma^{2})\) and consider the perturbation \(u\sim\mathcal{N}(0,\sigma^{2})\). As in Neyshabur et al. (2018), since the prior distribution \(\pi\) cannot depend on \(\zeta\), we consider predetermined values of \(\tilde{\zeta}\) on a grid and then take a union bound. For now, we consider \(\tilde{\zeta}\) fixed and consider all \(\zeta\) such that \(|\zeta-\tilde{\zeta}|\leq\zeta/L\). This ensures that each relevant value of \(\zeta\) is covered by some \(\tilde{\zeta}\) on the grid. Since \(|\zeta-\tilde{\zeta}|\leq\zeta/L\), we get the inequalities \[\zeta^{L-1}/e\leq\tilde{\zeta}^{L-1}\leq e\zeta^{L-1}. \tag{23}\] This follows from the inequalities \((1+1/x)^{x-1}\leq e\) and \(1/e\leq(1-1/x)^{x-1}\), which themselves are consequences of the inequality \(1+y\leq e^{y}\) for all \(y\).
Since the entries of each \(U_{i}\) are drawn from \(\mathcal{N}(0,\sigma^{2})\), we have the following bound on the spectral norm of each \(U_{i}\) (Tropp, 2011): \[\mathbf{P}_{U_{i}\sim\mathcal{N}(0,\sigma^{2})}(\|U_{i}\|_{2}>t)\leq 2\bar{h}e^{ -t^{2}/(2\bar{h}\sigma^{2})}.\] We can take a union bound to get \[\mathbf{P}_{U_{1},\ldots,U_{L}\sim\mathcal{N}(0,\sigma^{2})}(\|U_{1}\|_{2}\leq t,\ldots,\|U_{L}\|_{2}\leq t)\geq 1-2L\bar{h}e^{-t^{2}/(2\bar{h}\sigma^{2})}. \tag{24}\] By setting the right hand side of (24) to \(1/2\), we establish that with probability at least \(1/2\), the spectral norm of every perturbation \(U_{i}\) is bounded by \(\sigma\sqrt{2\bar{h}\log(4L\bar{h})}\) simultaneously. We choose \(\sigma=\gamma/(21LB\tilde{\zeta}^{L-1}\sqrt{\bar{h}\log(4\bar{h}L)})\) and now verify that with probability at least \(1/2\), \(\|U_{i}\|_{2}\leq\|W_{i}\|_{2}/L=\zeta/L\) holds, a condition of Neyshabur et al. (2018, Lemma 2): \[\|U_{i}\|_{2} \leq\sigma\sqrt{2\bar{h}\log(4L\bar{h})}=\gamma\sqrt{2}/(21LB \tilde{\zeta}^{L-1})\] \[\leq 2\sqrt{2}e\gamma/(42LB\zeta^{L-1})\leq 2\sqrt{2}e\zeta/(21L) \leq\zeta/L.\] In the first line, the inequality comes from the perturbation bound on \(\|U_{i}\|_{2}\), and the equality follows from plugging in \(\sigma\). The second line follows from (23) and the main-case assumption that \(\zeta^{L}\geq\gamma/(2B)\). Now that the conditions are met, we apply Neyshabur et al. (2018, Lemma 2). The following holds with probability at least \(1/2\): \[\max_{\theta\in\Theta}\|h_{w}(\theta)-h_{w+u}(\theta)\|_{2} \leq eB\zeta^{L-1}\sum_{i=1}^{L}\|U_{i}\|_{2}\] \[\leq e^{2}LB\tilde{\zeta}^{L-1}\sigma\sqrt{2\bar{h}\log(4L \bar{h})}\leq\gamma/2.\] In the second inequality, we use (23). The last inequality follows from the choice of \(\sigma\). Now we calculate the KL term with \(\pi\sim\mathcal{N}(0,\sigma^{2})\) and \(u\) chosen with the above value of \(\sigma\): \[\mathrm{KL}(w+u||\pi) \leq\tfrac{\|w\|_{2}^{2}}{2\sigma^{2}}=\tfrac{21^{2}L^{2}B^{2} \tilde{\zeta}^{2L-2}\bar{h}\log(4\bar{h}L)}{2\gamma^{2}}\sum_{i=1}^{L}\|W _{i}\|_{F}^{2}\] \[\leq\tfrac{21^{2}e^{2}\zeta^{2L}}{2\gamma^{2}}B^{2}L^{2}\bar{h}\log(4 L\bar{h})\sum_{i=1}^{L}\tfrac{\|W_{i}\|_{F}^{2}}{\tilde{\zeta}^{2}}. \tag{25}\] What remains is to take a union bound over the different choices of \(\tilde{\zeta}\). We only need to consider values of \(\zeta\) in the range \[(\gamma/(2B))^{1/L}\leq\zeta\leq(\gamma\sqrt{N}/(2B))^{1/L}. \tag{26}\] Since we are in the main case, we do not have to consider \(\zeta^{L}<\gamma/(2B)\). Alternatively, if \(\zeta^{L}>\gamma\sqrt{N}/(2B)\), then the upper bound on the KL term in (25) is greater than \(N\). To see this, first note that the Frobenius norm is always at least the operator norm of a given matrix, so \(\|W_{i}\|_{F}\geq\zeta\) for \(i=1,\dots,L\). Then, the right hand side of (25) becomes at least \(21^{2}L^{2}\bar{h}\log(4L\bar{h})N/8\), which is greater than \(N\). Theorem 2 is obtained by using the bound on the right hand side of (25) for the KL term in Lemma 1; in this regime Theorem 2 therefore holds trivially, since \(C_{\gamma/2}(t)\) upper bounds \(R^{t}(w)\) and the entire square root term in Lemma 1 is at least one. Hence, we only need to consider \(\zeta\) in the range of (26). The condition \(L|\tilde{\zeta}-\zeta|\leq(\gamma/(2B))^{1/L}\) is sufficient to satisfy the required condition that \(|\tilde{\zeta}-\zeta|\leq\zeta/L\), since \(\zeta^{L}\geq\gamma/(2B)\).
For each \(\tilde{\zeta}\) that we pick, we consider \(\zeta\) within a distance of \((\gamma/(2B))^{1/L}/L\). We need to pick enough \(\tilde{\zeta}\)'s to cover the whole region in (26). Picking a cover of size \(LN^{\frac{1}{2L}}\) satisfies this condition, since \[\frac{(\tfrac{\gamma\sqrt{N}}{2B})^{1/L}-(\tfrac{\gamma}{2B})^{1/L}}{\tfrac{1 }{L}(\tfrac{\gamma}{2B})^{1/L}}=L(N^{1/(2L)}-1).\] Therefore, by using Lemma 1, with probability at most \(\tilde{\delta}\), for all \(\tilde{w}\) such that \(|\zeta-\tilde{\zeta}|\leq\zeta/L\), the following bound is violated: \[R^{t}(w)\leq\hat{R}^{t}_{\gamma}(\tilde{w})+\mathcal{O}\left(\sqrt{\tfrac{B^{2 }L^{2}\log(L\bar{h})\Pi_{j=1}^{L}\|\tilde{W}_{j}\|_{2}^{2}\sum_{i=1}^{L}\tfrac {\|\tilde{W}_{i}\|_{F}^{2}}{\|\tilde{W}_{i}\|_{2}^{2}}+\log(\tfrac{N}{\tilde{\delta}} )}{\gamma^{2}N}}\right).\] By applying the union bound over the cover, with probability at most \(\tilde{\delta}LN^{1/(2L)}\), the same bound is violated for at least one of the \(\tilde{\zeta}\)'s in the cover. Setting \(\delta=\tilde{\delta}LN^{1/(2L)}\) and recalling that the proof generalizes from the normalized weights \(\tilde{w}\) to the weights \(w\) gives the final result. Secondary case. In this case, \(\|h_{w}(\theta)\|_{2}\leq B(\Pi_{i=1}^{L}\|W_{i}\|_{2})\leq\gamma/2\). We get the following: \[R^{t}(w) \leq R^{t}_{\gamma/2}(0)\] \[\leq\hat{R}^{t}_{\gamma/2}(0)+C_{\gamma/2}(t)\sqrt{\log(1/\delta)/ (2N)}\quad\text{w.p. at least $1-\delta$}\] \[\leq\hat{R}^{t}_{\gamma}(w)+C_{\gamma/2}(t)\sqrt{\log(1/\delta)/ (2N)}\quad\text{w.p. at least $1-\delta$}.\] The first and third lines come from \(\|h_{w}(\theta)\|_{2}\leq\gamma/2\) and the definition of the marginal fixed-point residual. The second line uses Hoeffding's inequality as in Alquier (2023, Equation 1.3), which is permissible since the prediction is the zero vector and is therefore independent of the data. ### Proof of Lemma 3 First, let \(z^{\star}(\theta)\) be the nearest fixed point of the operator \(T_{\theta}\) to \(z\), so that \(r_{\theta}(z)=\|z-z^{\star}(\theta)\|_{2}\). Then \[\ell_{\theta}^{\text{fp}}(z)=\|T_{\theta}(z)-z\|_{2}\leq\|T_{\theta}(z)-z^{ \star}(\theta)\|_{2}+\|z-z^{\star}(\theta)\|_{2}\leq 2r_{\theta}(z).\] The first inequality uses the triangle inequality. The second inequality uses the non-expansiveness of \(T_{\theta}\) together with \(T_{\theta}(z^{\star}(\theta))=z^{\star}(\theta)\), so that \(\|T_{\theta}(z)-z^{\star}(\theta)\|_{2}\leq\|z-z^{\star}(\theta)\|_{2}\). ### Proof of Lemma 5 \[|r_{\theta}(T^{t}_{\theta}(z))-r_{\theta}(T^{t}_{\theta}(w))| =\big|\|T^{t}_{\theta}(z)-\Pi_{\mathbf{fix}\,T_{\theta}}(T^{t}_{ \theta}(z))\|_{2}-\|T^{t}_{\theta}(w)-\Pi_{\mathbf{fix}\,T_{\theta}}(T^{t}_{ \theta}(w))\|_{2}\big|\] \[\leq\|T^{t}_{\theta}(z)-\Pi_{\mathbf{fix}\,T_{\theta}}(T^{t}_{ \theta}(z))+\Pi_{\mathbf{fix}\,T_{\theta}}(T^{t}_{\theta}(w))-T^{t}_{\theta}(w )\|_{2}\] \[\leq\|T^{t}_{\theta}(z)-T^{t}_{\theta}(w)\|_{2}+\|\Pi_{\mathbf{fix }\,T_{\theta}}(T^{t}_{\theta}(z))-\Pi_{\mathbf{fix}\,T_{\theta}}(T^{t}_{ \theta}(w))\|_{2}\] \[\leq 2\|T^{t}_{\theta}(z)-T^{t}_{\theta}(w)\|_{2}\leq 2\|z-w\|_{2}.\] The first two inequalities use the reverse triangle inequality and the triangle inequality. Since \(T_{\theta}\) is non-expansive, \(\mathbf{fix}\,T_{\theta}\) is a convex set (Ryu and Boyd, 2015, Section 2.4.1). The third inequality follows since the projection onto a convex set is non-expansive (Ryu and Boyd, 2015, Section 3.1). In the last inequality, we use the non-expansiveness of \(T_{\theta}\).
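To make Lemma 3 concrete, the following is a minimal numerical sketch (ours, not part of the paper): we take a simple non-expansive operator, the projection onto a box, whose fixed-point set is the box itself, and check that the fixed-point residual never exceeds twice the distance to the nearest fixed point.

```python
import numpy as np

def T(z):
    # Non-expansive operator: projection onto the box [0, 1]^2.
    # Its fixed-point set is the box itself.
    return np.clip(z, 0.0, 1.0)

rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.normal(scale=3.0, size=2)
    fp_residual = np.linalg.norm(T(z) - z)           # ell^fp(z) = ||T(z) - z||_2
    dist = np.linalg.norm(np.clip(z, 0.0, 1.0) - z)  # r(z): distance to fix T
    assert fp_residual <= 2 * dist + 1e-12           # Lemma 3: ||T(z) - z|| <= 2 r(z)
```

For a projection the residual and the distance coincide, so the factor of two in Lemma 3 is loose here; the bound is what matters for general non-expansive operators such as the ADMM fixed-point maps above.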
2309.13840
Constraining the Charge of a Black Hole with Electromagnetic Radiation from a Black Hole-Neutron Star System
Black hole-neutron star (BH-NS) mergers are expected to emit gravitational-wave (GW) and electromagnetic (EM) counterparts when the NS is tidally disrupted or plunges into the BH. Recently, GW200105 and GW200115 were claimed to originate from BH-NS mergers, although GW200105 remains under debate. Several optical source candidates have been reported as possibly associated with the two GW events, but none has been confirmed. In this work, we assume that the BH is charged (the NS is naturally charged) and try to constrain the charge of the BH using the possibly associated EM emission from the charged BH-NS system during the inspiral regime. We adopt electric and magnetic dipole radiation for the binaries, which powers a Poynting-flux-dominated outflow that accelerates electrons; the observed EM radiation is then produced via synchrotron radiation. We find that the conversion efficiency in the X-ray band is much higher than that in the ultraviolet (UV), near-infrared, and radio bands. The estimated maximum charge-to-mass ratio (the charge per unit mass) of the BH is $1.12\times 10^{-6}$ and $1.53\times 10^{-6}$ esu for the binary systems of GW200105 and GW200115, respectively, if the NS spin has magnetic field strength $B_{p}\lesssim 10^{16}$ G and period $P>1$ ms.
Hao-Yu Yuan, Hou-Jun Lü, Jared Rice, En-Wei Liang
2023-09-25T02:54:32Z
http://arxiv.org/abs/2309.13840v1
Constraining the Charge of a Black Hole with Electromagnetic Radiation from a Black Hole-Neutron Star System ###### Abstract Black hole-neutron star (BH-NS) mergers are expected to emit gravitational-wave (GW) and electromagnetic (EM) counterparts when the NS is tidally disrupted or plunges into the BH. Recently, GW200105 and GW200115 were claimed to originate from BH-NS mergers, although GW200105 remains under debate. Several optical source candidates have been reported as possibly associated with the two GW events, but none has been confirmed. In this work, we assume that the BH is charged (the NS is naturally charged) and try to constrain the charge of the BH using the possibly associated EM emission from the charged BH-NS system during the inspiral regime. We adopt electric and magnetic dipole radiation for the binaries, which powers a Poynting-flux-dominated outflow that accelerates electrons; the observed EM radiation is then produced via synchrotron radiation. We find that the conversion efficiency in the X-ray band is much higher than that in the ultraviolet (UV), near-infrared, and radio bands. The estimated maximum charge-to-mass ratio (the charge per unit mass) of the BH is \(1.12\times 10^{-6}\) and \(1.53\times 10^{-6}\) esu for the binary systems of GW200105 and GW200115, respectively, if the NS spin has magnetic field strength \(B_{p}\lesssim~{}10^{16}\) G and period \(P>~{}1\) ms. ## I Introduction Compact binary coalescences, including black hole-black hole (BH-BH), black hole-neutron star (BH-NS), and neutron star-neutron star (NS-NS) mergers, are expected to be strong sources of gravitational-wave (GW) radiation in the Universe [1, for a review]. Catching the GW signal from such binary systems is the main target of current ground-based GW detectors, e.g., the advanced Laser Interferometer Gravitational-Wave Observatory [LIGO; 2], the advanced Virgo Interferometer [Virgo; 3], and KAGRA [4]. The first detected GW signals, from the two events GW150914 and GW151226 observed with LIGO, are proposed to come from black hole binary mergers [5]. More interestingly, weak electromagnetic (EM) transients in the \(\gamma\)-ray band were claimed to be associated with the two GW events, but this is still highly debated [6; 7; 8]. On 17 August 2017, advanced LIGO and Virgo first directly detected both the GW signal (GW170817) and its EM counterparts (e.g., GRB 170817A and the kilonova AT2017gfo) from the proposed merger of a binary NS system [9; 10; 11; 12; 13; 14; 15; 16]. Despite the growing number of detected GW events, only several candidate BH-NS mergers have been reported, namely GW190426 [17], GW190814 [18], GW190917 [19], GW191219 [20], and GW200210 [21]. Recently, GW200105 and GW200115 were claimed to originate from the mergers of BH-NS systems by the LIGO-Virgo-KAGRA (LVK) Collaboration during the third observing run (O3), although GW200105 remains under debate [22]. Unfortunately, no conclusive EM counterparts have been detected yet, except for several possible optical source candidates associated with the two GW events [23; 24; 25; 26; 27; 28; 29; 30]. Whether or not EM counterparts accompany BH-NS mergers remains an open question. From the theoretical point of view, a NS can be tidally disrupted if its tidal disruption radius is larger than the innermost stable circular orbit (ISCO) radius of the BH, and the massive debris can be expelled or accreted onto the newborn BH to power EM counterparts [31; 32].
However, the results of numerical simulations have shown that the conditions for NS disruption within inspiraling BH-NS binaries are extremely strict. For example, the NS equation of state (EOS) needs to be stiff enough, or the BH spin projected onto the orbital angular momentum needs to be extremely large [33; 34]. On the other hand, the observed GW and EM data from LIGO, Virgo, and other telescopes are not likely to support a stiff NS EOS and an extremely high BH spin [35; 36; 37; 38; 39]. Given the observational evidence, a NS plunging into the BH during a BH-NS merger seems to be a natural physical process during stellar evolution. Within this scenario, reference [40] proposed that EM radiation associated with the GW signal of the BH-NS merger can be produced if at least one member of the binary is charged. The system can give rise to intense electric dipole and magnetic dipole radiation near the orbital plane, which propagates outward as a Poynting-flux-dominated outflow [40]. Finally, magnetic energy is dissipated to accelerate electrons and can power EM radiation through different radiation mechanisms (e.g., synchrotron radiation). On the other hand, the physical properties of a BH can be described simply with three parameters (mass, angular momentum, and charge). The mass and angular momentum of a BH can be roughly measured from abundant observational data, but the charge of a BH is poorly understood [41; 42; 43]. Motivated by reference [40], one interesting question is whether we can constrain the charge of the BH using either the observed EM radiation or its upper limits in the BH-NS binary system, if we assume that the BH is charged. In this paper, we try to constrain the charge of BHs using the observed upper limits of EM radiation for GW200105 and GW200115 by assuming that the BHs in these BH-NS systems are charged. We organize this paper as follows. The theoretical derivation of EM counterpart production during the merger of a NS with a charged BH is presented in Section II. In Section III, we describe the observations of GW200105 and GW200115. The constraints are shown in Section IV. The conclusions are drawn in Section V with some additional discussion. Throughout this paper, we use cgs units and adopt a concordance cosmology with parameters \(H_{0}=67.4\;\mathrm{km\;s^{-1}\;Mpc^{-1}},\;\Omega_{\mathrm{M}}=0.315,\; \mathrm{and}\;\Omega_{\Lambda}=0.685\) [44]. ## II General theory of charged BH-NS mergers ### Electric dipole and magnetic dipole radiations Two different channels for the formation of a BH-NS system are discussed in the literature [45; 46; 47], but they remain under debate. One is a binary system in which each member undergoes massive star collapse when thermal pressure can no longer support the star against gravity [for a review, 45; 46]. The other is dynamical interactions between BHs and NSs in globular clusters, which contain many compact stars (e.g., BHs and NSs). These interactions can produce binary systems that contain two or more compact objects [47]. After the BH-NS binary is formed, the orbital separation between the objects gradually decreases until coalescence because of the energy loss due to GW emission. The NS has a strong surface magnetic field and rapid rotation, and its surface magnetic field changes with time as it orbits the BH.
The changing NS magnetic field can produce an electric field, so that both a magnetic field and an electric field exist on the surface of the NS. For our BH-NS system, we assume that the NS plunges into the BH without tidal disruption and that the BH is charged. In this case, EM signals may be produced via both electric dipole and magnetic dipole radiation and propagate outward as a Poynting-flux-dominated outflow. Following the method of references [48] and [40], \(M_{i}\) and \(\hat{q}_{i}\) are defined as the mass and relative charge of the BH or NS, respectively. Here, the subscript \(i\) represents either the BH or the NS. \(\hat{q}_{i}\) is defined as the ratio \(Q_{i}/M_{i}\) [48], where \(Q_{i}\) and \(M_{i}\) are the absolute charge and mass of the BH or NS, and \(G\) is the gravitational constant. Moreover, we also define the following parameters: Total mass: \[M=M_{BH}+M_{NS};\] Mass ratio: \[q=M_{BH}/M_{NS};\] Reduced mass: \[M_{r}=M_{BH}M_{NS}/(M_{BH}+M_{NS});\] Chirp mass: \[M_{c}=M_{r}^{3/5}M^{2/5};\] Horizon mass: \[M_{h}=M_{r}^{2/5}M^{3/5}.\] The charged BH-NS system can give rise to electric dipole and magnetic dipole radiation due to the interaction between the magnetospheres of the binary members during inspiral. We present more details of both electric dipole and magnetic dipole radiation in the following. **(1) Luminosity of electric dipole radiation:** In general, considering that just one member of a BH-NS binary is charged (e.g., the NS), the electric dipole radiation luminosity can be written, using the Larmor formula, as [40; 48; 49] \[\begin{split} L_{e,NS}&=\frac{2Q_{NS}^{2}|\ddot{\vec{r}}_{ NS}|^{2}}{3c^{3}}\\ &=\frac{1}{24}\frac{c^{5}}{G^{2}}\hat{q}_{NS}^{2}\left[\frac{r_{s}( M_{BH})}{a}\right]^{2}\left[\frac{r_{s}(M_{NS})}{a}\right]^{2}\end{split} \tag{1}\] where \(c\) is the speed of light, and \(r_{s}(M_{i})=\frac{2GM_{i}}{c^{2}}\). Hence, \(r_{s}(M_{BH})\) and \(r_{s}(M_{NS})\) are the Schwarzschild radii of the masses \(M_{BH}\) and \(M_{NS}\), respectively. \(|\ddot{\vec{r}}_{NS}|=\frac{GM_{BH}}{a^{2}}\) is the amplitude of the acceleration of the NS, and \(a\) is the separation of the BH-NS binary. Similarly, one may easily write the luminosity if only the BH is charged. However, if both the NS and the BH are charged, one should consider not only the contributions from the charged NS and the charged BH themselves, but also the cross term between the charged BH and the charged NS. The total dipole moment of the system is \(d=Q_{\mathrm{NS}}r_{\mathrm{NS}}+Q_{\mathrm{BH}}r_{\mathrm{BH}}\), choosing the origin of coordinates at the centre of mass, where \(r_{\mathrm{NS}}\) and \(r_{\mathrm{BH}}\) are the radius vectors of the neutron star and the black hole, respectively. According to the Larmor formula, one has \[L_{e}=\frac{4}{3c^{3}}\ddot{d}^{2} \tag{2}\] \[M_{i}\ddot{\vec{r}}_{i}=\pm\frac{GM_{NS}M_{BH}}{r^{2}}\mp\frac{Q_{NS}Q_{BH}}{r^{2}}, \tag{3}\] where \(r=r_{NS}-r_{BH}\) is the relative position. We can derive the luminosity of electric dipole radiation for both a charged BH and a charged NS by adopting \(a\approx|r|\) [50; 51; 52], \[\begin{split} L_{e}&\approx\frac{4}{3}\frac{c^{5}}{G ^{2}}(\hat{q}_{NS}-\hat{q}_{BH})^{2}\left(1-\frac{\hat{q}_{NS}\hat{q}_{BH}}{G} \right)^{2}\\ &\times\left[\frac{r_{s}(M_{BH})}{2a}\right]^{2}\left[\frac{r_{s}(M_{ NS})}{2a}\right]^{2}.\end{split} \tag{4}\] Here, we ignore the effect of the EM force on the orbital evolution, and only consider the gravitational force between the BH and NS when calculating the orbital evolution of the system, because the charge of the BH is small enough.
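For concreteness, the following is a minimal numerical sketch (ours) of Eq. (4) in cgs units; the fiducial masses, separation, and charge-to-mass ratios below are illustrative assumptions, not values taken from the observations.

```python
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10      # speed of light [cm/s]
M_sun = 1.989e33  # solar mass [g]

def L_electric(q_ns, q_bh, m_ns, m_bh, a):
    """Electric dipole luminosity of Eq. (4) in erg/s.

    q_ns, q_bh: charge-to-mass ratios [esu/g]; m_ns, m_bh: masses [g];
    a: orbital separation [cm].
    """
    rs_bh = 2 * G * m_bh / c**2  # Schwarzschild radius of the BH
    rs_ns = 2 * G * m_ns / c**2  # Schwarzschild radius of the NS
    return (4.0 / 3.0) * c**5 / G**2 * (q_ns - q_bh)**2 \
        * (1 - q_ns * q_bh / G)**2 * (rs_bh / (2 * a))**2 * (rs_ns / (2 * a))**2

# Assumed fiducial numbers: GW200105-like masses, a = 100 km, small charges.
print(L_electric(1e-7, 1e-8, 1.9 * M_sun, 8.9 * M_sun, 1e7))
```

Note that \(\hat{q}^{2}/G\) is dimensionless in Gaussian units, so the prefactor \(c^{5}/G\) carries the units of luminosity, as expected.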
We compare the EM and GW luminosities at different radii for different charges, and derive the critical condition for the validity of this assumption. The GW luminosity is expressed as \[L_{GW}=\frac{32}{5}\frac{G^{4}}{c^{5}}\frac{M_{r}^{2}M^{3}}{a^{5}}f(e). \tag{5}\] where \(f(e)=(1+\frac{73}{24}e^{2}+\frac{37}{96}e^{4})/(1-e^{2})^{7/2}\) is a coefficient for elliptic orbits, and \(e\) is the orbital eccentricity. We adopt a reasonable value of \(e=0\) in the calculations below1 [40]. Footnote 1: The orbital eccentricity decreases during inspiral and approaches zero due to GW or EM radiation, and the ejected energy (the electric dipole and magnetic dipole radiation) is close to its maximum value at increasing orbital frequency when the orbital eccentricity is close to zero. Thus, the energy ejected at this stage can represent the total energy ejected during the inspiral and coalescence. Combining these considerations, an orbital eccentricity equal to zero is a reasonable approximation [40]. In addition, the orbital separation \(a\) gradually decreases due to the energy loss to GW emission during inspiral, and its rate of change can be written as \[\frac{da}{dt} =-\frac{64}{5}\frac{G^{3}}{c^{5}}\frac{M_{NS}\cdot M_{BH}(M_{NS}+ M_{BH})}{a^{3}}f(e) \tag{6}\] \[=-\frac{64}{5}\frac{G^{3}}{c^{5}}\frac{M_{r}M^{2}}{a^{3}}f(e)\] **(2) Luminosity of magnetic dipole radiation:** When a charged BH-NS binary inspirals, a loop current can be produced because of the interaction between the magnetospheres. Generally speaking, a BH is considered to be electrically neutral. However, if the BH is in a non-charge-neutral environment or formed in strong magnetic fields, the BH can be charged and generate a magnetosphere [50; 53]. In classical electromagnetism, the magnetic dipole moment is the product of the current and the effective area of the loop [48], \[\mu =\frac{\pi}{c}I\left(\frac{a}{2}\right)^{2} \tag{7}\] \[=\sqrt{GMa}\ \frac{Q_{BH}+Q_{NS}}{8c},\] where \(I=(Q_{BH}+Q_{NS})/P_{s}\) is the loop current, and \(P_{s}=2\pi a^{3/2}/\sqrt{GM}\) is the orbital period [54; 55]. The luminosity of magnetic dipole radiation can be expressed as \[L_{m} =\frac{2\ddot{\mu}^{2}}{3c^{3}} \tag{8}\] \[\approx 8.56\times 10^{2}\frac{G^{13}M_{r}^{4}M^{9}}{c^{25}a^{15}}( \hat{q}_{BH}\cdot M_{BH}+\hat{q}_{NS}\cdot M_{NS})^{2}\] where \(\ddot{\mu}\) is the second derivative of the magnetic dipole moment. ### Calculation of the charge of a NS Reference [56] proposed the distribution of the spatial charge density (\(\rho_{e}\)) of a magnetized NS, \(\rho_{e}=-\mathbf{\Omega}\cdot\mathbf{B}/(2\pi c)\), where \(\mathbf{\Omega}\) and \(\mathbf{B}\) are the angular velocity and magnetic field strength of the NS, respectively. Assuming that the surface magnetic field of the NS is a dipole field, \(B=B_{p}R_{NS}^{3}/(r^{3})(3\cos^{2}\theta+1)^{1/2}\), we adopt the simple situation that \(\mathbf{\Omega}\cdot\mathbf{B}<0\). The charge of the NS (\(Q_{NS}\)) consists of two parts: the charge within the magnetosphere (\(Q_{mag}\)) and that inside the NS (\(Q_{in}\)).
For \(Q_{mag}\), one needs to integrate from the NS surface (\(R_{NS}\)) to \(r\), the distance between the reference point and the NS center, \[Q_{mag} =\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{NS}}^{r}-\frac{\mathbf{ \Omega}\cdot\mathbf{B}}{2\pi c}r^{2}\sin\theta\,dr\,d\varphi\,d\theta \tag{9}\] \[=\frac{\Omega B_{p}R_{NS}^{3}}{c}\int_{R_{NS}}^{r}\frac{1}{r}dr \int_{0}^{\pi}\cos\theta\sin\theta\sqrt{3\cos^{2}\theta+1}\,d\theta,\] where \(B_{p}\) is the strength of the surface dipole magnetic field of the NS, and \(\theta\) is the polar angle. Because of the antisymmetry of the integrand about \(\pi/2\), the integral over \(\theta\) vanishes; namely, \(Q_{mag}=0\). Similarly, if the NS is uniformly magnetized, one can calculate \(Q_{in}\) as [57] \[Q_{in}=\frac{2}{3}\Omega B_{p}R_{NS}^{3}. \tag{10}\] The total charge of the NS depends on the properties of the NS (e.g., angular velocity, dipole magnetic field strength, and radius), and can be written as \[Q_{NS}=Q_{mag}+Q_{in}. \tag{11}\] ### Synchrotron radiation of Poynting-flux-dominated outflow In this section, we discuss how the BH-NS system radiates its energy. For either magnetic dipole or electric dipole radiation, the radiation frequency is equal to the orbital frequency of \(\sim\) kHz. However, the orbital frequency is much lower than the intrinsic frequency of the interstellar medium [\(\sim 10^{4}\) Hz, 58]. Here, we adopt an interstellar medium density of \(\sim 1\) cm\({}^{-3}\) and do not consider the effect of denser environments. The intrinsic (or oscillation) frequency of the interstellar medium (\(\sim 10^{4}n_{e}^{1/2}\) Hz) originates from the thermal motion of electrons within the Debye length, where \(n_{e}\) is the electron number density per cm\({}^{3}\) in the interstellar medium. Low-frequency electromagnetic radiation cannot propagate when the EM frequency is less than the intrinsic frequency of the interstellar medium. Therefore, the radiation is inevitably trapped in the vicinity of the binary. In practice, the energy of the dipole radiation propagates outward as a Poynting-flux-dominated outflow. The Poynting luminosity at a radius \(r_{p}\) is written as [59; 60], \[\begin{split} L_{p}&=c\frac{(r_{p}B_{jet})^{2}}{4 \pi}\\ &=(L_{e}+L_{m})\left(1-\frac{\Gamma}{\Gamma_{sat}}\right)\\ &=L_{tot}\left(1-\frac{\Gamma}{\Gamma_{sat}}\right),\end{split} \tag{12}\] where \(B_{jet}\) and \(\Gamma\) are the jet magnetic field strength and the bulk Lorentz factor within the outflow, respectively. The total energy from the electric dipole and magnetic dipole radiation (\(L_{tot}=L_{e}+L_{m}\)) is injected into the outflow, and \(\Gamma_{sat}\) is the bulk Lorentz factor at the saturation radius \(r_{sat}\) of the outflow. Subsequently, the magnetic energy is dissipated gradually via magnetic reconnection to accelerate electrons. Moreover, particle-in-cell (PIC) simulations suggest that the energy spectrum of the accelerated electrons is roughly a power-law distribution [61; 62; 63], \[N(\gamma_{e})d\gamma_{e}\propto\gamma_{e}^{-p}d\gamma_{e},\gamma_{e}\geq \gamma_{m}, \tag{13}\] where \(\gamma_{e}\) and \(\gamma_{m}\) are the Lorentz factor and minimum Lorentz factor of the accelerated electrons, respectively, and \(p\) is the power-law index of the accelerated electrons. Here, we adopt \(p=4\sigma^{-0.3}\) according to a reasonable fit to the results of numerical calculations, where \(\sigma\) is the magnetization parameter of the outflow.
Qualitatively, it is the ratio between the magnetic energy and the kinetic energy in the outflow, namely \(\sigma=L_{p}/L_{k}\). As the energy of the accelerated electrons is dissipated, broadband radiation can be produced via different radiation mechanisms (e.g., synchrotron radiation and inverse Compton scattering). In this paper, we consider only pure synchrotron radiation2, which can generate the multi-band afterglows of GRBs [from X-ray to radio, 59; 65]. Footnote 2: Synchrotron radiation is the main radiation mechanism in GRB studies based on the observational data [64]. Here, we do not consider other radiation mechanisms, such as curvature radiation, Compton scattering, and inverse Compton scattering. Reference [66] considered a relativistic shock propagating through a uniform cold medium, and assumed that the shock undergoes adiabatic or radiative hydrodynamic evolution. Within this scenario, they calculated the radiated spectrum in two different regimes, i.e., the fast cooling and slow cooling cases. To distinguish these two spectral regimes, one defines a parameter \(\gamma_{c}\), the critical Lorentz factor for electron synchrotron cooling. The energy loss through synchrotron radiation becomes significant when the electron Lorentz factor exceeds this threshold. One can therefore separate the two regimes by comparing \(\gamma_{m}\) with \(\gamma_{c}\), i.e., \(\gamma_{m}>\gamma_{c}\) (fast cooling) and \(\gamma_{m}<\gamma_{c}\) (slow cooling). We list the luminosity (\(L_{\nu}\)) at each frequency (\(\nu\)) for these two cases as follows. 1. **Fast cooling regime:** \[L_{\nu}=\begin{cases} L_{\nu,max}\left(\frac{\nu}{\nu_{c}} \right)^{\frac{1}{3}},&\nu<\nu_{c}\\ L_{\nu,max}\left(\frac{\nu}{\nu_{c}}\right)^{-\frac{1}{2}},&\nu_{c}<\nu<\nu_{ m}\\ L_{\nu,max}\left(\frac{\nu_{m}}{\nu_{c}}\right)^{-\frac{1}{2}}\left(\frac{ \nu}{\nu_{m}}\right)^{-\frac{p}{2}},&\nu_{m}<\nu<\nu_{max}\end{cases}\] (14) 2. **Slow cooling regime:** \[L_{\nu}=\begin{cases} L_{\nu,max}\left(\frac{\nu}{\nu_{m}} \right)^{\frac{1}{3}},&\nu<\nu_{m}\\ L_{\nu,max}\left(\frac{\nu}{\nu_{m}}\right)^{-\frac{p-1}{2}},&\nu_{m}<\nu<\nu _{c}\\ L_{\nu,max}\left(\frac{\nu_{c}}{\nu_{m}}\right)^{-\frac{p-1}{2}}\left(\frac{ \nu}{\nu_{c}}\right)^{-\frac{p}{2}},&\nu_{c}<\nu<\nu_{max}\end{cases}\] (15) where \(L_{\nu,max},~{}\nu_{m},~{}\nu_{c}\) and \(\nu_{max}\) correspond to the maximal luminosity, typical frequency, cooling frequency, and maximal frequency, respectively. The maximal synchrotron spectral luminosity is \[\begin{split} L_{\nu,max}&=\frac{m_{e}c^{2}\sigma_{T}B_{ jet}N_{e}}{3e}\\ &=\frac{m_{e}c^{2}\sigma_{T}N_{e}}{3e}\left[\frac{4\pi L_{tot}}{ cr_{p}^{2}}\left(1-\frac{\Gamma}{\Gamma_{sat}}\right)\right]^{1/2}\end{split} \tag{16}\] where \(\sigma_{T}\), \(e\), and \(N_{e}\) are the Thomson scattering cross-section, the electron charge, and the total number of emitting electrons in the jet at \(r_{p}\), respectively. Eqs. (14) and (15) are invalid when the frequency is below the synchrotron self-absorption (SSA) frequency \(\nu_{a}\), where \[\frac{2\nu_{a}^{2}}{c^{2}}\gamma_{a}\Gamma m_{e}c^{2}\frac{\pi r_{p}^{2}}{ \Gamma^{2}}=L_{\nu_{a}}, \tag{17}\] with \(m_{e}\) and \(\gamma_{a}\) the electron mass and the Lorentz factor corresponding to \(\nu_{a}\), respectively. The SSA effect can play an important role when \(\nu<\nu_{a}\), where the spectral shape becomes \(L_{\nu}\propto\nu^{11/8}\) [67].
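As an illustration of Eq. (15), a direct transcription of the slow cooling broken power law is sketched below (our own sketch; the SSA cutoff of Eq. (17) and the upper cutoff at \(\nu_{max}\) are not enforced here).

```python
def L_nu_slow_cooling(nu, nu_m, nu_c, L_max, p):
    """Piecewise synchrotron spectrum of Eq. (15) (slow cooling, nu_m < nu_c)."""
    if nu < nu_m:
        return L_max * (nu / nu_m) ** (1.0 / 3.0)
    elif nu < nu_c:
        return L_max * (nu / nu_m) ** (-(p - 1) / 2.0)
    else:
        # Spectrum above nu_c, normalized so the segments join continuously.
        return L_max * (nu_c / nu_m) ** (-(p - 1) / 2.0) * (nu / nu_c) ** (-p / 2.0)
```

The fast cooling case of Eq. (14) is obtained analogously by swapping the roles of \(\nu_{m}\) and \(\nu_{c}\) and using the corresponding slopes.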
## III Observations of the GW200105 and GW200115 events During the third observing run (O3), the LVK Collaboration reported that two GW events (GW200105 and GW200115) originated from BH-NS binaries [22], although GW200105 later remained under debate due to the high likelihood of detector noise [108]. Based on the GW signals of the two events, the inferred masses of their primaries are \(8.9^{+1.2}_{-1.5}M_{\odot}\) and \(5.7^{+1.8}_{-2.1}M_{\odot}\), respectively. These primary masses are well in excess of the maximum NS mass, but fall within the mass range of BHs. On the other hand, the masses of the companions are \(1.9^{+0.3}_{-0.2}M_{\odot}\) and \(1.5^{+0.7}_{-0.3}M_{\odot}\), respectively. These are consistent with the mass range of known NSs and are below the maximum NS mass. Moreover, the LVK Collaboration claims that GW observations by LIGO and Virgo have led to the identification of five BH-NS candidates: GW190426 [17], GW190814 [18], GW190917 [19], GW191219 [20], and GW200210 [21]. The inferred primary mass, secondary mass, chirp mass, mass ratio, and redshift of these GW events and candidate BH-NS mergers are collected in Table 1. Searching for the EM counterparts of BH-NS mergers has remained a main target and expectation of space- and ground-based telescopes. There are no EM counterparts associated with the GW candidates of BH-NS mergers claimed by the LVK Collaboration. For the GW200105 and GW200115 events, no significant signals above background were caught in the \(\gamma\)-ray band because of the short duration and rapid decline of the expected short GRB3 [68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86]. However, the Swift/XRT team claimed that several possible X-ray emission candidates within the localization area of the GW trigger S200115j were caught [87; 88]. Afterwards, possibly associated optical/IR observations were reported within an error box larger than the error boxes of GW200105 and GW200115, though they cannot be confirmed. Footnote 3: The other possibility is that no gamma-ray signals were produced because these are non-disrupting systems. The Global MASTER-Net telescopes started to scan the GW200105 error box \(\sim 3.2\) hours after the GW trigger and obtained a series of upper limits in the V- and C-bands4 [89]. The Zwicky Transient Facility (ZTF) inspected the localization area, covering \(\sim 51.7\%\) of the enclosed area of the GW trigger S200105ae in the g- and r-bands, and reported more than 20 optical transients as possible EM candidates of GW200105 [90; 91], although several other observations showed that five candidates are actually consistent with supernovae (SNe) [92; 93; 94]. For GW200115, the ZTF and Global Relay of Observatories Watching Transients Happen (GROWTH) collaborations serendipitously observed the localization region of the GW trigger S200115j and covered 22% of the localization probability [95]. Srivastav & Smartt [96] also reported transients within one of the eastern lobes of this trigger with the Pan-STARRS2 telescope. The optical/IR observations reported via GCN are summarized in Tables 2 and 3. Footnote 4: The C-band is a clear (unfiltered) band. ## IV Results To date, there is no direct evidence of the EM counterparts of BH-NS systems; only optical candidates or upper limits on EM counterparts have been reported for the two events GW200105 and GW200115.
If this is the case, one can constrain the BH charge under the hypothesis of a charged BH-NS system by adopting the upper limits. In this section, we assume that either the observed optical transients above or the upper limits are the EM counterparts of the GW200105 and GW200115 events, and use these observed EM counterparts to constrain the BH charge given the hypothesis of a charged BH-NS system. ### Conversion efficiency of electric dipole and magnetic dipole radiation The charged BH-NS system can produce both electric dipole and magnetic dipole radiation due to the interaction between the magnetospheres of the objects during inspiral. The total energy from the electric dipole and magnetic dipole radiation (\(L_{tot}\)) is injected into the outflow, and it propagates outward as a Poynting-flux-dominated outflow. Within the synchrotron radiation scenario, the conversion efficiency (\(\eta\)) is defined as the ratio between the luminosity in the observed energy band and the total luminosity (i.e., the sum of the electric and magnetic dipole radiation), \[\eta=\frac{\int_{\nu_{1}}^{\nu_{2}}L_{\nu}d\nu}{L_{tot}}, \tag{18}\] where \(\nu_{1}\) and \(\nu_{2}\) bound the frequency range of a certain energy band or detector. The conversion efficiency depends on the frequency and on the synchrotron radiation luminosity \(L_{\nu}\). For a given energy band, Eq. (18) can be calculated by adopting Eqs. (14), (15), and (17). Here, we adopt \(\Gamma_{\rm sat}=\sigma_{0}\Gamma_{0}=\sigma_{0}^{3/2}=1000\) to calculate \(L_{\nu,max}\), where \(\Gamma_{0}\) and \(\sigma_{0}\) are the initial Lorentz factor and magnetization parameter of the outflow, respectively. One can then calculate the conversion efficiency as a function of the given total luminosity [65]. Fig. 1 shows the conversion efficiency as a function of \(L_{tot}\) from the X-ray to radio bands. The conversion efficiency in the X-ray band is much higher than that in the ultraviolet (UV), near-infrared, and radio bands, and it increases with \(L_{tot}\) in the X-ray, UV, and near-infrared bands. However, the conversion efficiency in the radio band initially increases slowly and then decreases with \(L_{tot}\). ### Constraining the Charge of a BH: Application to the GW200105 and GW200115 events Based on Eq. (4) and Eq. (8), \(L_{\rm tot}=L_{e}+L_{m}\) depends on the charges and masses of both the BH and the NS. The charge of the NS (\(Q_{NS}\)) also depends on the parameters of the NS, e.g., the period, magnetic field strength, and radius, but we know little about these parameters. To test the magnitude of \(Q_{NS}\), we fix the radius \(R_{\rm NS}=12\) km, and adopt five typical values each of the period (\(P=\)1 ms, 5 ms, 10 ms, 50 ms, and 100 ms) and of the magnetic field strength (\(B_{p}=10^{12}\) G, \(10^{13}\) G, \(10^{14}\) G, \(10^{15}\) G, and \(10^{16}\) G). We then calculate the NS charge using various combinations of the above parameters. Applying this analysis to the GW200105 and GW200115 events, we can estimate the parameters of the BH and NS (e.g., the masses of the BH and NS, luminosity distance, mass ratio, and chirp mass) from the GW observations. On the other hand, by assuming that the observed optical transients or upper limits in Section III are the EM counterparts of the GW200105 and GW200115 events, we can calculate the luminosity (\(L_{\rm obs}\)) based on the luminosity distance5 and the observed magnitude in the optical/IR band.
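Before turning to \(L_{\rm obs}\), the NS-charge grid described above can be sketched directly from Eqs. (10)-(11). This is our own illustration; note that the charge density of Eq. (9) carries a \(1/c\) prefactor in Gaussian units, which we include here for dimensional consistency (an assumption on our part).

```python
import math

c = 2.998e10  # speed of light [cm/s]

def Q_ns(P, B_p, R_ns=12e5):
    """NS charge from Eqs. (10)-(11) with Q_mag = 0, in esu.

    P: spin period [s]; B_p: surface dipole field [G]; R_ns: radius [cm]
    (12 km by default, as in the text). The 1/c factor is our assumption,
    carried over from the charge-density prefactor of Eq. (9).
    """
    omega = 2.0 * math.pi / P
    return (2.0 / 3.0) * omega * B_p * R_ns**3 / c

# Grid of typical periods and field strengths used in the text.
for P in (1e-3, 5e-3, 1e-2, 5e-2, 1e-1):
    for B_p in (1e12, 1e13, 1e14, 1e15, 1e16):
        print(f"P={P:g} s, B_p={B_p:g} G -> Q_NS={Q_ns(P, B_p):.3e} esu")
```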
Here, we calculate \(L_{\rm obs}\) by adopting the maximum and minimum magnitudes in the optical/IR bands in Tables 2 and 3, denoted \(L_{\rm obs,max}\) and \(L_{\rm obs,min}\), respectively. The range of \(L_{\rm obs}\) is \(L_{\rm obs,min}<L_{\rm obs}<L_{\rm obs,max}\), and we set \(L_{\nu}\) to the corresponding \(L_{\rm obs}\). Based on Fig. 1, one can roughly estimate the conversion efficiency in the optical/IR bands. Combining Eqs. (4)-(8), one can derive the luminosity of the electric and magnetic dipole radiation at each orbital separation (\(a\)). We then take the sum of the luminosities at each orbital separation as the injected energy of the Poynting-flux-dominated outflow. Finally, by adopting Eqs. (11)-(17), one can roughly estimate the charge on the BH for \(L_{\nu}=L_{\rm obs,min}\) and \(L_{\nu}=L_{\rm obs,max}\). The results are presented in Table 4. In this table, \(\hat{q}_{BH,1}\) and \(\hat{q}_{BH,2}\), in units of \(Q_{BH}/M_{BH}\) (the charge per unit mass), are derived from the magnitudes of the brightest and dimmest EM candidates for GW200105 and GW200115, respectively. Footnote 5: The luminosity distance is related to the cosmological parameters, and it can be expressed as \(D_{L}(z)=(1+z)\frac{c}{H_{0}}\int_{0}^{z}\frac{dz^{\prime}}{\sqrt{\Omega_{\rm M}(1+z^{ \prime})^{3}+\Omega_{\Lambda}}}\), where \(z\) is the redshift. For GW200105, we find that the estimated maximal charge-to-mass ratio of the BH is between \(9.73\times 10^{-8}\) and \(1.12\times 10^{-6}\)\(\,\)esu with \(B_{p}\lesssim 10^{16}\) G and \(P>1\) ms for the NS. Similarly, for GW200115, we estimate the maximal charge-to-mass ratio of the BH to range from \(8.84\times 10^{-8}\) to \(1.53\times 10^{-6}\)\(\,\)esu with \(B_{p}\lesssim 10^{16}\) G and \(P>1\) ms for the NS. ## V Conclusion and Discussion The physical properties of any BH can be described by its mass, angular momentum, and charge. The mass and angular momentum of a BH can be roughly measured from currently observed data, but inferring the charge of a BH remains an open question. One proposal is that a charged BH and NS merger could be a potential approach to constrain the BH charge [40]. Recently, two GW events (GW200105 and GW200115) originating from the mergers of BH-NS systems were detected by aLIGO and Virgo, and several optical source candidates have been reported as possibly associated with the two GW events, but none has been confirmed. In this paper, by assuming that the possible optical sources are associated with GW200105 and GW200115, we try to estimate the BH charge via the observed upper limits of the EM radiation in the charged BH-NS system. A charged BH merging with a NS can produce electric and magnetic dipole radiation. The energy from the electric and magnetic dipole radiation is then injected into the Poynting-flux-dominated outflow, and the magnetic energy is converted into the kinetic energy of electrons by magnetic reconnection and turbulence, accelerating the electrons. This can produce the observed EM radiation, assuming synchrotron radiation in the outflow. Within this scenario, we calculate the conversion efficiency of the electric dipole and magnetic dipole radiation within different energy bands, and constrain the BH charge for given NS physical parameters (e.g., period and surface magnetic field). The following interesting results are obtained.
* We find that the conversion efficiency in the X-ray band is much higher than that in the ultraviolet (UV), near-infrared, and radio bands, and it increases with \(L_{tot}\) in the X-ray, UV, and near-infrared bands. However, the conversion efficiency in the radio band initially increases slowly and then decreases with \(L_{tot}\). * For GW200105, we find that the estimated maximal charge-to-mass ratio (the charge per unit mass) of the BH is between about \(9.73\times 10^{-8}\) and \(1.12\times 10^{-6}\)\(\,\)esu with \(B_{p}\lesssim 10^{16}\) G and \(P>1\) ms for the NS. Similarly, for GW200115, we estimate the maximal charge-to-mass ratio of the BH to range from \(8.84\times 10^{-8}\) to \(1.53\times 10^{-6}\)\(\,\)esu with \(B_{p}\lesssim 10^{16}\) G and \(P>1\) ms for the NS. In fact, the radiation appears only at the leading order of the post-Newtonian expansion; for the electric dipole it is of even lower order (-1PN) than the GW quadrupole. In our calculations, we do not consider the contribution of the EM force to the modification of the orbit, because the EM force is much smaller than the gravitational force in our analysis. However, the contribution of the EM force is not negligible if the EM and gravitational forces are comparable [51; 52]. Simultaneously observing GW and EM signals from BH-NS merger systems has long been hoped for, and searching for the EM counterparts of GW events from such mergers remains an interesting and hot topic in astrophysics. Reference [97] proposed that the NS charge can be transferred to the BH during the inspiral phase of the BH-NS system. Within this scenario, the
2302.00596
Accelerated and Improved Stabilization for High Order Moments of Racah Polynomials
One of the most effective classes of orthogonal moments, discrete Racah polynomials (DRPs) and their moments are used in many disciplines of science, including image processing and computer vision. Moments are the projections of a signal onto the polynomial basis functions. Racah polynomials were introduced by Wilson and modified by Zhu for image processing, and they are orthogonal on a discrete set of samples. However, when the moment order is high, they experience numerical instability. In this paper, we propose a new algorithm for the computation of DRP coefficients called Improved Stabilization (ImSt). In the proposed algorithm, the DRP plane is partitioned into four parts, which are asymmetric because they rely on the values of the polynomial size and the DRP parameters. The logarithmic gamma function is utilized to compute the initial values, which enables the computation of the initial value for a wide range of DRP parameter values as well as large polynomial sizes. In addition, a new formula is used to compute the values of the initial sets based on the initial value. Moreover, we optimize the use of the stabilizing condition in specific parts of the algorithm. ImSt works for a wider range of parameters and up to a higher degree than the current algorithms. We compare it with the other methods in a number of experiments.
Basheera M. Mahmmod, Sadiq H. Abdulhussain, Tomáš Suk
2022-12-30T17:07:26Z
http://arxiv.org/abs/2302.00596v1
# Accelerated and Improved Stabilization for High Order Moments of Racah Polynomials ###### Abstract One of the most effective classes of orthogonal moments, discrete Racah polynomials (DRPs) and their moments are used in many disciplines of science, including image processing and computer vision. Moments are the projections of a signal onto the polynomial basis functions. Racah polynomials were introduced by Wilson and modified by Zhu for image processing, and they are orthogonal on a discrete set of samples. However, when the moment order is high, they experience numerical instability. In this paper, we propose a new algorithm for the computation of DRP coefficients called Improved Stabilization (ImSt). In the proposed algorithm, the DRP plane is partitioned into four parts, which are asymmetric because they rely on the values of the polynomial size and the DRP parameters. The logarithmic gamma function is utilized to compute the initial values, which enables the computation of the initial value for a wide range of DRP parameter values as well as large polynomial sizes. In addition, a new formula is used to compute the values of the initial sets based on the initial value. Moreover, we optimize the use of the stabilizing condition in specific parts of the algorithm. ImSt works for a wider range of parameters and up to a higher degree than the current algorithms. We compare it with the other methods in a number of experiments. Racah polynomials Orthogonal moments Recurrence algorithm Stabilizing condition ## 1 Introduction A moment can be understood as the projection of a signal onto a polynomial basis. Moments are widely used as features for the recognition of images and various image-like data. Moments can be divided into non-orthogonal and orthogonal. The non-orthogonal geometric and complex moments have the advantage of easier construction of invariants to various geometric and radiometric transformations, e.g. rotation [1], [2], affine transformation [3], convolution with a symmetric filter [4], [5], etc. On the other hand, they are highly correlated with each other, which leads to precision loss at lower orders than for the orthogonal moments (the order equals the degree of the polynomial). That is why we use orthogonal polynomials. They can be further divided into continuous and discrete. The orthogonality relation of the continuous polynomials is based on an integral over some interval; an example is the Fourier-Mellin moments [6]. When we compute a continuous moment from a digital image that is only defined at discrete pixels, we obtain the value with some error caused by the approximate computation of the definition integrals. Therefore, polynomials with discrete orthogonality, based on a sum over some finite set of discrete samples, are intensively studied. Different types of discrete orthogonal polynomials have been derived over the years. Here, we mention only those with significance for image processing. Besides his famous continuous polynomials, Chebyshev also published discrete ones. Mukundan derived an efficient algorithm for the computation of the discrete Chebyshev polynomials [7]. Krawtchouk polynomials have a parameter \(p\in\langle 0,1\rangle\). It moves the zeros over the image, and we can use it for adjustment of the region of interest. An efficient algorithm for their computation can be found in [8] or [9]; a non-traditional way of computation by filters was published in [10].
The generalization of the Krawtchouk polynomials are the Meixner polynomials; an efficient algorithm is in [11]. Another group of discrete orthogonal polynomials contains e.g. the Hahn polynomials. They can be computed by the algorithm from [12]. Dual Hahn polynomials were derived by swapping the coordinate and the order of the Hahn polynomials. The result is the non-uniform lattice \(x(s)=s(s+\gamma+\delta+1)\), see [13]. It is difficult to use in image processing; therefore, Zhu et al. [14] slightly changed the definition and used the index \(s\) as the coordinate in the digital image. An efficient algorithm can be found in [15]. The Racah polynomials were first published by Wilson in [16] and named after the physicist and mathematician Giulio Racah. They have a similar non-uniform lattice \(x(s)=s(s+\gamma+\delta+1)\) as the dual Hahn polynomials, see [13]. Zhu et al. [17] adapted them in a similar way. The Racah moments were used in skeletonization of craft images [18], Chinese character recognition [19], handwritten digit recognition [20], and face recognition [21]. In this paper, we propose an efficient algorithm for the computation of the Racah polynomials. The paper is organized as follows. Sec. 2 is a summary of definitions and state-of-the-art algorithms, our proposed method is in Sec. 3, we show its properties in numerical experiments in Sec. 4, and Sec. 5 concludes the paper. ## 2 Preliminaries and Related Work In this section, the mathematical definitions and fundamentals of the discrete Racah polynomials (DRPs) and their moments are presented. The current methods of their computation are summarized. ### The mathematical definition of DRPs The original Wilson's definition [16] is \[\boldsymbol{\mathcal{R}}_{n}^{\binom{\alpha,\beta}{\gamma,\delta}}(\lambda(x) ;N)\!=\!_{4}F_{3}\left(\left.\begin{array}{c}-n,n+\alpha+\beta+1,-x,x+\gamma +\delta+1\\ \alpha+1,\beta+\delta+1,\gamma+1\end{array}\right|\,1\right), \tag{1}\] where \({}_{4}F_{3}(\cdot)\) is the hypergeometric series. It is defined as \[{}_{4}F_{3}\left(\left.\begin{array}{c}a,b,c,d\\ e,f,g\end{array}\right|\,z\right)=\sum_{k=0}^{\infty}\frac{(a)_{k}\left(b \right)_{k}\left(c\right)_{k}\left(d\right)_{k}}{\left(e\right)_{k}\left(f \right)_{k}\left(g\right)_{k}}\cdot\frac{(z)^{k}}{k!}\;. \tag{2}\] The symbol \((\cdot)_{m}\) is the Pochhammer symbol defined as \[(a)_{m}=a(a+1)(a+2)\cdots(a+m-1)\;. \tag{3}\] Zhu et al. in [17] introduced a new variable \(s\) and defined \(x=s(s+1)\). Then the \(n\)th order DRP \(\boldsymbol{\mathcal{R}}_{n}^{\binom{\alpha,\beta}{a,b}}(s;N)\) is given by \[\boldsymbol{\mathcal{R}}_{n}^{\binom{\alpha,\beta}{a,b}}(s;N)=\frac{1}{n!}(a+b +\alpha+1)_{n}(\beta+1)_{n}(a-b+1)_{n}\times{}_{4}F_{3}\left(\left.\begin{array} []{c}-n,a-s,a+s+1,\alpha+\beta+n+1\\ \beta+1,a+b+\alpha+1,a-b+1\end{array}\right|\,1\right), \tag{4}\] where \(a>-1/2\), \(b>a\), \(b-a=N\) must be an integer, \(\alpha>-1\), \(\beta>-1\), and \(\beta<2a+1\).
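Because the \({}_{4}F_{3}\) series in Eq. (4) terminates (the factor \((-n)_{k}\) vanishes for \(k>n\)), the polynomials can be evaluated directly by a finite sum. The following minimal Python sketch (ours) does exactly that; it is numerically unstable for large \(n\) and \(N\), which is precisely the motivation for the recurrence algorithms discussed below.

```python
from math import factorial, prod

def poch(x, m):
    """Pochhammer symbol (x)_m = x (x+1) ... (x+m-1), with (x)_0 = 1."""
    return prod(x + i for i in range(m))

def racah(n, s, a, b, alpha, beta):
    """Direct evaluation of the n-th DRP of Eq. (4) at index s.

    The 4F3 series terminates at k = n because (-n)_k = 0 for k > n.
    Valid for n < N = b - a, so the denominator Pochhammers are nonzero.
    """
    pref = poch(a + b + alpha + 1, n) * poch(beta + 1, n) * poch(a - b + 1, n) / factorial(n)
    series = sum(
        poch(-n, k) * poch(a - s, k) * poch(a + s + 1, k) * poch(alpha + beta + n + 1, k)
        / (poch(beta + 1, k) * poch(a + b + alpha + 1, k) * poch(a - b + 1, k) * factorial(k))
        for k in range(n + 1)
    )
    return pref * series
```

The weighted polynomial of Eq. (8) then follows by multiplying with \(\sqrt{\rho(s)(2s+1)/d_{n}^{2}}\).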
The DRPs satisfy the orthogonality condition \[\sum_{s=a}^{b-1}\mathbf{\mathcal{R}}_{n}^{\left(\alpha,\beta\right)}(s;N)\,\mathbf{ \mathcal{R}}_{m}^{\left(\alpha,\beta\right)}(s;N)\,\rho(s)\Delta x\left(s- \frac{1}{2}\right)=d_{n}^{2}\delta_{nm}\;, \tag{5}\] where \(\delta_{nm}\) is the Kronecker delta, \(\Delta x\left(s-\frac{1}{2}\right)\) is the difference of \(x\) shifted by a half, i.e. \(\Delta x\left(s-\frac{1}{2}\right)=\left(s+\frac{1}{2}\right)\left(s+\frac{3}{2 }\right)-\left(s-\frac{1}{2}\right)\left(s+\frac{1}{2}\right)=(2s+1)\), \(\rho\) is the weight function of the DRP \[\rho(s)=\frac{\Gamma(a\!+\!s\!+\!1)\Gamma(b\!+\!s\!+\!\alpha\!+\!1)\Gamma(b\!+ \!\alpha\!-\!s)\Gamma(s\!-\!a\!+\!\beta\!+\!1)}{\Gamma(b\!+\!s\!+\!1)\Gamma(b\!- \!s)\Gamma(s\!-\!a\!+\!1)\Gamma(a\!-\!\beta\!+\!s\!+\!1)} \tag{6}\] and \(d_{n}^{2}\) is the norm function of the DRP \[d_{n}^{2}\!=\!\frac{\Gamma(\alpha\!+\!n\!+\!1)\Gamma(\beta\!+\!n\!+\!1)\Gamma( a\!+\!b\!+\!\alpha\!+\!n\!+\!1)\Gamma(b\!-\!a\!+\!\alpha\!+\!\beta\!+\!n\!+\!1)}{( \alpha\!+\!\beta\!+\!2n\!+\!1)\Gamma(n\!+\!1)\Gamma(b\!-\!a\!-\!n)\Gamma( \alpha\!+\!\beta\!+\!n\!+\!1)\Gamma(a\!+\!b\!-\!n\!-\!\beta)} \tag{7}\] The \(n\)th degree of the weighted DRP is given by \[\hat{\mathbf{\mathcal{R}}}_{n}^{\left(\alpha,\beta\right)}(s;N)=\mathbf{\mathcal{R}} _{n}^{\left(\alpha,\beta\right)}(s;N)\,\sqrt{\frac{\rho(s)}{d_{n}^{2}} \cdot\Delta x\left(s-\frac{1}{2}\right)}. \tag{8}\] ### The state-of-the-art algorithms Significant algorithms by two authors can be found in the literature: Zhu's original paper and Daoui's approach. For convenience, we will use the simplified notation \(\hat{\mathbf{\mathcal{R}}}_{n}^{\left(\alpha,\beta\right)}(s;N)=\hat{\mathbf{\mathcal{ R}}}_{n}(s)\) with \(b=a+N\). #### 2.2.1 Zhu's algorithms Zhu et al. in [17] published two algorithms for Racah polynomial computation: recurrence over the order \(n\) and recurrence over the index \(s\). The recurrence formula of the weighted Racah polynomials over the order \(n\) is \[\hat{\mathbf{\mathcal{R}}}_{n+1}(s)=\left(B\frac{d_{n}}{d_{n+1}}\hat{\mathbf{\mathcal{ R}}}_{n}(s)-C\frac{d_{n-1}}{d_{n+1}}\hat{\mathbf{\mathcal{R}}}_{n-1}(s)\right)/A \tag{9}\] with initial conditions \[\begin{array}{l}\hat{\mathbf{\mathcal{R}}}_{0}(s)\!=\!\sqrt{\frac{\rho(s)}{d_{0 }^{2}}(2s+1)},\\ \hat{\mathbf{\mathcal{R}}}_{1}(s)\!=\!-\!\sqrt{\frac{\rho(s)}{d_{1}^{2}}(2s\!+\!1) }\times\left(\frac{\rho(s\!+\!1)\,\sigma(s\!+\!1)}{\rho(s)(2s\!+\!1)}-\frac{\sigma(s)}{2s\!+\!1}\right),\end{array} \tag{10}\] where \(\sigma\) is the polynomial defined in (13) below, and \[\begin{array}{l}A\!=\!\frac{(n+1)(\alpha+\beta+n+1)}{(\alpha+\beta+2n+1)( \alpha+\beta+2n+2)},\\ B\!=\!s(s\!+\!1)-\frac{a^{2}\!+\!b^{2}\!+\!(a-\beta)^{2}\!+\!(b\!+\!\alpha)^{2 }}{4}\!+\!\frac{(\alpha+\beta+2n)(\alpha+\beta+2n+2)}{8},\\ C\!=\!\frac{(\alpha\!+\!n)(\beta+n)}{(\alpha+\beta+2n)(\alpha+\beta+2n+1)} \!\left[\left(a\!+\!b\!+\!\frac{\alpha-\beta}{2}\right)^{2}\!\!-\!\left(n\!+ \!\frac{\alpha+\beta}{2}\right)^{2}\right]\!\!\left[\left(b\!-\!a\!+\!\frac{ \alpha+\beta}{2}\right)^{2}-\left(n\!+\!\frac{\alpha+\beta}{2}\right)^{2} \right]\!.\end{array} \tag{11}\] The second algorithm is recurrence over the index \(s\) \[\begin{array}{l}\hat{\mathbf{\mathcal{R}}}_{n}(s)=\frac{(2s-1)[\sigma(s-1)+(s-1) \tau(s-1)-2\lambda s(s-1)]}{(s-1)[\sigma(s-1)+(2s-1)\tau(s-1)]}\sqrt{\frac{ \rho(s)(2s+1)}{\rho(s-1)(2s-1)}}\hat{\mathbf{\mathcal{R}}}_{n}(s-1)-\\ \frac{2\sigma(s-1)}{(s-1)[\sigma(s-1)+(2s-1)\tau(s-1)]}\sqrt{\frac{\rho(s)(2s +1)}{\rho(s-2)(2s-3)}}\hat{\mathbf{\mathcal{R}}}_{n}(s-2)\,,\end{array} \tag{12}\] where \[\begin{array}{l}\sigma(s)=(s-a)(s+b)(s+a-\beta)(b+\alpha-s)\\ \tau(s)=a(\alpha\!+\!1)(a\!-\!\beta)\!+\!b(b\!+\!\alpha)(\beta\!+\!1)\!-\!( \alpha\!+\!1)(\beta\!+\!1)\!-\!s(s\!+\!1)(\alpha\!+\!\beta\!+\!2)\\ \lambda=n(n+1+\alpha+\beta).\end{array} \tag{13}\] The declared initial values are \[\begin{array}{l}\hat{\mathbf{\mathcal{R}}}_{n}(a)=\frac{(-1)^{n}}{(n!)^{2}}(a+1)_{n} (\beta-a+1)_{n}(b+\alpha+1)_{n}(b-n)_{n}\sqrt{\frac{\rho(0)}{d_{n}^{2}}},\\ \hat{\mathbf{\mathcal{R}}}_{n}(a+1)=\frac{2}{(n+2)(n+1)}\left[\frac{\rho_{n}(1)}{ \rho_{n}(0)}-\frac{n(n+1)}{2}\right]\sqrt{\frac{3\rho(1)}{\rho(0)}}\hat{\mathbf{ \mathcal{R}}}_{n}(a)\,,\end{array} \tag{14}\] where \[\rho_{n}(s)=\frac{\Gamma(a+s+n+1)\Gamma(s-a+\beta+n+1)\Gamma(b+\alpha-s)\Gamma(b +\alpha+s+n+1)}{\Gamma(a-\beta+s+1)\Gamma(s-a+1)\Gamma(b-s-n)\Gamma(b+s+1)}. \tag{15}\] These initial conditions do not work; to resolve this issue, either the recurrence over \(n\) is used for \(s=a\) and \(s=a+1\), or one of the following algorithms can be used. #### 2.2.2 Daoui's algorithms Daoui et al. in [22] proposed a stabler algorithm for DRP computation with two modifications. One problem is the overflow of the initial value \(\hat{\mathbf{\mathcal{R}}}_{0}(a)\) for high values of the parameter \(\beta\). When \(\beta\) is an integer, we can compute \(\hat{\mathbf{\mathcal{R}}}_{0}(a)\) by the recurrence \[\begin{array}{l}F(0)=\frac{\alpha+1}{(a+b)(\alpha+b-a)}\\ F(k)=\frac{(\alpha+k+1)(2a-k+1)}{(a+b-k)(b-a+\alpha+k)}F(k-1),\qquad\qquad k=1,2,\ldots,\beta\\ \hat{\mathbf{\mathcal{R}}}_{0}(a)=\sqrt{F(\beta)(2a+1)}.\end{array} \tag{16}\] The factor \((\alpha+k)/(\alpha+k)\) from the original paper can be omitted. The other values are obtained by the recurrence relation over \(n\) as in Eq. (9). This is called Algorithm 1. Another algorithm is based on the recurrence over \(s\). It begins in the same way, with the computation of \(\hat{\mathbf{\mathcal{R}}}_{0}(a)\) by Eq.
(16). The initial values of higher degrees are \[\begin{array}{l}\hat{\mathbf{\mathcal{R}}}_{n}(a)=\frac{(a-b+n)(\beta+n)(a+b+ \alpha+n)}{n}\sqrt{D}\,\hat{\mathbf{\mathcal{R}}}_{n-1}(a)\,,\\ D=\frac{n(\alpha+\beta+2n+1)(\alpha+\beta+n)}{(\alpha+n)(\beta+n)(b-a+\alpha+\beta+n)(a+b+ \alpha+n)(\alpha+\beta+2n-1)(a+b-\beta-n)(b-a-n)}.\end{array} \tag{17}\] In the paper, there is an incorrect factor \((\alpha+b-\beta-n)\) instead of \((a+b-\beta-n)\) in the denominator of \(D\). The rest of the initial values are computed as \[\hat{\mathbf{\mathcal{R}}}_{n}(a+1)\!=\!E\sqrt{\frac{\rho(a+1)}{\rho(a)}\cdot \frac{2a+3}{2a+1}}\hat{\mathbf{\mathcal{R}}}_{n}(a)\,, \tag{18}\] where \[E=\left(1+\frac{2n(\alpha+\beta+n+1)(a+1)}{(a-b+1)(\beta+1)(a+b+\alpha+1)}\right) \tag{19}\] and \[\frac{\rho(a+1)}{\rho(a)}=\frac{(2a+1)(\beta+1)(b+\alpha+a+1)(b-a+1)}{(b+\alpha-a-1)(2a-\beta+1)(a+b+1)}. \tag{20}\] It is incorrect; the correct version is \[\begin{array}{lcl}E&=&\left(1-\frac{2\lambda(a+1)}{\tau(a)}\right)=\\ &=&\left(1+\frac{2n(\alpha+\beta+n+1)(a+1)}{(\alpha+1)(\beta+1)+a(a+1)(\alpha+ \beta+2)-a(\alpha+1)(a-\beta)-b(\beta+1)(b+\alpha)}\right),\end{array} \tag{21}\] i.e. the denominator is completely incorrect, and \[\frac{\rho(a+1)}{\rho(a)}=\frac{(2a+1)(\beta+1)(b+\alpha+a+1)(b-a-1)}{(b+\alpha-a-1)(2a-\beta+1)(a+b+1)}, \tag{22}\] i.e. in the numerator, there should be \((b-a-1)\) instead of \((b-a+1)\). Finally, Daoui et al. use a stabilizing condition. When \(\hat{\mathbf{\mathcal{R}}}_{n}(s)\) is computed by Eq. (12), the new value is tested. When \[n>\frac{N}{6}\,\wedge\,\left|\hat{\mathbf{\mathcal{R}}}_{n}(s)\right|<10^{-6}\, \wedge\,\left|\hat{\mathbf{\mathcal{R}}}_{n}(s)\right|>\left|\hat{\mathbf{\mathcal{R} }}_{n}(s-1)\right|, \tag{23}\] the value of \(\hat{\mathbf{\mathcal{R}}}_{n}(s)\) is substituted by zero. The symbol \(\wedge\) means logical and. This erases senselessly high values distorted by propagated error. The method is called Algorithm 3 in the paper. We will use it, after the error corrections, as the reference algorithm. #### 2.2.3 Gram-Schmidt Orthogonalization The Gram-Schmidt orthogonalization process (GSOP) is a way to change a set of functions into another set of orthogonal functions. It can be used for the derivation of completely new orthogonal polynomials; e.g., GSOP applied to the set \(\{1,x,x^{2},\ldots\}\) on the interval \(\langle-1,1\rangle\) gives the Legendre polynomials, see e.g. [23]. We can also use GSOP to increase the precision of orthogonal polynomials computed by another method. Here we have computed \(\hat{\mathbf{\mathcal{R}}}_{n}(s)\), but we are not sure if it is sufficiently precise. We can compute the correction \[\mathbf{\mathcal{T}}(s)=\sum_{k=0}^{n-1}\hat{\mathbf{\mathcal{R}}}_{k}(s)\left(\sum_{ i=a}^{a+N-1}\hat{\mathbf{\mathcal{R}}}_{n}(i)\,\hat{\mathbf{\mathcal{R}}}_{k}(i) \right),\qquad s=a,a+1,\ldots,a+N-1. \tag{24}\] This correction is then subtracted from the original value \[\tilde{\mathbf{\mathcal{R}}}_{n}(s)=\hat{\mathbf{\mathcal{R}}}_{n}(s)-\mathbf{\mathcal{T}}(s), \qquad s=a,a+1,\dots,a+N-1. \tag{25}\] Then we must also correct the norm \[\check{\mathbf{\mathcal{R}}}_{n}(s)=\tilde{\mathbf{\mathcal{R}}}_{n}(s)\,/\left(\sqrt{ \sum_{i=a}^{a+N-1}\tilde{\mathbf{\mathcal{R}}}_{n}(i)^{2}}+\varepsilon\right), \qquad s=a,a+1,\dots,a+N-1, \tag{26}\] where \(\varepsilon\) is a small value preventing division by zero. In Matlab, \(\varepsilon=2.2204\cdot 10^{-16}\).
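A minimal NumPy sketch (ours) of the refinement in Eqs. (24)-(26), processing the degrees in increasing order so that each row is orthogonalized against the already refined lower degrees:

```python
import numpy as np

def gsop_refine(R):
    """Re-orthonormalize the rows of R (row n = degree n, columns = index s)
    following Eqs. (24)-(26): subtract projections onto lower degrees,
    then renormalize with an epsilon guard against division by zero."""
    eps = np.finfo(float).eps  # the epsilon of Eq. (26)
    R = R.copy()
    for n in range(R.shape[0]):
        for k in range(n):
            # Eqs. (24)-(25): remove the component along row k.
            R[n] -= np.dot(R[n], R[k]) * R[k]
        # Eq. (26): renormalize the corrected row.
        R[n] /= np.linalg.norm(R[n]) + eps
    return R
```

The double loop makes the \(\mathcal{O}(N^{3})\) cost mentioned below explicit: each of the \(N\) rows is projected against up to \(N\) rows of length \(N\).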
\(\check{\mathbf{\mathcal{R}}}_{n}(s)\) is now a version of \(\hat{\mathbf{\mathcal{R}}}_{n}(s)\) with increased precision. GSOP works well; its main disadvantage is the high computing complexity \(\mathcal{O}(N^{3})\) (if we compute all the degrees up to \(n=N-1\)), while the computing complexity of all other algorithms mentioned in this paper is \(\mathcal{O}(N^{2})\). It is a big limitation of this method; the computing time may not be acceptable for very high \(N\).

### The definition of discrete Racah moments (DRM)

DRMs represent the projection of a signal (speech or images) onto the basis of DRPs. The computation of the DRMs (\(\phi_{nm}\)) for a 2D signal, \(f(x,y)\), with a size of \(N_{1}\times N_{2}\) is performed by

\[\phi_{nm}=\sum_{x=0}^{N_{1}-1}\sum_{y=0}^{N_{2}-1}f(x,y)\hat{\mathbf{\mathcal{R}}}_{n}^{\begin{pmatrix}\alpha_{1},\beta_{1}\\ a_{1},b_{1}\end{pmatrix}}(x;N_{1})\,\hat{\mathbf{\mathcal{R}}}_{m}^{\begin{pmatrix}\alpha_{2},\beta_{2}\\ a_{2},b_{2}\end{pmatrix}}(y;N_{2}) \tag{27}\]

\[n=0,1,\dots,N_{1}-1\;\;\mathrm{and}\;\;m=0,1,\dots,N_{2}-1.\]

The reconstruction of the 2D signal (image) from the Racah domain (the space of Racah moments) into the spatial domain can be carried out by

\[\hat{f}(x,y)=\sum_{n=0}^{N_{1}-1}\sum_{m=0}^{N_{2}-1}\phi_{nm}\hat{\mathbf{\mathcal{R}}}_{n}^{\begin{pmatrix}\alpha_{1},\beta_{1}\\ a_{1},b_{1}\end{pmatrix}}(x;N_{1})\,\hat{\mathbf{\mathcal{R}}}_{m}^{\begin{pmatrix}\alpha_{2},\beta_{2}\\ a_{2},b_{2}\end{pmatrix}}(y;N_{2}) \tag{28}\]

\[x=0,1,\dots,N_{1}-1\;\;\mathrm{and}\;\;y=0,1,\dots,N_{2}-1.\]

## 3 The Proposed Methodology

This section presents the proposed methodology for computing DRPs. We call it improved stabilization (ImSt). The normalized DRP used in the following can be written in hypergeometric form as

\[\hat{\mathbf{\mathcal{R}}}_{n}(s)=\sqrt{\frac{\Gamma(a{+}N{+}\alpha{-}s)\Gamma(a{+}N{+}\alpha{+}s{+}1)\left(\alpha{+}1{+}2n\right)\Gamma(N{-}n)\Gamma(2a{+}N{-}n)\left(2s{+}1\right)}{\Gamma\left(a{+}N{-}s\right)\Gamma\left(a{+}N{+}s{+}1\right)\Gamma\left(N{+}\alpha{+}1{+}n\right)\Gamma\left(2a{+}N{+}\alpha{+}n{+}1\right)}}\times\]
\[\left(-N{+}1\right)_{n}\left(2a{+}N{+}\alpha{+}1\right)_{n}\,{}_{4}F_{3}\left(\left.\begin{array}{c}-n,a{-}s,a{+}s{+}1,\alpha{+}n{+}1\\ 1,2a{+}N{+}\alpha{+}1,-N{+}1\end{array}\right|\,1\right) \tag{29}\]

The DRP matrix is partitioned into four parts. They are shown in Figure 1 as Part 1, Part 2, Part 3, and Part 4. In the following subsections, the detailed steps are given. First of all, we must compute the initial values.

### The First Initial Value

The selection of the first initial value, specifically its location and how it is computed, is crucial because all the other values of the polynomial rely on that initial value. The computation of the initial value in the existing algorithms limits the ability to compute the entire set of DRP values. For example, in [22], the formula for the computation of the first initial value is as follows

\[\hat{\mathbf{\mathcal{R}}}_{0}(a)=\sqrt{F\times\left(2a+1\right)}\,,\]
\[F=\frac{\Gamma(2a{+}1)\Gamma(\alpha{+}\beta{+}2)\Gamma(b{-}a{+}\alpha)\Gamma(a+b-\beta)}{\Gamma(a{+}b{+}1)\Gamma(\alpha{+}1)\Gamma(2a{+}1{-}\beta)\Gamma(-a{+}b{+}\alpha{+}\beta{+}1)}\,. \tag{30}\]

This formula (30) is uncomputable for a wide range of the parameter values \(a\), \(\alpha\), and \(\beta\), as shown in Figure 2a. Thus, in the proposed algorithm, we begin the computation at the last value of the first row, i.e. at \(s=a+N-1\), as follows
\[\hat{\mathbf{\mathcal{R}}}_{0}(N-1+a)=\sqrt{\frac{\Gamma(\alpha{+}\beta{+}2)\Gamma(2a{+}N)\Gamma(\beta{+}N)\Gamma(2a{+}2N{+}\alpha)}{\Gamma(2a{+}2N-1)\Gamma(\beta{+}1)\Gamma(\alpha{+}\beta{+}N+1)\Gamma(2a{+}N{+}\alpha{+}1)}}; \tag{31}\]

however, the Gamma function (\(\Gamma(\cdot)\)) makes this equation uncomputable. To fix this issue, we rewrite Equation (31) as

\[\begin{array}{c}Y=\psi(\alpha{+}\beta{+}2){+}\psi(2a{+}N){+}\psi(\beta{+}N){+}\psi(2a{+}2N{+}\alpha)-\\ (\psi(2a{+}2N{-}1){+}\psi(\beta{+}1){+}\psi(\alpha{+}\beta{+}N+1){+}\psi(2a{+}N{+}\alpha{+}1))\\ \hat{\mathbf{\mathcal{R}}}_{0}(a+N-1)=\exp(Y/2)\end{array} \tag{32}\]

where \(\psi(\cdot)\) represents the logarithmic gamma function, \(\psi(x)=\log(\Gamma(x))\), and \(\log(\cdot)\) is the natural logarithm. Using (32), the first initial value is computable for a wide range of the DRP parameters, as shown in Figure 2b.

Figure 1: The matrix of DRPs. Note: the matrix is given by \(n\) and \(x\), where \(x=s-a\).

### The Initial sets

After computing the first initial value, the initial sets in the first two rows \(\hat{\mathbf{\mathcal{R}}}_{0}(s)\) and \(\hat{\mathbf{\mathcal{R}}}_{1}(s)\) are computed by the two-term recurrence relation. These initial sets will be used for the computation of the remaining coefficients of the DRPs (the coefficients in Parts 1, 2, 3, and 4). The values of the coefficients \(\hat{\mathbf{\mathcal{R}}}_{0}(s)\) are calculated as follows

\[\hat{\mathbf{\mathcal{R}}}_{0}(s)=\sqrt{\frac{\left(2s{+}1\right)\left(a{-}\beta{+}s{+}1\right)\left(b{+}s{+}1\right)\left(b{+}\alpha{-}s{-}1\right)\left(a{-}s{-}1\right)}{\left(a{+}s{+}1\right)\left(b{+}\alpha{+}s{+}1\right)\left(a{-}\beta{-}s{-}1\right)\left(2s{+}3\right)\left(b{-}s{-}1\right)}}\ \hat{\mathbf{\mathcal{R}}}_{0}(s{+}1)\,, \tag{33}\]
\[s=a+N-2,a+N-3,\ldots,a.\]

After computation of the values \(\hat{\mathbf{\mathcal{R}}}_{0}(s)\), the values \(\hat{\mathbf{\mathcal{R}}}_{1}(s)\) are computed using the previously computed coefficients as

\[\hat{\mathbf{\mathcal{R}}}_{1}(s)=-\big{(}\left((-a+b-1)\alpha+b^{2}-s^{2}-a-s-1\right)\beta+(a^{2}-s^{2}+b-s-1)\alpha+a^{2}+b^{2}-2(s^{2}+s)-1\big{)}\times\]
\[\sqrt{\frac{\alpha{+}\beta{+}3}{\left(a{-}b{+}1\right)\left(a{+}b{-}\beta{-}1\right)\left(\alpha{+}1\right)\left(\beta{+}1\right)\left(a{-}b{-}\alpha{-}\beta{-}1\right)\left(a{+}b{+}\alpha{+}1\right)}}\times\hat{\mathbf{\mathcal{R}}}_{0}(s)\,,\quad s=a,a+1,\ldots,a+N-1. \tag{34}\]

### The Controlling Indices in the First and Last Columns

To control the stability of the computation of the DRP coefficients, we present controlling indices that are used to stabilize the computation of the coefficients. We first compute the coefficients \(\hat{\mathbf{\mathcal{R}}}_{n}(a)\) and \(\hat{\mathbf{\mathcal{R}}}_{n}(a+N-1)\) in the first and last columns. Then, the locations where the peak values occur are found.
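As an illustration of Eq. (32), here is a minimal Python sketch of the first-initial-value computation using SciPy's log-gamma routine; the function name is our choice, and \(b=a+N\) is assumed as elsewhere in the paper.

```python
import numpy as np
from scipy.special import gammaln  # log(Gamma(x)), the psi(.) of Eq. (32)

def first_initial_value(a, N, alpha, beta):
    """R_0(a+N-1) via the logarithmic gamma function, Eqs. (31)-(32)."""
    Y = (gammaln(alpha + beta + 2) + gammaln(2*a + N)
         + gammaln(beta + N) + gammaln(2*a + 2*N + alpha)
         - gammaln(2*a + 2*N - 1) - gammaln(beta + 1)
         - gammaln(alpha + beta + N + 1) - gammaln(2*a + N + alpha + 1))
    return np.exp(Y / 2)
```

Working with sums of logarithms keeps every intermediate quantity of moderate size, which is exactly why (32) remains computable where (31) overflows.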
To compute the coefficients \(\hat{\mathbf{\mathcal{R}}}_{n}(a)\), the two-term recurrence relation is used

\[\hat{\mathbf{\mathcal{R}}}_{n+1}(a)=-\sqrt{\frac{\left(N{-}n{-}1\right)\left(\alpha{+}\beta{+}2n{+}3\right)\left(\alpha{+}\beta{+}n{+}1\right)\left(\beta{+}n{+}1\right)\left(2a{+}N{+}\alpha{+}n{+}1\right)}{\left(2a{+}N{-}\beta{-}n{-}1\right)\left(\alpha{+}\beta{+}2n{+}1\right)\left(\alpha{+}n{+}1\right)\left(N{+}\alpha{+}\beta{+}n{+}1\right)\left(n{+}1\right)}}\times\hat{\mathbf{\mathcal{R}}}_{n}(a)\,, \tag{35}\]
\[n=1,2,\ldots,N-2.\]

Also, we present a new two-term recurrence relation to compute the coefficients \(\hat{\mathbf{\mathcal{R}}}_{n}(a+N-1)\) as follows

\[\hat{\mathbf{\mathcal{R}}}_{n+1}(a+N-1)=\sqrt{\frac{\left(N{-}n{-}1\right)\left(\alpha{+}\beta{+}2n{+}3\right)\left(\alpha{+}\beta{+}n{+}1\right)\left(\alpha{+}n{+}1\right)\left(2a{+}N{-}\beta{-}n{-}1\right)}{\left(2a{+}N{+}\alpha{+}n{+}1\right)\left(\alpha{+}\beta{+}2n{+}1\right)\left(\beta{+}n{+}1\right)\left(N{+}\alpha{+}\beta{+}n{+}1\right)\left(n{+}1\right)}}\times\hat{\mathbf{\mathcal{R}}}_{n}(a+N-1)\,, \tag{36}\]
\[n=1,2,\ldots,N-2.\]

The peak value at the last column \(s=a+N-1\), i.e. the index

\[ind_{N-1}=\operatorname*{arg\,max}_{n=0,1,\ldots N-1}\hat{\mathbf{\mathcal{R}}}_{n}(a+N-1), \tag{37}\]

then creates the border between Part 1 and Part 3, while the border between Part 2 and Part 4 is the peak value at the first column \(s=a\), i.e. the index

\[ind_{0}=\operatorname*{arg\,max}_{n=0,1,\ldots N-1}\hat{\mathbf{\mathcal{R}}}_{n}(a)\,. \tag{38}\]

### The Controlling Index in the Last Row

We would also need the index \(N_{s}\) as the border between Part 1 and Part 2. The ideal value would be the peak value at the last row. We cannot compute it directly because of underflow for high \(N\); then we can use the substitute value \(N_{s}=\lfloor N/4+0.5\rfloor\), as written in Figure 1. The symbol \(\lfloor\cdot\rfloor\) is the floor function; the index \(N_{s}\) is rounded to the nearest integer. There is another possibility. Some values of \(\hat{\mathbf{\mathcal{R}}}_{n}(s)\) can underflow for high \(N\), but the ratio of adjacent values does not; therefore, we can compute \(N_{s}\) in logarithms. There is one complication: we need the logarithm of a sum \(\log(a+b)\), but when \(a\) and \(b\) are similar, we can compute it as \(\log((a/b+1)b)\) = \(\log(a/b+1)+\log(b)\) = \(\log(\exp(\log(a)-\log(b))+1)+\log(b)\). The whole algorithm is then as follows. First, we compute the logarithm of the first value

\[\begin{array}{c}L_{0,0}=\log(\hat{\boldsymbol{\mathcal{R}}}_{0}(a))=(\psi(2a+2)+\psi(N+\alpha)+\psi(\alpha+\beta+2)+\psi(2a+N-\beta)-\\ (\psi(2a+N+1)+\psi(\alpha+1)+\psi(2a-\beta+1)+\psi(N+\alpha+\beta+1)))/2.\end{array} \tag{39}\]

Again, \(\psi(x)\) is the logarithmic gamma function. Then we compute the values in the first column. We need not remember them; we need only the last value \(L_{N,0}\).

\[\begin{array}{l}L_{n,0}=L_{n-1,0}+(\log(N-n-1)+\log(\alpha+\beta+2n+3)+\log(\alpha+\beta+n+1)+\log(\beta+n+1)+\\ \log(2a+N+\alpha+n+1)-\log(2a+N-\beta-n-1)-\log(\alpha+\beta+2n+1)-\\ \log(\alpha+n+1)-\log(N+\alpha+\beta+n+1)-\log(n+1))/2\\ n=1,2,\ldots,N.\end{array} \tag{40}\]

The signum of the result must be computed separately,

\[S_{N,0}=(-1)^{(N-1)}. \tag{41}\]
The second value in the last row is

\[\begin{array}{l}L_{N,1}=\log(|LE|)+\log(LD)/2+L_{N,0}\\ S_{N,1}=\mathrm{sign}(LE)S_{N,0},\end{array} \tag{42}\]

where

\[\begin{array}{l}LE=1-\frac{2(N-1)(\alpha+\beta+n+1)(a+1)}{a(\alpha+1)(a-\beta)+b(\beta+1)(b+\alpha)-(\alpha+1)(\beta+1)-a(a+1)(\alpha+\beta+2)}\\ LD=\frac{(2a+1)(\beta+1)(a+b+\alpha+1)(b-a-1)(2a+3)}{(b-a+\alpha-1)(2a-\beta+1)(a+b+1)(2a+1)}.\end{array} \tag{43}\]

The factor \(LE\) equals \(E\) from Eq. (21) and

\[LD=\frac{\rho(a+1)}{\rho(a)}\cdot\frac{2a+3}{2a+1}\]

from Eq. (22). The last row is then computed by the recurrence

\[\begin{array}{l}L_{N,x}=\log(|LA|)+\log(|B1|)+\log(|B|)/2+L_{N,x-2}\\ S_{N,x}=S_{1}\cdot S_{3}\\ x=2,3,\ldots\\ s=a+x,\end{array} \tag{44}\]

where

\[\begin{array}{l}A=\frac{(a+s)(s-a+\beta)(b+\alpha+s)(b-s)(2s+1)}{(b+\alpha-s)(a-\beta+s)(s-a)(s+b)(2s-1)}\\ B=A\frac{(a+s-1)(s-a+\beta-1)(b+\alpha+s-1)(b-s+1)(2s-1)}{(b+\alpha-s+1)(a-\beta+s-1)(s-a-1)(s+b-1)(2s-3)}\\ A1=\frac{(2s-1)(\sigma(s-1)+(s-1)\tau(s-1)-2\lambda s(s-1))}{(s-1)(\sigma(s-1)+(2s-1)\tau(s-1))}\\ B1=\frac{s\sigma(s-1)}{(s-1)(\sigma(s-1)+(2s-1)\tau(s-1))}\\ S_{1}=\mathrm{sign}(B1)S_{N,x-2}\\ S_{2}=\mathrm{sign}(A1)S_{N,x-1}S_{1}\\ LA=S_{2}\exp(\log(|A1|)+\log(A)/2+L_{N,x-1}-\log(|B1|)-\log(B)/2-L_{N,x-2})-1\\ S_{3}=\mathrm{sign}(LA),\end{array} \tag{45}\]

and the functions \(\sigma(s)\), \(\tau(s)\), and \(\lambda\) are given in Eq. (13). When we find the maximum, i.e. the point \(x\) where \(L_{N,x-1}>L_{N,x}\wedge L_{N,x-1}>L_{N,x-2}\), then we have found the index \(N_{s}=x-1=s-a-1\). It is better to stop the computation here, because if \(\sigma(s)\) is higher than about \(4\cdot 10^{15}\), the computation loses its accuracy. In our tests, this always happened after \(N_{s}\) was found. It is also possible to compute the maximum from the end of the last row. The value \(L_{0,N}\) equals \(Y/2\), where \(Y\) is from Eq. (32). Then we compute the values in the last column to obtain the last value

\[\begin{array}{c}L_{n,N}=L_{n-1,N}+(\log(N-n-1)+\log(\alpha+\beta+2n+3)+\log(\alpha+\beta+n+1)+\\ \log(\alpha+n+1)+\log(2a+N-\beta-n-1)-\log(\alpha+\beta+2n+1)-\\ \log(\beta+n+1)-\log(N+\alpha+\beta+n+1)-\log(n+1)-\\ \log(2a+N+\alpha+n+1))/2,\hskip 71.13189ptn=1,2,\ldots,N.\end{array} \tag{46}\]

The last but one value in the last row is

\[\begin{array}{c}L_{N,N-1}=L_{N,N}+\log(|LF|)-\log(LG)/2\\ S_{N,N-1}=\mbox{sign}(LF),\end{array} \tag{47}\]

where

\[\begin{array}{c}LF=\frac{(2s-1)(\sigma(s-1)+(s-1)\tau(s-1)-2\lambda s(s-1))}{s\sigma(s-1)}\\ LG=\frac{(a+s-1)(s-a+\beta-1)(b+\alpha+s-1)(b-s+1)(2s-1)}{(b+\alpha-s+1)(a-\beta+s-1)(s-a-1)(s+b-1)(2s-3)}\\ s=a+N.\end{array} \tag{48}\]

We can invert the recurrence for the backward computation

\[\begin{array}{c}L_{N,x-2}\!=\!\log(|LA|)-\log(|B1|)-\log(|B|)/2+L_{N,x-1}\\ S_{N,x-2}\!=\!S_{1}\cdot S_{3}\cdot\mbox{sign}(B1)\\ x=N-2,N-3,\ldots\end{array} \tag{49}\]

where \(A\), \(B\), \(A1\), \(B1\) are the same as in Eq. (44) and

\[\begin{array}{c}S_{1}=S_{N,x}\\ S_{2}=\mbox{sign}(A1)S_{N,x-1}S_{1}\\ LA=S_{2}\exp(\log(|A1|)+\log(A)/2+L_{N,x-1}-L_{N,x})-1\\ S_{3}=\mbox{sign}(LA).\end{array} \tag{50}\]

The peak value \(N_{n}\) is then the first value \(x\) where \(L_{N,x-1}>L_{N,x}\wedge L_{N,x-1}>L_{N,x-2}\). Then \(N_{n}=x-1=s-a-1\). Again, we should stop the computation here. If \(N_{n}=N_{s}\), it is a good indication that we have the correct value.
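The logarithm-of-a-sum identity that drives these recurrences can be isolated in a few lines; the following is a minimal Python sketch of the stable form from Section 3.4 (the function name is ours).

```python
import numpy as np

def log_of_sum(log_a, log_b):
    """log(a + b) from log(a) and log(b) without ever forming a or b.

    Uses log(a+b) = log(exp(log(a) - log(b)) + 1) + log(b); keeping the
    larger logarithm outside the exponential avoids overflow."""
    hi, lo = max(log_a, log_b), min(log_a, log_b)
    return np.log1p(np.exp(lo - hi)) + hi
```

The sign bookkeeping (the factors \(S_{1}\), \(S_{2}\), \(S_{3}\) above) must still be carried separately, since logarithms only track magnitudes.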
### The coefficients for Parts 1 and 2

The coefficients in Parts 1 and 2 are computed using the three-term recurrence algorithm in the \(n\)-direction as follows

\[\boldsymbol{\hat{\mathcal{R}}}_{n}(s)=\Theta_{1}\,\boldsymbol{\hat{\mathcal{R}}}_{n-1}(s)+\Theta_{2}\,\boldsymbol{\hat{\mathcal{R}}}_{n-2}(s)\,, \tag{51}\]

where

\[\Theta_{1}=\frac{\Theta_{11}}{\Theta_{0}}\sqrt{\Theta_{12}},\qquad\Theta_{2}=\frac{\Theta_{21}}{\Theta_{0}}\sqrt{\Theta_{12}\Theta_{22}} \tag{52}\]

and

\[\Theta_{0}=\frac{n\left(\alpha+\beta+n\right)}{\left(\alpha+\beta+2n-1\right)\left(\alpha+\beta+2n\right)} \tag{53}\]

\[\Theta_{11}=s(s+1)-\frac{1}{4}\left(a^{2}+b^{2}+(a-\beta)^{2}+(b+\alpha)^{2}-2\right)+\frac{1}{8}\left(\alpha+\beta+2n-2\right)\left(\alpha+\beta+2n\right)-\frac{1}{2}\left(\frac{\left(\beta^{2}-\alpha^{2}\right)\left((b+\alpha/2)^{2}-(a-\beta/2)^{2}\right)}{\left(\alpha+\beta+2n-2\right)\left(\alpha+\beta+2n\right)}\right) \tag{54}\]

\[\Theta_{21}=-\frac{\left(\alpha+n-1\right)\left(\beta+n-1\right)}{\left(\alpha+\beta+2n-2\right)\left(\alpha+\beta+2n-1\right)}\left(\left(a+b+\frac{\alpha-\beta}{2}\right)^{2}-\left(n-1+\frac{\alpha+\beta}{2}\right)^{2}\right)\times\left(\left(b-a+\frac{\alpha+\beta}{2}\right)^{2}-\left(n-1+\frac{\alpha+\beta}{2}\right)^{2}\right) \tag{55}\]

\[\Theta_{12}=\frac{n\left(\alpha+\beta+n\right)\left(\alpha+\beta+2n+1\right)}{\left(\alpha+n\right)\left(\beta+n\right)\left(\alpha+\beta+2n-1\right)\left(a-b-\alpha-\beta-n\right)\left(a-b+n\right)}\times\frac{1}{\left(a+b+\alpha+n\right)\left(a+b-\beta-n\right)} \tag{56}\]

\[\Theta_{22}=\frac{\left(n-1\right)\left(\alpha+\beta+n-1\right)\left(\alpha+\beta+2n-1\right)}{\left(\alpha+n-1\right)\left(\beta+n-1\right)\left(\alpha+\beta+2n-3\right)\left(a-b-\alpha-\beta-n+1\right)\left(a-b+n-1\right)}\times\frac{1}{\left(a+b+\alpha+n-1\right)\left(a+b-\beta-n+1\right)}. \tag{57}\]

The border between Part 1 and Part 3 is the index \(ind_{N-1}\), see Eq. (37); the recurrence algorithm is applied for \(s=a+N_{s},a+N_{s}+1,\ldots,a+N-2\) and \(n=2,3,\ldots,ind_{N-1}\). The border between Part 2 and Part 4 is the index \(ind_{0}\), see Eq. (38); the recurrence algorithm is applied for \(s=a+1,a+2,\ldots,a+N_{s}-1\) and \(n=2,3,\ldots,ind_{0}\).

### The coefficients for Parts 3 and 4

The coefficients in Parts 3 and 4 are computed using the same three-term recurrence algorithm in the \(n\)-direction as in (51). After computation of each value, the following stabilizing condition is applied for each order \(n\)

\[\hat{\boldsymbol{\mathcal{R}}}_{n}(s)=0\text{ if }\left|\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\right|<10^{-5}\wedge\left|\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\right|>\left|\hat{\boldsymbol{\mathcal{R}}}_{n-1}(s)\right|. \tag{58}\]

In Part 3, we add a condition that there must exist \(\hat{\boldsymbol{\mathcal{R}}}_{i}(s)\) with \(|\hat{\boldsymbol{\mathcal{R}}}_{i}(s)|>10^{-5}\) for some \(i<n\). The recurrence algorithm for Part 3 is applied in the range \(s=a+N_{s},a+N_{s}+1,\ldots,a+N-2\) and \(n=ind_{N-1}+1,ind_{N-1}+2,\ldots,N-1\); while for Part 4, the recurrence algorithm is carried out in the range \(s=a+1,a+2,\ldots,a+N_{s}-1\) and \(n=ind_{0},ind_{0}+1,\ldots,N-1\).

### Special case of Racah Polynomials

In this section, a special case of DRPs is presented.
The parameter \(\beta\) affects the energy compaction as its value becomes larger than 0. So, the case \(\hat{\boldsymbol{\mathcal{R}}}_{n}^{\binom{0,0}{0,b}}(s)\), where \(a=\alpha=\beta=0\), has special significance. In this case, \(\hat{\boldsymbol{\mathcal{R}}}_{n}^{\binom{0,0}{0,b}}(s)\) is given as follows

\[\hat{\boldsymbol{\mathcal{R}}}_{n}^{\binom{0,0}{0,b}}(s)= \frac{(b+1)_{n}(1)_{n}(-b+1)_{n}}{n!}\times{}_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s+1,n+1\\ 1,b+1,-b+1\end{array}\right|\,1\right)\times\sqrt{\frac{\Gamma(s+1)\Gamma(b+s+1)\Gamma(b-s)\Gamma(s+1)}{\Gamma(b+s+1)\Gamma(b-s)\Gamma(s+1)\Gamma(s+1)}\cdot\frac{(2n+1)\Gamma(n+1)\Gamma(b-n)\Gamma(n+1)\Gamma(b-n)}{\Gamma(n+1)\Gamma(n+1)\Gamma(b+n+1)\Gamma(b+n+1)}\cdot(2s+1)}\]
\[= \frac{(b+1)_{n}(1)_{n}(-b+1)_{n}}{n!}\times{}_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s+1,n+1\\ 1,b+1,-b+1\end{array}\right|\,1\right)\times\sqrt{\frac{(2n+1)\Gamma(b-n)\Gamma(b-n)(2s+1)}{\Gamma(b+n+1)\Gamma(b+n+1)}}\]
\[= \frac{(b+1)_{n}(1)_{n}(-b+1)_{n}}{n!}\times{}_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s+1,n+1\\ 1,b+1,-b+1\end{array}\right|\,1\right)\times\frac{\Gamma(b-n)}{\Gamma(b+n+1)}\sqrt{(2n+1)(2s+1)}\]
\[= \frac{\Gamma(b+n+1)n!\Gamma(-b+1+n)}{n!\Gamma(b+1)\Gamma(-b+1)}\times{}_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s+1,n+1\\ 1,b+1,-b+1\end{array}\right|\,1\right)\times\frac{\Gamma(b-n)}{\Gamma(b+n+1)}\sqrt{(2n+1)(2s+1)}\]
\[= \frac{\Gamma(b-n)\Gamma(-b+1+n)}{\Gamma(b+1)\Gamma(-b+1)}\times{}_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s+1,n+1\\ 1,b+1,-b+1\end{array}\right|\,1\right)\times\sqrt{(2n+1)(2s+1)}. \tag{59}\]

Roman [24] shows the following property of the factorial

\[c!(-c-1)!=(-1)^{c+(c<0)}, \tag{60}\]

where

\[(c<0)=\begin{cases}1&\text{ if }c<0\\ 0&\text{ if }c\geq 0.\end{cases} \tag{61}\]

It is well known that \(c!=\Gamma(c+1)\); thus (60) can be written in this way

\[\Gamma(c+1)\Gamma(-c)=(-1)^{c+(c<0)}. \tag{62}\]

Using (62), the term \(\Gamma(b-n)\Gamma(-b+1+n)\) from (59) can be expressed

\[\Gamma(b-n)\Gamma(-b+1+n) = \Gamma(b-n)\Gamma(-(b-n)+1) = (-1)^{-(b-n)+1} = -(-1)^{-b}(-1)^{n}. \tag{63}\]

Also, the term \(\Gamma(b+1)\Gamma(-b+1)\) from (59) can be expressed

\[\Gamma(b+1)\Gamma(-b+1)=\Gamma(b+1)\Gamma(-b)(-b)=(-b)(-1)^{b+0}=-b(-1)^{-b}. \tag{64}\]

From (63) and (64), (59) can be expressed

\[\hat{\boldsymbol{\mathcal{R}}}_{n}^{\binom{0,0}{0,b}}(s)=\frac{-(-1)^{-b}(-1)^{n}}{-b(-1)^{-b}}\sqrt{(2n{+}1)(2s{+}1)}\,_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s{+}1,n{+}1\\ 1,b{+}1,-b{+}1\end{array}\right|1\right)\]
\[=\frac{(-1)^{n}\sqrt{(2n+1)(2s{+}1)}}{b}\,\,_{4}F_{3}\left(\left.\begin{array}{c}-n,-s,s{+}1,n{+}1\\ 1,b{+}1,-b{+}1\end{array}\right|1\right). \tag{65}\]

For (65), replacing \(n\) by \(s\), we obtain

\[\hat{\boldsymbol{\mathcal{R}}}_{s}^{\binom{0,0}{0,b}}(n)=\frac{(-1)^{s}\sqrt{(2s+1)(2n{+}1)}}{b}\,\,_{4}F_{3}\left(\left.\begin{array}{c}-s,-n,n{+}1,s{+}1\\ 1,b{+}1,-b{+}1\end{array}\right|1\right). \tag{66}\]

By comparing (65) with (66), we obtain the following symmetry relation

\[\hat{\boldsymbol{\mathcal{R}}}_{s}^{\binom{0,0}{0,b}}(n)=(-1)^{(s-n)}\hat{\boldsymbol{\mathcal{R}}}_{n}^{\binom{0,0}{0,b}}(s)\,. \tag{67}\]

Thus, from (67), we can compute 50% of the coefficients directly and obtain the rest using the symmetry relation. In other words, the coefficients are computed in the range \(n=0,1,\ldots,N-1\) and \(s=n,n+1,\ldots,N-1\) (Parts 1 and 3). The rest of the coefficients are computed using the symmetry relation (Part 2), as shown in Figure 3.
Eq. (51) then becomes

\[\hat{\boldsymbol{\mathcal{R}}}_{n}(s)=\Theta_{10}\,\hat{\boldsymbol{\mathcal{R}}}_{n-1}(s)+\Theta_{20}\,\hat{\boldsymbol{\mathcal{R}}}_{n-2}(s)\,, \tag{68}\]

where

\[\Theta_{10}=\frac{\left(2s(s{+}1){+}n\,(n{-}1)\,{-}N^{2}{+}1\right)\sqrt{4n^{2}-1}}{n(N{-}n)(N{+}n)} \tag{69}\]

\[\Theta_{20}=-\frac{(n{-}1)\,(N{-}n{+}1)\,(N{+}n{-}1)}{n\,(N{+}n)}\sqrt{\frac{2n{+}1}{2n{-}3}}. \tag{70}\]

### Implementation of the proposed algorithm

In this section the pseudo code is presented. The pseudo code of the proposed algorithm for the general case is presented in Algorithm 1. In addition, the pseudo code for the special case (\(a=\alpha=\beta=0\)) is given in Algorithm 2. The values of the Racah polynomials for \(a=800\), \(b=1800\), \(\alpha=400\), and \(\beta=100\) (i.e. \(N=1000\)) in artificial colors are shown in Figure 4.

## 4 Experimental Analysis

This section evaluates the proposed algorithm for DRPs and compares it with the existing algorithms. Three evaluation procedures are carried out to check the performance of the proposed algorithm: maximum generated size, computational cost, and signal reconstruction. The experiments were carried out using MATLAB version 2019b on a computer with an Intel(R) Core(TM) i9-7940X CPU at 3.10 GHz, 32.0 GB of memory, and 64-bit Windows 10 Pro.

### Maximum Degree

We searched for the maximum signal size \(N\) for which the orthogonality error \(E\) is less than 0.001. We changed the parameter values \(a\), \(\alpha\) and \(\beta\) as ratios of \(N\). This has the advantage that the pattern of non-zero values looks similar and is not shifted. The orthogonality error is defined

\[E=\max_{n,m=0,1,\ldots,N-1}\left|\sum_{s=a}^{b-1}\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\,\hat{\boldsymbol{\mathcal{R}}}_{m}(s)-\delta_{nm}\right|\,. \tag{71}\]

The results are in Tab. 1.

```
1:Input: \(Ord,a,b,\alpha,\beta\)
2:\(Ord\) is the maximum degree of the DRP, \(Ord<b-a\). \(a,b,\alpha,\beta\) represent the parameters of the DRP. Output: \(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\)
3:\(N\gets b-a\) \(\triangleright\) \(N\) represents the size of the DRP
4:\(\Theta=10^{-5}\) \(\triangleright\) Threshold for stabilizing condition
5:Compute \(\hat{\boldsymbol{\mathcal{R}}}_{0}(N-1+a)\) using (32)
6:for \(s=a+N-2:a\) do
7: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{0}(s)\) using (33)
8:endfor
9:for \(s=a:a+N-1\) do
10: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{1}(s)\) using (34)
11:endfor
12:for \(n=1:Ord-1\) do
13: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{n+1}(a)\) using (35)
14: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{n+1}(N-1+a)\) using (36)
15:endfor
16:\(N_{s}\) from Sec. 3.4 or \(N_{s}\leftarrow\lfloor\frac{N}{4}+0.5\rfloor\) \(\triangleright\) \(\lfloor x\rfloor\) is the integer part of \(x\)
17:\(ind_{0}\leftarrow\underset{n=0,1,\ldots N-1}{\operatorname*{arg\max}}\hat{\boldsymbol{\mathcal{R}}}_{n}(a)\) \(\triangleright\) Index of maximum \(\hat{\boldsymbol{\mathcal{R}}}_{n}(a)\)
18:\(ind_{N-1}\leftarrow\underset{n=0,1,\ldots N-1}{\operatorname*{arg\max}}\hat{\boldsymbol{\mathcal{R}}}_{n}(a+N-1)\) \(\triangleright\) Index of maximum \(\hat{\boldsymbol{\mathcal{R}}}_{n}(a+N-1)\)
19:for \(s=a+N_{s}:a+N-1\) do \(\triangleright\) Part 1
20:for \(n=2:ind_{N-1}-1\) do
21: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\) using (51)
22:endfor
23:endfor
24:for \(s=a:a+N_{s}-1\) do \(\triangleright\) Part 2
25:for \(n=2:ind_{0}\) do
26: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\) using (51)
27:endfor
28:endfor
29:for \(s=a+N_{s}:a+N-1\) do \(\triangleright\) Part 3
30:for \(n=ind_{N-1}:Ord\) do
31: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\) using (51)
32:if \(\left|\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\right|<\Theta\wedge\left|\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\right|>\left|\hat{\boldsymbol{\mathcal{R}}}_{n-1}(s)\right|\wedge\exists\left|\hat{\boldsymbol{\mathcal{R}}}_{i}(s)\right|>\Theta,\;i<n\) then
33:\(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)=0\)
34: Exit inner loop
35:endif
36:endfor
37:endfor
38:for \(s=a+1:a+N_{s}-1\) do \(\triangleright\) Part 4
39:for \(n=ind_{0}:Ord\) do
40: Compute \(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\) using (51)
41:if \(\left|\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\right|<\Theta\wedge\left|\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\right|>\left|\hat{\boldsymbol{\mathcal{R}}}_{n-1}(s)\right|\) then
42:\(\hat{\boldsymbol{\mathcal{R}}}_{n}(s)=0\)
43: Exit inner loop
44:endif
45:endfor
46:endfor
```
**Algorithm 1** Computation of the DRP coefficients using the proposed algorithm.

```
1:Input: \(N,Ord\)
2:\(N\) represents the size of the DRP, \(Ord\) is the maximum degree of the DRP, \(Ord<N\). Output: \(\hat{\mathcal{R}}_{n}(s)\)
```

```
1:\(\Theta=10^{-5}\) \(\triangleright\) Threshold for stabilizing condition
2:\(\hat{\mathcal{R}}_{0}(N-1)\leftarrow\sqrt{2N-1}/N\)
3:for \(s=N-2:0\) do
4:\(\hat{\mathcal{R}}_{0}(s)\leftarrow\sqrt{(2s+1)/(2s+3)}\times\hat{\mathcal{R}}_{0}(s+1)\)
5:endfor
6:for \(s=1:Ord\) do
7:\(\hat{\mathcal{R}}_{s}(0)\leftarrow(-1)^{s}\hat{\mathcal{R}}_{0}(s)\)
8:endfor
9:for \(s=0:N-1\) do
10:\(\hat{\mathcal{R}}_{1}(s)\leftarrow-(N^{2}-2s^{2}-2s-1)\sqrt{3}/(N^{2}-1)\times\hat{\mathcal{R}}_{0}(s)\)
11:endfor
12:for \(s=2:Ord\) do
13:\(\hat{\mathcal{R}}_{s}(1)\leftarrow(-1)^{s-1}\hat{\mathcal{R}}_{1}(s)\)
14:endfor
15:for \(n=1:Ord-1\) do
16:\(\hat{\mathcal{R}}_{n+1}(N-1)\leftarrow(N-n-1)\sqrt{2n+3}/(N+n+1)/\sqrt{2n+1}\times\hat{\mathcal{R}}_{n}(N-1)\)
17:endfor
18:\(ind_{N-1}\leftarrow\operatorname*{arg\,max}_{n=0,1,\ldots N-1}\hat{\mathcal{R}}_{n}(N-1)\) \(\triangleright\) position of maximum in \(\hat{\mathcal{R}}_{n}(N-1)\)
19:for \(n=2:ind_{N-1}-1\) do \(\triangleright\) Part 1
20:for \(s=n:N-1\) do
21: Compute \(\hat{\mathcal{R}}_{n}(s)\) using (68)
22:endfor
23:endfor
24:for \(n=ind_{N-1}:Ord\) do \(\triangleright\) Part 3
25:for \(s=n:N-1\) do
26: Compute \(\hat{\mathcal{R}}_{n}(s)\) using (68)
27:if \(\left|\hat{\mathcal{R}}_{n}(s)\right|<\Theta\wedge\left|\hat{\mathcal{R}}_{n}(s)\right|>\left|\hat{\mathcal{R}}_{n-1}(s)\right|\) then
28:\(\hat{\mathcal{R}}_{n}(s)=0\)
29: Exit inner loop
30:endif
31:endfor
32:endfor
33:for \(s=3:Ord\) do \(\triangleright\) Part 2
34:for \(n=2:s-1\) do
35:\(\hat{\mathcal{R}}_{s}(n)\leftarrow(-1)^{s-n}\hat{\mathcal{R}}_{n}(s)\)
36:endfor
37:endfor
```
**Algorithm 2** Computation of the DRP coefficients using the proposed algorithm for the special case \(a=\alpha=\beta=0\).
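For the special case, Algorithm 2 translates almost line by line into NumPy. The following sketch is our own transcription (the function name and the array layout `R[n, s]` are our choices); it uses the corrected second-row coefficient \(-(N^{2}-2s^{2}-2s-1)\) from line 10.

```python
import numpy as np

def racah_special(N, Ord=None, theta=1e-5):
    """Algorithm 2: DRP coefficients for the special case a = alpha = beta = 0."""
    Ord = N - 1 if Ord is None else Ord
    R = np.zeros((N, N))                                   # R[n, s]
    R[0, N-1] = np.sqrt(2*N - 1) / N                       # line 2
    for s in range(N - 2, -1, -1):                         # first row, lines 3-5
        R[0, s] = np.sqrt((2*s + 1) / (2*s + 3)) * R[0, s+1]
    for s in range(N):                                     # second row, lines 9-11
        R[1, s] = -(N**2 - 2*s**2 - 2*s - 1) * np.sqrt(3) / (N**2 - 1) * R[0, s]
    for n in range(1, Ord):                                # last column, lines 15-17
        R[n+1, N-1] = (N-n-1) * np.sqrt(2*n + 3) / ((N+n+1) * np.sqrt(2*n + 1)) * R[n, N-1]
    ind = int(np.argmax(R[:, N-1]))                        # border of Parts 1 and 3
    for n in range(2, Ord + 1):                            # Parts 1 and 3, Eq. (68)
        th20 = -((n-1) * (N-n+1) * (N+n-1)) / (n * (N+n)) * np.sqrt((2*n+1) / (2*n-3))
        for s in range(n, N):
            th10 = (2*s*(s+1) + n*(n-1) - N**2 + 1) * np.sqrt(4*n**2 - 1) / (n * (N-n) * (N+n))
            R[n, s] = th10 * R[n-1, s] + th20 * R[n-2, s]
            if n >= ind and abs(R[n, s]) < theta and abs(R[n, s]) > abs(R[n-1, s]):
                R[n, s] = 0.0                              # stabilizing condition (58)
                break                                      # "Exit inner loop"
    for s in range(1, Ord + 1):                            # symmetry (67) fills Part 2
        for n in range(s):
            R[s, n] = (-1)**(s - n) * R[n, s]
    return R
```

A quick sanity check is that `racah_special(N) @ racah_special(N).T` should be close to the identity up to the orthogonality error (71).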
\begin{table} \begin{tabular}{|l|c c c|c|} \hline & \(a=0\) & \(a=\lceil N/10000+0.5\rceil\) & \(a=\lfloor N/4+0.5\rfloor\) & \(a=\lfloor N/2+0.5\rfloor\) \\ & \(\alpha=0\) & \(\alpha=N/10000\) & \(\alpha=\lfloor N/8+0.5\rfloor\) & \(\alpha=\lfloor N/2+0.5\rfloor\) \\ & \(\beta=0\) & \(\beta=N/10000\) & \(\beta=\lfloor N/16+0.5\rfloor\) & \(\beta=\lfloor N/4+0.5\rfloor\) \\ \hline Zhu \(n\) & 23 & 25 & 37 & 32 \\ Zhu \(s\) & 21 & 26 & 35 & 32 \\ Daoui & 1165 & 4 & 65 & 53 \\ GSOP & \(9649^{\dagger}\) & \(9834^{\dagger}\) & 1075 & 504 \\ ImSt & \(56000^{\ddagger}\) & 25580 & 6770 & 4659 \\ \hline \end{tabular} \end{table}

Table 1: Maximum sizes \(N\) of the Racah polynomials reachable by various algorithms. Usually, the limit is the algorithm precision, i.e. the orthogonality error \(E\leq 10^{-3}\); \(\dagger\) the limit is a computing time \(\leq\) 1 hour; \(\ddagger\) the limit is the computer memory of 32 GB.

In the first column, when \(a=0\), \(\alpha=0\) and \(\beta=0\), our Algorithm 2 is used; in the other cases, it is our Algorithm 1. The limit \(N=56000\) is not a limit of our algorithm; it is the memory limit of our computer. We are not able to check the orthogonality error because of the "Out of memory" error. Another problem is the long computation of GSOP. In the case \(a=\alpha=\beta=0\) and \(N=56000\), the orthogonality error \(E\) was also under the threshold 0.001, but the computation of GSOP took 15 days. We cannot test the precise maximum size when the computing times are so long. That is why we added another criterion: the result must be available in less than one hour. The sizes for GSOP in the first two columns are limited by this condition.

### Computing Time

We also tested the computing times. There is one problem: the maximum sizes of the Daoui and particularly the Zhu algorithms are so low that a sufficient analysis of computing times is not possible. Finally, we tested these algorithms even if the orthogonality error was higher than our threshold. We chose these values of the parameters: \(n=N-1\), \(a=\max(N/4,1)\), \(b=a+N\), \(\alpha=N/8\), and \(\beta=N/16\). We repeated each computation ten times and took the average time. The results are in Figure 5. The fastest algorithm is Zhu's recurrence over the degree; our algorithm ImSt is based on a similar principle and is only a little bit slower. Daoui's algorithm is a little bit slower than ours, and Zhu's recurrence over the coordinate is significantly slower, but it still has computing complexity \(\mathcal{O}(N^{2})\), only with a worse constant. The computing complexity \(\mathcal{O}(N^{3})\) of GSOP is clearly visible in the graph; at the beginning it is fast, but it cannot be used for high \(N\).

### Restriction Error Analysis

The distribution of moments differs between discrete transforms [25]. To correctly reconstruct the signal information, the sequence of moments is important and should be recognized. Therefore, the moment energy distribution of the DRP is examined first; then the signal reconstruction analysis is performed. To acquire the distribution of moments, the procedure presented by Jian [26] is followed. The procedure is given in Algorithm 3.

Input: \(\rho\) = covariance coefficient. Output: Order of DRP.
```
1: Generate the covariance matrix \(\Sigma\) with zero mean and length \(N\):
\[\Sigma=\begin{bmatrix}1&\rho&\cdots&\rho^{N-1}\\ \rho&1&&\vdots\\ \vdots&&\ddots&\rho\\ \rho^{N-1}&\cdots&\rho&1\end{bmatrix}\] (72)
2: Transform the covariance matrix \(\Sigma\) into the domain of the discrete Racah moments (\(M\)) using
\[M=R\times\Sigma\times R^{T}\] (73)
3: Find the diagonal coefficients \(\sigma_{\ell}^{2}=M_{\ell\ell}\) of the discrete Racah moments \(M\).
4: Find the order of moments according to the values of the diagonal coefficients.
5: return result
```
**Algorithm 3** Find the moment order of DRP.

The covariance matrix \(\Sigma\) is used instead of an image. Then the matrix multiplication \(R\times\Sigma\times R^{T}\) can be used for the moment computation, where \(R\) is the matrix of Racah polynomials, \(R_{n,s-a}=\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\). For the covariance coefficients, three values are used, \(\rho=0.90\), \(\rho=0.95\), and \(\rho=0.98\), with length \(N=16\); the results are reported in Table 2. From Table 2, it can be observed that the maximum value of the DRP is found at \(\ell=0\) and the values are ordered in descending order. This indicates that the DRP moment order used for signal reconstruction is \(n=0,1,\ldots,N-1\). The energy compaction property of the discrete transformation based on orthogonal polynomials is considered one of the important properties. It is the fraction of the number of coefficients that reflect most of the signal energy to the total number of coefficients. This characteristic is used to assess a DRP's ability to reconstruct a significant portion of the signal information from a very small number of moment coefficients. To examine the impact of the DRP parameters \(a\), \(\alpha\), and \(\beta\) on the energy compaction, the restriction error, \(\mathcal{J}\), is used as follows [26]

\[\mathcal{J}_{m}=\frac{\sum\limits_{k=m}^{N-1}\sigma_{k}^{2}}{\sum\limits_{k=0}^{N-1}\sigma_{k}^{2}};\;\;m=0,1,2,\ldots,N-1, \tag{74}\]

where \(\sigma_{k}^{2}\) represents the diagonal values of the transform coefficients ordered in descending order. In our case, the coefficients are already ordered, i.e. \(k=\ell\). Figure 6 shows the restriction error using the covariance coefficient \(\rho=0.95\) with DRP parameters of \(a=\alpha\) and \(\beta=0\). From Figure 6, the DRP parameters affect the restriction error, which reveals that DRPs with parameters \(a=\alpha=30\) and \(\beta=0\) show better energy compaction than other parameter values in the range of \(m<96\). However, the parameters \(a=\alpha=50\) and \(\beta=0\) present better energy compaction compared to other DRP parameters in the range \(m>96\). Figure 7 shows the restriction error of the DRP with parameters of \(a\), \(\alpha=\{0,a/2\}\) and \(\beta=\{0,a/2\}\). It can be observed from Figure 7 that DRPs with parameters \(a=50\), \(\alpha=25\) and \(\beta=0\) show better energy compaction than other parameter values in the range of \(m<96\). However, the parameters \(a=100\), \(\alpha=50\) and \(\beta=0\) present better energy compaction compared to other DRP parameters in the range \(m>96\). On the other hand, Figure 8 shows the restriction error for the DRP with parameters of \(a\), \(\alpha=a\), and \(\beta=\{0,a/2,a\}\). The best energy compaction with these DRP parameters is obtained for \(a=50\), \(\alpha=50\), and \(\beta=0\) over the entire range of retained samples \(m\).
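A compact NumPy version of Algorithm 3 combined with Eq. (74) could read as follows; the function name is our choice, and `R` is assumed to be the matrix of DRP values with \(R_{n,s-a}=\hat{\boldsymbol{\mathcal{R}}}_{n}(s)\), as defined above.

```python
import numpy as np

def restriction_error(R, rho=0.95):
    """Moment energy distribution (Algorithm 3) and restriction error, Eq. (74)."""
    N = R.shape[0]
    # Eq. (72): covariance matrix with entries rho**|i-j|
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    M = R @ Sigma @ R.T                        # Eq. (73)
    # diagonal coefficients; the paper observes they are already descending,
    # but we sort for safety
    sigma2 = np.sort(np.diag(M))[::-1]
    total = np.sum(sigma2)
    return np.array([np.sum(sigma2[m:]) / total for m in range(N)])
```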
The normalized mean squared error (NMSE) is used as the reconstruction error; it is expressed as:

\[NMSE(I,I_{r})=\frac{\sum\limits_{x,y}\left[I(x,y)-I_{r}(x,y)\right]^{2}}{\sum\limits_{x,y}I(x,y)^{2}}, \tag{75}\]

where \(I\) and \(I_{r}\) represent the original image and the reconstructed image, respectively. From the mean squared error, the peak signal-to-noise ratio (PSNR) is obtained as

\[PSNR(I,I_{r})= \tag{76}\]
\[=10\left(\log\left(\max\left(I(x,y)^{2}\right)\right)-\log\left(\frac{1}{N_{1}N_{2}}\sum\limits_{x,y}\left[I(x,y)-I_{r}(x,y)\right]^{2}\right)\right),\]

where \(N_{1}\times N_{2}\) is the size of the image, \(512\times 512\). First, the reconstruction error analysis is carried out for \(\alpha=a\) and \(\beta=0\). The order of moments used to reconstruct the image is varied in the set \(1,32,64,\ldots,512\). The obtained results are depicted in Figure 10. The obtained results show that at moment order 64, the best NMSE occurs at DRP parameters of \(a=10,\alpha=a,\beta=0\) with an NMSE of 0.0328. The next three best NMSE values are 0.0339, 0.0371, and 0.0463 for DRP parameters \(a=30,\alpha=a,\beta=0\), \(a=0,\alpha=a,\beta=0\), and \(a=50,\alpha=a,\beta=0\), respectively. However, for moment order 128, the NMSE values are 0.0161, 0.0163, 0.01635, and 0.0165 for DRP parameters \(a=80,\alpha=a,\beta=0\), \(a=50,\alpha=a,\beta=0\), \(a=100,\alpha=a,\beta=0\), and \(a=30,\alpha=a,\beta=0\), respectively. Moreover, for moment order 256, the best NMSE occurs at DRP parameters \(a=10,\alpha=a,\beta=0\) with an NMSE of 0.0052. For better inspection, the reconstruction error between the original and the reconstructed image is acquired and the PSNR is reported for different values of the DRP parameters, as shown in Figure 11. Second, the DRP parameter values in the range (\(a\) and \(\alpha=\{0,a/2\}\) with \(\beta=\{0,a/2\}\)) are used to perform the reconstruction error analysis. The same moment orders as in the first experiment are used in this experiment. Figure 12 shows the obtained NMSE results of the second experiment. From Figure 12, the results demonstrate that at moment order 64, the best NMSE is 0.0333 for DRP parameters of \(a=50,\alpha=25,\beta=0\). The second best NMSE appears at \(a=50,\alpha=0,\beta=0\) with an NMSE of 0.0344, while the third best NMSE occurs at \(a=100,\alpha=0,\beta=0\) with an NMSE of 0.0347. For moment order 128, the best NMSE is 0.0166 for DRP parameters \(a=100,\alpha=50,\beta=0\). In addition, the best NMSE, for moment order 256, occurs at DRP parameters \(a=50,\alpha=25,\beta=0\) with an NMSE of 0.00547. For the sake of clarity, the visual reconstruction error between the original and the reconstructed image is acquired and the PSNR is reported for different values of the DRP parameters, as shown in Figure 13. Finally, the DRP parameter values in the range (\(a\) and \(\alpha=a\) with \(\beta=\{0,a/2,a\}\)) are used to carry out the reconstruction error analysis. Figure 14 shows the reported NMSE results for this experiment. From Figure 14, the results demonstrate that at moment order 64, the best NMSE is 0.0463 for DRP parameters of \(a=50,\alpha=50,\beta=0\). The second best NMSE appears at \(a=100,\alpha=100,\beta=0\) with an NMSE of 0.104, while the third best NMSE occurs at \(a=50,\alpha=50,\beta=25\) with an NMSE of 0.161. For moment order 128, the best NMSE is 0.0163 for DRP parameters \(a=50,\alpha=50,\beta=0\) and \(a=100,\alpha=100,\beta=0\). In addition, the best NMSE, for moment order 256, occurs at DRP parameters \(a=200,\alpha=200,\beta=100\) with an NMSE of 0.00465.
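Both error measures used above are one-liners in NumPy; this sketch assumes base-10 logarithms in Eq. (76), which yields the usual decibel scale (function names are ours).

```python
import numpy as np

def nmse(I, Ir):
    """Normalized mean squared error, Eq. (75)."""
    return np.sum((I - Ir)**2) / np.sum(I**2)

def psnr(I, Ir):
    """Peak signal-to-noise ratio in dB, Eq. (76), assuming log base 10."""
    mse = np.mean((I - Ir)**2)
    return 10 * (np.log10(np.max(I**2)) - np.log10(mse))
```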
The visual reconstruction error between the original and the reconstructed image is acquired and the PSNR is reported for different values of the DRP parameters, as shown in Figure 15.

## 5 Conclusion

This paper proposed a new algorithm for computing the coefficient values of DRPs. We use the logarithmic gamma function for the computation of the initial values. The utilization of the logarithmic gamma function empowers the computation of the initial value for a wide range of DRP parameter values as well as for large polynomial sizes. In addition, a new formula is used to compute the values of the initial sets based on the initial value. The rest of the DRP coefficients are computed by partitioning the DRP plane into four parts. To compute the values in the four parts, the recurrence relations in the \(x\) and \(n\) directions are combined. To remove the propagated error, a stabilizing condition is enforced. The performance of the proposed algorithm is tested against different values of the DRP parameters. In addition, the proposed algorithm is compared with existing algorithms. These experiments show that the proposed algorithm reduces the computational cost compared to the existing algorithms. Moreover, the proposed algorithm is able to generate DRPs of large sizes without propagation error. Finally, restriction error and reconstruction error analyses are performed to show the influence of the used parameter values.

## Acknowledgments

This work has been supported by the Czech Science Foundation (Grant No. GA21-03921S) and by the _Praemium Academiae_. We would also like to acknowledge the University of Baghdad for general and financial support.

## Declarations

We declare we have no conflict of interest.
2309.14849
Numerical study of fractional Camassa-Holm equations
A numerical study of fractional Camassa-Holm equations is presented. Smooth solitary waves are constructed numerically. Their stability is studied as well as the long time behavior of solutions for general localised initial data from the Schwartz class of rapidly decreasing functions. The appearance of dispersive shock waves is explored.
Christian Klein, Goksu Oruc
2023-09-26T11:26:36Z
http://arxiv.org/abs/2309.14849v1
# Numerical study of fractional Camassa-Holm equations

###### Abstract.

A numerical study of fractional Camassa-Holm equations is presented. Smooth solitary waves are constructed numerically. Their stability is studied as well as the long time behavior of solutions for general localised initial data from the Schwartz class of rapidly decreasing functions. The appearance of dispersive shock waves is explored.

This work was partially supported by the ANR-17-EURE-0002 EIPHI, the Bourgogne Franche-Comte Region, the European fund FEDER, and by the European Union Horizon 2020 research and innovation program under the Marie Sklodowska-Curie RISE 2017 grant agreement no. 778010 IPaDEGAN.

## 1. Introduction

This paper is concerned with the numerical study of solutions to the fractional Camassa-Holm (CH) equation given by

\[u_{t}+\kappa_{1}u_{x}+3uu_{x}+D^{\alpha}u_{t}=-\kappa_{2}[2D^{\alpha}(uu_{x})+uD^{\alpha}u_{x}], \tag{1}\]

where \(\kappa_{1}\), \(\kappa_{2}\) are real constants. The fractional derivative \(D^{\alpha}\) (also called the fractional Laplacian) is defined via its Fourier symbol

\[\mathcal{F}D^{\alpha}=|k|^{\alpha}, \tag{2}\]

where \(\mathcal{F}\) denotes the Fourier transform and where \(k\) is the dual Fourier variable, see (6).

### Background

In the case \(\alpha=2\), \(\kappa_{1}=2\omega\) and \(\kappa_{2}=\frac{1}{3}\), the equation (1) turns into the Camassa-Holm (CH) equation

\[u_{t}+2\omega u_{x}+3uu_{x}-u_{xxt}=2u_{x}u_{xx}+uu_{xxx}. \tag{3}\]

The CH equation was first introduced in [21] in a formal study of a class of integrable equations. In [4] the CH equation was presented to model unidirectional propagation of small-amplitude shallow water waves above a flat bottom. It has also been derived as a geodesic flow on the circle in [10, 11, 32] and recently in the context of nonlinear dispersive elastic waves in [16]. The CH equation is completely integrable and has an infinite number of local conserved quantities [20], three of which are given in the following form:

\[H_{0}=\int_{\mathbb{R}}udx,\ \ \ \ H_{1}=\frac{1}{2}\int_{\mathbb{R}}(u^{2}+u_{x}^{2})dx,\ \ \ \ H_{2}=\frac{1}{2}\int_{\mathbb{R}}(u^{3}+uu_{x}^{2}+2\omega u^{2})dx.\]

The CH equation has smooth solitary wave solutions and peaked solitons (_peakons_) in the cases \(\omega>0\) and \(\omega=0\), respectively. The existence of peaked solitary waves has been established in [2]. A classification of travelling wave solutions that also contains cusped solitons (_cuspons_) has been proposed in [35]. The orbital stability of the travelling waves has been obtained for smooth solitons in [13], for peakons in [12] and for periodic peakons in [33, 34]. Wave breaking phenomena, which cause solutions to remain bounded whereas their slopes blow up in finite time, have been investigated in [9]. Many numerical approaches for the CH equation have been developed, such as finite difference methods, finite-volume methods, pseudo-spectral methods, and local discontinuous Galerkin methods, see for instance [25, 7, 8, 3, 27, 40, 19, 5, 23, 1]. Even though the CH equation has been studied extensively, there are only a few results on the fractional CH equation, which was obtained in [16] via a multi-scales expansion for a fractional Boussinesq equation appearing in elasticity.
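Since \(D^{\alpha}\) acts diagonally in Fourier space, the fractional derivative (2) can be approximated on a sufficiently large torus with the FFT. The following is a minimal NumPy sketch of this periodic approximation; the function name and the grid convention are our choices, not taken from the paper.

```python
import numpy as np

def fractional_derivative(u, alpha, L):
    """D^alpha u on a torus x in L*[-pi, pi] via the Fourier symbol |k|^alpha, Eq. (2).

    u : samples of u on an equidistant grid of N points, L : torus scale."""
    N = len(u)
    h = 2 * np.pi * L / N                               # grid spacing
    k = np.fft.fftfreq(N, d=h) * 2 * np.pi              # dual variable, k = (...)/L
    return np.real(np.fft.ifft(np.abs(k)**alpha * np.fft.fft(u)))
```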
In [15] the local well-posedness of the Cauchy problem for the following form of the fractional CH equation

\[u_{t}+u_{x}+\frac{1}{2}(u^{2})_{x}+\frac{3}{4}D^{\alpha}u_{x}+\frac{5}{4}D^{\alpha}u_{t}=-\frac{1}{4}[2D^{\alpha}(uu_{x})+uD^{\alpha}u_{x}] \tag{4}\]

has been proven for initial data \(u_{0}\in H^{s}(\mathbb{R})\), \(s>\frac{5}{2}\), when \(\alpha>2\) via Kato's semigroup approach for quasilinear evolution equations. In [18] the local well-posedness criteria for the same Cauchy problem have been refined in an appropriate Besov space as \(B^{s_{0}}_{2,1}\) with \(s_{0}=2\alpha-\frac{1}{2}\) for \(\alpha>3\) and \(s_{0}=\frac{5}{2}\) for \(2<\alpha\leq 3\). In [17] the local well-posedness results given in [18] have recently been extended to the Cauchy problem for the generalized fractional CH equation

\[u_{t}+u_{x}+\frac{1}{2}(u^{p+1})_{x}+\frac{3}{4}D^{\alpha}u_{x}+\frac{5}{4}D^{\alpha}u_{t}=-\frac{p+1}{8}[2D^{\alpha}(u^{p}u_{x})+u^{p}D^{\alpha}u_{x}] \tag{5}\]

with \(\alpha>2\) and \(p\in\mathbb{N}^{+}\). A blow-up criterion for the solutions has been obtained. We note here that in the standard case (\(\alpha=2\)) the equation (4) is related to the well-known integrable shallow water equation derived in [14]. To the best of our knowledge there is no study related to the fractional CH equation for \(\alpha\leq 2\), and the current paper is the first numerical study of the fractional CH equation in the literature.

### Main results

In this paper we always consider positive values of the constant \(\kappa_{1}=2\omega\). For the CH equation it is known that the solitary waves with velocity \(c>2\omega\) are always smooth, see [26]. Our numerical study indicates that there might not be smooth solitary waves for positive \(\omega\) and all values of \(\alpha\). If \(\alpha\) and thus the dispersion is too small, there might be no such solitary waves. Thus it could be that for \(\omega>0\), there exists a minimal value of \(\alpha\) depending on \(\omega\) and \(c\), \(\alpha_{s}(\omega,c)>0\), such that smooth solitary waves exist only for \(\alpha>\alpha_{s}(\omega,c)\). It also appears that for given \(\alpha\) and \(\omega>0\), there exists a minimal velocity \(c_{s}(\omega,\alpha)\) such that there exist smooth solitary waves for \(c>c_{s}\). Since we construct the solitary waves numerically with a Newton iteration, the failure of the iteration only gives an indication that there are no smooth solutions for certain parameters \(\alpha\), \(c\), \(\omega\). However, this could also mean that no appropriate initial iterates are known. By studying perturbations of the numerically constructed solitary waves, we get

**Main conjecture I:** The smooth solitary waves are orbitally stable.

For the CH equation, it was shown in [37] that solutions for smooth initial data \(u_{0}\) subject to the condition \(u_{0}-\partial_{xx}u_{0}+\omega>0\) will stay smooth for all times. The precise nature of the singularity that can appear in finite time for initial data not satisfying this non-breaking condition does not appear to be known. Our numerical experiments indicate that for sufficiently small \(\alpha\), initial data satisfying a condition of the form \(u_{0}+D^{\alpha}u_{0}+\omega>0\) can develop a cusp in finite time. We have

**Main conjecture II:** For sufficiently small \(\alpha<\alpha_{c}(\omega)\), initial data of sufficiently large mass lead to a blow-up of the fCH solution in finite time. The blow-up near a point \(x_{s}\) is a cusp of the form \(u\propto\sqrt{|x-x_{s}|}\).
The precise value of \(\alpha_{c}(\omega)\) and the conditions on the initial data are not known. This paper is organized as follows: in Section 2 basic facts on the standard CH and the fractional CH equation are gathered and some useful notation is reviewed. In Section 3 the numerical approach is introduced for solitary waves of the fractional CH equation. In Section 4 we investigate numerically the stability properties of solitary waves. The long time behavior of solutions to the fractional CH equation is studied in the case of initial data from the Schwartz class in Section 5. In Section 6, we study the appearance of rapid modulated oscillations, _dispersive shock waves_, in the vicinity of shocks to the corresponding dispersionless equation. We add some concluding remarks in Section 7.

## 2. Preliminaries

In this section we collect some basic facts on the standard CH and the fractional CH equation. We apply the standard definition of the Fourier transform for tempered distributions \(u(x)\), denoted by \((\mathcal{F}u)(k)\) with dual variable \(k\), and its inverse,

\[(\mathcal{F}u)(k)=\hat{u}=\int_{\mathbb{R}}u(x)e^{-ikx}dx,\quad k\in\mathbb{R}, \tag{6}\]
\[u(x)=\frac{1}{2\pi}\int_{\mathbb{R}}(\mathcal{F}u)(k)e^{ikx}dk,\quad x\in\mathbb{R}.\]

### Conserved Quantities

We give the derivation of the conserved quantities for the equation (1). To this end we consider sufficiently smooth solutions which tend to \(0\) as \(x\to\pm\infty\). Integrating equation (1) over the real line, we get

\[\frac{d}{dt}\int_{\mathbb{R}}\left(I+D^{\alpha}\right)udx+\int_{\mathbb{R}}\left(\kappa_{1}u+\frac{3}{2}u^{2}+\kappa_{2}D^{\alpha}u^{2}\right)_{x}dx+\kappa_{2}\int_{\mathbb{R}}uD^{\alpha}u_{x}dx=0. \tag{7}\]

By using Plancherel's theorem, we can rewrite the last integral in equation (7) as

\[\int_{\mathbb{R}}u(x,t)D^{\alpha}u_{x}(x,t)dx = \int_{\mathbb{R}}\hat{u}(k,t)|k|^{\alpha}\overline{ik\hat{u}(k,t)}\frac{dk}{2\pi},\]
\[= \int_{\mathbb{R}}|k|^{\frac{\alpha}{2}}\hat{u}\,\overline{ik|k|^{\frac{\alpha}{2}}\hat{u}}\frac{dk}{2\pi},\]
\[= \int_{\mathbb{R}}D^{\frac{\alpha}{2}}uD^{\frac{\alpha}{2}}u_{x}dx,\]
\[= \frac{1}{2}\int_{\mathbb{R}}\left(|D^{\frac{\alpha}{2}}u(x,t)|^{2}\right)_{x}dx.\]

This implies for equation (7)

\[\frac{d}{dt}\int_{\mathbb{R}}\left(I+D^{\alpha}\right)udx+\int_{\mathbb{R}}\left(\kappa_{1}u+\frac{3}{2}u^{2}+\kappa_{2}D^{\alpha}u^{2}+\frac{\kappa_{2}}{2}|D^{\frac{\alpha}{2}}u|^{2}\right)_{x}dx=0, \tag{8}\]

which gives the conserved mass of the fCH equation in the following form

\[I_{1}=\int_{\mathbb{R}}\left(u(x,t)+D^{\alpha}u(x,t)\right)dx. \tag{9}\]

For the second conserved quantity, we first multiply both sides of the fCH equation by \(u\) and integrate over \(\mathbb{R}\). Then we have

\[\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}}\left(u^{2}+|D^{\frac{\alpha}{2}}u|^{2}\right)dx+\int_{\mathbb{R}}\left(\frac{\kappa_{1}}{2}u^{2}+u^{3}+\kappa_{2}u^{2}D^{\alpha}u\right)_{x}dx=0.\]

Here we have used

\[\int_{\mathbb{R}}u(x,t)D^{\alpha}(u^{2}(x,t))_{x}dx=\int_{\mathbb{R}}D^{\alpha}u(x,t)(u^{2}(x,t))_{x}dx.\]

Finally, the following identity is obtained as the formal conserved energy of the fCH equation

\[I_{2}=\int_{\mathbb{R}}\left(u^{2}(x,t)+|D^{\frac{\alpha}{2}}u(x,t)|^{2}\right)dx. \tag{10}\]

### Solitary Waves of the CH equation

Solitary waves are localised traveling waves, i.e., solutions of the form \(u(x,t)=Q_{c}(\xi),\ \xi=x-ct\), with constant propagation speed \(c\) and fall-off condition \(\lim\limits_{|\xi|\to\infty}Q_{c}(\xi)=0\). This ansatz leads to equation (14) for the fCH equation.
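In the numerical experiments below, \(I_{1}\) and \(I_{2}\) serve as accuracy indicators. A short sketch of their discrete evaluation via Parseval's relation, under the periodic discretisation used later, could read as follows (the function name is ours):

```python
import numpy as np

def conserved_quantities(u, alpha, L):
    """Numerical mass I1, Eq. (9), and energy I2, Eq. (10), on the torus."""
    N = len(u)
    h = 2 * np.pi * L / N                     # grid spacing
    k = np.fft.fftfreq(N, d=h) * 2 * np.pi    # dual variable
    v = np.fft.fft(u)
    I1 = h * np.sum(u + np.real(np.fft.ifft(np.abs(k)**alpha * v)))
    # int |D^{alpha/2} u|^2 dx via Parseval: (h/N) * sum |k|^alpha |v_k|^2
    I2 = h * np.sum(u**2) + (h / N) * np.sum(np.abs(k)**alpha * np.abs(v)**2)
    return I1, I2
```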
For the integrable CH equation (\(\alpha=2\), \(\kappa_{1}=2\omega\), \(\kappa_{2}=1/3\)), equation (14) can be integrated explicitly leading to

\[(-c(1-\partial_{xx})+2\omega)Q_{c}+\frac{3}{2}Q_{c}^{2}=Q_{c}Q_{c}^{\prime\prime}+\frac{1}{2}(Q_{c}^{\prime})^{2}. \tag{11}\]

As discussed in [26], the soliton is given implicitly by

\[Q_{c}=\frac{(c-2\omega){\rm sech}^{2}(\theta)}{{\rm sech}^{2}(\theta)+2(\omega/c)\tanh^{2}(\theta)} \tag{12}\]

where

\[x-ct=\sqrt{\frac{4c}{c-2\omega}}\theta+\ln\frac{\cosh(\theta-\theta_{0})}{\cosh(\theta+\theta_{0})} \tag{13}\]

with \(\theta_{0}={\rm arctanh}(\sqrt{1-(2\omega/c)})\). There is a smooth soliton for \(\omega>0\) and \(c>2\omega\). These solitons are orbitally stable, see [26].

## 3. Solitary Waves

In this section we numerically construct solitary waves of the fractional CH equation.

### Defining equations

With the traveling wave ansatz \(u(x,t)=Q_{c}(\xi),\ \xi=x-ct\), equation (1) reduces to the following equation

\[-cQ_{c}^{\prime}+\kappa_{1}Q_{c}^{\prime}+\frac{3}{2}(Q_{c}^{2})^{\prime}-cD^{\alpha}Q_{c}^{\prime}+\kappa_{2}[D^{\alpha}(Q_{c}^{2})^{\prime}+Q_{c}D^{\alpha}Q_{c}^{\prime}]=0. \tag{14}\]

Here \({}^{\prime}\) denotes the derivative with respect to \(\xi\). Integrating, we get with the fall-off condition at infinity

\[(-c(1+D^{\alpha})+\kappa_{1})Q_{c}+\frac{3}{2}Q_{c}^{2}+\kappa_{2}(D^{\alpha}Q_{c}^{2}+\partial_{\xi}^{-1}(Q_{c}D^{\alpha}Q_{c}^{\prime}))=0. \tag{15}\]

Here the antiderivative is defined via its Fourier symbol, \(\mathcal{F}\partial_{x}^{-1}=1/(ik)\), i.e., \(\partial_{x}^{-1}=\frac{1}{2}(\int_{-\infty}^{x}-\int_{x}^{\infty})\).

### Numerical approach

To construct solitary waves numerically, we apply as in [31] a Fourier spectral method. We study equation (15) in the Fourier domain

\[(-c(1+|k|^{\alpha})+\kappa_{1})\mathcal{F}Q_{c}+\frac{3}{2}\mathcal{F}(Q_{c}^{2})+\kappa_{2}(|k|^{\alpha}\mathcal{F}(Q_{c}^{2})+\frac{1}{ik}\mathcal{F}(Q_{c}D^{\alpha}Q_{c}^{\prime}))=0. \tag{16}\]

The Fourier transform is approximated on a sufficiently large torus (\(x\in L[-\pi,\pi]\) with \(L\gg 1\)) via a discrete Fourier transform (DFT) which is conveniently computed by a fast Fourier transform (FFT). This means we introduce the standard discretisation \(x_{n}=-\pi L+nh\), \(n=1,\ldots,N\), \(h=2\pi L/N\), with \(N\in\mathbb{N}\) the number of Fourier modes. The dual variable is then given by \(k=(-N/2+1,\ldots,N/2)/L\). In an abuse of notation, we will use the same symbols for the DFT as for the standard Fourier transform. With the DFT discretisation, equation (16) becomes a system of \(N\) nonlinear equations for \(\mathcal{F}Q_{c}\) which is solved with a Newton-Krylov method. This means that the action of the inverse of the Jacobian in a standard Newton method is computed iteratively with the Krylov subspace method GMRES [38]. Since the convergence of a Newton method is local, the choice of the initial iterate is important. Therefore we apply a tracing technique: the implicit solution for CH (\(\alpha=2\)) for some value of the velocity \(c\) is taken as the initial iterate for some smaller value of \(\alpha\). The result of this iteration is taken as an initial iterate for an even smaller value of \(\alpha\). Numerically challenging is the computation of the last term in (16) since there is a division by the dual variable \(k\) that can vanish.
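To make the discretisation concrete, here is a sketch of the residual of (16) on the torus; all names are our choices, and the treatment of the \(k=0\) mode of the antiderivative is only a placeholder here, with the proper limit discussed next.

```python
import numpy as np

def residual(v, c, alpha, kappa1, kappa2, k):
    """Residual of Eq. (16) for the discrete Fourier coefficients v = F(Q_c)."""
    absk = np.abs(k)**alpha
    Q = np.real(np.fft.ifft(v))               # Q_c at the collocation points
    Q2 = np.fft.fft(Q**2)                     # F(Q_c^2)
    DQx = np.real(np.fft.ifft(absk * 1j*k * v))   # D^alpha Q_c'
    ik = np.where(k == 0, 1.0, 1j*k)          # k = 0 mode treated separately, see below
    last = np.fft.fft(Q * DQx) / ik
    return (-c*(1 + absk) + kappa1)*v + 1.5*Q2 + kappa2*(absk*Q2 + last)
```

A Newton-Krylov solver (e.g. SciPy's `newton_krylov`) can in principle be applied to this map, starting from the implicit CH soliton as initial iterate.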
We apply the same approach as in [29]: the limiting value for \(k=0\) is computed via de l'Hospital's rule, \(\lim_{k\to 0}\hat{f}(k)/k=\hat{f}^{\prime}(0)\) if \(\hat{f}^{\prime}(k)\) is bounded for \(k=0\). The derivative for \(k=0\) is computed in the standard way via the sum of the inverse Fourier transform \(f\) of the function \(\hat{f}\) times \(x\) sampled at the collocation points in \(x\), see [29], i.e., \(\hat{f}^{\prime}(0)\approx\sum_{n=1}^{N}x_{n}f_{n}\).

### Examples

To study concrete examples, we work with \(L=100\) and \(N=2^{16}\) as in [31] for fractional Korteweg-de Vries (fKdV) equations. We consider the case \(\kappa_{2}=1/3\), \(\kappa_{1}=2\omega\), being integrable for \(\alpha=2\). We choose \(\omega=3/5\) as in [16]. We show solitary waves for \(c=2\) for several values of \(\alpha\) in Fig. 1. It can be seen that the smaller \(\alpha\) and thus the dispersion, the more the solitary wave is peaked and the slower the decay towards infinity. We were not able to numerically find a solution for even smaller values of \(\alpha\) since the iteration did not converge to a smooth solution. The behavior of the DFT coefficients in the iteration indicates that it converges to a peakon or cuspon. This does not mean that there are no smooth solitary waves for smaller values of \(\alpha\) for this velocity; we just could not find them with a Newton iteration. But this could indicate that there might be for a given \(c\) a lower limit \(\alpha_{s}(\omega,c)\) for \(\alpha\) below which there are no smooth solitons; we recall that there are smooth solitons for CH for all positive \(\omega\) with \(c>2\omega\). Note, however, that it is difficult to identify such a limit with an iterative method since the failure of the latter to converge does not mean that there is no such solution. It can be that the initial iterate was just not sufficiently close. The amplitude of the solitary waves increases with the velocity \(c\). We show this behavior on the left of Fig. 2 for \(\alpha=1.5\), \(\omega=0.6\) and several values of \(c\). For CH solitons one has \(c>2\omega\). There is presumably also a lower bound on the velocity for smooth solitons, but it is not clear whether it is larger than the \(c>2\omega\) limit for CH. Once more, such a value cannot be determined with an iterative approach. If one fixes \(c\) and \(\alpha\), there does not appear to be an obvious dependence between \(\omega\) and the amplitude as in the case of CH, as can be seen on the right of Fig. 2. Note that the precise fall-off of the solitary waves for \(|x|\to\infty\) appears to be unknown; it should be the same \(|x|^{-(1+\alpha)}\) rate as for solitary waves of the fractional Korteweg-de Vries and nonlinear Schrödinger equations, see [22]. But it is numerically difficult to determine the exact fall-off rate on the real line via an approximation on a torus.

## 4. Stability of the solitary waves

In this section we study the stability of the solitary waves under small perturbations. For various perturbations, the solitary waves appear to be stable, providing numerical evidence for the first part of the Main Conjecture.

### Numerical approach

To study the time evolution of solutions to the fractional CH equation (1), we use the same spatial discretisation as in the previous section, i.e., a standard FFT approach. The time integration is done with the well-known explicit Runge-Kutta method of fourth order.
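Since the symbol \(1+|k|^{\alpha}\) multiplying \(u_{t}\) in (1) can be inverted in Fourier space, the time stepping amounts to a standard method-of-lines scheme; a minimal sketch (function names ours) is:

```python
import numpy as np

def fch_rhs(u, k, alpha, kappa1, kappa2):
    """u_t from Eq. (1): (1 + D^alpha) u_t = -kappa1 u_x - 3 u u_x
       - kappa2 (2 D^alpha(u u_x) + u D^alpha u_x)."""
    absk = np.abs(k)**alpha
    v = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j*k*v))
    Duux = np.real(np.fft.ifft(absk * np.fft.fft(u*ux)))
    Dux = np.real(np.fft.ifft(absk * 1j*k*v))
    f = -(kappa1*ux + 3*u*ux + kappa2*(2*Duux + u*Dux))
    return np.real(np.fft.ifft(np.fft.fft(f) / (1 + absk)))

def rk4_step(u, dt, k, alpha, kappa1, kappa2):
    """One step of the classical explicit fourth-order Runge-Kutta method."""
    k1 = fch_rhs(u, k, alpha, kappa1, kappa2)
    k2 = fch_rhs(u + 0.5*dt*k1, k, alpha, kappa1, kappa2)
    k3 = fch_rhs(u + 0.5*dt*k2, k, alpha, kappa1, kappa2)
    k4 = fch_rhs(u + dt*k3, k, alpha, kappa1, kappa2)
    return u + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)
```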
The accuracy of the time integration is controlled via the conserved energy (10) which will numerically depend on time due to unavoidable numerical errors. As discussed in [28], this numerically computed energy will overestimate the numerical error by 2-3 orders of magnitude. Thus we will always track in the following the relative energy and the DFT coefficients to control the numerical resolution. As a test we propagate the solitary wave for \(c=2\), \(\alpha=1.5\), \(\omega=3/5\) and use \(N_{t}=10^{4}\) time steps for \(t\in[0,1]\). The relative energy is conserved during the whole computation to the order of \(10^{-15}\). The DFT coefficients decrease to machine precision (here \(10^{-16}\)) which means that the solution is well resolved both Figure 2. Solitary waves (15) for \(\kappa_{2}=1/3\) and \(\alpha=1.5\), on the left for \(\omega=0.6\) and several values of \(c\), on the right for \(c=2\) and several values of \(\omega\). in space and in time. The difference between the solitary wave travelling with velocity \(c=2\) and the numerically computed solution for \(t=1\) is of the order \(10^{-15}\). This shows that the code is able to propagate solitary waves with machine precision as indicated by the numerical conservation of the energy and the DFT coefficients. In addition it shows that the solitary waves numerically constructed in the previous section are indeed solutions to equation (15) with a similar accuracy. ### Perturbed solitary waves We consider perturbations of the solitary waves in the form of perturbed initial data, \[u(x,0)=Q_{c}+Ae^{-x^{2}}, \tag{17}\] where \(A\) is a small real constant. In all examples below, the DFT coefficients always decrease to machine precision, and the relative energy is conserved to better than \(10^{-6}\). First we consider the case \(\alpha=1.5\) for \(Q_{2}\) and \(A=\pm 0.08\). This corresponds to a perturbation of the order of \(10\%\). This is not a small perturbation, but in order to see numerical effects of a perturbation in finite time, it is convenient to consider perturbations that are of the order of a few percent. We use \(N_{t}=10^{4}\) time steps for \(t\leq 40\). In Fig. 3 we show on the left the solution for \(t=40\). The solution appears to be a solitary wave with some radiation propagating towards \(-\infty\). This is confirmed by the \(L^{\infty}\) norm of the solution on the right of the same figure. After some time the \(L^{\infty}\) norm appears to reach an asymptotic value corresponding to a slightly faster soliton. This is due to the fact that we considered a perturbation of almost \(10\%\). The final state is thus a solitary wave with larger mass, the solitary wave appears to be stable. We obtain a similar result if we consider a perturbation of the solitary wave with slightly smaller mass than the unperturbed solitary wave. In Fig. 4, we choose initial data of the form (17) with \(A=-0.08\) with the same parameters as in Fig. 3. The solution at the final time \(t=40\) appears to be again a solitary wave plus radiation. This is confirmed by the \(L^{\infty}\) norm on the right of the same figure. The final state of the solution appears to be a solitary wave with a slightly smaller mass and velocity than \(Q_{2}\). The solitary wave seems to be again stable even against comparatively large perturbations. It is an interesting question whether a similar behavior can be observed for smaller values of \(\alpha\), i.e., for a fractional CH equation with less dispersion. We Figure 3. 
### Perturbed solitary waves

We consider perturbations of the solitary waves in the form of perturbed initial data, \[u(x,0)=Q_{c}+Ae^{-x^{2}}, \tag{17}\] where \(A\) is a small real constant. In all examples below, the DFT coefficients always decrease to machine precision, and the relative energy is conserved to better than \(10^{-6}\).

First we consider the case \(\alpha=1.5\) for \(Q_{2}\) and \(A=\pm 0.08\). This corresponds to a perturbation of the order of \(10\%\). This is not a small perturbation, but in order to see the effects of a perturbation numerically in finite time, it is convenient to consider perturbations of the order of at least a few percent. We use \(N_{t}=10^{4}\) time steps for \(t\leq 40\). In Fig. 3 we show on the left the solution for \(t=40\). The solution appears to be a solitary wave with some radiation propagating towards \(-\infty\). This is confirmed by the \(L^{\infty}\) norm of the solution on the right of the same figure. After some time the \(L^{\infty}\) norm appears to reach an asymptotic value corresponding to a slightly faster soliton. This is due to the fact that we considered a perturbation of almost \(10\%\). The final state is thus a solitary wave with larger mass; the solitary wave appears to be stable.

Figure 3. Solution to the fractional CH equation for initial data of the form (17) with \(A=0.08\) for \(\alpha=1.5\) and \(c=2\): on the left the solution for \(t=40\), on the right the \(L^{\infty}\) norm of the solution in dependence of time.

We obtain a similar result if we consider a perturbation of the solitary wave with slightly smaller mass than the unperturbed solitary wave. In Fig. 4, we choose initial data of the form (17) with \(A=-0.08\) and the same parameters as in Fig. 3. The solution at the final time \(t=40\) appears to be again a solitary wave plus radiation. This is confirmed by the \(L^{\infty}\) norm on the right of the same figure. The final state of the solution appears to be a solitary wave with a slightly smaller mass and velocity than \(Q_{2}\). The solitary wave seems to be again stable, even against comparatively large perturbations.

Figure 4. Solution to the fractional CH equation for initial data of the form (17) with \(A=-0.08\) for \(\alpha=1.5\) and \(c=2\): on the left the solution for \(t=40\), on the right the \(L^{\infty}\) norm of the solution in dependence of time.

It is an interesting question whether a similar behavior can be observed for smaller values of \(\alpha\), i.e., for a fractional CH equation with less dispersion. We consider the case \(\alpha=0.9\), for which we could construct solitary waves with \(c=2\). Here we consider smaller perturbations of the order of \(1\%\) for larger times than before. We apply \(N_{t}=2\times 10^{4}\) time steps for \(t\leq 100\) to initial data of the form (17). In Fig. 5 we show on the left the \(L^{\infty}\) norm of the solution for \(A=0.01\) and on the right for \(A=-0.01\). In both cases the final state of the solution appears to be a solitary wave plus radiation. The small oscillations in the \(L^{\infty}\) norm are due to the fact that we are working on a torus, where the radiation can reenter the computational domain on the other side, and that we determine the maximum of the solution on grid points.

Figure 5. \(L^{\infty}\) norm of the solution to the fractional CH equation for initial data of the form (17) for \(\alpha=0.9\) and \(c=2\): on the left for \(A=0.01\), on the right for \(A=-0.01\).

The picture is very similar for different types of perturbations. We consider the solution for initial data of the form \(u(x,0)=\lambda Q_{2}(x)\) for \(\alpha=0.9\) and real \(\lambda\) close to \(1\). In Fig. 6, the \(L^{\infty}\) norms of the solution in dependence of time are shown, on the left for \(\lambda=0.99\), on the right for \(\lambda=1.01\). In both cases the final state appears to be a solitary wave of slightly different mass.

Figure 6. \(L^{\infty}\) norm of the solution to the fractional CH equation for initial data of the form \(u(x,0)=\lambda Q_{2}(x)\) for \(\alpha=0.9\) and \(c=2\): on the left for \(\lambda=0.99\), on the right for \(\lambda=1.01\).

## 5. Localized initial data

In this section we study the long time behavior of initial data from the Schwartz class of smooth rapidly decreasing functions. We are interested in whether solitary waves appear in the solution asymptotically, as expected from the _soliton resolution conjecture_, or whether there is a _blow-up_, i.e., a loss of regularity of the solution in finite time. Concretely we will study Gaussian initial data, \[u(x,0)=A\exp(-x^{2}), \tag{18}\] where \(A>0\) is constant. We will use \(N_{t}=10^{4}\) time steps and \(N=2^{14}\) Fourier modes for \(x\in 3[-\pi,\pi]\).

We first study the case \(\alpha=1.5\). Small initial data will be simply radiated towards infinity. But if we take initial data of the form (18) with \(A=1\), we get the solution shown in Fig. 7. The initial hump breaks up into several humps. One of them, possibly three, appear to be solitary waves. It seems that the solution indeed decomposes into solitary waves and radiation. The precise number of solitary waves appearing for large times is unknown. The interpretation of the largest hump as a solitary wave is confirmed by the \(L^{\infty}\) norm of the solution shown in Fig. 8 on the right. It seems to reach a constant asymptotic value, as expected for a solitary wave. Since the solitary waves for the fractional CH equation do not have a simple scaling with the velocity \(c\) as the fKdV solitary waves do, a fit of the hump to the solitary waves is not obvious. The solution for \(t=20\) is shown on the left of Fig. 8.

Figure 7. Solution to the fractional CH equation with \(\alpha=1.5\) for initial data \(u(x,0)=\exp(-x^{2})\).

Figure 8. Solution to the fractional CH equation with \(\alpha=1.5\) for initial data \(u(x,0)=\exp(-x^{2})\), on the right the \(L^{\infty}\) norm, on the left the solution for \(t=20\).

For smaller values of \(\alpha\), the picture can change. If we take \(\alpha=0.9\) and initial data of the form (18) with \(A=0.5\), the initial hump will be radiated away.
The solution for \(t=40\) is shown on the left of Fig. 9. The \(L^{\infty}\) norm of the solution on the right of the same figure appears to decrease monotonically with time.

Figure 9. Solution to the fractional CH equation with \(\alpha=0.9\) for initial data \(u(x,0)=0.5\exp(-x^{2})\), on the left the solution for \(t=40\), on the right the \(L^{\infty}\) norm.

However, for initial data of the form (18) of larger mass, i.e., larger values of \(A\), we do not find solitary waves in the long time behavior of the solution. Instead, for \(A=1\), a cusp appears to form in finite time, for \(t\sim 1.7667\) in this case. The solution at this time can be seen in Fig. 10 on the left. It is not only the figure that indicates a cusp formation; this is also confirmed by the Fourier coefficients (more precisely the DFT of \(u\)) on the right of the same figure. The algebraic decay of these coefficients with the index \(|k|\) indicates that a singularity of the analytic continuation of the function \(u\) to the complex plane will hit the real axis at the critical time.

Figure 10. Solution to the fractional CH equation with \(\alpha=0.9\) for initial data \(u(x,0)=\exp(-x^{2})\), on the left the solution for \(t=1.7667\), on the right the DFT coefficients for this solution.

It is well known that a singularity in the complex plane of the form \(u\sim(z-z_{j})^{\mu_{j}}\), \(\mu_{j}\notin\mathbb{Z}\), with \(z_{j}=\alpha_{j}-i\delta_{j}\) in the lower half plane (\(\delta_{j}\geq 0\)) implies for \(k\to\infty\) the following asymptotic behavior of the Fourier coefficients (see e.g. [6]), \[\hat{u}\sim\sqrt{2\pi}\mu_{j}^{\mu_{j}+\frac{1}{2}}e^{-\mu_{j}}\frac{(-i)^{\mu_{j}+1}}{k^{\mu_{j}+1}}e^{-ik\alpha_{j}-k\delta_{j}}. \tag{19}\] For a single such singularity with positive \(\delta_{j}\), the modulus of the Fourier coefficients decreases exponentially for large \(k\) until \(\delta_{j}=0\), when the modulus of the Fourier coefficients has an algebraic dependence on \(k\). As first shown in [39], one can use (19) to numerically characterize the singularity, as discussed in detail in [30] (the DFT coefficients have a similar behavior as the Fourier transform). Essentially a least squares method is applied to \(\ln|\hat{u}|\) to fit the parameters for \(k>100\). The reader is referred to [30] for details. For the case shown in Fig. 10, we find \(\mu=0.5007\). This means that a square root type singularity is observed in this case, which provides numerical evidence for the second part of the Main Conjecture in the introduction. Since global existence in time does not appear to hold for solutions to initial data of sufficiently large mass, no solitary waves are observed in this case. Note that this is in apparent contradiction to the stability of the solitary waves observed numerically in the previous section. However, it has to be remembered that the solitary waves have a slow algebraic fall-off towards infinity, whereas we consider exponentially decaying data in this section. For \(L^{2}\)-critical generalized Korteweg-de Vries equations, a blow-up in finite time is only observed if the initial data are sufficiently rapidly decreasing, see [36] and references therein. The situation appears to be similar for fCH solutions for sufficiently small \(\alpha\).
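The fitting procedure can be sketched as follows (the variable names and the cutoff excluding coefficients at the rounding floor are our own choices): since \(|e^{-ik\alpha_{j}}|=1\), taking the modulus of (19) gives \(\ln|\hat{u}(k)|\approx c-(\mu+1)\ln k-\delta k\), a linear least-squares problem in \((c,\mu,\delta)\).

```python
import numpy as np

def fit_singularity(k, u_hat, k_min=100.0):
    """Least-squares fit of ln|u_hat| to the modulus of (19):
    ln|u_hat(k)| ~ c - (mu + 1) ln k - delta k   for k > k_min.
    Returns mu and delta; delta -> 0 signals that the singularity
    reaches the real axis, i.e. a loss of regularity."""
    k, u_hat = np.asarray(k), np.asarray(u_hat)
    mask = (k > k_min) & (np.abs(u_hat) > 1e-14)   # skip the rounding floor
    kk, y = k[mask], np.log(np.abs(u_hat[mask]))
    A = np.column_stack([np.ones_like(kk), -np.log(kk), -kk])
    c, mu_plus_one, delta = np.linalg.lstsq(A, y, rcond=None)[0]
    return mu_plus_one - 1.0, delta
```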
## 6. Dispersive shock waves

In this section we study the appearance of dispersive shock waves (DSWs) in fCH solutions. A convenient way to study the formation of zones of rapid oscillations in the solutions of dispersive PDEs is to consider the solution for large times on large scales. This can be done by introducing a small parameter \(\epsilon\ll 1\) and rescaling \(t\), \(x\) according to \(t\mapsto t/\epsilon\), \(x\mapsto x/\epsilon\). This leads for equation (1) to \[u_{t}+\kappa_{1}u_{x}+3uu_{x}+\epsilon^{\alpha}D^{\alpha}u_{t}=-\kappa_{2}\epsilon^{\alpha}[2D^{\alpha}(uu_{x})+uD^{\alpha}u_{x}], \tag{20}\] where we have kept the same notation as for the case \(\epsilon=1\). Thus equation (20) is simply equation (1) with \(D^{\alpha}\) replaced by \(\epsilon^{\alpha}D^{\alpha}\). In the formal limit \(\epsilon\to 0\), equation (20) reduces to the Hopf equation \[u_{t}+\kappa_{1}u_{x}+3uu_{x}=0, \tag{21}\] where the term linear in \(u_{x}\) can be absorbed by a Galilei transformation. The Hopf equation is known to develop a gradient catastrophe for hump-like initial data in finite time. Dispersive regularisations of this equation such as (20) are expected to lead to solutions with zones of rapid oscillations in the vicinity of the shocks of the Hopf solution for the same initial data. In [23], we have studied numerically the onset of oscillations in solutions of the CH equation. It was conjectured that a special solution to the second equation in the Painlevé I hierarchy, see for instance [24], provides an asymptotic description of the break-up of CH solutions in this case. For larger times the oscillatory zone was studied in [1]. A conjecture to describe the leading edge of the oscillatory zone in terms of a Painlevé transcendent was given.

We will study below similar examples as in [23, 1] for fCH, with initial data of the form \(u(x,0)=\mathrm{sech}^{2}x\) for several values of \(\epsilon\). In Fig. 11, we show the fCH solution for \(\alpha=1.5\) and \(\epsilon=10^{-2}\) for several values of \(t\). We use \(N=2^{14}\) Fourier modes and \(N_{t}=10^{4}\) time steps for \(t\leq 1\). A first oscillation forms for \(t\sim 0.4\) (the critical time for the Hopf solution is \(t_{c}\sim 0.433\)), then a well defined zone of oscillations, also called the Whitham zone, appears. The Whitham zone becomes more defined and more oscillatory the smaller \(\epsilon\) is. Thus there is no strong limit \(\epsilon\to 0\) for DSWs.

Figure 11. Solution to the fractional CH equation (20) with \(\alpha=1.5\), \(\omega=0.6\) and \(\epsilon=10^{-2}\) for initial data \(u(x,0)=\mathrm{sech}^{2}x\) for several values of time (on the top for \(t=0\), \(t=0.35\), in the bottom for \(t=0.7\), \(t=1\)).
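For completeness, the quoted critical time of the Hopf solution can be checked by hand with the method of characteristics: for \(u_{t}+3uu_{x}=0\) (after the Galilei transformation) with \(u(x,0)=u_{0}(x)\), the gradient catastrophe occurs at \[t_{c}=-\frac{1}{\min_{x}3u_{0}^{\prime}(x)},\qquad 3u_{0}^{\prime}(x)=-6\,\mathrm{sech}^{2}x\,\tanh x\quad\text{for}\quad u_{0}(x)=\mathrm{sech}^{2}x.\] The minimum is attained where \(\mathrm{sech}^{2}x=2/3\), i.e., \(\tanh x=1/\sqrt{3}\), giving \(\min_{x}3u_{0}^{\prime}(x)=-4/\sqrt{3}\) and hence \(t_{c}=\sqrt{3}/4\approx 0.433\), in agreement with the value quoted above.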
Note that CH solutions are numerically more demanding than the corresponding KdV solutions, for which special integrators exist, see for instance [28]. For CH, time integration is more problematic since the dispersive terms are nonlinear, in contrast to KdV. The reduced dispersion in CH compared to KdV due to the nonlocality (the term \((1-\partial_{xx})u_{t}\)) leads to fewer oscillations in CH than in similar KdV situations, but to stronger gradients. This is amplified in fCH solutions since the dispersion is smaller than in CH. To treat the case \(\epsilon=10^{-3}\) for the situation shown in Fig. 11, we apply \(N=2^{18}\) Fourier modes and \(N_{t}=10^{5}\) time steps. We show the fCH solution for three values of \(\epsilon\) in Fig. 12 for the same initial data at the same time.

For smaller \(\alpha\), the dispersion is even weaker. For the same initial data as in Fig. 12, we get for \(\alpha=0.9\) and \(\epsilon=10^{-1}\) again a DSW, as can be seen in Fig. 13. As expected, the smaller dispersion than in Fig. 12 leads to less oscillatory behavior.

Figure 13. Solution to the fractional CH equation (20) with \(\alpha=0.9\), \(\omega=0.6\) and \(\epsilon=10^{-1}\) for initial data \(u(x,0)=\mathrm{sech}^{2}x\) for \(t=1.005\) on the left and \(t=1.5\) on the right.

However, the situation is different for smaller \(\epsilon\) than in Fig. 13, \(\epsilon=10^{-2}\), as can be seen in Fig. 14. We use \(N=2^{15}\) Fourier modes and \(N_{t}=10^{4}\) time steps for \(t\leq 0.7\). There is once more a DSW forming, but the first peak appears to develop in finite time into a cusp, see the left of Fig. 14. It is not surprising that a smaller \(\epsilon\) leads to the cusp formation already observed in the previous section for initial data of sufficient mass: since the formal rescaling of \(x\) and \(t\) with \(\epsilon\) leads also to a rescaling of the mass, the same initial data will have more mass in the original setting (1) the smaller \(\epsilon\) is. The cusp formation is confirmed by the DFT coefficients on the right of the figure, which gives further numerical evidence for the second part of the Main Conjecture. A fitting of the coefficients to (19) indicates an exponent \(\mu\sim 0.68\). This is slightly larger than the factor \(1/2\) found in the previous section, but the accuracy in identifying the factor \(\mu\) is always less than for the exponent \(\delta\) in (19), in particular here where the DSW already leads to a slower decay of the DFT coefficients with the index \(k\). Thus there is strong evidence for cusp formation, but the exact character of the singularity needs to be justified analytically.

## 7. Outlook

In this paper, we have started a numerical study of the fractional CH equation. Solitary waves have been numerically constructed, and indications have been found that there could be a minimal value of \(\alpha\) for given velocity \(c\) and positive parameter \(\omega\) below which there are no smooth solitary waves. It was shown that the numerically constructed smooth solitary waves are stable. A study of initial data from the Schwartz class of smooth rapidly decreasing functions led to scattering for initial data of small mass. For higher mass and sufficiently large \(\alpha\), solitary waves seem to be observed for large times, in accordance with the soliton resolution conjecture. However, for smaller values of \(\alpha\), cusps can appear in finite time for such initial data. We also studied the formation of dispersive shock waves.

An interesting question raised by this study is to identify the parameter space of \(\alpha\), \(\omega\) and \(c\) for which smooth solitary waves exist. The fall-off behavior of these solutions should be proven. The orbital stability of these solitary waves should also be addressed analytically.
Of particular interest is the question of blow-up, already for the CH equation and even more so for fCH: for which initial data can a globally smooth solution in time be expected, and for which data is a blow-up in finite time to be expected? The type of blow-up appears to be a gradient catastrophe, but this needs to be confirmed analytically. The formation of DSWs was shown numerically. In [23, 1], the onset of the oscillations as well as the boundary of the Whitham zone was conjectured to be asymptotically given by certain Painlevé transcendents. It is an interesting question whether there are fractional ODEs that play a role in this context for the fCH equation. It will be the subject of further research to address such questions.
2305.00531
Random packing in three dimensions
Unraveling the complexities of random packing in three dimensions has long puzzled physicists. While both experiments and simulations consistently show a maximum density of 64 percent for tightly packed random spheres, we still lack an unambiguous and universally accepted definition of random packing. This paper introduces an innovative standpoint, depicting random packing as spheres closest to a quenched Poisson field of random points. We furnish an efficacious algorithm to probe this proposed model numerically. We unearth a unique out-of-equilibrium thermodynamic phenomenon, akin to a `latent heat', that emerges at $\phi_J \approx 0.65$ in three dimensions. This phenomenon is accompanied by global and local structural rearrangements, marking a jamming transition from an unjammed state to a jammed one. Notably, such a `jamming' transition is absent for two-dimensional random packing. Our innovative approach paves a new avenue for defining random packing and provides novel insights into the behavior of amorphous materials.
Chaoming Song
2023-04-30T17:08:50Z
http://arxiv.org/abs/2305.00531v2
# Random packing in three dimensions

###### Abstract

Unraveling the complexities of random packing in three dimensions has long puzzled physicists. While both experiments and simulations consistently show a maximum density of 64 percent for tightly packed random spheres, we still lack an unambiguous and universally accepted definition of random packing. This paper introduces an innovative standpoint, depicting random packing as spheres closest to a set of quenched random points. We furnish an efficacious algorithm to probe this proposed model numerically. We unearth a unique out-of-equilibrium thermodynamic phenomenon, akin to a 'latent heat', that emerges at \(\phi_{J}\approx 0.645\) in three dimensions. This phenomenon is accompanied by global and local structural rearrangements, marking a jamming transition from an unjammed state to a jammed one. Notably, such a 'jamming' transition is absent for two-dimensional random packing. Our innovative approach paves a new avenue for defining random packing and provides novel insights into the behavior of amorphous materials.

## Introduction

The maximal spherical packing has a well-known packing fraction of \(\phi_{m}=\pi/\sqrt{18}\approx 0.74\), originally conjectured by Kepler and proven recently [1]. In contrast, the nature of Random Close Packing (RCP), a term coined by Bernal et al. in the 1960s [2], remains elusive. RCP refers to the densest random arrangement of spheres, with an empirically observed packing fraction of approximately \(\phi_{\rm RCP}\approx 0.64\). Despite over half a century's efforts to understand random packing, little progress has been made toward an unambiguous and widely accepted definition of RCP. The main challenge lies in the very nature of "randomness." While it is easy to define a random set of uncorrelated points, such as a Poisson point process, it is difficult to generalize these definitions to random packing. This is because overlaps are disallowed in sphere packing, and therefore some intrinsic correlation must exist. On the other hand, existing numerical simulations [3; 4; 5; 6; 7] and experiments [8; 9; 2; 10] provide evidence for the existence of RCP, characterized by a consistent packing fraction and other structural features.

A commonly used computational method to generate RCP involves the Lubachevsky-Stillinger (LS) algorithm for hard spheres [3; 4], starting from a relatively random initial condition at very low density and then compressing the packing rapidly to achieve higher densities and avoid crystallization. It is important to note that this process is typically out of equilibrium, and thus the resulting random packings are compression-rate dependent. In particular, their final packing fraction decreases as the compression rate increases. It has been argued that the RCP corresponds to the limit of infinite compression rate. Complementarily, numerical simulations of soft spheres have also observed a jamming transition at a packing fraction close to \(\phi_{\rm RCP}\) [5; 6; 11]. Since the methodology of preparing RCP is protocol-dependent, it remains unclear whether there exists a uniquely well-defined RCP [4].

There are several proposals for defining random packing rigorously, each with its own advantages and disadvantages. One approach is to define metrics that evaluate the randomness of packings and then demonstrate that the RCP reaches maximum randomness based on these metrics [12; 4; 13].
While this approach is straightforward, there is no natural measure of randomness, and different metrics may lead to different types of disorder. Another approach is to define the ensemble of random packings instead of one particular packing [14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. This idea is appealing as it is rooted in probability theory and statistical mechanics. Yet, integrating the out-of-equilibrium nature of random packing into an equilibrium statistical mechanics framework presents a significant challenge. The mean-field theory (MFT), developed by the spin glass community, moves towards this goal by introducing intrinsic couplings between the replicas of packings [19; 20; 24; 25; 26]. However, its validity hinges implicitly on the replica symmetry breaking (RSB) hypothesis [27]. While the RSB has proven effective for infinite-dimensional systems, its validity is currently debated for finite dimensions, even for spin glass models [28; 29; 30]. Specifically, applying MFT to three-dimensional random packing precludes the possibility of partially ordered structures. Edwards, in contrast, proposed an ensemble of packings sharing the same volume, each with an equal probability of being visited [31; 32]. Yet, despite limited efforts [17; 33; 34], analyzing the Edwards ensemble remains a challenging task. More recently, there have been proposals to consider the jamming transition as a dynamic phase transition [35; 36], but these approaches also rely on the protocol of generating random packing.

In this paper, we introduce a novel description of random packing, utilizing the ensemble approach outlined above. Our method is inspired by established spin glass models, where the inherent randomness, known as the quenched disorder, drives the system out of equilibrium. For instance, in the Edwards-Anderson (EA) model [37], the coupling constant between two neighboring spins is randomly drawn from a preset distribution. We suggest a similar strategy can be employed to depict random packing, allowing us to define it explicitly as an ensemble of packing configurations closest to a quenched Poisson random point field. These configurations exhibit maximum randomness in that they resemble uncorrelated random points, i.e., an ideal gas, as closely as possible. Notably, our definition is independent of packing-generating protocols. Moreover, efficient algorithms exist for examining the proposed random packing in two and three dimensions. Our findings indicate a jamming transition at \(\phi_{J}\approx 0.645\), aligning closely with the empirically observed value for RCP. At this point, the three-dimensional random packing configuration transitions from local to global rearrangements. Additionally, we identify a novel out-of-equilibrium thermodynamic identity, where the jamming transition corresponds to the emergence of an analogous "heat". This innovative approach provides a pathway to overcoming the challenges associated with a natural and unambiguous definition of random packing and lays a robust groundwork for future explorations in this intriguing field.

## Problem definition

Generating RCP typically begins with spheres placed at random positions, a common procedure shared across various protocols. Initially, the packing fraction is kept very small to avoid overlaps between the spheres. As the packing fraction increases, either by compression or by increasing the particle size, the process implicitly establishes an intrinsic correlation between the initial condition and the final dense packing.
The quenched disorder in random packing arises from the random initial condition. This observation motivates us to introduce an ensemble of random packings. For a \(d\)-dimensional system of \(N\) particles with positions \(\vec{\mathbf{r}}=\{\mathbf{r}_{i}\}\), we introduce the partition function \[Z(\vec{\mathbf{\zeta}})=\int\exp\left(-\beta\left(\sum_{i<j}U(\mathbf{r}_{i},\mathbf{r}_{j})+\epsilon\sum_{i}(\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2}\right)\right)d^{N}\vec{\mathbf{r}}, \tag{1}\] where \(U(\mathbf{r}_{1},\mathbf{r}_{2})\) captures the pairwise potential between particles. The initial positions \(\vec{\mathbf{\zeta}}=\{\mathbf{\zeta}_{i}\}\) are quenched random variables, which we obtain from a Poisson random point field. The coupling constant \(\epsilon\) controls the position correlation between the initial condition and the final packing. One may consider \(\epsilon\) to play a similar role as the compression rate in the LS algorithm. However, the model (1) is independent of the particular choice of packing-generating protocols, yet captures the essence of random packing. Choosing \(U\) to be the hard-sphere potential, Eq. (1) reduces to \[Z_{HS}(\mathbf{\zeta})=\sum_{\text{admissible packings}}\exp\left(-\beta\epsilon\sum_{i}(\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2}\right), \tag{2}\] where "admissible packings" satisfy the hard-sphere constraint \((\mathbf{r}_{i}-\mathbf{r}_{j})^{2}\geq\sigma^{2}\), where \(\sigma\) is the diameter of the spheres. The central physical quantity is the mean square displacement (MSD), \(\Delta\equiv\frac{1}{N}\sum_{i=1}^{N}\overline{\langle(\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2}\rangle}=\frac{1}{N}\frac{\partial\overline{F}}{\partial\epsilon}\), where the averaged free energy is \(\beta\overline{F}=-\overline{\ln Z}\), and \(\langle\ldots\rangle\) and \(\overline{\cdots}\) represent ensemble and disorder averages, respectively. To make \(\Delta\) dimensionless, we rescale it by \(\rho^{2/d}\), where \(\rho\equiv N/V\) is the density and \(d\) is the dimensionality of the system. Therefore, we define the dimensionless quantity \(\rho^{2/d}\Delta\) as the MSD and refer to it as such throughout the article unless otherwise stated.

Note that applying the replica trick to Eq. (1), taking \(m\) copies of replicas and averaging over the quenched disorder, leads to the replicated partition function \(\overline{Z^{m}}\sim\int\prod_{a=1}^{m}d^{N}\vec{\mathbf{r}}^{a}\exp\left(-\beta\left(\sum_{a=1}^{m}\sum_{i<j}U(\mathbf{r}_{i}^{a},\mathbf{r}_{j}^{a})+\frac{\epsilon}{m}\sum_{a<b}\sum_{i}(\mathbf{r}_{i}^{a}-\mathbf{r}_{i}^{b})^{2}\right)\right)\). This is the starting point of the MFT approach [19]. However, our model does not depend on the RSB assumption of the MFT and potentially leads to different physics, as we will discuss below.

As suggested by existing numerical works, RCP corresponds to close packing at an infinite compression rate, which corresponds to the strong coupling limit \(\epsilon\rightarrow\infty\). This limit has a simple physical interpretation: the packing configuration is as close as possible to the random initial condition and therefore exhibits maximal randomness. In the following discussion, we will primarily focus on these packings and maintain our consideration within this strong coupling limit, unless stated otherwise.
In this limit, Eq. (2) can be transformed into the following quadratically constrained quadratic program (QCQP), \[\begin{split}&\text{minimize}\ \sum_{i}(\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2},\\ &\text{subject to}\ -(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}+\sigma^{2}\leq 0\ \text{for all}\ i<j.\end{split} \tag{3}\] This optimization problem corresponds to finding an admissible packing closest to the quenched random configuration \(\vec{\mathbf{\zeta}}\), which we refer to as _maximally random_ packing. Figure 1 demonstrates two-dimensional maximally random packings of \(N=50\) circles at three different \(\phi\) values. The reduced energy density \(\frac{1}{N}\lim_{\epsilon\rightarrow\infty}E/\epsilon=\Delta\) becomes the MSD, i.e., the objective function of the optimization problem (3). Note that the optimization problem (3) is a well-defined mathematical object, independent of the protocols. Moreover, we define the reduced force on particle \(i\) as \(\mathbf{f}_{i}=\lim_{\epsilon\rightarrow\infty}\mathbf{F}_{i}/\epsilon=-\partial_{i}\Delta=2(\mathbf{\zeta}_{i}-\mathbf{r}_{i})\), and the reduced pressure as \(\pi=\frac{\phi}{dN}\sum_{i}\mathbf{f}_{i}\cdot\mathbf{r}_{i}\). The reduced force and pressure are also dimensionless quantities, obtained by appropriate rescalings with the density.

Figure 1: **Maximally random packing.** Circle packing with \(N=50\) circles, closest to a quenched random Poisson field \(\vec{\mathbf{\zeta}}\) (red dots) for packing fractions \(\phi=0.8\), \(0.6\) and \(0.4\), respectively. Arrows represent the displacements from the best-matching random points.
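For illustration only, a toy instance of the QCQP (3) can be handed to a general-purpose solver; the sketch below (Python, free boundary conditions, a hypothetical handful of disks) is not the scalable method used in this work, which is described in Methods and Appendix A.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy instance of (3): find the packing of n disks closest to quenched
# random points zeta, using a generic SLSQP solve.  No periodic boundary
# conditions here; sizes are kept tiny for a quick run.
rng = np.random.default_rng(0)
n, d, sigma = 8, 2, 0.35                     # particles, dimension, diameter
zeta = rng.random((n, d))                    # quenched Poisson points
iu, ju = np.triu_indices(n, k=1)

def gaps(r_flat):                            # |r_i - r_j|^2 - sigma^2 >= 0
    r = r_flat.reshape(n, d)
    diff = r[iu] - r[ju]
    return np.sum(diff**2, axis=1) - sigma**2

def msd(r_flat):                             # objective: sum_i |r_i - zeta_i|^2
    return np.sum((r_flat.reshape(n, d) - zeta)**2)

# start from a spread-out admissible configuration (a coarse grid)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * d), -1).reshape(-1, d)[:n]
res = minimize(msd, grid.ravel(), method="SLSQP",
               constraints=NonlinearConstraint(gaps, 0.0, np.inf))
print("MSD:", res.fun, "min squared gap:", gaps(res.x).min())
```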
## Jamming transition

We develop an efficient algorithm for solving the optimization problem (3) (see Methods and Appendix A) and apply it to the three-dimensional packing of \(N=4,000\) particles. We measure the MSD and the reduced pressure as functions of the packing fraction \(\phi\). As the packing fraction is reduced, the random packing undergoes a rearrangement to find a new configuration closer to the quenched random field. If such a rearrangement is local under infinitesimal \(\phi\) changes, we can show that (see Appendix C) \[\pi=-(\partial\Delta/\partial(\phi^{-1}))_{\rm local}, \tag{4}\] which is analogous to the thermodynamic relation \(P=-(\partial U/\partial V)_{S}\).

Figure 2 plots the reduced pressure \(\pi\) and the MSD change \(-\partial\Delta/\partial(\phi^{-1})\) as functions of the packing fraction \(\phi\) for the three-dimensional packing. We find that Eq. (4) holds only for packing fractions greater than a critical value \(\phi_{J}\approx 0.645\). Below \(\phi_{J}\), Eq. (4) breaks down, implying potential global rearrangements. This finding allows us to introduce an analogous thermodynamic identity \[\mathrm{d}\Delta=\mathrm{d}\Theta-\pi\mathrm{d}\phi^{-1}, \tag{5}\] where the analogous "heat" \(\Theta\) accounts for the discrepancy between the internal energy and the work. Integrating Eq. (5) leads to \[\Theta=\Delta-\Delta^{\mathrm{local}}, \tag{6}\] where the work \(\Delta^{\mathrm{local}}\equiv-\int_{\phi_{m}^{-1}}^{\phi^{-1}}\pi\,\mathrm{d}\phi^{-1}=\int_{\phi_{m}^{-1}}^{\phi^{-1}}(\partial\Delta/\partial(\phi^{-1}))_{\mathrm{local}}\,\mathrm{d}\phi^{-1}\) captures the MSD of local rearrangement. Consequently, the analogous heat corresponds to the excess MSD that accounts for non-local rearrangement. Figure 2b plots \(\Theta\) as a function of \(\phi\), showing a phase transition at \(\phi=\phi_{J}\), below which a non-zero heat emerges.

Figure 2: **Random sphere packing (3D).** (a) The reduced pressure \(\pi\) and MSD change \(-d\Delta/d\phi^{-1}\) versus the packing fraction \(\phi\) in three dimensions for \(N=4,000\) spheres, averaged over five realizations of the quenched random points \(\vec{\mathbf{\zeta}}\). (b) The "heat" \(\Theta\) as a function of the packing fraction \(\phi\).

To understand the emergence of heat better, it is worth noting that the MSD change \(-\partial\Delta/\partial(\phi^{-1})\) shows a prominent peak at \(\phi\approx 0.59\), resulting in negative slopes between \(\phi\approx 0.59\) and \(\phi_{J}\). In contrast, to maintain a positive compressibility \(\kappa\equiv\phi\mathrm{d}\pi/\mathrm{d}\phi\), the reduced pressure must increase monotonically with \(\phi\). As a result, a non-zero heat is required to resolve this contradiction, implying that global rearrangement is unavoidable. Indeed, this non-zero heat \(\Theta\) is related to the instability of local rearrangement when the kissing number \(z\) falls below the isostatic bound, i.e., \(z<2d\) (see Appendix D). In simpler terms, the packing remains jammed for packing fractions \(\phi>\phi_{J}\), whereas it becomes unjammed for \(\phi<\phi_{J}\). This result indicates a jamming transition at the critical packing fraction \(\phi_{J}\), where the random packing transitions from local to global rearrangements. Notably, the critical packing fraction \(\phi_{J}\) aligns closely with the empirically observed \(\phi_{\mathrm{RCP}}\). We thus propose that maximally random packing at the packing fraction \(\phi_{J}\) corresponds to RCP, namely, the ensemble of packings that are closest to the random points while still remaining jammed.

It is worth noting that the analogous thermodynamic identity (5) is an out-of-equilibrium phenomenon. Specifically, the heat \(\Theta\) does not seem to originate from an entropy change. While there may exist degenerate configurations equally close to the random points \(\vec{\mathbf{\zeta}}\), yielding a finite entropy density \(s\), the equilibrium relation \(\mathrm{d}s=\epsilon\,\mathrm{d}\Theta\) fails in the strong coupling limit \(\epsilon\to\infty\). In other words, an infinite entropy density would be required to produce a finite heat change, which is unphysical and underscores the out-of-equilibrium nature of random packing.
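Numerically, Eqs. (4)-(6) translate into a one-dimensional quadrature; a sketch (assuming sampled arrays of \(\phi\), \(\pi\) and \(\Delta\) ordered from \(\phi_{m}\) downward, with the integration constant chosen so that \(\Theta(\phi_{m})=0\)):

```python
import numpy as np

def excess_heat(phi, pi_red, delta):
    """Sketch of Eqs. (4)-(6): accumulate the 'work'
    Delta_local = Delta(phi_m) - int_{phi_m^-1}^{phi^-1} pi d(phi^-1)
    by trapezoidal integration and return Theta = Delta - Delta_local.
    Arrays are assumed ordered from phi_m downward; the data would come
    from the optimizations described above."""
    phi, pi_red, delta = map(np.asarray, (phi, pi_red, delta))
    inv_phi = 1.0 / phi
    increments = -0.5 * (pi_red[1:] + pi_red[:-1]) * np.diff(inv_phi)
    delta_local = delta[0] + np.concatenate(([0.0], np.cumsum(increments)))
    return delta - delta_local
```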
For comparison, we apply the same algorithm to two-dimensional packing. Figure 3a demonstrates that the reduced pressure closely aligns with the change in MSD across the entire \(\phi\) range, in agreement with Eq. (4). This result implies that two-dimensional random packing involves only local rearrangement under infinitesimal \(\phi\) changes, a characteristic that distinctly contrasts with its three-dimensional counterpart. Moreover, Figure 3b reveals that \(\Delta(\phi)\) exhibits a logarithmic divergence when approaching the maximum packing fraction \(\phi_{m}\), i.e., \(\Delta(\phi)\sim\log(\phi_{m}-\phi)\). In contrast, \(\Delta(\phi_{m})\) is finite in three dimensions, indicating only a finite distance between the close packing and the random points (see Appendix B). Collectively, these observations highlight a remarkable difference between three-dimensional random packing and its two-dimensional counterpart. However, whether the absence of the jamming transition in two dimensions is associated with the divergence of \(\Delta(\phi_{m})\) remains an open question. This intriguing matter is set aside for future exploration.

Figure 3: **Random circle packing (2D).** (a) The reduced pressure \(\pi\) and MSD change \(-d\Delta/d\phi^{-1}\) versus the packing fraction \(\phi\) for \(N=4,200\) circles, averaged over five realizations of the quenched field \(\vec{\mathbf{\zeta}}\). (b) Linear-log plot of \(\Delta\) versus \(\phi_{m}-\phi\) for random circle packing, showing a logarithmic divergence (dashed line) when \(\phi\to\phi_{m}\).

The observed jamming transition in three dimensions is further corroborated by the measurement of the kissing number \(z\), the average number of contacts each particle maintains with its neighbors. Figure 4a plots \(z(\phi)\) as a function of the packing fraction \(\phi\) for both two and three dimensions. In the three-dimensional setting, we observe a plateau around \(z=6\) for random packings with packing fractions near \(\phi_{J}\). In particular, the slope \(dz/d\phi\) vanishes at \(\phi=\phi_{J}\) (Fig. 4b), thus providing additional evidence of the jamming phase transition at \(\phi_{J}\). It is plausible that \(z-z_{c}\sim(\phi-\phi_{J})^{3}\) follows a Landau-type cubic relation, but more meticulous numerical simulations are needed to clarify this point in the future. As mentioned earlier, this finding concurs with the Maxwell counting argument [38; 39] that local rearrangement instability arises when \(z<2d\) (see Appendix D). In contrast, no such phenomenon is observed in two dimensions. Moreover, at low packing fractions \(\phi\), we anticipate the kissing number to satisfy \(z\sim 2^{d}\phi\). This prediction is based on the argument that the radial distribution function of an "ideal" random packing follows \(g(r)=\Theta(r-\sigma)+z\frac{\sigma}{d}\delta(r-\sigma)\). Notably, this behavior markedly deviates from simple liquid theory, where \(z\neq 0\) solely at the maximum packing fraction \(\phi_{m}\), and from the MFT, where \(z\neq 0\) only at the jamming transition point \(\phi_{J}\). Figure 4b validates this hypothesis, showing that \(\lim_{\phi\to 0}dz/d\phi=2^{d}\).

Figure 4: **Kissing number.** (a) The kissing number \(z\) as a function of packing fraction \(\phi\) for both random sphere (red, solid squares) and circle (blue, open triangles) packings. (b) The derivative \(dz/d\phi\) as a function of packing fraction \(\phi\).
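The kissing number itself is straightforward to measure from a configuration; a sketch (the contact tolerance is an illustrative choice, not a value from the paper):

```python
import numpy as np

def kissing_number(r, sigma, box, tol=1e-8):
    """Average number of contacts per particle (the kissing number z) for a
    configuration r (n x d array) in a periodic box of side `box`, counting
    pairs whose minimal-image distance is within tol of the diameter sigma."""
    n = len(r)
    diff = r[:, None, :] - r[None, :, :]
    diff -= box * np.round(diff / box)          # minimal-image convention
    dist = np.sqrt(np.sum(diff**2, axis=-1))
    contacts = (dist < sigma * (1.0 + tol)) & (dist > 0.0)
    return contacts.sum() / n                   # symmetric matrix: 2 * pairs / n
```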
The analysis presented thus far elucidates the confusion surrounding the definition of random packing, positing that the RCP aligns with the local-global rearrangement transition. A natural query emerges: how does our approach correlate with established theories? As mentioned earlier, applying the replica trick to Eq. (2) leads to the MFT approach, which is believed to be exact when \(d\to\infty\) [40]. Thus, our model should coincide with the MFT predictions in sufficiently large dimensions. However, for the most interesting case of three dimensions, there exists a finite-dimensional effect that cannot be treated by the MFT. For instance, the MFT approach predicates its theory on the assumption that any weak coupling \(\epsilon\to 0^{+}\) effectively restricts packing to the amorphous phase, thereby circumventing crystallization. This hypothesis may hold in infinite-dimensional scenarios [40], yet it appears ill-suited for three-dimensional contexts. Our findings suggest that partial crystallization persists even in the strong coupling limit, thereby contradicting the MFT premise. Moreover, the MFT anticipates the MSD, interpretable as the cage size, to vanish at the jamming point. Contrary to this, our numerical simulations demonstrate a finite \(\Delta\), underscoring the disparity between the MFT and our approach for three-dimensional random packing.

On the other hand, an intriguing link to the MFT does indeed exist. The MFT rules out the prospect of crystallization. In our terms, it eliminates all local rearrangements that could potentially allow the packing fraction to reach \(\phi_{m}\). As a result, the MFT solely accounts for the excess MSD, obtained by subtracting all local MSD contributions, which is precisely the analogous "heat" defined in Eq. (6). Indeed, this excess MSD vanishes at the jamming point, as shown in Fig. 2b. Moreover, the peak of \(-\mathrm{d}\Delta/\mathrm{d}\phi^{-1}\) occurring at \(\phi\approx 0.59\) seems to relate to the so-called dynamic transition predicted by Mode-Coupling theory (MCT) [15]. A comprehensive exploration of the connections between our approach and these existing models is left for future studies. Nonetheless, our approach does not rely on the assumptions of these existing theories, thus providing an unambiguous picture of three-dimensional random packing, regardless of the validity of MFT or MCT.

## Amorphous materials

Although the current work primarily focuses on the strong coupling limit of Eq. (2), it is worth mentioning that a similar approach can be applied for finite \(\epsilon\) values. This is crucial for explaining different observations under various computational and experimental protocols. For instance, in LS simulations, a range of amorphous close packings with packing fractions from \(0.64\) to \(0.68\) was discovered. In our model, this corresponds to the jamming transitions \(\phi_{J}(\epsilon)\) for different couplings \(\epsilon\). An infinitely large compression rate corresponds to the strong coupling limit, with \(\phi_{J}(\epsilon\rightarrow\infty)\approx 0.645\) shown in this work. Given that \(\epsilon\) is conjugate to \(\Delta\), a finite \(\epsilon\) will introduce fluctuations to the maximally random packing as \(\Delta\rightarrow\Delta+O(1/\epsilon)\). A more quantitative study requires the development of a new Monte-Carlo algorithm, a task beyond the scope of this paper.

Our approach can be generalized to any molecular system beyond hard spheres to investigate amorphous materials [39] and structural glasses [41; 42]. A general protocol for generating amorphous materials typically involves quenching, such as rapid cooling or fast compression. This process generates a coupling between the initial and final states. Generally, one can choose an arbitrary \(\vec{\mathbf{\zeta}}\) instead of a completely random point field. For instance, it is empirically more relevant to choose \(\mathbf{\zeta}\) as an equilibrium liquid configuration at the temperature at which the cooling starts. Although the whole quenching process is a complex dynamic process, our model (2) offers a simple and natural description that captures the essential off-equilibrium coupling.
In general, one would expect the following thermodynamic identity: \[dU=TdS+\mathrm{d}Q_{\mathrm{oe}}-PdV-\epsilon dD, \tag{7}\] where \(D=N\Delta\) is the square displacement from the initial state, and \(Q_{\mathrm{oe}}\) captures an extra off-equilibrium "heat" which potentially relates to a configurational entropy. In the case of hard spheres, we have \(dD=Td(S/\epsilon)+\mathrm{d}(Q_{\mathrm{oe}}/\epsilon)-(P/\epsilon)dV\), which recovers Eq. (5) when \(\epsilon\rightarrow\infty\). In this sense, the coupling constant \(\epsilon\) plays the role of a "temperature" that controls the degree of non-equilibrium: before quenching, the system is in equilibrium, i.e., \(\epsilon=0\). The quenching process "heats up" the material by introducing a non-zero coupling, which drives it out of equilibrium. After the quenching stops, the environment sets \(\epsilon\) back to zero. However, the material itself is still "hot" with a non-zero \(\epsilon=-\partial U/\partial D\). It then gradually "cools down" while in contact with the environment, a "zero-temperature" cold reservoir (\(\epsilon=0\)). This relaxation process potentially provides a physical picture of the aging dynamics observed empirically. In some cases, this relaxation takes an infinitely long time, which corresponds to a gapless evolution operator, and the system is thus stuck in an out-of-equilibrium state with a finite \(\epsilon\), that is, a true glassy state. Unlike the hard-sphere system, for structural glasses temperature plays an essential role in competing with the coupling, and a more complicated phase diagram is expected. Overall, our approach provides a novel description of random packing and unveils new insights into amorphous materials, with potential impact on a wide array of related fields.

###### Acknowledgements.

We are grateful to Yuliang Jin for the insightful discussions and valuable feedback on the draft.

## Methods

The optimization problem (3) is mathematically well-defined, setting our model apart from prevailing computational approaches. These methods frequently lack rigorous, precise definitions, with the character of random packing often determined by the algorithms themselves. In contrast, any effective algorithm designed to solve Eq. (3) should produce the same result. However, the non-convex QCQP nature of the problem presents a significant computational challenge in practice, as it is generally recognized as NP-hard [43]. As postulated by the MFT, the number of local minima increases rapidly with \(\phi\), making the optimization challenging at high packing fractions.

Let us consider the most challenging scenario, where the packing fraction \(\phi\) reaches its maximum value \(\phi_{m}\). With Hales's proof of the Kepler conjecture [1], only two potential close packings remain: the face-centered-cubic (FCC) and hexagonal close-packed (HCP) lattices. As a result, the phase space fragments into \(2\cdot N!\) disconnected energy barriers, each subject to certain finite-dimensional global symmetries. Upon initial inspection, the optimization problem (3) seems computationally impracticable in this scenario, given that the spheres cannot be smoothly transitioned to alter the packing across the \(N!\) configuration subspaces.
Finding the nearest configuration to the random points \(\vec{\mathbf{\zeta}}\) is then transformed into a combinatorial optimization problem of determining the best matching of the random points to the \(N\) spheres. This problem is known as the assignment problem, or minimum-cost perfect matching problem, which involves optimization over all \(N!\) possible combinations. Contrary to intuition, the assignment problem can be solved efficiently in polynomial time [44], offering a scalable numerical resolution to ascertain the optimal distance from the random points for maximal close packing. It is worth noting that once the optimal assignment is found, the exchange energy for any pair of particles \(i\) and \(j\) satisfies \((\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2}+(\mathbf{r}_{j}-\mathbf{\zeta}_{j})^{2}\leq(\mathbf{r}_{i}-\mathbf{\zeta}_{j})^{2}+(\mathbf{r}_{j}-\mathbf{\zeta}_{i})^{2}\), or equivalently, \(\mathbf{r}_{ij}\cdot\mathbf{\zeta}_{ij}\geq 0\), where \(\mathbf{r}_{ij}\equiv\mathbf{r}_{i}-\mathbf{r}_{j}\) and \(\mathbf{\zeta}_{ij}\equiv\mathbf{\zeta}_{i}-\mathbf{\zeta}_{j}\); otherwise, exchanging the labels \(i\) and \(j\) would lead to a lower energy.

The proposed solution may not be relevant when dealing with highly degenerate, amorphous maximal close packings. In such situations, the challenge extends beyond identifying the best match; it also entails pinpointing the optimal packing configurations among numerous possibilities sharing the same maximal packing fraction. Nevertheless, in scenarios with only a limited number of close-packing configurations, the assignment algorithm can be applied to solve Eq. (3) efficiently. This remarkable feature stems from Hales's proof of the Kepler conjecture [1], which helps us determine the ground state of sphere packing. This allows efficient algorithms for computationally investigating random packing, distinguishing it from spin glass models. In contrast, the ground state of the EA model remains a subject of debate [28; 29; 30].

Given that there are \(N!\) energy barriers to search, the optimization process for packing fractions \(\phi<\phi_{m}\) involves two distinct parts: 1) A global optimization across the \(N!\) barriers using the assignment algorithm to establish the sphere-random point matching, thus determining the globally optimal energy barrier. This step can be initiated with any admissible packing, to which the assignment algorithm is applied. 2) A local optimization that rearranges the packing configurations to minimize the MSD within the globally optimal energy barrier. We utilize the well-established interior-point method for this purpose [45]. We monitor particle permutations to keep \(\mathbf{r}_{ij}\cdot\mathbf{\zeta}_{ij}\) nonnegative, ensuring that the local optimization remains within the appropriate energy barrier. If this condition is unmet, we swap labels \(i\) and \(j\) to attain a lower energy. It is crucial to underscore that the final optimal packing remains independent of the initial interior point. This suggests that any starting choice of admissible packing can be utilized (see Appendix A for details). In contrast, traditional computational approaches often become trapped in a local minimum due to jamming, which prevents the discovery of an admissible packing for packing fractions \(\phi>\phi_{J}\), rendering them dependent on initial conditions and algorithm choice. However, this does not imply that admissible packings do not exist; instead, they form a zero-measure set. Diverging from these traditional methods, our proposed algorithm ensures initial admissibility, effectively circumventing the issue of entrapment within local optimization.
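The global matching step can be sketched with an off-the-shelf exact solver; SciPy's `linear_sum_assignment` is used below in place of the Jonker-Volgenant implementation [44] (any exact assignment solver yields the same optimal matching), and periodic boundary conditions are omitted for brevity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Find the permutation of quenched points zeta that minimizes
# sum_i |r_i - zeta_{perm(i)}|^2; toy random data stand in for a
# close-packed configuration.
rng = np.random.default_rng(1)
n, d = 64, 3
r = rng.random((n, d))        # stand-in for a close-packed configuration
zeta = rng.random((n, d))     # quenched Poisson points

cost = np.sum((r[:, None, :] - zeta[None, :, :])**2, axis=-1)  # n x n costs
rows, perm = linear_sum_assignment(cost)
zeta_matched = zeta[perm]

# Optimality check from the text: r_ij . zeta_ij >= 0 for all matched pairs
diff_r = r[:, None, :] - r[None, :, :]
diff_z = zeta_matched[:, None, :] - zeta_matched[None, :, :]
assert np.all(np.sum(diff_r * diff_z, axis=-1) >= -1e-9)
```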
## Appendix A Numerical Method

In this section, we present an algorithm to address the following optimization problem \[\begin{split}&\text{minimize }\sum_{i}(\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2},\\ &\text{subject to }-(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}+\sigma^{2}\leq 0\text{ for all }i<j,\end{split} \tag{A1}\] for a range of packing fractions \(\phi\). We establish our system within a box of size \(L\), subject to periodic boundary conditions (PBC). To uphold these conditions, we ensure the center of mass coincides with the box center, i.e., \(\sum_{i}\mathbf{r}_{i}=0\). Our proposed algorithm unfolds in several steps:

1. Begin with \(\phi=\phi_{m}\), where the packing configurations represent maximum close packing - either FCC or HCP in 3D, and hexagonal packing in 2D. Generate \(N\) quenched points \(\mathbf{\zeta}\) randomly within the box. Construct a complete bipartite graph \(K_{N,N}\) between the particles and quenched points, with each edge assigned a weight \(w_{ij}=(\mathbf{r}_{i}-\mathbf{\zeta}_{j})^{2}\). Apply the assignment algorithm proposed by Jonker and Volgenant [44] to determine the optimal match that minimizes the objective function in (A1). Because of the PBC, a global translation \(\mathbf{\zeta}\rightarrow\mathbf{\zeta}+\mathbf{r}_{0}\) with a constant \(\mathbf{r}_{0}\) is permissible. This can be addressed by shifting the center of mass of \(\mathbf{\zeta}\) to zero.

2. Decrease the packing fraction: Shift \(\phi\) to \(\phi-\Delta\phi\) (with a typical choice of \(\Delta\phi=0.1\)) by correspondingly reducing the particle diameter \(\sigma\). At this new packing fraction, find the new minimum based on the packing optimized at the previous fraction. We start from any admissible packing with the optimal assignment, i.e., the best label matches between the particles and quenched random points such that \(\mathbf{r}_{ij}\cdot\mathbf{\zeta}_{ij}\geq 0\). This can be achieved by the assignment algorithm in Step 1. We then implement the interior-point method to optimize the following auxiliary free energy: \[f(\mathbf{R},\sigma)\equiv\sum_{i}(\mathbf{r}_{i}-\mathbf{\zeta}_{i})^{2}-t\sum_{i<j}U(r_{ij}/\sigma), \tag{A2}\] where \(\mathbf{R}\equiv\{\mathbf{r}_{i}\}\), \(\mathbf{r}_{ij}\equiv\mathbf{r}_{i}-\mathbf{r}_{j}\), and \(r_{ij}=|\mathbf{r}_{ij}|\). Here, \(U(x)=\ln(x^{2}-1)\) is an auxiliary entropy term, a logarithmic barrier function that forces the search step to stay inside the hard-sphere constraints, and the parameter \(t\) serves as the auxiliary temperature. Readers should not confuse these auxiliary parameters with the physical temperature and free energy defined in Eq. (1). The interior-point method commences with a relatively high \(t\) value, which is iteratively halved until it reaches a sufficiently small value, typically \(t=10^{-10}\); a minimal code sketch of this loop is given below, after the list of steps. In this sense, the barrier method is similar to simulated annealing: we start with a sufficiently large auxiliary temperature \(t\) and cool down the system by reducing the auxiliary temperature until reaching a minimum. Note that the interior-point method is insensitive to initial conditions, allowing us to use an initial interior point from the previous, larger packing fraction, which is admissible with the optimal assignment having been solved, to save computational time. Alternatively, we can also start from any admissible packing, followed by the assignment algorithm. We observe numerically that different choices of initial conditions lead to the same optimal packing.
3. For a given \(t\), the optimization of (A2) is solved using the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm [46]. To avoid stagnation at a local minimum during the optimization process, we keep watch on the particle matches to ensure \(\mathbf{r}_{ij}\cdot\mathbf{\zeta}_{ij}\geq 0\). If not, we exchange the labels \(i\) and \(j\) to achieve a lower energy.

4. Repeat Step 2 iteratively across the entire range of packing fractions.

The forces exerted on the particles, and consequently the reduced pressure, can be evaluated from the optimal configuration. This can be elucidated by observing that, for a fixed parameter \(t\), the optimal packing \(\mathbf{R}^{*}\) of Eq. (A2) requires the gradient \(\mathbf{g}(\mathbf{R},\sigma)\equiv\nabla_{\mathbf{R}}f(\mathbf{R},\sigma)\) to vanish, i.e., \(\mathbf{g}(\mathbf{R}^{*},\sigma)=0\), leading to \[2(\mathbf{r}_{i}-\mathbf{\zeta}_{i})=t\sum_{j}\nabla_{i}U(r_{ij}/\sigma)=\frac{t}{\sigma}\sum_{j}U^{\prime}(r_{ij}/\sigma)\mathbf{\hat{n}}_{ij}, \tag{A3}\] where \(\mathbf{\hat{n}}_{ij}\equiv\mathbf{r}_{ij}/r_{ij}\). From this, we can derive the force \(\mathbf{f}_{i}\) acting on particle \(i\) as \[\mathbf{f}_{i}=-2(\mathbf{r}_{i}-\mathbf{\zeta}_{i})=\sum_{j}\mathbf{f}_{ij}. \tag{A4}\] Here, \(\mathbf{f}_{ij}\) is the contact force between particles \(i\) and \(j\), \[\mathbf{f}_{ij}=-f_{ij}\mathbf{\hat{n}}_{ij}, \tag{A5}\] with magnitude \[f_{ij}\equiv\frac{t}{\sigma}U^{\prime}(r_{ij}/\sigma). \tag{A6}\] In the limit as \(t\) approaches \(0\), only neighboring contacts where \(r_{ij}\rightarrow\sigma\) remain non-zero. Hence, Eq. (A6) provides a method to compute the pairwise contact force \(f_{ij}\) for the optimization algorithm.
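Two pieces of the above can be sketched compactly in Python (illustrative, not the production code): the auxiliary-temperature loop of Step 2, with the inner fixed-\(t\) minimization of (A2) (the L-BFGS solve of Step 3) left as a placeholder `minimize_f`, and the evaluation of the reduced pressure from the optimal configuration via Eq. (A4) and the definition of \(\pi\) in the main text.

```python
import numpy as np

def barrier_anneal(R0, zeta, sigma, minimize_f, t0=1.0, t_min=1e-10):
    """Step 2: minimize the auxiliary free energy (A2) at fixed t via the
    placeholder `minimize_f` (e.g. an L-BFGS solve, Step 3), halving the
    auxiliary temperature t until it reaches t_min, as described in the text."""
    R, t = R0.copy(), t0
    while t > t_min:
        R = minimize_f(R, zeta, sigma, t)   # inner fixed-t minimization
        t *= 0.5                            # iterative halving
    return R

def reduced_pressure(R, zeta, phi, d):
    """t -> 0 limit: reduced force f_i = 2 (zeta_i - R_i) from Eq. (A4),
    reduced pressure pi = (phi / (d N)) * sum_i f_i . R_i."""
    F = 2.0 * (zeta - R)
    return phi / (d * len(R)) * np.sum(F * R)
```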
## Appendix B MSDs for \(\phi=\phi_{m}\)

Figure 5 investigates the finite-size scaling of the MSD as a function of the system size \(N\). Our results show that, for \(d<3\), \(\Delta(\phi_{m})\) diverges with increasing \(N\), indicating a delocalized phase. In particular, for \(d=1\), the MSD scales linearly with the size, \(\Delta(\phi_{m})\sim N/12\) for periodic boundary conditions and \(\sim N/6\) for open boundary conditions. For two dimensions, our numerical results suggest a logarithmic divergence, i.e., \(\Delta(\phi_{m})\sim\ln N\). In contrast, \(\Delta(\phi_{m})\) is finite in three dimensions for both HCP and FCC packings. This result is rather surprising because one would expect a large configurational change from the most ordered packings (FCC/HCP) to the most disordered ones (Poisson). However, we find the opposite: a bounded displacement is sufficient to transform one into the other, implying that all three-dimensional packings are close to each other. It appears that \(\Delta\sim(\phi_{m}-\phi)^{d-2}+O(1)\), suggesting that \(d=2\) is the lower critical dimension of the delocalization, which appears linked to the jamming transition.

Figure 5: **Maximum close packing.** The mean square displacement \(\Delta(\phi_{m})\) as a function of system size \(N\), for (a) FCC (red, solid squares) and HCP (blue, crossed squares) sphere packings and two-dimensional hexagonal packing (orange, solid triangles), and for (b) one-dimensional packing (black, solid squares). The orange and black lines represent \(\Delta(\phi_{m})\sim\ln N\) for 2D and \(\Delta(\phi_{m})\sim N/12\) for 1D, respectively.

## Appendix C Local rearrangement

In this section, we derive the relation between the reduced pressure \(\pi\) and the change of the MSD, \(\mathrm{d}\Delta/\mathrm{d}\phi^{-1}\), under a local rearrangement. By local rearrangement, we mean that under a small change of the diameter \(\sigma\), and thereby the packing fraction \(\phi\), the optimal packing configuration only deforms slightly. Starting with a fixed diameter \(\sigma\) and the corresponding optimal packing \(\mathbf{R}^{*}\), we expand \(f(\mathbf{R},\sigma)\) around \(\mathbf{R}^{*}\), \[f(\mathbf{R},\sigma)=f(\sigma)+\mathbf{g}(\mathbf{R}^{*},\sigma)\cdot\delta\mathbf{R}^{*}+\frac{1}{2}(\delta\mathbf{R}^{*})^{T}\mathbf{H}(\mathbf{R}^{*},\sigma)\delta\mathbf{R}^{*}+O((\delta\mathbf{R}^{*})^{3}), \tag{C1}\] where \(f(\sigma)\equiv f(\mathbf{R}^{*},\sigma)\), the gradient \(\mathbf{g}(\mathbf{R},\sigma)\equiv\nabla_{\mathbf{R}}f(\mathbf{R},\sigma)\) satisfies \(\mathbf{g}(\mathbf{R}^{*},\sigma)=0\), and the Hessian is \(\mathbf{H}(\mathbf{R},\sigma)\equiv\nabla_{\mathbf{R}}\otimes\nabla_{\mathbf{R}}f(\mathbf{R},\sigma)\). Considering a small perturbation \(\sigma\rightarrow\sigma+\delta\sigma\) and \(\mathbf{R}^{*}\rightarrow\mathbf{R}^{*}+\delta\mathbf{R}^{*}\), where \(\delta\mathbf{R}^{*}\) only undergoes a local rearrangement and is of the same order as \(\delta\sigma\), the new minimum is \(f(\sigma+\delta\sigma)\equiv f(\mathbf{R}^{*}+\delta\mathbf{R}^{*},\sigma+\delta\sigma)\). Expanding \(f\) around \((\mathbf{R}^{*},\sigma)\), we have \[f(\sigma+\delta\sigma)=f(\sigma)+\partial_{\sigma}f(\mathbf{R}^{*},\sigma)\delta\sigma+\partial_{\sigma}\mathbf{g}(\mathbf{R}^{*},\sigma)\cdot\delta\mathbf{R}^{*}\delta\sigma+\frac{1}{2}\left(\partial_{\sigma}^{2}f(\mathbf{R}^{*},\sigma)\delta\sigma^{2}+(\delta\mathbf{R}^{*})^{T}\mathbf{H}(\mathbf{R}^{*},\sigma)\delta\mathbf{R}^{*}\right)+O(\delta\sigma^{3}). \tag{C2}\] Minimizing \(f(\mathbf{R}^{*}+\delta\mathbf{R}^{*},\sigma+\delta\sigma)\) requires \(\partial_{\delta\mathbf{R}^{*}}f(\mathbf{R}^{*}+\delta\mathbf{R}^{*},\sigma+\delta\sigma)=0\), leading to \[\delta\mathbf{R}^{*}=-\mathbf{H}^{-1}(\mathbf{R}^{*},\sigma)\partial_{\sigma}\mathbf{g}(\mathbf{R}^{*},\sigma)\delta\sigma. \tag{C3}\] Substituting into Eq. (C2), we obtain \[f(\sigma+\delta\sigma)=f(\sigma)+\frac{\mathrm{d}f(\sigma)}{\mathrm{d}\sigma}\delta\sigma+\frac{1}{2}\frac{\mathrm{d}^{2}f(\sigma)}{\mathrm{d}\sigma^{2}}\delta\sigma^{2}+O(\delta\sigma^{3}), \tag{C4}\] where \[\frac{\mathrm{d}f(\sigma)}{\mathrm{d}\sigma}=\partial_{\sigma}f(\mathbf{R}^{*},\sigma), \tag{C5a}\] \[\frac{\mathrm{d}^{2}f(\sigma)}{\mathrm{d}\sigma^{2}}=\partial_{\sigma}^{2}f(\mathbf{R}^{*},\sigma)-\partial_{\sigma}\mathbf{g}(\mathbf{R}^{*},\sigma)^{T}\mathbf{H}(\mathbf{R}^{*},\sigma)^{-1}\partial_{\sigma}\mathbf{g}(\mathbf{R}^{*},\sigma). \tag{C5b}\] Note that from Eq. (A2), we have \[\mathbf{R}\cdot\nabla_{\mathbf{R}}f(\mathbf{R},\sigma)=\mathbf{R}\cdot\mathbf{F}-\sigma\partial_{\sigma}f(\mathbf{R},\sigma), \tag{C6}\] where \(\mathbf{F}\equiv\{\mathbf{f}_{i}\}\). The validity of this identity is independent of the choices of \(t\) and of the function \(U\), and it thus holds also in the optimization limit \(t\to 0\). Setting \(\mathbf{R}=\mathbf{R}^{*}\) and \(\mathbf{F}=\mathbf{F}^{*}\) and taking the limit \(t\to 0\), we have \(\lim_{t\to 0}f(\mathbf{R}^{*},\sigma)=N\Delta(\sigma)\).
Substituting Eq. (C5a) into Eq. (C6) and noting that the gradient \(\nabla_{\mathbf{R}}f(\mathbf{R},\sigma)\) vanishes at \(\mathbf{R}^{*}\), we obtain \[\frac{1}{N}\mathbf{R}^{*}\cdot\mathbf{F}^{*}=\sigma\frac{\mathrm{d}\Delta}{\mathrm{d}\sigma}=\phi d\frac{\mathrm{d}\Delta}{\mathrm{d}\phi}. \tag{C7}\] Recall the definition of the reduced pressure, \[\pi\equiv\frac{\phi}{dN}\sum_{i<j}\mathbf{r}_{ij}\cdot\mathbf{f}_{ij}=\frac{\phi}{dN}\mathbf{R}^{*}\cdot\mathbf{F}^{*}, \tag{C8}\] where the second equality arises from the virial theorem, \(\mathbf{R}^{*}\cdot\mathbf{F}^{*}=\sum_{i}\mathbf{r}_{i}\cdot\mathbf{f}_{i}=\frac{1}{2}\sum_{ij}\left(\mathbf{r}_{i}\cdot\mathbf{f}_{ij}+\mathbf{r}_{j}\cdot\mathbf{f}_{ji}\right)=\sum_{i<j}\mathbf{r}_{ij}\cdot\mathbf{f}_{ij}\). Substituting into Eq. (C7), we obtain the relation between \(\pi\) and \(\Delta\) for the local rearrangement, \[\pi=-\frac{\mathrm{d}\Delta}{\mathrm{d}\phi^{-1}}. \tag{C9}\] This completes our proof.

## Appendix D Jamming transition

In this section, we show the condition under which local rearrangement is valid and its association with the jamming transition. The validity of the Taylor expansion in Eq. (C4) requires a well-defined second derivative, \[\sigma^{2}\frac{\mathrm{d}^{2}f(\sigma)}{\mathrm{d}\sigma^{2}}=\sigma^{2}\partial_{\sigma}^{2}f(\mathbf{R}^{*},\sigma)-\mathbf{G}^{T}\mathbf{H}^{-1}\mathbf{G}, \tag{D1}\] where \(\mathbf{G}\equiv\sigma\partial_{\sigma}\mathbf{g}(\mathbf{R}^{*},\sigma)\). By respectively applying \(\nabla_{\mathbf{R}}\) and \(\partial_{\sigma}\) to both sides of Eq. (C6), we obtain \[\mathbf{H}\mathbf{R}^{*}=(\mathbf{F}^{*}-2\mathbf{R}^{*})-\mathbf{G}, \tag{D2a}\] \[\mathbf{R}^{*}\cdot\mathbf{G}=-\mathbf{R}^{*}\cdot\mathbf{F}^{*}-\sigma^{2}\partial_{\sigma}^{2}f(\mathbf{R}^{*},\sigma). \tag{D2b}\] Combining these results, we find \[\sigma^{2}\frac{\mathrm{d}^{2}f(\sigma)}{\mathrm{d}\sigma^{2}}=(2\mathbf{R}^{*}-\mathbf{F}^{*})\cdot\mathbf{q}-\mathbf{R}^{*}\cdot\mathbf{F}^{*}, \tag{D3}\] where \(\mathbf{q}\equiv\mathbf{H}^{-1}\mathbf{G}\). One can write Eq. (D3) as \[\sigma^{2}\frac{\mathrm{d}^{2}f(\sigma)}{\mathrm{d}\sigma^{2}}=\sum_{i<j}(2N^{-1}\mathbf{r}_{ij}-\mathbf{f}_{ij})\cdot\mathbf{q}_{ij}-\mathbf{f}_{ij}\cdot\mathbf{r}_{ij}, \tag{D4}\] where \(\mathbf{q}_{ij}\equiv\mathbf{q}_{i}-\mathbf{q}_{j}\). Upon taking the limit \(t\to 0\), Eq. (A6) implies that \(U^{\prime}\sim t^{-1}\) as \(r_{ij}\to\sigma\), ensuring that \(\mathbf{F}^{*}\) is finite. Conversely, both \(\mathbf{G}\) and \(\mathbf{H}\) diverge because they involve \(U^{\prime\prime}\), which exhibits a stronger divergence as \(r_{ij}\to\sigma\). To have a finite \(f^{\prime\prime}(\sigma)\), we require \[\mathbf{q}^{*}\equiv\lim_{t\to 0}\mathbf{q}=\lim_{t\to 0}\mathbf{H}^{-1}\mathbf{G} \tag{D5}\] to be well-defined. Indeed, Eq. (C3) gives \[\frac{\delta\mathbf{R}^{*}}{\delta\sigma}=-\sigma^{-1}\mathbf{q}^{*}. \tag{D6}\] Therefore, the local rearrangement requires \(\mathbf{q}\) to be finite. To illustrate this more clearly, we find \[\mathbf{G}_{i}=-\mathbf{f}_{i}-\frac{t}{\sigma^{2}}\sum_{j}U^{\prime\prime}(x_{ij})\mathbf{r}_{ij}, \tag{D7}\] where \(x_{ij}\equiv r_{ij}/\sigma\).
Similarly, we find \[\mathbf{H}_{ij}=\frac{t}{\sigma^{2}}\left(-U^{\prime\prime}(x_{ij})\mathbf{\hat{n}}_{ij}\otimes\mathbf{\hat{n}}_{ij}+x_{ij}^{-1}U^{\prime}(x_{ij})(\mathbf{\hat{n}}_{ij}\otimes\mathbf{\hat{n}}_{ij}-I_{d})\right) \tag{25}\] for the off-diagonal elements \(i\neq j\), where \(I_{d}\) is the \(d\)-dimensional identity matrix, and \[\mathbf{H}_{ii}=2I_{d}-\sum_{j\neq i}\mathbf{H}_{ij} \tag{26}\] for the diagonal elements. We assume that \(U^{\prime\prime}\) diverges as \((t\epsilon(t))^{-1}\) for small \(t\), where the function \(\epsilon(t)\) captures the asymptotic behavior such that \(-\lim_{t\to 0}\frac{\epsilon(t)t}{\sigma^{2}}U^{\prime\prime}(x_{ij})=u_{ij}\) is finite for neighboring contacts. For instance, for the logarithmic barrier function \(U(x)=\ln(x-1)\), \(U^{\prime}(x)=\frac{1}{x-1}\) and \(-U^{\prime\prime}(x)=\frac{1}{(x-1)^{2}}\), so that only the neighboring contacts \(r_{ij}=\sigma+\frac{1}{f_{ij}}t+O(t^{2})\) contribute. Additionally, setting \(\epsilon(t)=t\), we find that \(-\lim_{t\to 0}\epsilon(t)\frac{t}{\sigma^{2}}U^{\prime\prime}(x_{ij})=f_{ij}^{2}=u_{ij}\). The argument below holds for a general form of \(U\), as the specific choice of the barrier function does not alter the result. Equation (22) is consequently transformed into \[\mathbf{H}^{*}\mathbf{q}^{*}=\mathbf{G}^{*}, \tag{27}\] where \(\mathbf{H}_{ij}^{*}\equiv\sigma^{-2}u_{ij}\mathbf{r}_{ij}\otimes\mathbf{r}_{ij}\) with \(\mathbf{H}_{ii}^{*}\equiv-\sum_{j\neq i}\mathbf{H}_{ij}^{*}\), and \(\mathbf{G}_{i}^{*}\equiv\sum_{j}u_{ij}\mathbf{r}_{ij}\). We expand \[\mathbf{H}^{*}=\sum_{ij}h_{ij}|\mathbf{v}^{(ij)}\rangle\langle\mathbf{v}^{(ij)}| \tag{28}\] in terms of the basis \(\{\mathbf{v}^{(ij)}\}\), where the coefficient \(h_{ij}\equiv-\sigma^{-2}u_{ij}\), and \(\mathbf{v}^{(ij)}=(\mathbf{e}^{(i)}-\mathbf{e}^{(j)})\otimes(\mathbf{r}_{i}-\mathbf{r}_{j})\) are \(dN\)-dimensional vectors. Here, \(\mathbf{e}^{(i)}\) is the \(N\)-dimensional vector with a unit at position \(i\) and zeros elsewhere. The number of basis vectors \(\{\mathbf{v}^{(ij)}\}\) is equal to the number of contacts \(zN/2\), with \(z\) representing the kissing number. The basis set \(\{\mathbf{v}^{(ij)}\}\) is not orthogonal and possesses a \(d\)-dimensional non-trivial kernel \(\mathbf{T}=\sum_{i}\mathbf{e}^{(i)}\otimes\mathbf{r}_{0}\) with an arbitrary \(d\)-dimensional vector \(\mathbf{r}_{0}\), such that \(\mathbf{T}\cdot\mathbf{v}^{(ij)}=0\). This property corresponds to the global translational invariance of the PBC, i.e., \(\mathbf{r}_{i}\rightarrow\mathbf{r}_{i}+\mathbf{r}_{0}\). Consequently, \(\mathbf{H}^{*}\) has a maximum rank \(d(N-1)\) and is not directly invertible. However, Eq. (27) does offer a solution, since \(\mathbf{G}^{*}\) resides outside of the null space, i.e., \(\mathbf{G}^{*}\cdot\mathbf{T}=0\). In fact, it encompasses a \(d\)-dimensional solution space: if \(\mathbf{q}^{*}\) is a solution, then \(\mathbf{q}^{*}+\mathbf{T}\) is also a valid solution. This ambiguity, however, does not affect Eq. (21), as \(\mathbf{q}_{ij}\) is invariant under the global translation. Eq. (27) allows solutions when the rank of \(\mathbf{H}^{*}\) reaches its maximum value \(d(N-1)\). Given that there are \(zN/2\) basis vectors, this requires \(zN\geq 2d(N-1)\). In the thermodynamic limit \(N\rightarrow\infty\), we recover the Maxwell counting argument \[z\geq 2d, \tag{29}\] which provides a necessary condition for the validity of the local rearrangement.
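The counting argument can be tested directly on a configuration. The sketch below is an illustrative toy with our own naming conventions: it estimates the kissing number \(z\) by thresholding pair distances (again ignoring the periodic minimum-image convention) and checks the necessary condition \(z\geq 2d\).

```python
import numpy as np

def kissing_number(R, sigma, tol=1e-8):
    """Average contact number z: pairs with |r_ij| <= sigma*(1+tol)
    are counted as contacts (PBC minimum-image omitted)."""
    N, _ = R.shape
    contacts = sum(1 for i in range(N) for j in range(i + 1, N)
                   if np.linalg.norm(R[i] - R[j]) <= sigma * (1.0 + tol))
    return 2.0 * contacts / N   # each contact is shared by two particles

def maxwell_necessary(R, sigma):
    """Necessary condition z >= 2d for the validity of local rearrangement."""
    d = R.shape[1]
    return kissing_number(R, sigma) >= 2 * d
```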
To ensure its sufficiency, we need the rank of these \(zN/2\) basis vectors to equal \(d(N-1)\), a condition that depends on the configuration \(\mathbf{R}^{*}\). Our numerical results, however, indicate that this condition is likely met, at least statistically. Thus, the isostatic condition corresponds to the critical value \(z_{c}=2d\) at the jamming transition. Any value below this threshold triggers a global rearrangement of the system, leading to its unjammed state.
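The rank condition for sufficiency can likewise be checked numerically. Below is a minimal sketch (illustrative only, with hypothetical function names) that assembles the \(dN\)-dimensional contact vectors \(\mathbf{v}^{(ij)}=(\mathbf{e}^{(i)}-\mathbf{e}^{(j)})\otimes(\mathbf{r}_{i}-\mathbf{r}_{j})\) and tests whether their rank reaches \(d(N-1)\), i.e., whether \(\mathbf{H}^{*}\) is invertible up to the translational kernel.

```python
import numpy as np

def contact_basis_rank(R, contacts):
    """Rank of the contact basis {v^(ij)} for a configuration R of shape
    (N, d) and a list of contact pairs (i, j)."""
    N, d = R.shape
    V = np.zeros((len(contacts), N * d))
    for row, (i, j) in enumerate(contacts):
        v = np.zeros((N, d))
        v[i] = R[i] - R[j]    # block i carries  r_ij
        v[j] = R[j] - R[i]    # block j carries -r_ij
        V[row] = v.ravel()
    return np.linalg.matrix_rank(V)

def local_rearrangement_sufficient(R, contacts):
    """Sufficiency corresponds to rank == d * (N - 1)."""
    N, d = R.shape
    return contact_basis_rank(R, contacts) == d * (N - 1)
```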
2305.00525
Stability for backward problems in time for degenerate parabolic equations
For solution $u(x,t)$ to degenerate parabolic equations in a bounded domain $\Omega$ with homogeneous boundary condition, we consider backward problems in time: determine $u(\cdot,t_0)$ in $\Omega$ by $u(\cdot,T)$, where $t$ is the time variable and $0\le t_0 < T$. Our main results are conditional stability under boundedness assumptions on $u(\cdot,0)$. The proof is based on a weighted $L^2$-estimate of $u$ whose weight depends only on $t$, which is an inequality of Carleman's type. Moreover our method is applied to semilinear degenerate parabolic equations.
Piermarco Cannarsa, Masahiro Yamamoto
2023-04-30T16:51:05Z
http://arxiv.org/abs/2305.00525v1
# Stability for backward problems in time for degenerate parabolic equations

###### Abstract.

For solution \(u(x,t)\) to degenerate parabolic equations in a bounded domain \(\Omega\) with homogeneous boundary condition, we consider backward problems in time: determine \(u(\cdot,t_{0})\) in \(\Omega\) by \(u(\cdot,T)\), where \(t\) is the time variable and \(0\leq t_{0}<T\). Our main results are conditional stability under boundedness assumptions on \(u(\cdot,0)\). The proof is based on a weighted \(L^{2}\)-estimate of \(u\) whose weight depends only on \(t\), which is an inequality of Carleman's type. Moreover, our method is applied to semilinear degenerate parabolic equations.

**Key words.** degenerate parabolic equation, backward problem, Carleman estimate, stability

**AMS subject classifications.** 35R30, 35K65, 35R25

\({}^{1}\) Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, 00133, Roma, Italy

\({}^{2}\) Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, 153-8914, Japan

## 1. Introduction

We set \[H^{2,1}(Q):=\left\{u\in L^{2}(Q);\,\partial_{t}u,\partial_{i}u,\partial_{i}\partial_{j}u\in L^{2}(Q),\quad 1\leq i,j\leq d\right\}.\] Let \(u\in H^{2,1}(Q)\) satisfy (1.1). Then we consider

**Backward problem.** _Let \(0\leq t_{0}<T\) be given. Determine \(u(x,t_{0})\), \(x\in\Omega\), by \(u(x,T)\), \(x\in\Omega\)._

The conditional stability has been well studied for the non-degenerate parabolic equation, where \[\sigma(x)>0\quad\text{on }\overline{\Omega}. \tag{1.6}\] For the case (1.6), as available methodologies, we refer to

* Logarithmic convexity: Ames and Straughan [2], Payne [9], Chapter 3 in Isakov [4], for example.
* Time analyticity and the maximum principle for holomorphic functions: Krein and Prozorovskaya [5].
* Weighted energy methods: [2], Lees and Protter [6], Payne [9].

The main purpose of this article is to establish the conditional stability for the backward problem in time for the degenerate case \[\sigma(x)\geq 0,\quad x\in\overline{\Omega}, \tag{1.7}\] which means that \(\sigma(x)\) in (1.5) is allowed to have zeros. We introduce the main assumptions on the degeneracy: there exists a constant \(\lambda_{1}>0\) such that \[\sum_{i,j=1}^{d}(\lambda_{1}a_{ij}(x,t)-\partial_{t}a_{ij}(x,t))\xi_{i}\xi_{j}\geq 0,\quad(x,t)\in\overline{Q},\,\xi_{1},...,\xi_{d}\in\mathbb{R}, \tag{1.8}\] and there exists a constant \(C>0\) such that \[|b(x,t)|\leq C\sqrt{\sigma(x)},\quad(x,t)\in\overline{Q}. \tag{1.9}\] If \(|b(x,t)|\equiv 0\) in \(Q\), then (1.9) is automatically satisfied.

**Examples of (1.8) - (1.9).** (a) If \(a_{ij}(x)\), \(1\leq i,j\leq d\), are \(t\)-independent, then (1.8) is satisfied.

(b) Let \(b\equiv 0\) in \(Q\). We assume that we can choose \(\mu_{k}\in C(\overline{\Omega})\) with \(\mu_{k}\geq 0\) on \(\overline{\Omega}\), and \(\widehat{a_{ij}^{k}}\in C^{1}([0,T];L^{\infty}(\Omega))\), \(1\leq i,j\leq d\), and a constant \(\sigma_{1}>0\) such that \[a_{ij}(x,t)=\sum_{k=1}^{N}\mu_{k}(x)\widehat{a_{ij}^{k}}(x,t),\] \[\sum_{i,j=1}^{d}\widehat{a_{ij}^{k}}(x,t)\xi_{i}\xi_{j}\geq\sigma_{1}\sum_{j=1}^{d}\xi_{j}^{2},\quad(x,t)\in\overline{Q},\,\xi_{1},...,\xi_{d}\in\mathbb{R},\quad 1\leq k\leq N. \tag{1.10}\] Then (1.8) - (1.9) are satisfied.
For example, \(\widehat{a_{ij}^{k}}(x,t)=\delta_{ij}:=\left\{\begin{array}{ll}1&i=j,\\ 0&i\neq j\end{array}\right.\) for \(1\leq k\leq N\), and \(\mu_{k}(x)=r_{k}|x-x_{0}^{k}|^{\rho_{k}}\), where \(x_{0}^{k}\in\Omega\) and \(r_{k}>0\), \(\rho_{k}>0\) are constants for \(1\leq k\leq N\).

(c) Let \(\mu\in C(\overline{\Omega})\) with \(\mu\geq 0\) on \(\overline{\Omega}\), \(\widehat{a_{ij}}\in C^{1}([0,T];L^{\infty}(\Omega))\), \(\widehat{b_{k}}\in L^{\infty}(Q)\), \(1\leq i,j,k\leq d\), and let \(a_{ij}(x,t)=\mu(x)\widehat{a_{ij}}(x,t)\) and \(b_{k}(x,t)=\mu(x)\widehat{b_{k}}(x,t)\) for \(1\leq i,j,k\leq d\). We further assume that we can find a constant \(\sigma_{1}>0\) such that \[\sum_{i,j=1}^{d}\widehat{a_{ij}}(x,t)\xi_{i}\xi_{j}\geq\sigma_{1}\sum_{k=1}^{d}\xi_{k}^{2},\quad(x,t)\in\overline{Q},\,\xi_{1},...,\xi_{d}\in\mathbb{R}.\] Then (1.8) is satisfied.

Now we are ready to state our main results.

**Theorem 1 (case \(0<t_{0}<T\)).** _We assume (1.8) and (1.9), and let \(u\in H^{2,1}(Q)\) satisfy (1.1) and_ \[\|u(\cdot,0)\|_{H^{1}(\Omega)}\leq M \tag{1.11}\] _with an arbitrarily chosen constant \(M>0\)._

**Case (1.2).** _Then for \(0<t_{0}<T\), there exist constants \(C>0\) and \(\theta\in(0,1)\), dependent on \(t_{0}\) and \(M\), such that_ \[\|u(\cdot,t_{0})\|_{L^{2}(\Omega)}\leq C(\|u(\cdot,T)\|_{H^{1}(\Omega)}^{\theta}+\|u(\cdot,T)\|_{H^{1}(\Omega)}). \tag{1.12}\]

**Case (1.3).** _We further assume that \(\sigma(x)\) in (1.5) satisfies_ \[\sigma(x)>0\quad\text{for }x\in\partial\Omega. \tag{1.13}\] _Then we have (1.12)._

**Theorem 2 (case \(t_{0}=0\)).** _We assume (1.8) and (1.9), and let \(u\in H^{2,1}(Q)\) satisfy \(\partial_{t}u,\partial_{t}^{2}u\in H^{2,1}(Q)\) and (1.1), and_ \[\sum_{k=0}^{2}\|\partial_{t}^{k}u(\cdot,0)\|_{H^{1}(\Omega)}\leq M \tag{1.14}\] _with an arbitrarily chosen constant \(M>0\)._

**Case (1.2).** _Then, for any \(\alpha\in(0,1)\), there exists a constant \(C>0\) such that_ \[\|u(\cdot,0)\|_{L^{2}(\Omega)}\leq C\left(\log\frac{1}{D}\right)^{-\alpha}, \tag{1.15}\] _provided that_ \[D:=\sum_{k=0}^{2}\|\partial_{t}^{k}u(\cdot,T)\|_{H^{1}(\Omega)}\] _is small._

**Case (1.3).** _Assume (1.13) additionally. Then estimate (1.15) holds._

Our method is quite flexible and is applicable, for example, to semilinear equations. We could consider a more comprehensive class of nonlinear equations but, in order to present the essence, we restrict ourselves to the following case with \(0<t_{0}<T\): \[\begin{cases}&\partial_{t}u(x,t)=\sum_{i,j=1}^{d}\partial_{i}(a_{ij}(x,t)\partial_{j}u(x,t))+c(x,t)u+f(x,t,u(x,t)),\quad(x,t)\in Q,\\ &\partial_{\nu_{A}}u+r(x)u=0\quad\text{or}\quad u=0\quad\text{on }\partial\Omega\times(0,T),\end{cases} \tag{1.16}\] where \(f(x,t,\eta)\), \(x\in\overline{\Omega}\), \(0\leq t\leq T\) and \(\eta\in\mathbb{R}\), satisfies \[f,\,\partial_{\eta}f\in C(\overline{\Omega}\times[0,T]\times\mathbb{R}). \tag{1.17}\] Then we can prove

**Theorem 3.** _Let \(0<t_{0}<T\). Let \(u,v\in H^{2,1}(Q)\) satisfy (1.16) and_ \[\|u(\cdot,0)\|_{H^{1}(\Omega)},\,\|v(\cdot,0)\|_{H^{1}(\Omega)}\leq M,\quad\|u\|_{L^{\infty}(Q)},\,\|v\|_{L^{\infty}(Q)}\leq M\] _with an arbitrarily chosen constant \(M>0\). Then there exist constants \(C>0\) and \(\theta\in(0,1)\), depending on \(t_{0}\) and \(M\), such that_ \[\|u(\cdot,t_{0})-v(\cdot,t_{0})\|_{L^{2}(\Omega)}\leq C(\|u(\cdot,T)-v(\cdot,T)\|_{H^{1}(\Omega)}^{\theta}+\|u(\cdot,T)-v(\cdot,T)\|_{H^{1}(\Omega)}).\]

This article is composed of four sections. In Section 2, we prove a key estimate of a Carleman type.
Sections 3 and 4 are devoted to the proofs of Theorems 1 and 3, and of Theorem 2, respectively.

## 2. Key estimate of a Carleman type

We set \[\varphi(t)=e^{\lambda t},\quad t>0,\] where a constant \(\lambda>0\) is chosen later. We state the key inequality without the assumptions (1.8) and (1.9) on the degeneracy.

**Lemma 1**.: **Case (1.2).** _There exists a constant \(\lambda_{0}>0\) such that for any \(\lambda>\lambda_{0}\), we can choose a constant \(s_{0}(\lambda)>0\) satisfying: there exist constants \(C=C(s_{0},\lambda_{0})>0\) and \(C_{0}>0\) such that_ \[\int_{Q}\Biggl{\{}\frac{1}{s\varphi}|\partial_{t}u|^{2}+\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}u)(\partial_{j}u)+s\lambda^{2}\varphi|u|^{2}\Biggr{\}}e^{2s\varphi}dxdt \tag{2.1}\] \[\leq C\int_{Q}|F|^{2}e^{2s\varphi}dxdt+C\int_{Q}\sum_{j=1}^{d}|b_{j}\partial_{j}u|^{2}e^{2s\varphi}dxdt\] \[+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}\] \[+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}\] _for all \(s>s_{0}\) and all \(u\in H^{2,1}(Q)\) satisfying \(Lu=F\) in \(Q\) and the boundary condition (1.2)._

**Case (1.3).** _There exists a constant \(\lambda_{0}>0\) such that for any \(\lambda>\lambda_{0}\), we can choose a constant \(s_{0}(\lambda)>0\) satisfying: there exist constants \(C=C(s_{0},\lambda_{0})>0\) and \(C_{0}>0\) such that_ \[\int_{Q}\Biggl{\{}\frac{1}{s\varphi}|\partial_{t}u|^{2}+\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}u)(\partial_{j}u)+s\lambda^{2}\varphi|u|^{2}\Biggr{\}}e^{2s\varphi}dxdt \tag{2.2}\] \[\leq C\int_{0}^{T}\int_{\partial\Omega}\lambda|u|^{2}e^{2s\varphi}dSdt\] \[+C\int_{Q}|F|^{2}e^{2s\varphi}dxdt+C\int_{Q}\sum_{j=1}^{d}|b_{j}\partial_{j}u|^{2}e^{2s\varphi}dxdt\] \[+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}\] _for all \(s>s_{0}\) and all \(u\in H^{2,1}(Q)\) satisfying \(Lu=F\) in \(Q\) and the boundary condition (1.3)._

_Here the constant \(C_{0}\) is independent of \(s_{0},\lambda_{0}\)._

We emphasize that we do not assume (1.8) and (1.9), so that Lemma 1 holds true if \[\sum_{i,j=1}^{d}a_{ij}(x,t)\xi_{i}\xi_{j}\geq 0,\quad(x,t)\in\overline{Q},\,\xi_{1},...,\xi_{d}\in\mathbb{R}.\] In particular,

**Proposition 1**.: _We assume_ \[\left\{\begin{array}{rl}&a_{ij}\text{, }1\leq i,j\leq d\text{ are time-independent and }b_{1}=\cdots=b_{d}=0\text{ on }\overline{Q},\\ &\sum_{i,j=1}^{d}a_{ij}(x)\xi_{i}\xi_{j}\geq 0,\quad x\in\overline{\Omega},\,\xi_{1},...,\xi_{d}\in\mathbb{R}.\end{array}\right.\] _Then there exists a constant \(\lambda_{0}>0\) such that for any \(\lambda>\lambda_{0}\), we can choose a constant \(s_{0}(\lambda)>0\) satisfying: there exists a constant \(C=C(s_{0},\lambda_{0})>0\) such that_ \[\int_{Q}\Biggl{\{}\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\Biggr{\}}e^{2s\varphi}dxdt\leq C\int_{Q}|F|^{2}e^{2s\varphi}dxdt\] \[+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}\] _for all \(s>s_{0}\) and all \(u\in H^{2,1}(Q)\) satisfying \(Lu=F\) in \(Q\) and the boundary condition (1.2), and_ \[\int_{Q}\Biggl{\{}\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\Biggr{\}}e^{2s\varphi}dxdt\leq
C\int_{0}^{T}\int_{\partial\Omega}\lambda|u|^{2}e^{2s\varphi}dSdt\] \[+C\int_{Q}|F|^{2}e^{2s\varphi}dxdt\] \[+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}\] _for all \(s>s_{0}\) and all \(u\in H^{2,1}(Q)\) satisfying \(Lu=F\) in \(Q\) and the boundary condition (1.3)._

**Remark 2.1**.: As a method with a similar spirit, we can refer to the weighted energy method. Concerning the weighted energy method, there are many papers; see the monographs Ames and Straughan [2], Lees and Protter [6], Payne [9], and the references therein. Except for Murray and Protter [8] for equations of hyperbolic type, all these papers use just \(t\) as the weight function and do not use the second large parameter \(\lambda\). In Murray and Protter [8], the weight function \(e^{st^{\lambda}}\) is used to prove properties of the asymptotic behaviour. The essential difference from the existing papers is the introduction of the second large parameter \(\lambda>0\). Such a second large parameter is very flexible and opens up many possibilities for better estimates.

### Proof of Lemma 1

**First Step.** Set \[Lu:=\partial_{t}u-\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}u),\quad G:=F(x,t)+\sum_{k=1}^{d}b_{k}(x,t)\partial_{k}u(x,t),\] \[w=e^{s\varphi}u,\quad Pw=e^{s\varphi}L(e^{-s\varphi}w)=e^{s\varphi}G.\] Then \[e^{s\varphi}\partial_{t}(e^{-s\varphi}w)=\partial_{t}w-s\lambda\varphi w,\] \[e^{s\varphi}\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}(we^{-s\varphi}))=\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w),\] and \[Pw=e^{s\varphi}L(e^{-s\varphi}w)=\partial_{t}w-\left(s\lambda\varphi w+\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)\right)=e^{s\varphi}G.\] We have \[\|e^{s\varphi}G\|_{L^{2}(Q)}^{2}=\int_{Q}|\partial_{t}w|^{2}dxdt+2\int_{Q}(\partial_{t}w)\left(-s\lambda\varphi w-\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)\right)dxdt\] \[+\int_{Q}\left|s\lambda\varphi w+\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)\right|^{2}dxdt\] \[\geq\int_{Q}|\partial_{t}w|^{2}dxdt+2\int_{Q}(\partial_{t}w)\left(-\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)\right)dxdt+2\int_{Q}(\partial_{t}w)(-s\lambda\varphi)w\,dxdt\] \[=:\int_{Q}|\partial_{t}w|^{2}dxdt+J_{1}+J_{2}. \tag{2.3}\] Thus \[\int_{Q}|G|^{2}e^{2s\varphi}dxdt\geq J_{1}+J_{2} \tag{2.4}\] and \[\int_{Q}|\partial_{t}w|^{2}dxdt\leq\int_{Q}|G|^{2}e^{2s\varphi}dxdt-J_{1}-J_{2}. \tag{2.5}\] Henceforth \(C_{j}>0\) denote generic constants which are independent of \(s,\lambda\). We assume that \(s>1\) and \(\lambda>1\). First we consider the boundary condition (1.2).
By noting \(a_{ij}=a_{ji}\), the boundary condition (1.2) and integration by parts yield \[J_{1}=-2\int_{Q}(\partial_{t}w)\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)dxdt=2\int_{Q}\sum_{i,j=1}^{d}(\partial_{i}\partial_{t}w)a_{ij}(\partial_{j}w)dxdt \tag{2.6}\] \[=2\int_{Q}\sum_{i>j}a_{ij}((\partial_{j}w)\partial_{i}\partial_{t}w+(\partial_{i}w)\partial_{j}\partial_{t}w)\,dxdt+2\sum_{i=1}^{d}\int_{Q}a_{ii}(\partial_{i}w)(\partial_{i}\partial_{t}w)\,dxdt\] \[=2\int_{Q}\sum_{i>j}a_{ij}\partial_{t}((\partial_{i}w)(\partial_{j}w))\,dxdt+\int_{Q}\sum_{i=1}^{d}a_{ii}\partial_{t}((\partial_{i}w)^{2})\,dxdt\] \[=\int_{Q}\sum_{i,j=1}^{d}a_{ij}\partial_{t}((\partial_{i}w)(\partial_{j}w))dxdt\] \[=-\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\,dxdt+\int_{\Omega}\sum_{i,j=1}^{d}\left[a_{ij}(\partial_{i}w)(\partial_{j}w)\right]_{t=0}^{t=T}dx\] \[=-\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\,dxdt\] \[+\int_{\Omega}\sum_{i,j=1}^{d}(a_{ij}(x,T)(\partial_{i}w)(x,T)(\partial_{j}w)(x,T)-a_{ij}(x,0)(\partial_{i}w)(x,0)(\partial_{j}w)(x,0))dx.\] On the other hand, \[J_{2}=-s\lambda\int_{Q}2(\partial_{t}w)w\varphi dxdt=-s\lambda\int_{Q}\partial_{t}(w^{2})\varphi dxdt\] \[=s\lambda\int_{Q}(\partial_{t}\varphi)w^{2}dxdt-s\lambda\int_{\Omega}\left[\varphi w^{2}\right]_{t=0}^{t=T}dx\] \[=s\lambda^{2}\int_{Q}\varphi w^{2}dxdt-s\lambda\int_{\Omega}(\varphi(T)|w(x,T)|^{2}-|w(x,0)|^{2})dx. \tag{2.7}\] Hence \[\|e^{s\varphi}G\|_{L^{2}(Q)}^{2}\geq s\lambda^{2}\int_{Q}\varphi w^{2}dxdt-\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt\] \[-s\lambda\int_{\Omega}(\varphi(T)|w(x,T)|^{2}+|w(x,0)|^{2})dx-C_{1}\int_{\Omega}(|\nabla w(x,T)|^{2}+|\nabla w(x,0)|^{2})dx. \tag{2.8}\]

**Second Step.** On the right-hand side of (2.8), the term \(-\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt\) appears, and so we have to estimate the integrals including \((\partial_{i}w)(\partial_{j}w)\). For this, we consider \(\int_{Q}(Pw)w\,dxdt\): \[\int_{Q}(Pw)wdxdt=\int_{Q}(\partial_{t}w)wdxdt-\int_{Q}s\lambda\varphi w^{2}dxdt-\int_{Q}\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)w\,dxdt\] \[=:I_{1}+I_{2}+I_{3}.\] We have \[|I_{1}|=\left|\int_{Q}(\partial_{t}w)wdxdt\right|=\left|\frac{1}{2}\int_{Q}\partial_{t}(w^{2})dxdt\right|\] \[=\left|\frac{1}{2}\int_{\Omega}\left[|w(x,t)|^{2}\right]_{t=0}^{t=T}dx\right|\leq\frac{1}{2}\int_{\Omega}(|w(x,T)|^{2}+|w(x,0)|^{2})dx.\] Next \[|I_{2}|=\left|-\int_{Q}s\lambda\varphi w^{2}dxdt\right|\leq C_{2}\int_{Q}s\lambda\varphi w^{2}dxdt\] and \[I_{3}=-\sum_{i,j=1}^{d}\int_{Q}\partial_{i}(a_{ij}\partial_{j}w)wdxdt=\sum_{i,j=1}^{d}\int_{Q}a_{ij}(\partial_{j}w)(\partial_{i}w)dxdt. \tag{2.9}\] Hence \[\int_{Q}\lambda(Pw)w\,dxdt\geq\lambda\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt-C_{2}\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[-\frac{1}{2}\lambda\int_{\Omega}(|w(x,T)|^{2}+|w(x,0)|^{2})dx. \tag{2.10}\] On the other hand, \[\left|\int_{Q}\lambda(Pw)w\,dxdt\right|\leq\|Pw\|_{L^{2}(Q)}(\lambda\|w\|_{L^{2}(Q)})\leq\frac{1}{2}\|Pw\|_{L^{2}(Q)}^{2}+\frac{\lambda^{2}}{2}\|w\|_{L^{2}(Q)}^{2}\] \[=\frac{1}{2}\int_{Q}|G|^{2}e^{2s\varphi}dxdt+\frac{\lambda^{2}}{2}\|w\|_{L^{2}(Q)}^{2}.
\tag{2.11}\] Hence (2.10) and (2.11) yield \[\lambda\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt\leq C_{2}\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[+\frac{1}{2}\int_{Q}|Ge^{s\varphi}|^{2}dxdt+\frac{\lambda^{2}}{2}\int_{Q}w^{2}dxdt+\frac{1}{2}\lambda\int_{\Omega}(|w(x,T)|^{2}+|w(x,0)|^{2})dx.\] Estimating the first term on the right-hand side by (2.8), we obtain \[\lambda\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt\] \[\leq C_{3}\int_{Q}|Ge^{s\varphi}|^{2}dxdt+C_{3}\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\,dxdt+C_{3}\int_{Q}\lambda^{2}w^{2}dxdt\] \[+C_{3}\lambda(\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})+C_{3}(\|\nabla w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[+C_{3}s\lambda(\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[\leq C_{3}\int_{Q}|Ge^{s\varphi}|^{2}dxdt+C_{3}\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\,dxdt+C_{3}\int_{Q}\lambda^{2}w^{2}dxdt\] \[+C_{3}s\lambda(\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})+C_{3}(\|\nabla w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,0)\|_{L^{2}(\Omega)}^{2}). \tag{2.12}\]

**Third Step.** Adding (2.8) and (2.12), we have \[\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\lambda\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt\] \[\leq C_{4}\int_{Q}|Ge^{s\varphi}|^{2}dxdt+C_{3}\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt+C_{4}\int_{Q}\lambda^{2}w^{2}dxdt\] \[+C_{4}(\|\nabla w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,0)\|_{L^{2}(\Omega)}^{2})+C_{4}s\lambda(\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2}).\] By \(\varphi=e^{\lambda t}\geq 1\), we take \(s>0\) and \(\lambda>0\) large to absorb the third term on the right-hand side into the left-hand side. Hence, \[\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\int_{Q}\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{3}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt\] \[\leq C_{5}\int_{Q}|Ge^{s\varphi}|^{2}dxdt\] \[+C_{5}s\lambda(\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[+C_{5}(\|\nabla w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,0)\|_{L^{2}(\Omega)}^{2}). \tag{2.13}\]

**Fourth Step.** Next we estimate \(|\partial_{t}w|^{2}\). Since \(u=e^{-s\varphi}w\), we have \(\partial_{t}u=-s\lambda\varphi e^{-s\varphi}w+e^{-s\varphi}\partial_{t}w\), and \[\frac{1}{s\varphi}|\partial_{t}u|^{2}e^{2s\varphi}\leq 2s\lambda^{2}\varphi w^{2}+\frac{2}{s\varphi}|\partial_{t}w|^{2}.\] Let \(\varepsilon\in\left(0,\frac{1}{2}\right)\) be a constant which we choose later. We note that \(\frac{1}{s\varphi}=\frac{1}{se^{\lambda t}}\leq\frac{1}{s}\leq 1\) for \(s\geq 1\). Therefore, for all large \(s>0\) and \(\lambda>0\), we have \[\int_{Q}\frac{\varepsilon}{s\varphi}|\partial_{t}u|^{2}e^{2s\varphi}dxdt\leq\int_{Q}2\varepsilon s\lambda^{2}\varphi w^{2}dxdt+\int_{Q}\frac{2\varepsilon}{s\varphi}|\partial_{t}w|^{2}dxdt\] \[\leq 2\varepsilon\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\varepsilon\int_{Q}|\partial_{t}w|^{2}dxdt\] \[\leq 2\varepsilon\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\varepsilon\int_{Q}G^{2}e^{2s\varphi}dxdt+\varepsilon(-J_{1}-J_{2}) \tag{2.14}\] by (2.5).
By (2.6) and (2.7), we have \[\varepsilon(-J_{1}-J_{2})=\int_{Q}\varepsilon\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt-s\lambda^{2}\varepsilon\int_{Q}\varphi w^{2}dxdt\] \[-\varepsilon\int_{\Omega}\sum_{i,j=1}^{d}(a_{ij}(x,T)(\partial_{i}w)(x,T)(\partial_{j}w)(x,T)-a_{ij}(x,0)(\partial_{i}w)(x,0)(\partial_{j}w)(x,0))dx\] \[+\varepsilon s\lambda\int_{\Omega}(\varphi(T)|w(x,T)|^{2}-|w(x,0)|^{2})dx,\] and the substitution of this into (2.14) yields \[\varepsilon\int_{Q}\frac{1}{s\varphi}|\partial_{t}u|^{2}e^{2s\varphi}dxdt\leq 2\varepsilon\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\varepsilon\int_{Q}|G|^{2}e^{2s\varphi}dxdt\] \[+\sum_{i,j=1}^{d}\int_{Q}\varepsilon(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt-\varepsilon\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[+C_{6}\varepsilon\int_{\Omega}(|\nabla w(x,T)|^{2}+|\nabla w(x,0)|^{2})dx\] \[+\varepsilon s\lambda\int_{\Omega}(\varphi(T)|w(x,T)|^{2}+|w(x,0)|^{2})dx. \tag{2.15}\] Adding (2.13) and (2.15), we obtain \[\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\int_{Q}\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{3}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt+\varepsilon\int_{Q}\frac{1}{s\varphi}|\partial_{t}u|^{2}e^{2s\varphi}dxdt\] \[\leq C_{7}\int_{Q}|G|^{2}e^{2s\varphi}dxdt+\varepsilon\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[+\int_{Q}\varepsilon\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt\] \[+C_{7}(s\lambda\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,T)\|_{L^{2}(\Omega)}^{2})\] \[+C_{7}(s\lambda\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,0)\|_{L^{2}(\Omega)}^{2}).\] Choosing \(0<\varepsilon<\frac{1}{2}\), we can absorb the second term on the right-hand side into the left-hand side, and we obtain \[\varepsilon\int_{Q}\frac{1}{s\varphi}|\partial_{t}u|^{2}e^{2s\varphi}dxdt\] \[+\int_{Q}\sum_{i,j=1}^{d}(\lambda a_{ij}-(C_{3}+\varepsilon)\partial_{t}a_{ij})(\partial_{i}u)(\partial_{j}u)e^{2s\varphi}dxdt+\frac{1}{2}\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[\leq C_{7}\int_{Q}|G|^{2}e^{2s\varphi}dxdt\] \[+C_{7}(s\lambda\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,T)\|_{L^{2}(\Omega)}^{2})\] \[+C_{7}(s\lambda\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|\nabla w(\cdot,0)\|_{L^{2}(\Omega)}^{2}).\] Thus the proof of Lemma 1 is complete in the case (1.2).

**Fifth Step.** We now prove the lemma in the case (1.3). We recall the additional assumption (1.13). In the above arguments, we need to modify only (2.6) and (2.9).
That is, \[J_{1}=-2\int_{Q}(\partial_{t}w)\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}w)dxdt\] \[=2\int_{Q}\sum_{i,j=1}^{d}(\partial_{i}\partial_{t}w)a_{ij}(\partial_{j}w)dxdt-2\int_{\partial\Omega\times(0,T)}\sum_{i,j=1}^{d}a_{ij}(\partial_{j}w)\nu_{i}(\partial_{t}w)dSdt\] \[=2\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}\partial_{t}w)(\partial_{j}w)dxdt-2\int_{\partial\Omega\times(0,T)}(\partial_{\nu_{A}}w)(\partial_{t}w)dSdt.\] By (1.3), we obtain \[-2\int_{\partial\Omega\times(0,T)}(\partial_{\nu_{A}}w)(\partial_{t}w)dSdt=2\int_{\partial\Omega\times(0,T)}rw(\partial_{t}w)dSdt\] \[=\int_{\partial\Omega\times(0,T)}r\partial_{t}(w^{2})dSdt=\int_{\partial\Omega}\left[rw^{2}\right]_{t=0}^{t=T}dS.\] Thus, similarly to (2.6), we have \[-J_{1}=-2\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}\partial_{t}w)(\partial_{j}w)dxdt-\int_{\partial\Omega}\left[rw^{2}\right]_{t=0}^{t=T}dS\] \[=\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt\] \[-\int_{\Omega}\sum_{i,j=1}^{d}(a_{ij}(x,T)(\partial_{i}w)(x,T)(\partial_{j}w)(x,T)-a_{ij}(x,0)(\partial_{i}w)(x,0)(\partial_{j}w)(x,0))dx\] \[-\int_{\partial\Omega}r(w^{2}(x,T)-w^{2}(x,0))dS,\] and by the trace theorem \(\|u\|_{L^{2}(\partial\Omega)}\leq C\|u\|_{H^{1}(\Omega)}\), we can obtain the same estimate \[-J_{1}\leq\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt+C_{8}(\|w(\cdot,T)\|_{H^{1}(\Omega)}^{2}+\|w(\cdot,0)\|_{H^{1}(\Omega)}^{2}). \tag{2.16}\] Therefore (2.4) and (2.7) yield \[\int_{Q}|G|^{2}e^{2s\varphi}dxdt\geq J_{1}+J_{2}\] \[\geq\int_{Q}s\lambda^{2}\varphi w^{2}dxdt-s\lambda(\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[-\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt-C_{8}(\|w(\cdot,T)\|_{H^{1}(\Omega)}^{2}+\|w(\cdot,0)\|_{H^{1}(\Omega)}^{2}),\] that is, \[\int_{Q}s\lambda^{2}\varphi w^{2}dxdt \tag{2.17}\] \[\leq\int_{Q}|G|^{2}e^{2s\varphi}dxdt+s\lambda(\varphi(T)\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[+\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt+C_{8}(\|w(\cdot,T)\|_{H^{1}(\Omega)}^{2}+\|w(\cdot,0)\|_{H^{1}(\Omega)}^{2}).\] Moreover, for \(I_{3}\) in (2.9), by (1.3) we have \[I_{3}=\sum_{i,j=1}^{d}\int_{Q}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt-\int_{\partial\Omega\times(0,T)}w(\partial_{\nu_{A}}w)\,dSdt\] \[=\sum_{i,j=1}^{d}\int_{Q}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt+\int_{\partial\Omega\times(0,T)}rw^{2}dSdt\] \[\geq\int_{Q}\sum_{i,j=1}^{d}a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt-C_{9}\|w\|_{L^{2}(0,T;L^{2}(\partial\Omega))}^{2}.\] Hence, since we can obtain the same estimates for \(I_{1}\) and \(I_{2}\), we see \[\int_{Q}\lambda(Pw)wdxdt=\lambda I_{1}+\lambda I_{2}+\lambda I_{3} \tag{2.18}\] \[\geq\int_{Q}\sum_{i,j=1}^{d}\lambda a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt-C_{10}\lambda\|w\|_{L^{2}(0,T;L^{2}(\partial\Omega))}^{2}\] \[-\frac{1}{2}\lambda(\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})-C_{10}\int_{Q}s\lambda^{2}\varphi w^{2}dxdt.\] Similarly to (2.12), using (2.11), by (2.18) we obtain \[\int_{Q}\sum_{i,j=1}^{d}\lambda a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt\] \[\leq\int_{Q}\lambda(Pw)wdxdt+C_{10}\lambda\|w\|_{L^{2}(0,T;L^{2}(\partial\Omega))}^{2}+\frac{1}{2}\lambda(\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[+C_{10}\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[\leq\frac{1}{2}\|Ge^{s\varphi}\|_{L^{2}(Q)}^{2}+\frac{\lambda^{2}}{2}
\|w\|_{L^{2}(Q)}^{2}+C_{10}\lambda\|w\|_{L^{2}(0,T;L^{2}(\partial\Omega))}^{2}\] \[+\frac{1}{2}\lambda(\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})+C_{10}\int_{Q}s\lambda^{2}\varphi w^{2}dxdt.\] Hence (2.17) implies \[\int_{Q}\sum_{i,j=1}^{d}\lambda a_{ij}(\partial_{i}w)(\partial_{j}w)dxdt \tag{2.19}\] \[\leq\frac{1}{2}\|Ge^{s\varphi}\|_{L^{2}(Q)}^{2}+C_{11}\lambda^{2}\|w\|_{L^{2}(Q)}^{2}\] \[+C_{11}\lambda\|w\|_{L^{2}(0,T;L^{2}(\partial\Omega))}^{2}+\frac{1}{2}\lambda(\|w(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|w(\cdot,0)\|_{L^{2}(\Omega)}^{2})\] \[+C_{11}\int_{Q}|G|^{2}e^{2s\varphi}dxdt+C_{11}s\lambda(\varphi(T)\|w(\cdot,T)\|^{2}_{L^{2}(\Omega)}+\|w(\cdot,0)\|^{2}_{L^{2}(\Omega)})\] \[+\int_{Q}\sum_{i,j=1}^{d}(C_{11}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt\] \[+C_{11}(\|w(\cdot,T)\|^{2}_{H^{1}(\Omega)}+\|w(\cdot,0)\|^{2}_{H^{1}(\Omega)}).\] Thus (2.17) and (2.19) yield \[\int_{Q}\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{11}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt+\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\] \[\leq C_{12}\int_{Q}|G|^{2}e^{2s\varphi}dxdt+C_{12}\lambda\|w\|^{2}_{L^{2}(0,T;L^{2}(\partial\Omega))}+C_{12}(\|w(\cdot,T)\|^{2}_{H^{1}(\Omega)}+\|w(\cdot,0)\|^{2}_{H^{1}(\Omega)})\] \[+C_{12}s\lambda(\varphi(T)\|w(\cdot,T)\|^{2}_{L^{2}(\Omega)}+\|w(\cdot,0)\|^{2}_{L^{2}(\Omega)}). \tag{2.20}\] Here we used \[\frac{1}{2}(\|w(\cdot,T)\|^{2}_{L^{2}(\Omega)}+\|w(\cdot,0)\|^{2}_{L^{2}(\Omega)})\leq C_{12}s\lambda(\varphi(T)\|w(\cdot,T)\|^{2}_{L^{2}(\Omega)}+\|w(\cdot,0)\|^{2}_{L^{2}(\Omega)})\] and we absorbed the term \[C_{11}\lambda^{2}\|w\|^{2}_{L^{2}(Q)}=C_{11}\int_{Q}\lambda^{2}w^{2}dxdt\] into \(\int_{Q}s\lambda^{2}\varphi w^{2}dxdt\). Next, as for the estimate of \(\partial_{t}w\), we can proceed similarly to the argument starting from (2.14), as follows. In terms of (2.7) and (2.16), we have \[\varepsilon(-J_{1}-J_{2}) \tag{2.21}\] \[\leq C_{13}\varepsilon\int_{Q}\sum_{i,j=1}^{d}(\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt+C_{13}\varepsilon(\|w(\cdot,T)\|^{2}_{H^{1}(\Omega)}+\|w(\cdot,0)\|^{2}_{H^{1}(\Omega)})\] \[-C_{13}\varepsilon\int_{Q}s\lambda^{2}\varphi w^{2}dxdt+\varepsilon s\lambda(\varphi(T)\|w(\cdot,T)\|^{2}_{L^{2}(\Omega)}+\|w(\cdot,0)\|^{2}_{L^{2}(\Omega)}).\] Thus, by (2.21) and (2.14), we can estimate \(\int_{Q}\frac{1}{s\varphi}|\partial_{t}u|^{2}e^{2s\varphi}dxdt\), and so (2.21) completes the proof of Lemma 1 in the case (1.3). \(\blacksquare\)

## 3. Proofs of Theorems 1 and 3

In Lemma 1, in order to estimate \(\left(\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\right)e^{2s\varphi}\), we have to assume (1.8) and (1.9) because the term \[\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}u)(\partial_{j}u)e^{2s\varphi}\] is not necessarily non-negative. More precisely, we can state

**Lemma 2**.: _We assume (1.8) and (1.9). In case (1.3), we further assume (1.13)._
_Then there exists a constant \(\lambda_{0}>0\) such that for any \(\lambda>\lambda_{0}\), we can choose a constant \(s_{0}(\lambda)>0\) satisfying: there exists a constant \(C=C(s_{0},\lambda_{0})>0\) such that_ \[\int_{Q}\Biggl{\{}\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\Biggr{\}}e^{2s\varphi}dxdt \tag{3.1}\] \[\leq C\int_{Q}|F|^{2}e^{2s\varphi}dxdt+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}\] \[+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}\] _for all \(s>s_{0}\) whenever a solution \(u\in H^{2,1}(Q)\) satisfies (1.1) with (1.2) or (1.3)._

**Proof of Lemma 2.**

**Case (1.2).** First let \(b:=(b_{1},\cdots,b_{d})=0\) in \(Q\). By (1.8), we can choose \(\lambda>0\) large such that \[\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})\xi_{i}\xi_{j}\geq 0\quad\text{for }(x,t)\in\overline{Q}\text{ and }\xi_{1},...,\xi_{d}\in\mathbb{R}. \tag{3.2}\] Indeed, by (1.8) and (1.5) with \(\sigma(x)\geq 0\), we verify that \[\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\] \[=\sum_{i,j=1}^{d}(\lambda_{1}C_{0}a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)+(\lambda-\lambda_{1}C_{0})\sum_{i,j=1}^{d}a_{ij}(\partial_{i}w)(\partial_{j}w)\] \[\geq C_{0}\sum_{i,j=1}^{d}(\lambda_{1}a_{ij}-\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\geq 0\quad\text{on }\overline{Q},\] provided that \(\lambda>0\) is so large that \(\lambda-\lambda_{1}C_{0}\geq 0\), which verifies (3.2). We set \(\varphi(t)=e^{\lambda t}\). Since \(b=0\) in \(Q\), by Lemma 1, we can readily obtain (3.1).

Second, let \(b\not\equiv 0\) in \(Q\). Then (1.8) and (1.5) yield \[\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\] \[=\sum_{i,j=1}^{d}(\lambda-\lambda_{1}C_{0})a_{ij}(\partial_{i}w)(\partial_{j}w)+C_{0}\sum_{i,j=1}^{d}(\lambda_{1}a_{ij}-\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)\] \[\geq\sum_{i,j=1}^{d}(\lambda-\lambda_{1}C_{0})a_{ij}(\partial_{i}w)(\partial_{j}w)\geq(\lambda-\lambda_{1}C_{0})\sigma(x)|\nabla w(x,t)|^{2},\quad(x,t)\in\overline{Q}.\] Therefore, (1.9) implies \[\int_{Q}\sum_{i,j=1}^{d}(\lambda a_{ij}-C_{0}\partial_{t}a_{ij})(\partial_{i}w)(\partial_{j}w)dxdt-C\int_{Q}\sum_{j=1}^{d}|b_{j}\partial_{j}w|^{2}dxdt \tag{3.3}\] \[\geq\int_{Q}(\lambda-\lambda_{1}C_{0})\sigma(x)|\nabla w(x,t)|^{2}dxdt-C\int_{Q}\sigma(x)|\nabla w(x,t)|^{2}dxdt\] \[\geq(\lambda-\lambda_{1}C_{0}-C)\int_{Q}\sigma(x)|\nabla w(x,t)|^{2}dxdt.\] Hence, choosing \(\lambda>0\) sufficiently large, we can absorb the second term on the right-hand side of (2.1) into the left-hand side. Fixing such a sufficiently large \(\lambda>0\), we obtain (3.1).

**Case (1.3).** By means of (1.13), we can find a subdomain \(\Omega^{\prime}\subset\Omega\), which is a neighborhood of \(\partial\Omega\), and a constant \(\varepsilon_{1}>0\) such that \[\partial\Omega^{\prime}\supset\partial\Omega,\quad\sigma(x)\geq\varepsilon_{1}\quad\text{for }x\in\Omega^{\prime}. \tag{3.4}\] We fix \(\delta>0\) sufficiently small.
Then, the interpolation inequality and the trace theorem (e.g., Adams [1]) imply that, with some fixed \(\delta_{0}\in(0,\frac{1}{2})\), for any \(\delta>0\) there exists a constant \(C_{2}(\delta)>0\) such that \[\|u(\cdot,t)\|_{L^{2}(\partial\Omega)}\leq C_{1}\|u(\cdot,t)\|_{H^{\frac{1}{2}+\delta_{0}}(\Omega^{\prime})}\leq\delta\|\nabla u(\cdot,t)\|_{L^{2}(\Omega^{\prime})}+C_{2}(\delta)\|u(\cdot,t)\|_{L^{2}(\Omega^{\prime})}. \tag{3.5}\] Since (3.3) holds true also in the case (1.3), from (2.2) in Lemma 1, we can derive \[\int_{Q}\bigg{(}\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\bigg{)}e^{2s\varphi}dxdt+(\lambda-\lambda_{1}C_{0}-C)\int_{Q}\sigma(x)|\nabla u|^{2}e^{2s\varphi}dxdt \tag{3.6}\] \[\leq C\int_{0}^{T}\int_{\partial\Omega}\lambda|u|^{2}e^{2s\varphi}dSdt+C\int_{Q}|F|^{2}e^{2s\varphi}dxdt\] \[+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}\] \[+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}.\] Choosing \(\lambda>0\) even larger so that \(\lambda-\lambda_{1}C_{0}-C>0\), we see by (3.4) that \[(\lambda-\lambda_{1}C_{0}-C)\int_{Q}\sigma(x)|\nabla u|^{2}e^{2s\varphi}dxdt\geq(\lambda-\lambda_{1}C_{0}-C)\int_{\Omega^{\prime}\times(0,T)}\sigma(x)|\nabla u|^{2}e^{2s\varphi}dxdt \tag{3.7}\] \[\geq C_{3}\varepsilon_{1}\int_{\Omega^{\prime}\times(0,T)}|\nabla u|^{2}e^{2s\varphi}dxdt.\] Therefore, in terms of (3.5), we obtain \[\lambda\|u(\cdot,t)e^{s\varphi(t)}\|_{L^{2}(\partial\Omega)}^{2}\leq C\lambda\delta\|\nabla u(\cdot,t)e^{s\varphi(t)}\|_{L^{2}(\Omega^{\prime})}^{2}+C\lambda C_{2}(\delta)\|u(\cdot,t)e^{s\varphi(t)}\|_{L^{2}(\Omega^{\prime})}^{2}.\] We substitute these inequalities into the second term on the left-hand side and the first term on the right-hand side of (3.6) to reach \[\int_{Q}\left(\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\right)e^{2s\varphi}dxdt+C_{3}\varepsilon_{1}\int_{\Omega^{\prime}\times(0,T)}|\nabla u|^{2}e^{2s\varphi}dxdt\] \[\leq C\lambda\delta\int_{\Omega^{\prime}\times(0,T)}|\nabla u|^{2}e^{2s\varphi}dxdt+C\lambda C_{2}(\delta)\int_{\Omega^{\prime}\times(0,T)}|u|^{2}e^{2s\varphi}dxdt\] \[+C\int_{Q}|F|^{2}e^{2s\varphi}dxdt\] \[+C(s\lambda\varphi(T)\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2})e^{2s\varphi(T)}\] \[+C(s\lambda\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}+\|u(\cdot,0)\|_{H^{1}(\Omega)}^{2})e^{2s}.\] Choosing \(\delta>0\) sufficiently small and \(s>0\) sufficiently large, we can absorb the first and the second terms on the right-hand side into the second and the first terms on the left-hand side. Thus the proof of Lemma 2 is complete. \(\blacksquare\)

Now we proceed to

**Proof of Theorem 1.** Using (1.11) and \(\varphi(t_{0})\leq\varphi(t)\) for \(t_{0}\leq t\leq T\), we shrink the integration region \(Q\) on the left-hand side of (3.1) to \(\Omega\times(t_{0},T)\), and obtain \[e^{2s\varphi(t_{0})}\int_{\Omega\times(t_{0},T)}\left(\frac{1}{s\varphi}|\partial_{t}u|^{2}+s\lambda^{2}\varphi|u|^{2}\right)dxdt\] \[\leq Cs\lambda\varphi(T)\|u(\cdot,T)\|_{H^{1}(\Omega)}^{2}e^{2s\varphi(T)}+Cs\lambda M^{2}e^{2s}\] for \(s\geq s_{0}\). Fixing \(\lambda>0\) sufficiently large, we do not need to specify the \(\lambda\)-dependency, and so \[\int_{\Omega\times(t_{0},T)}\left(\frac{1}{s}|\partial_{t}u|^{2}+s|u|^{2}\right)dxdt\leq CsD_{0}^{2}e^{2s(\varphi(T)-\varphi(t_{0}))}+CsM^{2}e^{-2s\mu(t_{0})}\] for \(s\geq s_{0}\). Here the constant \(C>0\) depends on \(T\) and \(\lambda\).
We set \[\mu(t_{0}):=\varphi(t_{0})-1=e^{\lambda t_{0}}-1>0,\quad D_{0}:=\|u(\cdot,T)\|_{H^{1}(\Omega)}.\] Thus \[\|\partial_{t}u\|_{L^{2}(t_{0},T;L^{2}(\Omega))}^{2}\leq Cs^{2}D_{0}^{2}e^{2s(\varphi(T)-\varphi(t_{0}))}+Cs^{2}M^{2}e^{-2s\mu(t_{0})} \tag{3.8}\] for all large \(s>0\). We note that the generic constants \(C>0\), \(C_{j}>0\) are independent also of \(t_{0}\in[0,T]\), but dependent on \(T\), \(\Omega\), \(\lambda\). Since \[u(x,t_{0})=\int_{T}^{t_{0}}\partial_{t}u(x,t)dt+u(x,T),\quad x\in\Omega,\] we can choose a constant \(C_{4}>0\) such that \[\|u(\cdot,t_{0})\|_{L^{2}(\Omega)}^{2}\leq C_{4}\|\partial_{t}u\|_{L^{2}(\Omega\times(t_{0},T))}^{2}+C_{4}\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2} \tag{3.9}\] for all \(t_{0}\in[0,T]\). Substituting (3.8) into (3.9) and using \(\varphi(T)>1\), we obtain \[\|u(\cdot,t_{0})\|_{L^{2}(\Omega)}^{2}\leq C_{4}(Cs^{2}D_{0}^{2}e^{2s\varphi(T)}+Cs^{2}M^{2}e^{-2s\mu(t_{0})})+C_{4}\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}\] \[\leq C_{5}s^{2}D_{0}^{2}e^{2s\varphi(T)}+C_{5}s^{2}M^{2}e^{-2s\mu(t_{0})}\leq C_{6}D_{0}^{2}e^{3s\varphi(T)}+C_{6}M^{2}e^{-s\mu(t_{0})}\] for all large \(s>s_{0}\) and all \(t_{0}\in[0,T]\). Setting \(C_{7}:=C_{6}e^{3s_{0}\varphi(T)}\), we can extend this estimate to \[\|u(\cdot,t_{0})\|_{L^{2}(\Omega)}^{2}\leq C_{7}D_{0}^{2}e^{3s\varphi(T)}+C_{7}M^{2}e^{-s\mu(t_{0})} \tag{3.10}\] for all \(s\geq 0\) and all \(t_{0}\in[0,T]\). Now we choose \(s\geq 0\) in order to minimize the right-hand side of (3.10).

**Case 1: \(M\leq D_{0}\).** By setting \(s=0\), the inequality (3.10) immediately yields \(\|u(\cdot,t_{0})\|_{L^{2}(\Omega)}^{2}\leq 2C_{7}D_{0}^{2}\).

**Case 2: \(M>D_{0}\).** We choose \(s>0\) such that \[D_{0}^{2}e^{3s\varphi(T)}=M^{2}e^{-s\mu(t_{0})},\] that is, \[s=\frac{2}{3\varphi(T)+\mu(t_{0})}\log\frac{M}{D_{0}}>0,\] so that \[\|u(\cdot,t_{0})\|_{L^{2}(\Omega)}^{2}\leq 2C_{7}M^{\frac{6\varphi(T)}{3\varphi(T)+\mu(t_{0})}}D_{0}^{\frac{2\mu(t_{0})}{3\varphi(T)+\mu(t_{0})}}.\] Setting \(\theta:=\frac{\mu(t_{0})}{3\varphi(T)+\mu(t_{0})}\in(0,1)\), we can obtain (1.12). Also in the case of (1.3), thanks to Lemma 2, the same argument completes the proof of Theorem 1. \(\blacksquare\)

Next we provide

**Proof of Theorem 3.** We set \(y:=u-v\). Then \[\begin{cases}&\partial_{t}y=\sum_{i,j=1}^{d}\partial_{i}(a_{ij}\partial_{j}y)+c(x,t)y+(f(x,t,u(x,t))-f(x,t,v(x,t))),\quad(x,t)\in Q,\\ &\partial_{\nu_{A}}y+ry=0\quad\text{or}\quad y=0\quad\text{on }\partial\Omega\times(0,T).\end{cases}\] We apply Lemma 2 to obtain \[\int_{Q}\left(\frac{1}{s\varphi}|\partial_{t}y|^{2}+s\lambda^{2}\varphi|y|^{2}\right)e^{2s\varphi}dxdt \tag{3.11}\] \[\leq C\int_{Q}|f(x,t,u(x,t))-f(x,t,v(x,t))|^{2}e^{2s\varphi}dxdt\] \[+Cs\lambda\varphi(T)\|y(\cdot,T)\|_{H^{1}(\Omega)}^{2}e^{2s\varphi(T)}+Cs\lambda\|y(\cdot,0)\|_{H^{1}(\Omega)}^{2}e^{2s}\] for all large \(s>0\). By (1.17) and \(\|u\|_{L^{\infty}(Q)},\,\|v\|_{L^{\infty}(Q)}\leq M\), in terms of the mean value theorem, we have \[|f(x,t,u(x,t))-f(x,t,v(x,t))|=|\partial_{\eta}f(x,t,\eta_{x,t})||u(x,t)-v(x,t)|\leq C_{8}|u(x,t)-v(x,t)|,\] where \(\eta_{x,t}\) is a constant depending on \(x,t\) such that \(|\eta_{x,t}|\leq M\). Therefore, we can absorb the first term on the right-hand side of (3.11) into the left-hand side by choosing \(s>0\) sufficiently large, and we obtain \[\int_{Q}\left(\frac{1}{s\varphi}|\partial_{t}y|^{2}+s\lambda^{2}\varphi|y|^{2}\right)e^{2s\varphi}dxdt\] \[\leq Cs\lambda\varphi(T)\|y(\cdot,T)\|^{2}_{H^{1}(\Omega)}e^{2s\varphi(T)}+Cs\lambda\|y(\cdot,0)\|^{2}_{H^{1}(\Omega)}e^{2s}\] for all large \(s>0\).
Now the same argument as in the proof of Theorem 1 completes the proof of Theorem 3. \(\blacksquare\)

## 4. Proof of Theorem 2

The proof is based on the stability for \(t_{0}>0\) and \[u(x,0)=\int_{T}^{0}\partial_{t}u(x,t)dt+u(x,T),\quad x\in\Omega.\] We apply the same arguments leading to (3.8) to \(\partial_{t}u\) and \(\partial_{t}^{2}u\). We set \[M_{1}:=\sum_{k=0}^{2}\|\partial_{t}^{k}u(\cdot,0)\|_{H^{1}(\Omega)},\quad D:=\sum_{k=0}^{2}\|\partial_{t}^{k}u(\cdot,T)\|_{H^{1}(\Omega)}.\] Then, arguing similarly to the proof of Theorem 1 with \(\partial_{t}^{2}u\) in place of \(u\), we obtain (3.8) for \(\partial_{t}^{2}u\): \[\int_{\Omega\times(t_{0},T)}\left(\frac{1}{s}|\partial_{t}^{3}u|^{2}+s|\partial_{t}^{2}u|^{2}\right)dxdt\leq CsD^{2}e^{2s\varphi(T)}+CsM_{1}^{2}e^{-2s\mu(t_{0})},\] that is, \[\|\partial_{t}^{2}u\|^{2}_{L^{2}(\Omega\times(t_{0},T))}\leq CD^{2}e^{2s\varphi(T)}+CM_{1}^{2}e^{-2s\mu(t_{0})} \tag{4.1}\] for some constant \(s_{0}\), all \(s\geq s_{0}\), and all \(0<t_{0}<T\). Since \[\partial_{t}u(x,t_{0})=\int_{T}^{t_{0}}\partial_{t}^{2}u(x,t)dt+\partial_{t}u(x,T),\quad x\in\Omega,\] we have \[\|\partial_{t}u(\cdot,t_{0})\|^{2}_{L^{2}(\Omega)}\leq C\|\partial_{t}^{2}u\|^{2}_{L^{2}(\Omega\times(t_{0},T))}+C\|\partial_{t}u(\cdot,T)\|^{2}_{L^{2}(\Omega)}. \tag{4.2}\] We substitute (4.1) into (4.2), so that \[\|\partial_{t}u(\cdot,t_{0})\|^{2}_{L^{2}(\Omega)}\leq CD^{2}e^{2s\varphi(T)}+CM_{1}^{2}e^{-2s\mu(t_{0})}+C\|\partial_{t}u(\cdot,T)\|^{2}_{L^{2}(\Omega)}\] \[\leq C_{1}D^{2}e^{2s\varphi(T)}+C_{1}M_{1}^{2}e^{-2s\mu(t_{0})} \tag{4.3}\] for all \(s\geq s_{0}\) and all \(0<t_{0}<T\). Here we used \(\|\partial_{t}u(\cdot,T)\|^{2}_{L^{2}(\Omega)}\leq D^{2}\leq CD^{2}e^{2s\varphi(T)}\). Using \[u(x,0)=\int_{T}^{0}\partial_{t}u(x,t_{0})dt_{0}+u(x,T),\quad x\in\Omega,\] we obtain \[\int_{\Omega}|u(x,0)|^{2}dx\leq 2\int_{\Omega}\left|\int_{0}^{T}\partial_{t}u(x,t_{0})dt_{0}\right|^{2}dx+2\int_{\Omega}|u(x,T)|^{2}dx\] \[\leq C\int_{0}^{T}\|\partial_{t}u(\cdot,t_{0})\|_{L^{2}(\Omega)}^{2}dt_{0}+C\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2}.\] Substitution of (4.3) yields \[\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}\leq C_{1}D^{2}\int_{0}^{T}e^{2s\varphi(T)}dt_{0}+C_{1}M_{1}^{2}\int_{0}^{T}e^{-2s\mu(t_{0})}dt_{0}+C\|u(\cdot,T)\|_{L^{2}(\Omega)}^{2},\] that is, \[\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}\leq C_{1}D^{2}\,e^{C_{2}s}+C_{1}M_{1}^{2}\int_{0}^{T}e^{-2s\mu(t_{0})}dt_{0}. \tag{4.4}\] We calculate \(\int_{0}^{T}e^{-2s\mu(t_{0})}dt_{0}\) as follows. Changing the variables \(t_{0}\mapsto\xi\) by \(\xi:=\mu(t_{0})=e^{\lambda t_{0}}-1\) and setting \(\widehat{T}:=e^{\lambda T}-1\), we have \[\int_{0}^{T}e^{-2s\mu(t_{0})}dt_{0}=\frac{1}{\lambda}\int_{0}^{\widehat{T}}e^{-2s\xi}\frac{1}{1+\xi}d\xi\leq\frac{1}{\lambda}\left[\frac{e^{-2s\xi}}{2s}\right]_{\xi=\widehat{T}}^{\xi=0}\leq\frac{1}{2\lambda s}.\] Hence, (4.4) yields \[\|u(\cdot,0)\|_{L^{2}(\Omega)}^{2}\leq C_{3}D^{2}\,e^{C_{2}s}+\frac{C_{3}}{s}M_{1}^{2} \tag{4.5}\] for all \(s\geq s_{0}\). We choose \(s>0\) suitably to make the right-hand side small. Taking into consideration that the first term increases with \(e^{C_{2}s}\) and the second decreases like \(\frac{1}{s}\), we may choose \(s>0\) of the order of a power of \(\log\frac{1}{D}\). Precisely, we argue as follows. First, setting \(C_{4}:=C_{3}e^{C_{2}s_{0}}\) and replacing \(s\) by \(s+s_{0}\), we can obtain (4.5) with \(C_{3}\) replaced by \(C_{4}\) for all \(s>0\). Without loss of generality, we can assume \(D<1\).
Setting \[s=\left(\log\frac{1}{D}\right)^{\alpha}>0\] with \(0<\alpha<1\), we have \[e^{C_{2}s}D^{2}=\exp\left(C_{2}\left(\log\frac{1}{D}\right)^{\alpha}\right)D^{2}\] \[=\exp\left(C_{2}\left(\log\frac{1}{D}\right)^{\alpha}\right)e^{2\log D}=\exp\left(-2\left(\log\frac{1}{D}\right)+C_{2}\left(\log\frac{1}{D}\right)^{\alpha}\right).\] Since we can find a constant \(C_{5}>0\) such that \[e^{-2\eta+C_{2}\eta^{\alpha}}\leq\frac{C_{5}}{\eta^{\alpha}}\quad\text{for all }\eta>0\] by \(\alpha<1\), we see \[e^{C_{2}s}D^{2}\leq C_{5}\left(\log\frac{1}{D}\right)^{-\alpha},\] and \[C_{4}D^{2}e^{C_{2}s}+\frac{C_{4}}{s}M_{1}^{2}\leq C_{4}C_{5}\left(\log\frac{1}{D}\right)^{-\alpha}+C_{4}M_{1}^{2}\left(\log\frac{1}{D}\right)^{-\alpha}.\] Thus the proof of Theorem 2 is complete. \(\blacksquare\)

**Acknowledgements.** The work was supported by Grant-in-Aid for Scientific Research (A) 20H00117 of the Japan Society for the Promotion of Science.
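To make the role of the parameter choices in the proofs concrete, the following is a small numerical sketch (our own illustration; the constants are placeholders, not the ones produced by the proofs). It evaluates the Hölder exponent \(\theta\) from the proof of Theorem 1 and the right-hand side of (4.5) under the choice \(s=(\log\frac{1}{D})^{\alpha}\) from the proof of Theorem 2.

```python
import numpy as np

def holder_exponent(t0, T, lam):
    """theta = mu(t0) / (3*phi(T) + mu(t0)) from the proof of Theorem 1,
    with phi(t) = exp(lam*t) and mu(t0) = exp(lam*t0) - 1.
    Note theta -> 0 as t0 -> 0: the Hoelder rate degenerates at t0 = 0,
    which is why Theorem 2 only gives a logarithmic rate there."""
    mu = np.exp(lam * t0) - 1.0
    return mu / (3.0 * np.exp(lam * T) + mu)

def theorem2_rhs(D, M1, C2, C4, alpha):
    """Right-hand side of (4.5) with s = (log(1/D))**alpha; behaves like
    a constant times (log(1/D))**(-alpha) as D -> 0."""
    s = np.log(1.0 / D) ** alpha
    return C4 * D**2 * np.exp(C2 * s) + C4 * M1**2 / s
```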
2309.06281
Extending and Defending Attacks on Reset Operations in Quantum Computers
The development of quantum computers has been advancing rapidly in recent years. As quantum computers become more widely accessible, potentially malicious users could try to execute their code on the machines to leak information from other users, to interfere with or manipulate the results of other users, or to reverse engineer the underlying quantum computer architecture and its intellectual property, for example. Among different security threats, previous work has demonstrated information leakage across the reset operations, and it then proposed that a secure reset operation could be an enabling technology that allows the sharing of a quantum computer among different users, or among different quantum programs of the same user. This work first shows a set of new, extended reset operation attacks that could be more stealthy by hiding the intention of the attacker's circuit. This work shows various masking circuits and how attackers can retrieve information from the execution of a previous shot of a circuit, even if the masking circuit is used between the reset operation (of the victim, after the shot of the circuit is executed) and the measurement (of the attacker). Based on the uncovered new possible attacks, this work proposes a set of heuristic checks that could be applied at transpile time to check for the existence of malicious circuits that try to steal information via the attack on the reset operation. Unlike run-time protection or added secure reset gates, this work proposes a complementary, compile-time security solution to the attacks on reset operation.
Jerry Tan, Chuanqi Xu, Theodoros Trochatos, Jakub Szefer
2023-09-12T14:41:48Z
http://arxiv.org/abs/2309.06281v1
# Extending and Defending Attacks on Reset Operations in Quantum Computers

###### Abstract

The development of quantum computers has been advancing rapidly in recent years. As quantum computers become more widely accessible, potentially malicious users could try to execute their code on the machines to leak information from other users, to interfere with or manipulate the results of other users, or to reverse engineer the underlying quantum computer architecture and its intellectual property, for example. Among different security threats, previous work has demonstrated information leakage across the reset operations, and it then proposed that a secure reset operation could be an enabling technology that allows the sharing of a quantum computer among different users, or among different quantum programs of the same user. This work first shows a set of new, extended reset operation attacks that could be more stealthy by hiding the intention of the attacker's circuit. This work shows various masking circuits and how attackers can retrieve information from the execution of a previous shot of a circuit, even if the masking circuit is used between the reset operation (of the victim, after the shot of the circuit is executed) and the measurement (of the attacker). Based on the uncovered new possible attacks, this work proposes a set of heuristic checks that could be applied at transpile time to check for the existence of malicious circuits that try to steal information via the attack on the reset operation. Unlike run-time protection or added secure reset gates, this work proposes a complementary, compile-time security solution to the attacks on reset operation.

## I Introduction

Noisy Intermediate-Scale Quantum (NISQ) quantum computers are being rapidly developed, with machines of over \(400\) qubits available today [6], and the industry projects \(4000\)-qubit or larger devices before the end of the decade [2]. Many different types of quantum computers exist, with superconducting qubit quantum computers being one of the types available today to researchers and the public through cloud-based services. The superconducting qubit machines are developed by numerous companies, such as IBM [1], Rigetti [4], or Quantum Circuits, Inc. [3]. Quantum computers of these sizes have the potential to fundamentally alter what types of algorithms can run on them, but they require specialized facilities and equipment, which motivates making these quantum computers accessible to users remotely. There is a growing interest in, and practical deployments of, cloud-based quantum computers, also called Quantum as a Service (QaaS) or Quantum Computing as a Service (QCaaS). Cloud-based services such as IBM Quantum, Amazon Braket, and Microsoft Azure already provide remote access to quantum computers for users. Following the past success of classical cloud-based services, we expect cloud-based access that lets remote users rent quantum computers to become a dominant use case in the future. In order to support sharing of a quantum computer among different users, there needs to be an efficient way to reset the qubits. Today, the main method to reset the qubit state is by letting qubits decohere, which allows qubits to naturally decay into their ground states. Even though letting qubits decohere erases all the qubit states, it takes a long time, i.e., about \(250~\mu\text{s}\) for quantum computers on IBM Quantum; it also makes the qubits unusable during that time. As an alternative, a number of companies, such as IBM, have proposed a reset gate or reset operation.
The reset operation first measures the qubit state, which collapses it to \(|0\rangle\) or \(|1\rangle\) based on the state of the qubit. Next, if the qubit collapsed into \(|1\rangle\), an X gate (similar to the classical NOT gate) is applied to set the qubit to the \(|0\rangle\) state, and the qubit is now fully reset. Mi et al. [8], however, explored the existing (insecure) reset operations used in superconducting quantum computers, such as those from IBM Quantum, and showed that they do not fully protect from information leakage since the reset operation is not perfect. Since the reset operation is conditional on measurement results, its outcomes are closely associated with the error characteristics of the measurement operation. As was shown in [8], an attacker measuring the qubit state post-reset can statistically recover some information about the qubit's state prior to the reset, thus leaking information from the victim user who was using the same qubit prior to the attacker. The fundamental idea behind their attack circuit was for the attacker to perform a qubit measurement immediately when scheduled to execute. Such a malicious circuit, however, can very easily be detected since it only contains a measurement gate.

Our work proposes a new, extended attack on reset operations. In particular, our work explores potential ways in which an attacker can add a masking circuit \(C\) before the measurement to "hide" their attack. The main idea behind our design is that by using a masking circuit \(C\) the attacker can make their circuit look like a benign circuit while still being able to recover information across the reset operation as before. In particular, we show that an attacker can use a large number of circuits to target a particular qubit for information leakage, as long as the attacker's circuit is composed of single-qubit operations on the target qubit. The attacker can also hide their intention and attack by using two-qubit CX gates, as long as the target qubit of the attack is the control qubit of the CX gates. For single-qubit gates used in the masking circuit \(C\), the attacker may use simple identity circuits consisting of pairs of X gates, or non-identity circuits consisting of RX and RZ gates. For multi-qubit gates, an attacker can also hide an attack with CX gates, as long as the target qubit is the control qubit of the CX gate. We also show conditions under which the attack becomes more difficult, such as when qubits are targets of CX gates. We confirm our expectation by running select QASM benchmark circuits and showing that it is difficult for the attacker to leak the victim's state, due to the presence of multi-qubit gates or other non-identity gates, if the masking circuit \(C\) is a full QASM benchmark, for example.

Based on our findings and possible new attacks, we present a new set of heuristic defenses that could be applied to check for the existence of the new kind of malicious circuits before code is executed. Unlike run-time protection or added secure reset gates, this work proposes a complementary, compile-time security solution to the attacks on reset operation. Note that previous work [8] proposed a secure reset gate for use at run-time, while we propose a compile-time defense. Our solution meanwhile draws inspiration from different previous work [5], which proposed a quantum computing antivirus that aims to flag suspicious programs that inject malicious crosstalk and degrade the quality of program outcomes.
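To make the masking idea concrete, the following is a minimal Qiskit sketch (our own illustration; the qubit index, pair count, and function name are placeholders, not the exact circuits used in the evaluation). It builds an attacker circuit in which the post-reset measurement is hidden behind an identity masking circuit \(C\) made of X-gate pairs.

```python
from qiskit import QuantumCircuit

def masked_attack_circuit(n_mask_pairs=4):
    """One-qubit attacker circuit: an identity masking circuit C built
    from X-gate pairs (net effect: identity), followed by a measurement.
    Scheduled right after the victim's imperfect reset, the measurement
    statistics can still correlate with the victim's pre-reset state."""
    qc = QuantumCircuit(1, 1)
    for _ in range(n_mask_pairs):
        qc.x(0)  # X followed by X is the identity, but the circuit
        qc.x(0)  # no longer looks like a bare measurement
    qc.measure(0, 0)
    return qc
```

The same pattern extends to the non-identity masks discussed above, for instance an RX(\(\theta\)) followed by RX(\(-\theta\)), or CX gates in which the qubit targeted by the attack serves only as the control.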
Instead of focusing on crosstalk, we explore how to check circuits for malicious reset operation attacks. Instead of focusing on the graph structure of the circuit, we provide a solution based on calculating the matrix representation of the circuit (where possible due to circuit size) as well as on analyzing the types of gates executed on each qubit within a circuit. ### _Contributions_ The main contributions of this work are as follows: * Presentation of a new variant of attacks on reset operations, involving a masking circuit used by the attacker to try to hide their attack circuit. * Evaluation of the efficacy of different masking circuits in the new attack variant. * Description of a set of heuristics to detect existing and the new attacks on reset operations. * Demonstration of a compile-time tool for the detection of previous attacks and the new attack variant using the heuristics. ## II Background Qubits are the fundamental building blocks of quantum computers. They encode data in quantum states, which can exist in superposition, and are able to represent a continuum of states in between the classical \(0\) and \(1\). To observe the state of a qubit, the qubit state must be collapsed by a measurement operation, also known as a readout. The two possible measurement results are \(0\) and \(1\), corresponding to eigenstates \(\left|0\right\rangle\) and \(\left|1\right\rangle\). ### _Bloch Sphere_ The Bloch sphere is a geometric representation of a two-level quantum system. It provides a way to visualize an arbitrary state of a qubit as a superposition of the two computational basis vectors, \(\left|0\right\rangle\) and \(\left|1\right\rangle\). The surface of the Bloch sphere can be parameterized by two angles used in the spherical coordinate system: \(\theta\) with respect to the \(z\)-axis, and \(\phi\) with respect to the \(x\)-axis. Given angles \(\theta,\phi,\) we write the corresponding quantum state: \[\left|\psi\right\rangle=\cos\left(\frac{\theta}{2}\right)\left|0\right\rangle +e^{i\phi}\sin\left(\frac{\theta}{2}\right)\left|1\right\rangle,\] where \(0\leq\theta\leq\pi\) and \(0\leq\phi<2\pi\). Quantum circuits are mainly composed of gate operations, also simply called gates, which can be visualized as applying various rotations of the quantum state around the Bloch sphere. ### _Basis Gates_ Quantum gates are used to manipulate quantum states. Reversible operations can be represented by unitary matrices, and quantum gates exist for various unitaries. For each quantum computer, some gates are supported as native gates, also called basis gates by IBM, for example. Most NISQ quantum computers, including IBM machines, only support a few native gates: the single-qubit gates (I, RZ, X, SX), and one two-qubit gate (CX). Other gates need to be decomposed into these basis gates first before being run on the machines. Among the single-qubit gates, I is the identity gate, which performs no operation but adds delay. The X gate performs a rotation around the \(x\)-axis of the Bloch sphere by a fixed angle of \(\pi\) radians for the target qubit. It is also analogous to the classical NOT gate, as it maps \(\left|0\right\rangle\) to \(\left|1\right\rangle\) and \(\left|1\right\rangle\) to \(\left|0\right\rangle\), thus "flipping" the qubit. The RZ gate performs a rotation of \(\phi\) radians around the \(z\)-axis of the Bloch sphere for the target qubit. 
The SX gate rotates a qubit around the \(x\)-axis by a fixed angle of \(\pi/2\) radians; it effectively adds this rotation angle to \(\theta\) in the Bloch sphere for the target qubit. For two-qubit gates, the CX gate is available. The CX gate operates on two qubits: a control qubit and a target qubit. If the control qubit is in state \(\left|0\right\rangle,\) the CX acts as the identity. Otherwise, if the control qubit is in state \(\left|1\right\rangle,\) an X gate is applied to the target qubit, flipping it. The CX gate is sometimes called the CNOT gate. ### _RX Gates_ The RX\((\theta)\) gate performs a rotation of \(\theta\) radians around the \(x\)-axis of the Bloch sphere. The RX gate is not a native gate, but it can be decomposed into the native RZ and SX basis gates. ### _Measurement Operation_ When a qubit is measured, the result is a classical bit of information, either \(0\) or \(1\). The measurement process collapses the original qubit state, typically projecting it onto the \(z\)-axis of the Bloch sphere. Measurement results of \(0\) and \(1\) correspond to state collapse into \(\left|0\right\rangle\) and \(\left|1\right\rangle,\) respectively. Measurement is an example of a non-unitary operation, as it cannot be reversed. This state collapse is irreversible; after a measurement is made, the original information about the qubit's state is lost. For a general qubit state \(\left|\psi\right\rangle=\cos\left(\frac{\theta}{2}\right)\left|0\right\rangle +e^{i\phi}\sin\left(\frac{\theta}{2}\right)\left|1\right\rangle,\) the collapse is probabilistic. The probability of each measurement outcome is the square of the magnitude of the coefficient of the corresponding eigenstate, so we measure \(0\) and \(1\) with probabilities \(\cos^{2}(\theta/2)\) and \(\sin^{2}(\theta/2)\), respectively. For example, if \(\theta\) is \(\pi/2\), then the probabilities of measuring \(0\) and \(1\) are each \(50\)%. ### _Reset Operation_ Another non-unitary operation is the reset operation. The reset operation consists of first making a measurement of a qubit onto a classical bit \(c\). Then, an \(\mathtt{X}\) gate is conditionally applied to the qubit if classical bit \(c\) measures \(1\). In more detail, the measurement collapses the qubit to either the \(\left|1\right\rangle\) or \(\left|0\right\rangle\) state. In the former case, the classical bit reads \(1\), and an \(\mathtt{X}\) gate is applied, resulting in the \(\left|0\right\rangle\) state. In the latter case, no \(\mathtt{X}\) gate is applied and the qubit remains in \(\left|0\right\rangle.\) However, this design of the reset operation is susceptible to readout errors by the measurement operation. If a \(\left|1\right\rangle\) is mistakenly read as \(0\) or a \(\left|0\right\rangle\) as a \(1\), the reset operation incorrectly produces a final state of \(\left|1\right\rangle.\) On real machines, this error leads to possible information leakage to a malicious user on the same qubit [8]. ### _Transpilation Process_ Transpilation is the process of transforming an input circuit for execution on specific hardware. It involves matching the circuit to the topology of a quantum device and decomposing the user's gates into native gates supported by the hardware. Similar to classical compilers, transpilers also optimize the programs for performance. Optimizations may involve rewriting non-linear flow logic, processing iterative sub-loops, conditional branches, and other complex behaviors. 
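To make the background concrete, the following minimal Python sketch (ours, not from the paper) shows the "measure + conditional X" construction of the reset operation and the transpilation of a non-native RX gate into the basis gates; it assumes Qiskit is installed, and the `c_if` conditional is the legacy Qiskit API.

```python
# Minimal Qiskit sketch (ours): the "measure + conditional X" reset and the
# RX decomposition described above. The c_if conditional is the legacy
# Qiskit API (deprecated in Qiskit 1.x in favor of if_test).
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, transpile

# (1) Reset built from a measurement and a conditional X gate.
q, c = QuantumRegister(1), ClassicalRegister(1)
reset_demo = QuantumCircuit(q, c)
reset_demo.h(0)                   # some arbitrary prior state
reset_demo.measure(q[0], c[0])    # collapse to |0> or |1>
reset_demo.x(0).c_if(c, 1)        # flip back to |0> only if the readout was 1
# Qiskit also offers this as a single instruction: reset_demo.reset(0)

# (2) RX is not native; transpiling exposes its RZ/SX decomposition.
rx_demo = QuantumCircuit(1)
rx_demo.rx(0.7, 0)
print(transpile(rx_demo, basis_gates=["id", "rz", "sx", "x", "cx"]))
```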
## III Extending Quantum Computer Reset Gate Attacks Previous work by Mi et al. [8] has demonstrated information leakage across the reset operation on IBM Quantum computers. A malicious attacker can use a circuit consisting of just a measurement gate on the same qubit as a victim to extract information about the amplitude of the \(\left|1\right\rangle\) state, or the equivalent \(\theta\) angle, of the victim state before reset. We assume that a strong attacker is able to run their program immediately after the victim, on the same qubits that the victim used. We also assume the qubits used by the victim are reset before the attacker can access them. Before the victim's reset, we assume the victim likely ends their computation with a measurement on all involved qubits. This collapses the victim qubit states to either \(\left|0\right\rangle\) (where \(\theta=0\)) or \(\left|1\right\rangle\) (where \(\theta=\pi\)). This scenario is most advantageous for the attacker since they only need to distinguish the two ends of the measured output frequency distribution. It has been shown in the prior work that even with multiple reset gates before the attack, information leakage still occurs. The attacker model of the prior work is shown in Figure 1(a). In the figure, \(V\) represents the victim circuit, which includes the victim's final measurement. \(R\) represents one or more reset operations executed as a reset sequence between shots of a circuit. \(C\) represents the attacker's masking circuit, and \(M\) represents the attacker's measurement. However, a very simple defense mechanism can easily detect such an attack: scan for user circuits consisting of only one measurement gate or, more generally, any circuit that begins with a measurement gate, and flag these as suspicious. This work shows that an attacker can bypass such simple defenses, and also make a more potent attack circuit, by adding a masking circuit \(C\) before the measurement. By using a masking circuit \(C\), the attacker can make their circuit look like a benign quantum circuit, but still be able to extract information across the reset operation as before. This work shows various masking circuits and how attackers can recover information even if the masking circuit \(C\) is between the reset operation (of the victim) and the measurement (of the attacker). The high-level idea behind the extended quantum computer reset gate attacks is that the masking circuit \(C\) represents unitary operations which can be reversed. With knowledge of the measurement and the masking circuit, the attacker can gain information about the state right before the masking circuit, which is related to the victim's state right before the reset. The attack model is shown in Figure 1(b). Fig. 1: Attack model, \(q\) represents the target qubit and \(c\) represents its corresponding classical register. \(V\) is a shot of the victim's circuit, \(R\) is an inter-shot qubit reset mechanism, \(C\) is a masking circuit used by the attacker, and \(M\) is the measurement operation used by the attacker to try to guess the state of \(V\) before \(R\). ### _Attack Objective_ The first objective of this research work is to analyze the different types of masking circuits \(C\) that an attacker could utilize. By using various masking circuits, the attacker can make their circuit look like a benign circuit, making detection 
of the attack harder, while at the same time still being able to carry out the reset gate attack, where some information is learned about the state of the qubits prior to the reset. ### _Attacker Circuits_ This work explores and analyzes a variety of possible masking circuits \(C\). Later we show which ones work well, and which ones do not. * circuits consisting of an even number of single-qubit X gates on each qubit, such that the total effective angle of rotation \(\theta\) is \(0\). Since there is effectively no rotation, the attacker's measurement should return the same values as it would right after the reset operation. * circuits consisting of single-qubit gates with an effective \(\theta\) (RX gate) rotation and \(\phi\) (RZ gate) rotation. Because the rotation angles are known, the attacker can infer from their measurement the qubit \(1\)-output probabilities as they would be right after the reset gate. As we demonstrate, certain rotation angles make the attack more difficult, while others still allow the attacker to make a meaningful measurement. * circuits consisting of two-qubit CX gates where there is entanglement between qubits. The control qubit of a CX gate experiences delay (due to the duration of the CX gate) but can otherwise be leveraged by an attacker since it does not experience any rotation; meanwhile, the state of the target qubit of a CX gate depends on both its prior state and the control qubit, making the attacker's use of that qubit more difficult. * circuits from the QASM benchmark suite [7], which are real quantum computing circuits. These include the 2- and 3-qubit Grover search circuits and the 4-qubit quantum random number generator (QRNG). ### _Hiding Reset Operation Attack with Identity Circuits_ First, we experimented with using a series of X gates as the attacker circuit, as shown in Figure 2. For a variety of input states, we ran experiments increasing the number of reset gates and the number of X gate pairs, which we call the depth of the circuit. Since we use an even number of X gates, the masking circuit is thus always equivalent to identity in this experiment group. As shown later in Figures 10 and 12, information leakage still occurs with X gates added as a masking circuit. Based on the measured \(1\)-output frequency, the attacker can distinguish with high probability between victims initialized with \(\theta=0\) or \(\theta=\pi\). An attacker may try more complex, non-identity circuits, or try to attack victims after a larger number of reset gates to avoid detection. We explain these next. ### _Hiding Reset Operation Attack with RX and RZ Gate Circuits_ Next, we considered RX and RZ rotation gates for the attacker to mask the attack. We ran two experimental groups. For the first set of attacks, we fixed the attack circuit depth at \(1\) RX and \(1\) RZ gate, and we varied the rotation angles. An example is shown in Figure 3. For the second set of attacks, we varied the depth, i.e., the number of RX and RZ gates, while keeping the total equivalent rotation angles fixed at \(\theta=\pi\) and \(\phi=\pi/2\). For depth \(d\), we use \(d\) copies of RX (\(\pi/d\)) followed by \(d\) copies of RZ (\(\pi/2d\)). An example with \(d=2\) is shown in Figure 4. We chose \(\theta=\pi\) because, based on preliminary testing, it is the best non-zero rotation angle for the attacker. For \(\theta=\pi\), \(\phi=\pi/2\) is the choice of \(\phi\) angle that is best for the attacker. 
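For illustration, the sketch below (our reconstruction; the function name and parameters are ours) assembles the attack-model circuit of Fig. 1(b) with an RX/RZ masking circuit whose total rotation is fixed at \(\theta=\pi\) and \(\phi=\pi/2\), as in the second experimental group:

```python
# Sketch (ours) of the Fig. 1(b) attack model with an RX/RZ masking circuit:
# V = victim state and measurement, R = n reset operations, C = masking
# circuit with fixed total rotation, M = attacker measurement.
from math import pi
from qiskit import QuantumCircuit

def masked_attack_circuit(theta_victim: float, n_resets: int, depth: int) -> QuantumCircuit:
    qc = QuantumCircuit(1, 2)
    qc.rx(theta_victim, 0)        # V: victim initialization angle theta
    qc.measure(0, 0)              # victim's final measurement
    for _ in range(n_resets):
        qc.reset(0)               # R: inter-shot reset sequence
    for _ in range(depth):        # C: total RX angle pi, total RZ angle pi/2
        qc.rx(pi / depth, 0)
        qc.rz(pi / (2 * depth), 0)
    qc.measure(0, 1)              # M: attacker's measurement
    return qc

print(masked_attack_circuit(theta_victim=pi, n_resets=2, depth=2))
```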
### _Hiding Reset Operation Attack with CX Gate Circuits_ Further, we considered circuits involving multiple qubits. We ran experiments with a series of CX gates, using the victim qubit as the control qubit. CX gates have a long duration compared to single-qubit gates, yet a CX gate does not affect the state of its control qubit, allowing the attacker to gain information about the victim. The main goal is to evaluate the effect of time delay on the success of the attack. We hope to gain insight into whether the duration of a circuit could be used to classify potentially malicious circuits. As shown in Figure 5, we repeat a number of CX gates with the victim qubit, \(q_{0}\), as the control. The attacker only makes a measurement on the control qubit of the CX gates. ### _Hiding Reset Operation Attack with QASM Benchmarks_ Aside from single-qubit masking circuits and circuits with CX gates, an attacker may try more complex and deeper circuits to hide an attack. In particular, they could try to disguise their attack as a benign circuit, for example using some of the QASM benchmark circuits [7]. We evaluate whether it is possible for an attacker to perform a reset attack under our threat model using some common QASM benchmarks. Fig. 3: Example of a masking circuit with RX and RZ gates; different numbers of RX and RZ gates and different angles can be evaluated for the efficacy of the masking circuit. Fig. 2: Example of a two-X-gate circuit used as a masking circuit; any even number of X gates applied in sequence forms an identity circuit and can be evaluated for the efficacy of the masking circuit. #### 2-Qubit Grover Search Circuit We begin with the 2-qubit Grover search circuit. To start the search algorithm, the qubits need to be initialized into a uniform superposition with Hadamard gates. Then, the Grover operator, \(\mathbb{Q}\), is applied to amplify the amplitude of the correct answer via rotations done by \(\mathbb{Q}\). An example of 2-qubit Grover search is shown in Figure 6(a). We used Grover search with answer bitstring \(11\). The circuit for the algorithm is boxed in Figure 6(a). The Grover operator \(\mathbb{Q}\) is decomposed in Figure 6(b). The attacker uses this circuit after the reset gates and before the final measurement, like the previous attacks. Unlike the single-qubit attack circuits, the attacker makes measurements on all involved qubits. The victim qubits are initialized with \(\theta\) rotations independently of each other, that is, the rotation angles are not necessarily the same for each qubit. We limit the range of possible initial angles so that the total number of circuits for each trial does not exceed our limit of 300 circuits per job on the ibmq_jakarta machine. For 2-qubit Grover, each qubit is initialized by the victim with a rotation of \(\theta\in\{0,\frac{\pi}{7},\frac{2\pi}{7},\frac{3\pi}{7},\frac{4\pi}{7},\frac{5 \pi}{7},\frac{6\pi}{7},\pi\}\). #### 3-Qubit Grover Search Circuit We also experimented with the 3-qubit Grover search circuit, which looks similar to 2-qubit Grover search, but has more gates and is deeper. Each qubit is initialized by the victim with a rotation of \(\theta\in\{0,\frac{\pi}{3},\frac{2\pi}{3},\pi\}\). An example of 3-qubit Grover search is shown in Figure 7. #### Random Number Generator Circuit From the QASM benchmark suite, there are two small-scale circuits that do not use multi-qubit gates, namely, the quantum random number generator and the inverse Quantum Fourier Transform (QFT). 
However, the inverse QFT circuit requires conditional operations, which are currently unavailable on IBM Quantum machines. So we consider the random number generator on \(4\) qubits. The Quantum Random Number Generator, shown in Figure 8, uses Hadamard gates to produce a uniform superposition before measurement. This attacker circuit has the smallest depth of the benchmarks tested by this paper, with a depth of \(1\). Fig. 4: Example of a masking circuit with RX and RZ gates where the total rotation angles are fixed. Fig. 5: Example of a circuit with CX gates used as a masking circuit; different numbers of CX gates can be tested for the efficacy of the masking circuit. Fig. 6: Example of a 2-qubit Grover circuit used as a masking circuit; circuits with different bitstrings and operators can be tested for the efficacy of the masking circuit. The Hadamard, \(\mathbb{H}\), gate can be realized using the basis gates discussed in the text. Fig. 7: Example of a 3-qubit Grover circuit used as a masking circuit; circuits with different bitstrings and operators can be tested for the efficacy of the masking circuit. The Hadamard, \(\mathbb{H}\), gate can be realized using the basis gates discussed in the text. ## IV Reset Operation Error Channel Analysis Before we present the evaluation of the different attacks that use masking circuits, we discuss characteristics of the reset operation. Further, we compare the behavior of the reset operation on the real ibmq_jakarta machine to two types of simulation to motivate our use of the real machine for subsequent evaluation. ### _Behavior of Reset Operation_ Qubits are often implemented with \(\ket{1}\) as a higher energy state than \(\ket{0}\). This results in a higher probability of an incorrect readout for a qubit in state \(\ket{1}\) compared to state \(\ket{0}\). Thus, we expect states with a higher amplitude of \(\ket{1}\) to have a higher probability of being the \(\ket{1}\) state after a reset [8]. This error of real machine resets is seen in Figure 9(a), and it allows the attacker to extract information about the \(\theta\) angle of the victim qubit based on the measured \(1\)-output frequency [8]. Given the state: \[\ket{\psi}=\cos\left(\frac{\theta}{2}\right)\ket{0}+e^{i\phi}\sin\left(\frac{ \theta}{2}\right)\ket{1},\] recall that the probability of measuring \(1\) is \(\sin^{2}\left(\frac{\theta}{2}\right)\) according to the Born rule. This motivates an error channel characterization [8] based on the probability of measuring \(1\) post-reset: \[E(\theta)=a\left(b\sin^{2}\left(\frac{\theta}{2}\right)+(b-1)\frac{\theta}{ \pi}\right)+c,\] 
While the error rates and noise have decreased, the current experimental results suggest that the same reset error based on amplitude of \(|1\rangle\) is still present in IBM machines. In comparison to last year, the \(1\)-output frequency of an attacker measuring the victim qubit after \(6\) resets still displays a significantly higher frequency for \(\theta=\pi\) than for \(\theta=0\). At the same time, the noise is of much smaller magnitude, as indicated by the smaller error bars. With decreasing noise to signal ratio, the possibility of a reset error channel attack is becoming actually greater. The attacker is able to recover more information from the victim with ever-increasing probability, even after numerous reset operations. ### _Study of Simulated vs. Real Reset Operations_ We compared different types of simulated reset operations with the real ibmq_jakarta machine. We used AerSimulator, with a noise model directly imported from IBM's ibmq_jakarta backend. In theory, the simulator should behave as the real backend for all qubit gates. Based on our testing, the built-in simulated reset operation does not have the same error as the real machine's reset operation. While the real reset operation has a higher probability of an incorrect reset for qubits with a larger magnitude of \(|1\rangle\), the simulated reset removes this: there is no clear correlation between the victim qubit's original theta angle and the output frequencies post-reset. The data is shown in Figure 8(b). Given the built-in simulated reset operation does not behave as a real one, we then attempted replacing the built-in reset operation with a measurement followed by an X gate conditioned on the measurement being \(1\) - this should in theory represent the behavior of the reset operation. We did observe more realistic results in the case of \(1\) reset, as the sigmoid shape can be seen in Figure 8(c). However, the addition of two or more reset operations with the simulator results in noisy data, and no longer fits a sigmoid curve. This suggests that the simulated reset does not emulate the real machine when using a measurement followed by an X gate as the reset operation. Both the simulator's built-in simulated reset operation and the measurement followed by X gate scheme on the simulator produce a lot of noise: the \(1\)-output frequencies vary a lot depending on the victim qubit's \(\phi\) angle compared to the real machine. At this time, the simulator is unable to accurately replicate the behavior of the reset operation on IBM Quantum machines, and our evaluation in the rest of the paper users' data from real ibmq_jakarta machine. ## V Evaluation of Efficacy of Masking Circuits In this section, we present evaluation results for different masking circuits previously discussed in Section III. The masking circuit evaluation is based on: 1) X gates, 2) RX and R2 gates, 3) CX gates, and 4) QASM benchmarks. For all circuits, we ran experiments on ibmq_jakarta using a varying number of reset gates after the victim and a varying circuit depth for the masking circuits, where possible. ### _Evaluation Metrics_ To evaluate the effectiveness of each attack circuit, we use a metric of signal-to-noise ratio (SNR). We computed the SNR to estimate how much information the attacker could extract from the output frequency data when different types of masking circuits are used. We compute the error channel characterization parameter \(a\), which represents the amplitude of our sigmoid fit. The fit is described in Section IV-A. 
We compute the standard deviation in \(1\)-output frequency for each fixed \(\theta\) as \(\phi\) varies. Finally, we compute the average of the standard deviations over all input \(\theta\) values, denoted \(\sigma\). Then the signal-to-noise ratio is defined as \(a/\sigma\), expressed on a log scale (decibels). Fig. 8: Example of the Quantum Random Number Generator (QRNG) used as a masking circuit. The Hadamard, H, gate can be realized using the basis gates discussed in the text. ### _Reset Schemes_ Using this metric, we can compare the different reset schemes described in Section IV-C. Figure 11 shows the SNR for the three different reset schemes. The SNR metric aligns with the analysis of Section IV-C. We observe a relatively strong SNR for the real reset. For the simulated reset, there is a sharp decline in SNR after adding the first reset. Using a measurement and X gate to simulate reset, the SNR for one reset is relatively high, but adding more resets decreases the SNR drastically. ### _Attack Involving Identity Circuits_ We ran circuits with a series of \(0,2,4,6,8,16,\) and \(32\) X gates as the attacker circuit. For each attack circuit, we added up to \(6\) reset gates after the victim. All experiments were run on qubit \(0\) of ibmq_jakarta. Figure 10 displays the \(1\)-output frequency of each attack circuit as a function of the victim qubit's rotation angle \(\theta\). For the purposes of conserving space, only the results for \(2\) X gates are shown. The graphs for more resets display the same sigmoid shape. We expect that as the depth of the circuit or the number of reset gates increases, the attacker's job becomes harder as more noise is introduced. Figure 12 shows the SNR plotted on a decibel scale for all depths of X gate circuits and all numbers of reset gates. As expected, increasing the number of resets decreases the signal-to-noise ratio. The correlation coefficient between these two variables is \(-0.862\), indicating a strong negative correlation. The most significant decrease in SNR resulted from the addition of the first reset gate, with subsequent resets having a lesser effect on SNR. The depth of the circuit, measured as the number of X gates, did not appear to have much effect on the SNR, as there is no clear trend of the SNR as depth increases. The correlation coefficient between these two variables is \(-0.057\), indicating no significant correlation. ### _Attack Involving RX and RZ Gate Circuits_ In the first set of experiments, we used \(\theta\in\{0,\frac{\pi}{4},\frac{\pi}{2},\frac{3\pi}{4},\pi\}\) and \(\phi\in\{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\}\) for the attacker's RX and RZ gates, respectively. We observed that \(\phi=\pi/2\) seems particularly beneficial for the attacker compared to other \(\phi\) angles. The results for this \(\phi\) angle are shown in Figure 13. For \(\theta=\pi/2\), the SNR is the lowest, meaning it is the most difficult for the attacker to extract information about the victim's initial angle. This coincides with our expectation, because after an RX rotation by \(\pi/2\), both initial states \(\ket{0}\) and \(\ket{1}\) have the same output probability of \(\frac{1}{2}\). As the \(\theta\) angle changes from \(\pi/2\) towards \(0\) or \(\pi\), it becomes easier for the attacker to distinguish the victim's initial state. Increasing the number of resets generally decreases the signal-to-noise ratio, as expected. Fig. 10: Example \(1\)-output frequency of the X gate masking circuit. 
Circuits with \(0,2,4,6,8,16\), and \(32\) X gates were used as the attacker circuit. Experiments were done with qubit \(0\) of ibmq_jakarta. Only results for \(2\) X gates are shown, with the other graphs having a similar shape. Fig. 9: Qubit state retention, comparison of: (a) reset operation on the real machine, (b) simulated reset operation, and (c) simulated reset operation using the "measurement + X gate" approach. We then experimented by varying the depth of RX and RZ gates, while keeping the total rotation angles at \(\theta=\pi\) and \(\phi=\pi/2\). Each rotation gate used the same \(\theta\) or \(\phi\) angle. For example, for depth \(2\) we used two RX (\(\pi/2\)) gates and two RZ (\(\pi/4\)) gates. We ran a control group with no attacker, labelled depth \(0\) in Figure 14. For \(3\) resets, increasing the depth decreases the SNR. However, for \(2\) resets, the opposite effect occurs. In general, the correlation between depth and SNR is \(-0.14\), indicating little to no correlation. ### _Attack Involving CX Gate Circuits_ We experimented with a series of CX gates as the attacker. We used qubit \(0\) on ibmq_jakarta as the victim qubit, and we added up to \(6\) CX gates in series after the reset gates, using the victim qubit, qubit \(0\), as the control qubit. Interestingly, increasing the number of reset gates from \(0\) to \(1\) or from \(1\) to \(2\) decreases the SNR, while increasing the number of reset gates beyond \(2\) seems to increase the SNR, on average. For any number of reset gates, the depth of the CX gates does not have a strong correlation with the SNR, with a correlation coefficient of \(0.039\). Due to numerous job requests, the circuits for this set of experiments were executed over several days. This may have introduced noise in the data, as IBM Q machines have slightly different error rates across different execution times. Fig. 11: Comparison of SNR for reset on the real machine, simulated reset, and simulated reset using the "measurement + X gate" approach. Fig. 12: SNR for the X gate masking circuit experiments. A series of up to \(32\) X gates were tested. Fig. 13: SNR for the first set of RX and RZ attacker experiments. The rotation angles were varied while the depth was fixed at \(1\) of each gate. Fig. 14: SNR for the second set of RX and RZ attacker experiments. The depth was varied while the total rotation angles were fixed at \(\theta=\pi\) and \(\phi=\pi/2\). ### _Attack Involving Grover Search Circuits_ To compute the signal-to-noise ratio with a multi-qubit circuit, we need a new measure of signal. For each qubit, we consider the \(1\)-output frequency as a function of all qubits' initial angles. We compute the sum of the squares of the gradients with respect to each input dimension, then take a square root. This final value, the Root-Mean-Square (RMS) gradient, is roughly a measure of the rate of change in \(1\)-output frequency as we change the input angles. As a measure of noise, we use the average standard deviation in output frequency, as in the single-qubit case. For each combination of initial angles, we did \(8\) trials. We compute the quotient as the SNR for each qubit. Figure 16 shows the results for \(2\)-qubit Grover search. We observed sharp declines in SNR after \(1\) and \(2\) resets. Increasing the number of resets past \(2\) does not appear to significantly impact the SNR. Figure 17 shows the results for \(3\)-qubit Grover search. We observed a sharp decline in SNR after \(1\) reset. 
Increasing the number of resets past \(1\) does not appear to significantly impact the SNR. We also note the difficulty of drawing a conclusion given the limited data we have, especially for \(3\)-qubit Grover search. ### _Attack Involving QASM Benchmark Circuits_ Below are the results for the QRNG circuit on four qubits. We used an initial rotation angle of \(\theta\in\{0,\pi/2,\pi\}\) for each qubit. For every combination of initial angles, we ran \(6\) trials. For three or more resets, the IBM computers ran into an internal error. This error also appeared on ibmq_jakarta for large numbers of resets on the Grover search algorithms. Figure 18 presents the mean SNR of all four qubits of the QRNG circuit. Interestingly, increasing the number of reset gates up to \(2\) does not seem to have a significant impact on the SNR. ### _Summary of the Attacks and the Evaluation_ We have shown that for single-qubit gates used in the masking circuit \(C\), the attacker may use simple identity circuits consisting of pairs of X gates, or circuits consisting of RX and RZ gates. For multi-qubit gates, an attacker can also try to hide the attack by using a masking circuit with CX gates, as long as the target qubit of the attack is the control qubit of the CX gate. We also showed conditions under which the attack becomes more difficult, such as when qubits are targets of CX gates. We confirmed our expectation by running select QASM benchmark circuits and showing that it is difficult for the attacker to leak the victim's state, due to the presence of multi-qubit gates or other non-identity gates, if the masking circuit \(C\) is a full QASM benchmark, for example. Based on these findings, a defense for our extended reset operation attack can be developed. Fig. 15: SNR for the CX gate attacker experiments. The CX gates used qubit \(0\) as the control qubit. Output results and SNR are based on qubit \(0\). Fig. 16: SNR for the 2-qubit Grover circuit experiments. The average gradient is used as the measure of signal for calculating the SNR. Fig. 17: SNR for the 3-qubit Grover circuit experiments. The average gradient is used as the measure of signal for calculating the SNR. Fig. 18: SNR for the QRNG benchmark circuit experiments. Hadamard gates on each qubit are used to achieve a uniform, random output. The average gradient is used as the measure of signal for calculating the SNR. 
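Before turning to defenses, the following numpy sketch (ours) summarizes the multi-qubit SNR metric used above: the mean RMS gradient of a qubit's \(1\)-output frequency over the grid of input angles as the signal, divided by the average standard deviation across trials as the noise; the decibel convention here is our assumption.

```python
# Sketch (ours) of the per-qubit SNR used for the multi-qubit experiments.
import numpy as np

def qubit_snr_db(freq_trials: np.ndarray) -> float:
    # freq_trials: shape (trials, n_angles_q0, n_angles_q1, ...) holding one
    # qubit's 1-output frequency per trial and per input-angle combination.
    mean = freq_trials.mean(axis=0)
    grads = np.gradient(mean)                  # one gradient array per input axis
    grads = [grads] if mean.ndim == 1 else grads
    signal = np.sqrt(sum(g ** 2 for g in grads)).mean()  # mean RMS gradient
    noise = freq_trials.std(axis=0).mean()               # avg std over trials
    return 10 * np.log10(signal / noise)       # dB convention is our assumption

# Example with placeholder data: 8 trials over an 8x8 grid of input angles.
demo = np.random.default_rng(1).uniform(0.4, 0.6, size=(8, 8, 8))
print(qubit_snr_db(demo))
```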
If a circuit consists of an identity followed by measurement, our program will flag it as suspicious. The size of the matrix representation scales exponentially with the number of qubits involved, so it is limited to smaller circuits. In testing, we generated \(100\) random \(7\)-qubit circuits of depth \(10\), and our program successfully and efficiently flagged all of these as identity circuits. ### _Heuristics for General Attack Detection_ In the most general case, the attacker may use a non-identity circuit as a masking circuit, or he or she may use many qubits that make matrix representations infeasible to work with. In this case, we present an approach that considers each qubit one at a time. For each qubit, we can compute the matrix representation of all gates involving the specific qubit. We first check if the qubit is involved in any multi-qubit gates. Based on our results, circuits involving multi-qubit gates are not susceptible to the reset gate attacks. However, single-qubit gates introduce little error, and even at large depths, the attack can still extract information on these qubits. Thus, any qubits involved in only single-qubit gates, or the control qubit of a CX gate, will be noted by our program. In the case that a qubit is only involved in single-qubit gates, our program checks if the circuit applies an effective RX rotation on the qubit. Based on our results, an effective RX rotation close to \(\pi/2\) makes it difficult for the attacker to perform the attack. So, we propose flagging any qubit with effective rotation \(\theta>3\pi/4\) or \(\theta<\pi/4\). Note that for most circuits, most qubits will have more complex operations that cannot be reduced to an equivalent RX rotation. In this case, our program can still note whether the qubit is effectively identity, or only involves single-qubit gates. ### _Implementation_ We assume our program has access to the circuit that is to be checked, e.g., our program can be used by IBM to scan submitted circuits before they execute on the quantum computers. Given an input circuit, it is simple to count the circuit depth of the possibly malicious input circuit. Additionally, Qiskit provides functionality to convert circuits into their matrix representation. Since the number of resets used is controlled by the quantum computer provider, we assume the number of resets is an input or configuration given to our program. To scan circuits, we first extract the gates from the input quantum circuit, and for any given qubit, check if the gate operates on the qubit. If so, we save the instruction for the gate. In the end, we make a quantum circuit from the list of instructions, yielding the subset of the original circuit that involves each specific qubit. On this smaller circuit, we compute the matrix representation and check for existence of multi-qubit gates, equivalence to identity, and equivalence to a single RX rotation. Based on our testing, for attacker circuits of \(32\)\(\mathrm{X}\) gates, \(6\)\(\mathrm{CX}\) gates, \(2\)-qubit Grover, \(3\)-qubit Grover, and the QRNG Benchmark, our antivirus program can complete a scan in \(0.017\) seconds, \(0.009\) seconds, \(0.024\) seconds, \(0.130\) seconds, and \(0.017\) seconds, respectively. ## VII Related Work Considering attacks on quantum computers, the closest related work is the work which analyzed attacks on reset operations [8]. The authors showed for the first time that imperfections in reset operations can lead to possible information leaks between shots of circuits. 
Our work extends this prior work and shows more advanced attacks where use of masking circuit is used to help hide the attacker while still allowing for information leak to be extracted by the attacker. Considering protections for quantum computers, previous work has suggested an "antivirus" programs which can be used to detect malicious quantum circuits. The authors used a directed acyclic graph (DAG) to represent an input quantum circuit. In the DAG with non-commutativity (DAGNC) representation, a quantum circuit can be represented as a multigraph. Vertices in the multigraph correspond to gates in the quantum circuit, and edges correspond to orders between gates. The edge from node \(i\) to \(j\) means that the gates corresponding to node \(i\) and \(j\) have at least one qubit or classical bit in common, and the gate corresponding to node \(i\) executes before the gate corresponding to node \(j\). The authors used this representation to find instances of smaller "virus" circuits in the larger input quantum circuit. In contrast, our work does not require use of DAG, but instead scans individual qubits and computes the matrix form of the input circuit. Our defense program could be incorporated into the antivirus as a new feature. ## VIII Conclusion In this work, we demonstrated how a set of new, extended reset operation attacks could lead to critical information leakage from quantum programs executed on quantum computing cloud environments. This work showed that this new kind of reset operation attack could be more stealthy than the previous reset operation attacks, by hiding the intention of the attacker's circuit. The work evaluated how an attacker can mask the circuit by adding simple identity circuits or non-identity circuits consisting of RX and RZ gates for single-qubit gates or CX gates. This work also showed that more complex circuits may render the attack difficult. Based on the findings, this work showed a set of heuristic defenses that could be applied at compile time to check and flag the new kind of malicious circuits.
2306.00075
CAROM Air -- Vehicle Localization and Traffic Scene Reconstruction from Aerial Videos
Road traffic scene reconstruction from videos has long been desired by road safety regulators, city planners, researchers, and autonomous driving technology developers. However, it is expensive and unnecessary to cover every mile of the road with cameras mounted on the road infrastructure. This paper presents a method that can process aerial videos into vehicle trajectory data so that a traffic scene can be automatically reconstructed and accurately re-simulated using computers. On average, the vehicle localization error is about 0.1 m to 0.3 m using a consumer-grade drone flying at 120 meters. This project also compiles a dataset of 50 reconstructed road traffic scenes from about 100 hours of aerial videos to enable various downstream traffic analysis applications and facilitate further road traffic related research. The dataset is available at https://github.com/duolu/CAROM.
Duo Lu, Eric Eaton, Matt Weg, Wei Wang, Steven Como, Jeffrey Wishart, Hongbin Yu, Yezhou Yang
2023-05-31T18:00:17Z
http://arxiv.org/abs/2306.00075v1
# CAROM Air - Vehicle Localization and Traffic Scene Reconstruction from Aerial Videos ###### Abstract Road traffic scene reconstruction from videos has long been desired by road safety regulators, city planners, researchers, and autonomous driving technology developers. However, it is expensive and unnecessary to cover every mile of the road with cameras mounted on the road infrastructure. This paper presents a method that can process aerial videos into vehicle trajectory data so that a traffic scene can be automatically reconstructed and accurately re-simulated using computers. On average, the vehicle localization error is about 0.1 m to 0.3 m using a consumer-grade drone flying at 120 meters. This project also compiles a dataset of 50 reconstructed road traffic scenes from about 100 hours of aerial videos to enable various downstream traffic analysis applications and facilitate further road traffic related research. The dataset is available at [https://github.com/duolu/CAROM](https://github.com/duolu/CAROM). ## I Introduction Road traffic has created many problems that need to be studied with real-world road traffic data. For example, local Departments of Transportation (DOTs) need to count the vehicles on every major road segment for traffic management purposes. It is desirable that each counted vehicle in the data can have fine-grained attributes, such as vehicle type, speed, lane, etc. City planners and transportation system engineers also want to use detailed road traffic data for better decision-making and resource provisioning. Additionally, for researchers and regulators interested in road safety analysis and driver behavior modeling, it is more valuable to capture the comprehensive motion states of vehicles passing through a specific traffic scene instead of obtaining just a count (which does not carry much information) or a crash report (which happens infrequently). For example, aggressive lane switches and frequent close-call incidents on a highway segment may indicate that the traffic is reaching the designed capacity. Similarly, reckless driving behaviors can reveal more insights on road safety than reported accidents. Besides policymakers, vehicle manufacturers and insurance companies can also benefit from datasets of vehicle trajectories, especially if such data can be used to accurately reconstruct and re-simulate the captured traffic scenes. Traditionally, such road traffic data is collected and managed by the DOTs using devices installed on the road infrastructure. There are a few problems. First, it is expensive and unnecessary to cover every mile of the road with sensors and cameras. As a result, many interesting traffic scenes are not captured. Second, although there are many cameras deployed in strategic locations in major cities, it is challenging to process the videos or deliver them over the network due to their sheer volume. Hence, the operational cost of these cameras further hinders large-scale deployment. Meanwhile, since the cameras cannot move, the captured video data contain redundant information from repeated patterns. Third, the vehicle localization accuracy of infrastructure-based sensors can significantly degrade when a tracked vehicle is far away or occluded by other vehicles. Fourth, due to regulations and privacy issues, it is challenging for researchers outside the DOTs and industrial partners to access these video data for open research purposes. 
On the other hand, for independent researchers or companies, it is expensive to collect and manage road traffic data. Last, for the researchers and companies who can afford to collect data on the road using cameras and LIDAR sensors, it is time-consuming to label the data to train neural network models for vehicle detection and tracking. This is particularly an issue if the labeling must be done in 3D space. To address these issues, we propose a framework named CAROM Air ("CARs On the Map tracked from the Air"), which digitizes and reconstructs road traffic scenes from aerial videos taken by drones, as shown in Fig. 1. Fig. 1: An overview of CAROM Air: (a) tracked vehicles on the aerial video, (b) the reconstructed traffic scene, (c) vehicle keypoints, and (d) the map with semantic annotation. It is inexpensive and flexible since it does not require any support from the road infrastructure. The core of this framework is a pipeline that can track vehicles on the aerial videos and localize them on the map accurately through the detection of vehicle keypoints. This allows us to convert the aerial videos to vehicle trajectory data which can be delivered over communication networks for reconstruction or further analysis using programs. Such vehicle trajectory data does not contain any personally identifiable information, and hence, it can be shared without causing privacy issues. Moreover, we demonstrate that our data can be used as reference measurements or 3D labels for videos and LIDAR point clouds captured by devices on the road infrastructure. This work is a continuation of the ongoing research conducted by the Institute of Automated Mobility (IAM) [1] to support the development and validation of an operational safety assessment methodology [2][3][4][5] and intelligent road traffic infrastructure [6][7][8]. In summary, our contributions are as follows. 1. We developed a keypoint-based vehicle tracking and localization pipeline for aerial videos. The average vehicle localization error is from 0.1 m to 0.3 m using a drone flying at 120 meters in various conditions. 2. We built a dataset of vehicle trajectories obtained from about 100 hours of drone video in 50 different road traffic scenes. 3. For each scene, we also provide the map with semantic segmentation at the lane level, which enables further automated traffic analysis and statistics. 4. We demonstrated several downstream applications to show the practicality of our framework. ## II Related Work Unmanned Aerial Vehicles (UAVs), commonly called drones, have been used in 3D mapping of road infrastructures [9], traffic monitoring [10][11][12][13][14][15][16][17][18], road safety analysis [19], and transportation of humans or goods [20]. They are gaining popularity as an inexpensive and flexible method of obtaining aerial videos of road traffic scenes. To further process the videos, a pipeline of vehicle detection and tracking can be applied, typically with deep neural networks [21]. With such methods, researchers have constructed datasets of vehicle trajectories obtained from drone videos [22][23][24][25][26][27][28], which supplement existing large-scale autonomous driving datasets [29][30][31][32][33] and road infrastructure based traffic monitoring datasets [34][35][36]. These vehicle trajectory datasets further enable a series of analysis tasks, often with the help of a map containing lane-level traffic semantics [37][38]. 
Compared to existing works, our method provides better vehicle localization accuracy with more rigorous and more extensive evaluations. Besides, thanks to our keypoint-based vehicle localization algorithm, our framework has better flexibility in drone camera angles rather than requiring the camera to always look downward. Similar keypoint-based methods have been explored in 3D reconstruction [39][40][41][42][43][44][45] and autonomous driving from an ego vehicle's viewpoint [46][47][48][49], but not from a drone camera. Additionally, our dataset is larger and more diverse than similar datasets from existing works. ## III The CAROM Air Framework The CAROM Air framework contains three layers, as shown in Fig. 2. The foundational layer is a pipeline to track and localize vehicles captured on the aerial video (detailed in this section). The middle layer is the dataset of tracked vehicle trajectories and traffic scene maps with lane-level semantic annotation (section V). After that, the downstream applications form the third layer (section VI). ### _Camera Calibration_ The camera calibration step is illustrated in Fig. 3. We use a pinhole camera model with no distortion and a flat ground model, which can usually achieve sufficient accuracy. For each video track, we usually annotate 8 to 16 point correspondences on a satellite map (e.g., a screenshot on Google Maps) and a reference aerial image (typically the first image in a video). With these point correspondences, the 3D pose of the camera is solved through Perspective-n-Points (PnP) [50] given the camera intrinsics (calibrated in the lab) and the 3D coordinates of the points (computed using the scale of the map by assuming the annotated points are on the flat ground). Different from a stationary camera installed on the road infrastructure, the pose of the drone camera can drift. Hence, recalibration is needed for each video image. To achieve this, we detect the corner features [51] on the ground (denoted as the reference points in Fig. 2) for the reference aerial image, track them across the whole video, and recompute the camera pose using PnP. The semantic annotation of the map helps to determine those points on the ground, e.g., the map shown in Fig. 1(d). With the camera parameters and the map, we can back-project any image pixel to a 3D location if that pixel is on the ground. Fig. 3: An illustration of camera calibration. Fig. 2: The CAROM Framework Architecture. ### _Vehicle Detection and Tracking_ We use a Keypoint RCNN [52] to detect vehicle keypoints and bounding boxes on each image. We define 33 keypoints in total, as shown in TABLE I and illustrated in Fig. 4 (a). Keypoints are usually defined in groups of two (i.e., right-left) or four (i.e., front-right, front-left, rear-right, and rear-left). Among them, 19 keypoints are detected on the image, as shown in Fig. 4 (b). These keypoints are usually related to observable features such as corners. Hence, they can be reliably detected in most cases using a Keypoint RCNN trained from a small dataset constructed by us (4,386 images, about 12,000 vehicles in total). With the detected vehicle object instances on two adjacent video frames, we associate them if the intersection-over-union (IoU) of their bounding boxes exceeds a certain threshold (i.e., tracking by detection). ### _Vehicle Model Fitting_ We collected 200 vehicle 3D models from the Internet and annotated all 33 keypoints in 3D for each model. These 3D models include vehicles of various types, and an example is shown in Fig. 
4 (a). We also preprocessed the 3D models to the actual scale of real-world vehicles. For each vehicle model, we concatenate the \((x,y,z)\) coordinates of all 33 annotated 3D keypoints as a long vector (denoted as the shape vector \(\mathbf{s}_{i}\)). After that, we run Principal Component Analysis (PCA) [53] on the set of shape vectors \(\{\mathbf{s}_{i}\}\) of all vehicles to find the mean shape (denoted as \(\mathbf{s}_{m}\)), the \(k\) basis vectors (denoted as the columns of a matrix \(W\)) corresponding to the \(k\) largest eigenvalues, and the \(k\)-dimensional parameter vectors \(\{\mathbf{b}_{i}\}\), such that the reconstructed shapes \(\{\hat{\mathbf{s}}_{i}=W\mathbf{b}_{i}+\mathbf{s}_{m}\}\) can approximate the original shapes \(\{\mathbf{s}_{i}\}\). Similarly, we can generate a vehicle shape \(\hat{\mathbf{s}}^{*}=W\mathbf{b}^{*}+\mathbf{s}_{m}\) from an arbitrary parameter vector \(\mathbf{b}^{*}\). Shapes of vehicles of various types can be generated in this way, as demonstrated in Fig. 4 (c). The mean shape vector \(\mathbf{s}_{m}\) and the matrix \(W\) are collectively called the **vehicle shape prior**. Given a vehicle on an image, we try to find a parameter vector \(\mathbf{b}\) and the vehicle pose \((R,\mathbf{t})\), such that the generated vehicle shape best fits the detected keypoints \(\mathbf{p}\) under the camera projection \(\Pi()\) obtained from recalibration, _i.e._, \[\operatorname*{arg\,min}_{\mathbf{b},R,\mathbf{t}}\sum_{j=1}^{N}\alpha^{(j)}||\mathbf{p}^{(j)}-\Pi(R(W^{(j)}\mathbf{b}+\mathbf{s}_{m}^{(j)})+\mathbf{t})||+\lambda||\mathbf{b}-\mathbf{b}_{\mathbf{t}}||.\] Here, \(N\) is the total number of detected keypoints (19 in our case); \(\alpha^{(j)}\) is the visibility of the \(j\)th keypoint reported by the detector, i.e., either 1 (visible) or 0 (invisible); \(\mathbf{p}^{(j)}\) is the pixel coordinates of the \(j\)th keypoint; \(W^{(j)}\) and \(\mathbf{s}_{m}^{(j)}\) are the vehicle shape prior components for the \(j\)th keypoint. Assuming the vehicle is always on the flat ground (_i.e._, the XOY plane), there are essentially three unknown variables in the vehicle pose, _i.e._, the vehicle position \((x,y)\) in \(\mathbf{t}\) and the heading angle \(\psi\) in \(R\) (\(R\) is the rotation matrix around the z-axis by the angle \(\psi\)). With this parameterization, the model fitting problem is simplified to an unconstrained nonlinear least squares problem, which can be solved efficiently using the Levenberg-Marquardt method [54]. The initial position of the vehicle is approximated by the center of the bounding box, and the initial heading of the vehicle is obtained using a set of vectors through random sample consensus (RANSAC) [55]. These vectors are derived from a set of keypoint pairs pointing in the vehicle's forward direction, such as {(0, 2), (1, 3), (4, 6), (8, 10),...}. In fact, since \(\Pi()\) is close to a weak perspective projection for aerial videos, if the initial estimation of the vehicle heading is reasonably accurate (which is usually the case), this problem is very close to a linear least squares problem. Hence, it generally converges very fast (sub-millisecond in our implementation). The last term \(\lambda||\mathbf{b}-\mathbf{b}_{\mathbf{t}}||\) is a regularizer, where \(\mathbf{b}_{\mathbf{t}}\) is the categorical "template" parameter vector. 
For example, if the vehicle is detected as a sedan, \(\mathbf{b}_{\mathbf{t}}\) is the average of \(\{\mathbf{b}_{i}\}\) from all sedans among the 200 vehicle 3D models that are used to construct the vehicle shape prior. Meanwhile, \(\mathbf{b}_{\mathbf{t}}\) is also used as the initial value of \(\mathbf{b}\) in the optimization procedure. After the model fitting, we find the k nearest neighbors of \(\mathbf{b}\) in \(\{\mathbf{b}_{i}\}\) and use them to determine the type of the vehicle. \begin{table} \begin{tabular}{|c|c|l|} \hline ID & detected? & keypoint definition \\ \hline \hline 0 - 3 & Yes & corners of roof top \\ \hline 4 - 7 & Yes & corners of front and rear windshields \\ \hline 8 - 11 & Yes & centers of front and rear lights \\ \hline 12 - 15 & No & corners of front and rear bumpers \\ \hline 16 - 19 & No & centers of wheels \\ \hline 20 - 23 & No & corners of chassis bottom surface \\ \hline 24 - 25 & Yes & outermost corners of side mirrors \\ \hline 26 - 27 & No & corners of the front door windows \\ \hline 28 - 31 & Yes & wheel-ground contact points \\ \hline 32 & Yes & center of the brand logo in the front \\ \hline \end{tabular} \end{table} TABLE I: Definition of vehicle keypoints. Fig. 4: Vehicle keypoints: (a) defined in 3D, (b) detected on the image, and (c) generated from a vehicle shape prior. Fig. 5: The simplified vehicle kinematic bicycle model (left) and state prediction rules (right). 
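The following numpy sketch (ours; the placeholder data and the choice \(k=5\) are assumptions) illustrates the shape prior construction and the shape generation \(\hat{\mathbf{s}}=W\mathbf{b}+\mathbf{s}_{m}\) described above:

```python
# Sketch (ours) of the vehicle shape prior: PCA over 200 shape vectors of
# concatenated 33x3 keypoint coordinates, then shape generation W b + s_m.
import numpy as np

S = np.random.rand(200, 99)          # placeholder for the real annotated shapes
s_m = S.mean(axis=0)                 # mean shape vector
_, _, Vt = np.linalg.svd(S - s_m, full_matrices=False)
k = 5                                # number of retained components (assumed)
W = Vt[:k].T                         # (99, k) basis matrix of the shape prior
B = (S - s_m) @ W                    # parameter vectors {b_i}, one per model

b_template = B[:20].mean(axis=0)     # e.g., average over one vehicle category
s_new = W @ b_template + s_m         # generated shape vector
keypoints_3d = s_new.reshape(33, 3)  # back to 33 keypoints in 3D
print(keypoints_3d.shape)
```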
## IV Empirical Evaluation We conducted several experiments to evaluate the proposed framework and our prototype implementation. First, we evaluate the vehicle detection and tracking performance with three video tracks taken from three different scenes. The results are shown in TABLE II; most of the metrics are from [56]. Here, "#Veh" is the number of vehicles in the video track. "IDE" is the number of vehicles with tracking ID errors. "MT" is the number of vehicles that are tracked for over 80% of the time (i.e., "mostly tracked"). "ML" is the number of vehicles that are tracked for less than 20% of the time (i.e., "mostly lost"). "VFP" is the number of non-vehicle objects that are wrongly tracked as vehicles (i.e., "vehicle false positive"). A vehicle is considered "tracked" if it is either tracked by the proposed pipeline (using keypoints) or the backup pipeline (using masks). We only track vehicles on the traversable ground area labeled on the map, and we only assign a tracking ID to a vehicle if it can be detected and associated for at least five consecutive video images. We intentionally set a stricter score threshold for the keypoint detector so that there are fewer false positives and more false negatives (as shown in the "FN (kp)" column in TABLE II). In most cases, these false negatives can be handled by the backup pipeline with a slight loss of vehicle localization accuracy. Overall, our prototype can track most of the vehicles correctly. Qualitative results and visualizations are available online in our GitHub repository [57]. Next, we quantitatively evaluate the model fitting performance. As shown in Fig. 6 (a), we parked a test vehicle in an empty lot and flew the drone at 85 meters. We moved the drone in a way such that the test vehicle can be seen in different places in the camera field of view (FOV), which is illustrated as the dashed yellow trajectory in the figure. Four large ArUco markers [58] were placed on the ground to facilitate camera recalibration. Three different test vehicles with known dimensions were used (a sedan, a hatchback, and an SUV), and six video tracks were collected (20 minutes in total). For each video track, we marked the four contact points between the wheels and the ground to derive the ground truth vehicle pose. The evaluation results are shown in TABLE III, averaged over all images from all video tracks. In this table, the first two data columns represent the position error in the vehicle's longitudinal direction (x) and lateral direction (y). The third data column gives the error of the heading angle (\(\psi\)) in degrees. The last three data columns are vehicle dimension errors (i.e., length, width, height) in meters. Additionally, in Fig. 6 (b), we show the average localization error across the camera FOV. These results show that the vehicle pose and shape can be captured precisely. After that, we quantitatively evaluate the performance of vehicle localization and speed estimation. As shown in Fig. 6 (c), we drove two test vehicles equipped with differential GPS devices in an intersection and compared the GPS data with our results obtained from a drone at 120 meters. The differential GPS has a localization accuracy of about 2 cm, and its measurements are used as references.
We drove each test vehicle across the intersection 24 times in various directions. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Videos & MOTA & MME & FP & FN & MOTA (kp) & MME (kp) & FP (kp) & FN (kp) & \#Objects & \#Images & \#Veh & IDE & MT & ML & VFP \\ \hline track 1 & 98.1\% & 512 & 16 & 639 & 88.5\% & 430 & 1 & 11808 & 107140 & 29300 & 195 & 1 & 193 & 1 & 0 \\ \hline track 2 & 99.2\% & 0 & 73 & 1399 & 90.1\% & 0 & 3 & 17887 & 180438 & 42390 & 650 & 0 & 648 & 2 & 0 \\ \hline track 3 & 97.4\% & 35 & 943 & 1503 & 89.6\% & 10 & 405 & 9681 & 96975 & 42796 & 498 & 2 & 495 & 1 & 6 \\ \hline \end{tabular} \end{table} TABLE II: Tracking evaluation results; the columns marked "(kp)" refer to the keypoint pipeline alone. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Metric & x (m) & y (m) & \(\psi\) (\({}^{\circ}\)) & L (m) & W (m) & H (m) \\ \hline Avg Error & 0.092 & 0.084 & 0.891 & 0.075 & 0.044 & 0.099 \\ \hline Std Dev & 0.113 & 0.090 & 1.055 & 0.108 & 0.047 & 0.116 \\ \hline \end{tabular} \end{table} TABLE III: Model fitting evaluation results. Two example trajectories are shown in Fig. 6 (c). With our prototype system, the keypoints of vehicle A can be reliably detected all the time, and the proposed pipeline is used. In contrast, the keypoints of vehicle B can only be detected occasionally, and the backup pipeline is used most of the time. The average location difference between our method and the reference is 0.10 m and 0.26 m for vehicles A and B, respectively. The average speed estimation difference is 0.22 m/s and 0.36 m/s for vehicles A and B, respectively. Additionally, the distribution of location differences in the vehicle's reference frame is shown in Fig. 6 (d). In this figure, the red ovals show the approximated two-sigma range, i.e., about 95% of the measurement differences are inside the ovals. These results indicate that our framework can localize vehicles accurately. We believe that the primary sources of errors are as follows: (a) camera lens distortion, (b) inaccurate drone camera pose estimation in recalibration, (c) ground flatness, and (d) keypoint detection errors. In some cases, under strong sunlight, the detector can also make mistakes with featureless black vehicles, vehicle shadows, and the specular reflection on the vehicle surface. Besides, sometimes vehicles with similar shapes are misclassified into the wrong types, _e.g._, sedan to coupe, SUV to minivan, etc. ## V The CAROM Air Dataset We constructed a vehicle trajectory dataset from about 100 hours of drone videos in 50 different traffic scenes covering a variety of traffic patterns, including roundabouts, intersections, local road segments, and highway segments. For a few scenes, we flew two drones simultaneously to cover larger areas, and we manually synchronized the videos using a flashlight visible from both drones. Besides the vehicle motion data, we also segmented the map at the lane level and annotated the type of these segmented areas, e.g., vehicle driving lanes, curb areas, sidewalks, crosswalks, buffer areas, etc. Examples are shown in Fig. 7. More details about the dataset content and the data format are available online [57]. Fig. 6: Empirical evaluation: (a) experiment setting for model fitting evaluation, (b) vehicle position error in the camera FOV, (c) experiment setting for vehicle tracking and localization, and (d) vehicle location error in the ego vehicle frame. Fig. 7: Examples from the CAROM Air dataset. ## VI Applications In this section, we demonstrate five different applications enabled by our framework and dataset.
**(1) Fine-grained traffic counting**. Traffic counting and statistical analysis are crucial for traffic management. Our framework can automate the counting and analysis at the lane level by utilizing a semantically segmented map. Each vehicle's trajectory can be converted into a list of map segments traversed by the vehicle, and a program can count vehicles that follow a specified pattern. For example, in Fig. 8 (a), on the southbound (blue trajectories), we observed that the percentages of left-turning vehicles that leave the intersection in the leftmost lane, middle lane, and the rightmost lane are 45%, 45%, 10%. On the northbound (cyan trajectories), the percentages are 23%, 55%, and 22%. Similarly, in Fig. 8 (b), we can obtain the speed of vehicles on each lane using the segmented map, which shows that 54% of the vehicles on the leftmost lane of the highway segment in both directions exceed the speed limit. **(2) Driving safety analysis**. Various assessment metrics have been proposed to objectively evaluate driving safety [2][3][5]. For example, in Fig. 8 (b), utilizing our vehicle trajectory data and the segmented map of a traffic scene, we can compute the Time-To-Collision (TTC) metric [59] for any pair of adjacent vehicles in the same lane. Similarly, in Fig. 8 (c), given a pair of intersecting vehicle trajectories and the area of encroachment (shown as the blue-shaded circle), we can compute the Post Encroachment Time (PET) metric [60]. A low TTC or PET generally indicates unsafe driving behavior. Moreover, we can also "re-simulate" the motion of vehicles using our data, and then probe the safety envelope by changing the physical properties of the vehicle [61]. **(3) Traffic incident detection**. Researchers spend hundreds of hours studying traffic data, which is laborious and costly. With our framework, an automated program can search through the dataset and detect incidents of interest. For example, in Fig. 8 (d), a vehicle drives through an area separating the main lanes on the highway and the ramp (shown as the red-shaded area). This is a traffic rule violation. We can check whether each vehicle trajectory passes through that area on the segmented map in our dataset to detect incidents of this type. Similarly, in Fig. 8 (c), we can detect a close call incident if the PET is lower than a threshold or an aggressive driving incident if the acceleration of a vehicle is higher than a threshold. **(4) Reference measurement and labeling**. In order to deploy cameras and LiDARs on road infrastructure to monitor traffic, effective neural network models are needed to detect and track vehicles. However, it is expensive to construct labeled datasets to train these models, especially if it is required to label vehicle 3D bounding boxes manually. With accurate cross-sensor calibration, vehicle trajectory data generated from our framework can be used as labels for the data obtained from other sensors or as reference measurements to evaluate the performance of other traffic monitoring systems [6][8]. For example, in Fig. 8 (f), we show the vehicle localization results on the aerial video. These results are projected onto an image obtained from an infrastructure-based camera in Fig. 8 (e). They are also shown in the 3D space together with the point cloud obtained from an infrastructure-based LiDAR in Fig. 8 (g). **(5) Generalization to roadside perspectives**. We can also apply our keypoint-based vehicle localization method to videos from non-aerial perspectives. An example is shown in Fig. 
8 (h). However, the keypoint detector must be trained with data from the same perspective. If some keypoints are not observable, _e.g._, when a vehicle moves toward the camera or when it is partially occluded by another vehicle, more robust regularization is required for the model fitting step. ## VII Conclusions This paper presents CAROM Air, a keypoint-based vehicle localization and traffic scene reconstruction framework using aerial videos recorded by drones. Our framework achieves decimeter-level localization accuracy and enables many practical downstream traffic analysis applications. Still, it has certain limitations, such as short flight time, restricted flight zones in cities, potential risks of drone crashes, etc. The drone camera also has a limited dynamic range, and the detector can produce errors on certain vehicles that appear infrequently in our training data (e.g., motorcycles, trucks, and trailers). With further development, we hope it can serve as a flexible method for road traffic analysis and eventually help improve road safety and transportation efficiency. Fig. 8: Example applications of the CAROM Air framework.
2310.20443
Ontologies for Models and Algorithms in Applied Mathematics and Related Disciplines
In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies.
Björn Schembera, Frank Wübbeling, Hendrik Kleikamp, Christine Biedinger, Jochen Fiedler, Marco Reidelbach, Aurela Shehu, Burkhard Schmidt, Thomas Koprucki, Dorothea Iglezakis, Dominik Göddeke
2023-10-31T13:24:28Z
http://arxiv.org/abs/2310.20443v2
# Ontologies for Models and Algorithms in Applied Mathematics and Related Disciplines ###### Abstract In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies. ## 1 Introduction Mathematical research data have a multitude of appearances. Classically, these are documents with mathematical proofs and formulae, but increasingly models, algorithms and software as well as the associated generated data (numerical, symbolic) are included [24, 2]. Those data stem from the mathematical core sciences as well as from the applied sciences, e.g., engineering or physics. Consider the following situation as a guiding example: We want to examine a sample of some porous medium, like open-pored asphalt concrete, and analyze it using a micro _X-ray computed tomography_ (X-RCT) scan to detect micro fractures in the material [18]. The measurement process can be mathematically modeled via a Radon transform in the following sense: When an X-ray travels on a line through an object it will be attenuated by the material on this line. This attenuation depends on the density of the material, which we want to reconstruct. Mathematically, the signal measured in the detector can now be expressed as a Radon transform, the so-called X-ray transform of the density function. Hence, to reconstruct the fracture images, an algorithm for inversion of the X-ray transform has to be applied to the observed data. Among others, the choice of the algorithm depends on the measured data and the properties of the model, like the coordinate system used. These metadata are often not stored systematically, causing violations of the FAIR principles [28], as the reusability cannot be guaranteed. Consequently, researchers who are interested in applying X-RCT, possibly in other research areas like archaeology or biomedicine, cannot simply reuse existing knowledge, but may have to redo the literature search on algorithms, software implementations and parameters from scratch. Due to their origin in engineering, data from different domains are not linked to the underlying general mathematical concept. Hence, synergies between applications are not exploited, although the basic mathematical model may be exactly the same. The creation of a _Knowledge Graph_ (KG) including models, algorithms, related literature and further metadata is in the scope of this paper. In general, questions arising in a typical _Modeling-Simulation-Optimization_ (MSO) workflow as shown in fig. 1 should be picked up. These include the existence of models, availability of solution algorithms, input or observation data or model validity. Usually, answering those questions requires a huge amount of effort that can be reduced if the required information is accessible and discoverable
from a single starting point in a KG, shareable by a unique identifier, and browsable through a portal.

Figure 1: Typical MSO workflow and resulting competency questions [24].

In the following sections we present our work on the ontologies for mathematical models (MathModDB) and for algorithms (AlgoData) and how they are connected, as the basis for a KG. The work presented is conducted within the _Mathematical Research Data Initiative_ (MaRDI) project of the _German National Research Data Infrastructure_ (NFDI). The NFDI is a part of a nationwide coordinated effort that aims at setting up data infrastructure and semantic technologies to foster the FAIR principles. MaRDI is the NFDI consortium for mathematics with the mission to develop a robust research data infrastructure for mathematics and bridging towards other scientific disciplines [24]. ## 2 Related Work Using ontologies for the semantic representation of mathematical knowledge is not yet widespread. Among the few examples are _Open Mathematical Documents_ (OMDoc) [12], which is a semantic markup format and ontology for mathematical documents consisting of classes such as _Definition_, _Theorem_ and _Proof_. Furthermore, the ontology of mathematical knowledge concepts (OntoMath) was initially developed to unify terminology in the field of mathematics and introduces a taxonomic approach for mathematical knowledge representation [3]. It was developed for educational purposes and eventually merged into the OntoMathEdu ontology, which can be used to model "prerequisite" relations to create educational plans in mathematics [5]. Ontologies for models and algorithms remain restricted to the implementation aspects of computational models in plasma physics [23] or mathematical models in biological processes [10]. A more general way to represent the semantics in mathematical models is given by _Model Pathway Diagrams_ (MPDs). They are based on _Quantities_ (represented by terms) which are connected by _Laws_ (represented by equations) [13]. The work presented in this paper can be seen as a generalization of MPDs with respect to more detailed information on mathematical models. In _Wikidata_ [27], mathematical models and algorithms are semantically described using classes like _computational problem_, _algorithm_ or _mathematical model_ and properties such as _defining formula_ or _computes solution to_. However, until now a thorough standardization is missing. For example, the _filtered back projection_ algorithm (wikidata:Q20665529) is described as an instance of _algorithm_ and subclass of _Radon transform_, but no information about the _defining formula_ and other important aspects is provided. Within the NFDI, ontologies and KGs play a major role as semantic technology to implement FAIR principles [11]. Thus, in many NFDI consortia there are efforts to use these to describe and connect research assets. For our work, _NFDI4Ing_ [22] is particularly relevant. The ontology _Metadata4Ing_ [1] (m4i) developed by that project describes scientific processes with a focus on the engineering domain. Conceptually, m4i is based on the metadata scheme EngMeta [19, 20], on existing terminologies [16, 4, 8] and on top-level ontologies [17, 6]. In the following, we present the two ontologies that we develop for mathematical models and numerical algorithms; they aim to express as much semantic information as reasonable while remaining compact and simple.
## 3 A Combined Ontology for Models and Algorithms In this section, we describe the evolution and development of ontologies for models and numerical algorithms (a preliminary version has been published in [21]). Furthermore, we explain how these two graphs are connected (cf. fig. 2). **Models** are fundamental mathematical research data (cf. sec. 1 and [14]), allowing us to abstract, formalize, analyze and ultimately understand complex phenomena from nature and technology. As depicted in fig. 1, models are - together with the algorithms - at the heart of the typical MSO workflow. They lie at the intersection of mathematics and its fields of application. Models from different application areas often share the same mathematical characteristics, and in some cases even structurally identical models are used, differing only in the quantities involved or the problems they are modeling [14].

Figure 2: The ontologies for algorithms (AlgoData, light blue) and models (MathModDB, dark blue). The ontologies are connected between the _Mathematical Model_ and the _Algorithmic Problem_ classes.

This universality of mathematical models justifies the construction of the _Mathematical Models Ontology_ (MathModDB) to improve the semantic representation of knowledge in mathematics. By analyzing use cases and their workflows, such as, amongst others, X-RCT (cf. sec. 1), it is apparent that the following classes are relevant for a comprehensive description of mathematical models: _Mathematical Model_ as the central class and anchor, _Application Domain_ as the domain in which the model is used, _Application Problem_ as the application problem that shall be solved by using the model, _Mathematical Formulation_ that formalizes the model (e.g., a LaTeX formula) as well as the _Quantities_ that enter into the mathematical formulation. This ontology for mathematical models, named MathModDB, consists of these five main classes and is shown in dark blue in fig. 2. Referring back to the X-RCT use case, the MathModDB ontology can be used to represent that the _Mathematical Model_ of the X-ray transform _models_ the _Application Problem_ of microfracture detection of porous media, which in the specific case of asphalt _is contained in_ the _Application Domain_ of civil engineering. Moreover, it _contains_ a _Mathematical Formulation_, which is _generalized by_ the transport equation, and contains _Quantities_ such as radiant energy. **Algorithms** are the basic building blocks of applied mathematics. They are at the center of the AlgoData ontology [7], which is visualized in light blue in fig. 2. For instance, starting from the mathematical problem given above, inversion of the X-ray transform, the KG then provides a structured path to the relevant algorithms (e.g. filtered back projection and discrete algorithms), to scientific papers that have _invented_, _studied_ or _analyzed_ them, to software packages _implementing_ them, and benchmarks _testing_ them. Algorithms are grouped in classes based on common properties or algorithmic ideas, which makes the analysis easier and reveals connections between algorithms that were challenging to observe before. As with X-RCT, user stories for the AlgoData graph start with an application-specific problem formulation. The _Algorithmic Problems_ considered in AlgoData are specified with respect to their mathematical properties, stating assumptions made on involved matrices, vector spaces, expected output etc. The level of detail for these specifications varies between mathematical disciplines.
For instance, in the context of model order reduction, very fine-grained specification is typically neither possible nor desired. Hence, different fields of applied mathematics require different notions of problems to be able to incorporate their algorithms. AlgoData does not cover the application-specific part of the problem. To become useful, it must be connected to an ontology that defines application problems and corresponding models, i.e. MathModDB. **Connecting the MathModDB and AlgoData Ontologies** In order to later exploit potential synergies of the two (and other) ontologies, we connect the MathModDB and the AlgoData ontologies. The merging was done after the parallel development of the individual ontologies for algorithms and models, and the result is shown in fig. 2. Solving a mathematical problem that has been derived from an application problem via modeling, represented in MathModDB, requires solving algorithmic subproblems, represented in AlgoData. This link is provided by the relation/object property _uses algorithmic problem_ and its inverse _used by model problem_. This relation represents the step of the MSO workflow in which mathematical modeling is followed by solving the equations that formalize a model with an algorithm (cf. fig. 1); the link is mediated through the class of _Algorithmic Problems_. Making such a link for an application requires finding the most specific AlgoData algorithmic problem object, taking into account problem properties like symmetry. Linking the mathematical model to the algorithmic problem rather than directly to the algorithms is beneficial: The initial collection of available algorithms needs to be done only once for each problem class, and new algorithms for a specific algorithmic subproblem immediately become available for all applications that use it, without an explicit link to the mathematical formulation. However, the distinction between a mathematical problem and an algorithmic subproblem is diffuse. _Mathematical Models_ in the MathModDB ontology can be so specific and have such peculiar properties that no suitable corresponding _Algorithmic Problems_ in the AlgoData graph exist. In these cases the mathematical problem should be added to AlgoData as an _Algorithmic Problem_. The current state of this implementation of the KG based on the ontologies is available on the beta version of the AlgoData site [25]. The interface includes documentation of the ontology, keyword search, and a guided query tailored to the ontology. Technically, the frontend is based on an _Apache Jena Fuseki_ with OWL reasoner and SPARQL interface, operated on a _Django server_. The GUI invites the user to browse the data, discovering new relations and alternatives. Returning to the X-RCT working example from sec. 1, we can now include a complete MSO workflow in the KG and show the full workflow chain starting in the MathModDB from the _Application Domain_ (civil engineering), choosing the _Application Problem_ (porous media analysis/micro fracture detection), choosing the mathematical model of the transport equation and its specialization X-ray transform, moving over to the AlgoData application problem of its inversion, retrieving the basic _Algorithms_ classes (filtered back projection, algebraic reconstruction technique) and their implementations.
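To make this chain concrete, the sketch below encodes a few of the X-RCT statements as RDF triples with Python's rdflib. The namespaces, IRIs and property names are illustrative assumptions and do not correspond to the actual MaRDI vocabularies.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical namespaces; the real MathModDB/AlgoData IRIs may differ.
MM = Namespace("https://example.org/mathmoddb#")
AD = Namespace("https://example.org/algodata#")

g = Graph()
g.bind("mm", MM)
g.bind("ad", AD)

# Typed individuals for some MathModDB and AlgoData classes.
g.add((MM.XRayTransform, RDF.type, MM.MathematicalModel))
g.add((MM.MicrofractureDetection, RDF.type, MM.ApplicationProblem))
g.add((MM.CivilEngineering, RDF.type, MM.ApplicationDomain))
g.add((AD.InversionOfXRayTransform, RDF.type, AD.AlgorithmicProblem))
g.add((AD.FilteredBackProjection, RDF.type, AD.Algorithm))

# Relations described in the text (property names are illustrative).
g.add((MM.XRayTransform, MM.models, MM.MicrofractureDetection))
g.add((MM.MicrofractureDetection, MM.isContainedIn, MM.CivilEngineering))
g.add((MM.XRayTransform, MM.usesAlgorithmicProblem, AD.InversionOfXRayTransform))
g.add((AD.FilteredBackProjection, AD.solves, AD.InversionOfXRayTransform))

print(g.serialize(format="turtle"))
```

Such triples can then be queried with SPARQL, e.g. to retrieve all algorithms attached to the algorithmic problem that a given model uses.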
## 4 Conclusion and Outlook

In our work, we show the development of the two ontologies for mathematical models and algorithms. Since both artifacts have a central place in the MSO workflow, but cannot be separated from each other for a holistic knowledge representation, they are merged. While the ontology for algorithms already contained a lot of data, since its development started earlier, we used the X-RCT use case for the merging to demonstrate what the KG can accomplish, namely representing the knowledge of a concrete MSO workflow. The ontologies presented here allow a standardized description of mathematical models and algorithms. We will contribute to the _WikiProject Mathematics_ to further improve their representation in _Wikidata_. As future work, we intend to link our ontology to existing approaches. It is planned that _Quantities, Units, Dimensions and Types_ (QUDT) [4], as an elaborated data model and vocabulary, will be used for the class of _Quantities_. For the _Application Domain_, subject classification ontologies are currently being analyzed, such as the _Mathematics Subject Classification_ (MSC) or the _Physics Subject Headings_ (PhySH). It is being evaluated whether the _Publication_ class should be outsourced to a scholarly KG [15, 26]. A class like _Application Problem_ can later be used to link to other discipline-specific ontologies from the NFDI. Furthermore, to represent the steps of the MSO workflow, the embedding in a process ontology is planned. Here, the m4i ontology is highly relevant as it offers a general process model that reuses specific instances for methods and tools from other, discipline-specific terminologies. The presented work on mathematical models, algorithms, implementations and benchmarks provides ideal entry points for that. Furthermore, the embedding in upper ontologies (e.g. BFO [17]), as well as the inclusion of epistemic metadata (e.g. PIMS-II [9]), is envisioned. After clarification of these open points, the ontology will be continuously instantiated with further data, becoming a living knowledge graph and serving as a knowledge base for mathematicians, application domain experts and beyond. ### Acknowledgements The co-authors B.S., C.B., J.F., M.R., A.S., B.Sch. acknowledge funding by MaRDI, funded by the DFG (German Research Foundation), project number 460135501, NFDI 29/1 "MaRDI - Mathematische Forschungsdateninitiative". The co-authors F.W. and H.K. acknowledge funding by the DFG under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure. The co-author D.G. acknowledges funding by the DFG under Germany's Excellence Strategy EXC 2075: Data-Integrated Simulation Science (SimTech), project number 390740016.
2309.13854
Semidefinite programming bounds for distance distribution of spherical codes
We present an extension of known semidefinite and linear programming upper bounds for spherical codes. We apply the main result to the distance distribution of a spherical code and show that this method can work effectively. In particular, we get a shorter solution to the kissing number problem in dimension 4.
Oleg R. Musin
2023-09-25T03:47:07Z
http://arxiv.org/abs/2309.13854v2
# Semidefinite programming bounds for distance distribution of spherical codes ###### Abstract We present an extension of known semidefinite and linear programming upper bounds for spherical codes. We apply the main result to the distance distribution of a spherical code and show that this method can work effectively. In particular, we get a shorter solution to the kissing number problem in dimension 4. **Mathematics Subject Classification (2010)** 90C22, 52C17 ## 1 Introduction Let \(G_{k}^{(n)}(t)\) (with \(G_{k}^{(n)}(1)=1\) and \(\deg(G_{k}^{(n)})=k\)) be Gegenbauer polynomials that are orthogonal on the interval \([-1,1]\) with respect to the weight function \((1-t^{2})^{(n-3)/2}\). Let \(C\) be an \(N\)-element subset of the unit sphere \(\mathbb{S}^{n-1}\subset\mathbb{R}^{n}\). We define the \(k\)-th moment of \(C\) as \[M_{k}(C):=\sum_{(c,c^{\prime})\in C^{2}}G_{k}^{(n)}(c\cdot c^{\prime})\] The positive semidefinite property of Gegenbauer polynomials yields that \[M_{k}(C)\geq 0\ \ \mbox{for all}\ \ k=1,2,\ldots \tag{1.1}\] Let \(f\) be a non-negative linear combination of Gegenbauer polynomials: \[f(t)=\sum_{k=0}^{d}f_{k}G_{k}^{(n)}(t),\ f_{k}\geq 0\ \ \mbox{for all}\ \ k=1,\ldots,d.\] Then \[S_{f}(C):=\sum_{(c,c^{\prime})\in C^{2}}f(c\cdot c^{\prime})=f_{0}N^{2}+\sum_{k=1}^{d}f_{k}M_{k}(C)\geq f_{0}N^{2}. \tag{1.2}\] The _distance distribution_ of \(C\) with respect to \(u\in C\) is the system of numbers \(\{A_{t}(u):-1\leq t\leq 1\}\), where \[A_{t}(u):=|\{v\in C:v\cdot u=t\}|,\] and the _distance distribution_ of \(C\) is the system of numbers \(\{A_{t}:-1\leq t\leq 1\}\), where \[A_{t}(C)=A_{t}:=\frac{1}{N}\sum_{u\in C}A_{t}(u).\] Denote \[R_{f}(C):=\sum_{t\in[-1,1)}f(t)A_{t}(C).\] Then \[S_{f}(C)=NR_{f}(C)+Nf(1)\] and (1.2) implies \[R_{f}(C)\geq f_{0}N-f(1). \tag{1.3}\] The semidefinite programming (SDP) method for spherical codes was proposed by Bachoc and Vallentin [1] with further applications and extensions in [2, 3, 9, 10, 12, 15, 16, 18, 20]. The positive-semidefinite property of Gegenbauer polynomials yields the positive-semidefinite property of matrices \(S_{k}^{n}\). Now consider polynomials \(F\) that were defined by Bachoc and Vallentin. Let \(F(t,u,v)\) be a symmetric polynomial with expansion \[F(t,u,v)=\sum_{k=0}^{d}\left\langle H_{k},S_{k}^{n}(t,u,v)\right\rangle\] in terms of the matrices \(S_{k}^{n}\). Suppose that all matrices \(H_{k}\) with \(k>0\) are positive semidefinite and, for a given \(F_{0}\in\mathbb{R}\), \(H_{0}-F_{0}E_{0}\) is also positive semidefinite. (Here \(E_{0}\) denotes a matrix whose only nonzero entry is the top left corner, which contains \(1\).) Then \[S_{F}(C):=\sum_{(x,y,z)\in C^{3}}F(x\cdot y,x\cdot z,y\cdot z)\geq F_{0}N^{3}. \tag{1.4}\] In [20] we considered an extension of known semidefinite and linear programming upper bounds for spherical codes and a version of this bound for distance graphs. This paper is a continuation and extension of [20]. We show how the bounds from [20] can be applied to the distance distribution of spherical codes. In Section 2, for \((N,n,T)\) spherical codes \(C\), we present a general 3-point bound, see Theorem 2.2. Actually, this theorem can be considered as a lower bound for \(E_{g}(C)\) (or equivalently for \(R_{g}(C)\), see Theorem 3.1), where \(g:T\rightarrow\mathbb{R}\). We obtain functions \(g\) on \(T\subset[-1,1)\) that can play the same role as a nonnegative linear combination of Gegenbauer polynomials on \([-1,1]\). In Section 4 we show that this method can work effectively.
For \(T=[-1,0.5]\) we consider two polynomials \(g_{1}\) and \(g_{2}\) that were found using SDP. The expansion of the first polynomial in Gegenbauer polynomials has negative coefficients. However, the resulting bound for it is almost exact, and with its help we get a shorter solution to the kissing number problem in dimension 4. The second polynomial has only positive coefficients, but the SDP bound for it is much stronger than (1.3). Section 5 contains several possible applications of Theorems 2.2 and 3.1 and their generalizations. ## 2 General bounds for spherical codes Let \(C\) be an \(N\)-element subset of the unit sphere \(\mathbb{S}^{n-1}\subset\mathbb{R}^{n}\). Denote \[I(C):=\{t=x\cdot y\,|\,x,y\in C\,\&\,x\neq y\}.\] Let \(T\subset[-1,1)\). We say that \(C\) is an \((N,n,T)\) _spherical code_ if \(I(C)\subset T\). Let \(g\) be a real function on \(I(C)\). Define \[E_{g}(C):=\sum_{(x,y)\in C^{2},x\neq y}g(x\cdot y)\] ### General 2-point bound **Theorem 2.1**.: _Let \(C\) be an \((N,n,T)\) spherical code. Suppose \(g:T\to\mathbb{R}\), \(f:[-1,1]\to\mathbb{R}\) and \(f_{0}\in\mathbb{R}\) are such that_ 1. \(f(t)\leq g(t)\) _for all_ \(t\in T\)_._ 2. \(S_{f}(C)\geq f_{0}N^{2}\)_._ _Then_ \[Nf(1)+E_{g}(C)\geq f_{0}N^{2}.\] Proof.: Note that for all \(x\in C\subset\mathbb{S}^{n-1}\) we have \(x^{2}=x\cdot x=1\). Then \[f_{0}N^{2}\leq S_{f}(C)=Nf(1)+E_{f}(C)\leq Nf(1)+E_{g}(C).\] Note that if \(f\) is a non-negative linear combination of the Gegenbauer polynomials then (1.2) implies that \(f\) satisfies assumption 2 in Theorem 2.1. Suppose \(f_{0}>0\) and \(g(t)=0\) for all \(t\in T\). Since \(E_{g}(C)=0\), the theorem yields that \[N\leq\frac{f(1)}{f_{0}}.\] This bound is called the _linear programming (LP)_ or _Delsarte's bound_ for spherical codes. Let \(q:(0,4]\to\mathbb{R}\) be any function. Then for positive semidefinite \(f\) and \(g(t)=q(2-2t)\), Theorem 2.1 implies that _every set of \(N\) points on \(\mathbb{S}^{n-1}\) has potential energy \(E_{q}(C):=E_{g}(C)\) at least_ \[f_{0}N^{2}-Nf(1).\] This fact was first proved by Yudin [24] and has many applications. ### General 3-point bound The Gram matrix of a set of vectors \(v_{1},\ldots,v_{n}\) in \(\mathbb{R}^{d}\) is the matrix of inner products, whose entries are given by the inner product \(G_{ij}=v_{i}\cdot v_{j}\). The Gram matrix is symmetric and positive semidefinite. Moreover, a symmetric matrix \(M\) is positive semidefinite if and only if it is the Gram matrix of some vectors \(v_{1},\ldots,v_{n}\). Let \[M_{3}=\left(\begin{array}{ccc}1&u&v\\ u&1&t\\ v&t&1\end{array}\right).\] If this matrix is positive semidefinite (\(M_{3}\succeq 0\)), then there are three distinct points \(x,y,z\) in \(\mathbb{S}^{2}\) such that \(t=x\cdot y,u=x\cdot z\), and \(v=y\cdot z\). It is easy to see that \(M_{3}\succeq 0\) if and only if \(t,u,v\in[-1,1]\) and \(\det(M_{3})=1+2tuv-t^{2}-u^{2}-v^{2}\geq 0\). This fact explains the following definition. \[D_{3}(T):=\left\{(t,u,v):t,u,v\in T\,\&\,1+2tuv-t^{2}-u^{2}-v^{2}\geq 0\right\},\,\,\,\,T\subset[-1,1).\] **Theorem 2.2**.: _Let \(C\) be an \((N,n,T)\) spherical code and \(F:[-1,1]^{3}\to\mathbb{R}\) be a symmetric function. Suppose \(f:T\to\mathbb{R}\) and \(g:T\to\mathbb{R}\) are such that_ 1. \(F(1,t,t)\leq f(t)\) _for all_ \(t\in T\)_,_ 2.
\(F(t,u,v)\leq g(t)+g(u)+g(v)\) _for all_ \((t,u,v)\in D_{3}(T)\)_._ _If \(S_{F}(C)\geq F_{0}N^{3},\) where \(F_{0}\in\mathbb{R}\), then_ \[NF(1,1,1)+3E_{f}(C)+(3N-6)E_{g}(C)\geq F_{0}N^{3}.\] Proof.: We have \(S_{F}(C)=S_{1}+S_{2}+S_{3}\), where \[S_{1}=\sum_{x=y=z}F(x\cdot y,x\cdot z,y\cdot z),\] \[S_{2}=\sum_{(x,y,z)\in H}F(x\cdot y,x\cdot z,y\cdot z),\,\,H=\{(x,y,z)\in C^{3}\,|\,x=y\neq z\,or\,x=z\neq y\,or\,x\neq y=z\},\] \[S_{3}=\sum_{x\neq y\neq z\neq x}F(x\cdot y,x\cdot z,y\cdot z).\] Since \(x^{2}=1\) for all \(x\in C\), we have \[S_{1}=\sum_{x\in C}F(x^{2},x^{2},x^{2})=NF(1,1,1),\] \[S_{2}=3\sum_{(x,y)\in C^{2},x\neq y}F(1,x\cdot y,x\cdot y)\leq 3\sum_{x\neq y}f(x\cdot y)=3E_{f}(C).\] By assumption 2, \[S_{3}\leq\sum_{x\neq y\neq z\neq x}(g(x\cdot y)+g(x\cdot z)+g(y\cdot z))=3(N-2)E_{g}(C).\] Thus \[S_{F}(C)\leq NF(1,1,1)+3E_{f}(C)+3(N-2)E_{g}(C).\] **Corollary 2.1**.: _Under the assumptions of Theorem 2.2 let \(f(t)=p(t)-q(t)\) with \(p:T\to\mathbb{R}\) and \(q:[-1,1]\to\mathbb{R}.\) If \(S_{q}(C)\geq 0\), then_ \[NF(1,1,1)+3Nq(1)+3E_{p}(C)+(3N-6)E_{g}(C)\geq F_{0}N^{3}.\] Proof.: \[S_{2}\leq 3E_{f}(C)=3E_{p}(C)-3E_{q}(C).\] Since \(S_{q}(C)=Nq(1)+E_{q}(C)\geq 0\), we have \[S_{2}\leq 3Nq(1)+3E_{p}(C).\] Thus \[S_{F}(C)\leq NF(1,1,1)+3Nq(1)+3E_{p}(C)+3(N-2)E_{g}(C).\] **Corollary 2.2**.: _Under the assumptions of Theorem 2.2 let \(f(t)=B+2g(t)-q(t)\) with \(q:[-1,1]\to\mathbb{R}.\) If \(S_{q}(C)\geq 0\), then_ \[F(1,1,1)+3q(1)+3(N-1)B+3E_{g}(C)\geq F_{0}N^{2}.\] Proof.: \[S_{2}\leq 3Nq(1)+3E_{p}(C)=3Nq(1)+3\sum_{x\neq y}(B+2g(x\cdot y))=3Nq(1)+3N(N-1)B+6E_{g}(C).\] Then \[F_{0}N^{3}\leq S_{F}(C) \leq NF(1,1,1)+3Nq(1)+3N(N-1)B+6E_{g}(C)+3(N-2)E_{g}(C)\] \[=NF(1,1,1)+3Nq(1)+3N(N-1)B+3NE_{g}(C).\] Thus \[F(1,1,1)+3q(1)+3(N-1)B+3E_{g}(C)\geq F_{0}N^{2}.\] Note that if \((F-F_{0})\) and \(q\) are positive semidefinite then \(S_{F}(C)\geq F_{0}N^{3}\) and \(S_{q}(C)\geq 0\). This makes it possible to find new bounds for \(N\) and \(E_{g}(C)\) using SDP. We will look at these methods in more detail in later sections. Suppose \(g(t)\equiv 0\) and \(F_{0}>0\). Then Corollary 2.2 yields \[N^{2}\leq\frac{F(1,1,1)+3q(1)+3(N-1)B}{F_{0}}.\] This inequality, as well as Corollary 2.1 with \(g=0\), was first proposed by Bachoc and Vallentin [1, 2] with further applications and extensions in [3, 10, 15, 16, 18]. In particular, Cohn and Woo [10] obtained three-point bounds for potential energy minimization. ### General \(k\)-point bound Theorems 2.1 and 2.2 can be extended to all \(k\) with \(2\leq k\leq n-2\). Theorem 5.4 from our paper [18] is a particular case of this general theorem. It is clear how to derive a generalization of Theorem 2.2; see Section 5 of [18] and [12] for some details. However, the resulting formulas for \(k>3\) are quite cumbersome. We decided not to present the general theorem here even for the case \(k=4\). ## 3 SDP bounds for the distance distribution Let \(C\) be an \((N,n,T)\) spherical code. In the Introduction we defined \(A_{t}(C)\) and \(R_{f}(C)\). It is clear that \(A_{1}=1\), \(A_{t}=0\) for all \(t\neq 1\) and \(t\notin T\), \[\sum_{t\in T}A_{t}=N-1\quad\text{and}\quad E_{f}(C)=NR_{f}(C).\] The following theorem is a restatement of Theorem 2.2 for the distance distribution of spherical codes. **Theorem 3.1**.: _Let \(C\) be an \((N,n,T)\) spherical code and \(F:[-1,1]^{3}\to\mathbb{R}\) be a symmetric function. Suppose \(h:T\to\mathbb{R}\) and \(g:T\to\mathbb{R}\) are such that_ 1. \(h(t)+h_{0}+F(1,t,t)\leq 2g(t)\) _for all_ \(t\in T\)_,_ 2.
\(F(t,u,v)\leq g(t)+g(u)+g(v)\) _for all_ \((t,u,v)\in D_{3}(T)\)_._ _If \(S_{F}(C)\geq F_{0}N^{3}\), then_ \[R_{g}(C)=\sum_{t\in T}A_{t}\,g(t)\geq\frac{1}{3}F_{0}N+\frac{1}{3}h_{0}-\frac{1}{3N}F(1,1,1)+\frac{1}{N^{2}}E_{h}(C).\] **Corollary 3.1**.: _Under the assumptions of Theorem 3.1 let \(F_{0}=0\) and \(h_{0}=1\). If \(S_{h}(C)\geq 0\) then_ \[R_{g}(C)=\sum_{t\in T}A_{t}\,g(t)\geq B(N):=\frac{N-M}{3N},\quad M=F(1,1,1)+3h(1).\] Proof.: Since \(S_{h}(C)=Nh(1)+E_{h}(C)\geq 0\), we have \(E_{h}(C)\geq-Nh(1)\). Thus, Theorem 3.1 yields the inequality. ## 4 Some applications. A shorter proof of the kissing number problem in four dimensions. In this section we show that the method discussed in Sections 2 and 3 works effectively. We consider polynomials \(g_{1}\) and \(g_{2}\), found using SDP, for which the corresponding inequalities on the distance distribution of spherical codes cannot be obtained using LP bounds. These polynomials satisfy the assumptions of Corollary 3.1 with \(n=4\) and \(T=[-1,0.5]\). Using \(g_{1}\) and the same method as in our paper [17], while discarding the most difficult case of five points, we get a shorter proof that the kissing number in four dimensions is \(k(4)=24\). **Remark.** I would like to note that these polynomials were found by Maria Dostert during our work on the uniqueness problem of the maximum kissing arrangement in dimension 4 [13]. Maria also found several tight polynomials for other intervals. To do this, she had to overcome a large number of technical obstacles and, along the way, develop and improve an algorithm for finding SDP bounds for the distance distribution. ### Two examples. Let \(T=[-1,0.5]\). In this case we denote a spherical code \((N,n,T)\) by \((N,n,\pi/3)\). Here we consider the bound given by Corollary 3.1 for \((N,4,\pi/3)\) spherical codes. **1.** Let the expansion of \(g_{1}\) in Gegenbauer polynomials \(G_{k}^{(4)}\) have the following coefficients: \([c_{0},...,c_{22}]\) = [-0.5438, -2.0024, -3.8887, -5.6414, -6.7025, -6.8508, -6.0698, -4.6566, -3.0047, -1.4686, -0.3226, 0.3704, 0.6521, 0.6486, 0.5104, 0.3361, 0.1911, 0.0963, 0.0411, 0.0157, 0.0056, 0.001, 0.0004]. We observe that \(g_{1}(-1)=0.02\) and \(g_{1}(t)\leq 0\) for all \(t\in[-\sqrt{2}/2,1/2]\). Fig. 1 shows the graph of \(g_{1}\) with normalization \(\tilde{g}_{1}(-1)=100\). Since there are negative coefficients \(c_{k}\), we cannot use the LP (Delsarte) bound. The SDP bound in Corollary 3.1 gives \(M=M_{1}:=22.5689\). Let \(B_{1}(N):=(N-M_{1})/(3N)\). Then \(B_{1}(25)=0.0324\) and \(B_{1}(24)=0.0199\). **2.** The coefficients \([c_{0},...,c_{22}]\) in the expansion of \(g_{2}\) in \(G_{k}^{(4)}\) are \([0.222,\,0.8648,\,1.8875,\,3.1425,\,4.5059,\,5.7052,\,6.5739,\,6.9286,\,6.7119,\,6.0157,\,4.9575,\,3.7767,\,2.6446,\,1.6914,\,0.9947,\,0.5249,\,0.2524,\,0.1097,\,0.0409,\,0.0153,\,0.0042,\,0.001,\,0.0002]\). We have \(g_{2}(-1)=0.02\) and \(g_{2}(t)\leq 0\) for all \(t\in[-0.73,0.5]\). Fig. 1 shows the graph of \(g_{2}\) with normalization \(\tilde{g}_{2}(-1)=100\). This case is very interesting. Note that all coefficients of \(g_{2}\) are positive and therefore we can apply bound (1.3): \[R_{g_{2}}(C)\geq LP(N):=c_{0}N-g_{2}(1)=0.222N-57.5714.\] On the other hand, the SDP bound in Corollary 3.1 gives \[R_{g_{2}}(C)\geq B_{2}(N)=\frac{N-M}{3N}=\frac{N-22.6452}{3N}.\] Then for the most interesting cases \(N=24\) and \(N=25\) we have \[B_{2}(24)=0.0188>-52.2431=LP(24),\quad B_{2}(25)=0.0314>-52.0211=LP(25).\]
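The following is a short numerical check of Example 1 (a sketch, assuming the standard correspondence \(G_{k}^{(n)}(t)=C_{k}^{\lambda}(t)/C_{k}^{\lambda}(1)\) with ultraspherical index \(\lambda=(n-2)/2\); the printed values match the ones stated above):

```python
import numpy as np
from scipy.special import eval_gegenbauer

# Coefficients [c_0, ..., c_22] of g_1 as listed above.
c = np.array([-0.5438, -2.0024, -3.8887, -5.6414, -6.7025, -6.8508, -6.0698,
              -4.6566, -3.0047, -1.4686, -0.3226, 0.3704, 0.6521, 0.6486,
              0.5104, 0.3361, 0.1911, 0.0963, 0.0411, 0.0157, 0.0056,
              0.001, 0.0004])

def G(k, t, n=4):
    # Gegenbauer polynomial normalized so that G_k^{(n)}(1) = 1.
    lam = (n - 2) / 2
    return eval_gegenbauer(k, lam, t) / eval_gegenbauer(k, lam, 1.0)

g1 = lambda t: sum(c[k] * G(k, t) for k in range(len(c)))

print(round(g1(-1.0), 2))                  # 0.02, i.e. g_1(-1) = 0.02
ts = np.linspace(-np.sqrt(2) / 2, 0.5, 2001)
print(max(g1(t) for t in ts))              # ~0 (nonpositive up to coefficient rounding)

M1 = 22.5689                               # SDP constant from Corollary 3.1
B1 = lambda N: (N - M1) / (3 * N)
print(round(B1(24), 4), round(B1(25), 4))  # 0.0199 and 0.0324
```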
### A shorter proof of the kissing number problem in four dimensions. In [17] we proved that \(k(4)=24\). Let \(t_{0}\in(-1,-0.5)\) and let \(f(t)\) be a nonnegative combination of \(G_{k}^{(4)}\) with coefficients \(c_{k}\geq 0\) such that \(f(t)\leq 0\) for all \(t\in[t_{0},0.5)\). From [17, Theorem 1] it follows that \[k(4)\leq\frac{1}{c_{0}}\max\{h_{0},h_{1},\ldots,h_{\mu}\},\] where \(\mu=A(4,\pi/3,\psi_{0}),\,\psi_{0}=\arccos|t_{0}|\), and \(h_{m}\) is the maximum of \(H_{f}(Y)=f(1)+f(e_{1}\cdot y_{1})+\ldots+f(e_{1}\cdot y_{m})\) over all configurations of \(m\) unit vectors \(y_{j}\) in the spherical cap in \(\mathbb{S}^{3}\) given by \(e_{1}\cdot y_{j}\leq t_{0}\) whose pairwise scalar products are at most \(\frac{1}{2}\). We considered a polynomial \(p_{4}\) (see Fig. 1) with \(t_{0}=-0.6058\). In this case \(\mu=6\). Technically, the most difficult case turned out to be \(m=5\). This case takes up a significant part of the proof. We tried to exclude this case and reduce \(t_{0}\). However, numerous experiments with an extended LP bound did not lead to success. **Theorem 4.1**.: \(k(4)=24\)_._ Proof.: The \(24\)-cell is an example of a kissing arrangement. Hence \(k(4)\geq 24\). It remains to prove \(k(4)<25\). Assume the converse: \(N\geq 25\). Let \(C\) be a \((25,4,\pi/3)\) spherical code. Since \(g_{1}(t)\leq 0\) for all \(t\in[-\sqrt{2}/2,1/2]\), we have \(t_{0}=-\sqrt{2}/2\) and \(\mu=4\). Using the same method as in [17] we consider the cases \(m=0,1,2,3,4\) to find the maximum of \(R_{g_{1}}(C)=\sum_{t}A_{t}g_{1}(t)\). This maximum is achieved at \(m=2\) and is \(0.0266\), i.e. \(R_{g_{1}}(C)<0.0266\). On the other hand, we have \(R_{g_{1}}(C)\geq B_{1}(25)=0.0324\), a contradiction. ### New bounds for the distance distribution on a \((24,4,\pi/3)\) spherical code Now we consider a maximal kissing arrangement in dimension \(4\), that is, a \((24,4,\pi/3)\) spherical code. The long-standing open uniqueness conjecture on this code states that this arrangement is isometric to the \(24\)-cell. In fact, the uniqueness conjecture is equivalent to the following **Conjecture**.: _Let \(C\) be a \((24,4,\pi/3)\) spherical code. Then_ \[A_{-1}=1,\quad A_{-1/2}=8,\quad A_{0}=6,\quad A_{1/2}=8,\quad A_{t}=0,\,t\neq\pm 1,\pm 1/2,0. \tag{3.1}\] Note that in this dimension the equality \(A_{-1}=1\) yields (3.1), see [7]. Moreover, in [13] we show that every equality in (3.1) implies all other equalities. Recently, using Corollaries 1 and 2 from [20], we found several bounds on the distance distribution of kissing arrangements in four dimensions [13]. For several intervals these bounds are sharp. Let \(S\subset[-1,0.5]\). Denote \[A(S):=\sum_{t\in S:A_{t}>0}A_{t}.\] **Theorem 4.2**.: _Let \(C\) be a \((24,4,\pi/3)\) spherical code. Then_ \[A([-1,-0.45])\leq 9,\quad A([-1,0.05])\leq 15,\quad A([-0.55,0.05])\leq 14,\quad A([-0.05,0.5])\leq 14,\] \[A([-1,-0.73])\geq 1,\quad A([0.35,0.5])\geq 8.\] ## 5 Concluding Remarks In conclusion, we outline some applications of Theorems 2.2 and 3.1 and their generalizations. ### Towards a proof of the uniqueness conjecture We know that \(k(4)=24\) [17]. However, in dimension \(4\) the maximal kissing arrangement is conjectured to be unique, namely the \(24\)-cell, but this is not yet proven; equivalently, the uniqueness conjecture can be stated as (3.1) above. Denote by \(s_{d}(n)\) the optimal SDP bound on \(k(n)\) given by (3) with \(\deg(F)=d\) (see [16]). This minimization problem is a semidefinite program, and every upper bound on \(s_{d}(4)\) provides an upper bound for the kissing number in dimension \(4\); the following table lists known bounds.
* Bachoc & Vallentin [1];
* Mittelmann & Vallentin [16];
* \(s_{12}(4)<24.09098111\) [16];
* \(s_{13}(4)<24.07519774\) [16];
* \(s_{14}(4)<24.06628391\) [16];
* Machado & de Oliveira Filho [15];
* \(s_{16}(4)<24.056903\) [15].

This table shows that \(s_{d}\) with \(d>12\) is relatively close to \(24\), namely \(s_{d}-24<2/N=1/12\). We think that our approach, which is based on Theorems 2.2 and 3.1, can help to prove the uniqueness conjecture. ### Towards a proof of the 24-cell conjecture The sphere packing problem asks for the densest packing of \(\mathbb{R}^{n}\) with unit balls. In four dimensions, the old conjecture states that a sphere packing is densest when spheres are centered at the points of the lattice \(D_{4}\), i.e. the highest density \(\Delta_{4}\) is \(\pi^{2}/16\), or equivalently the highest center density is \(\delta_{4}=\Delta_{4}/B_{4}=1/8\). For lattice packings, this conjecture was proved by Korkin and Zolotarev in 1872. Currently, for general sphere packings the best known upper bound for \(\delta_{4}\) is \(0.130587\), a slight improvement on the Cohn-Elkies bound of \(\delta_{4}<0.13126\), but still nowhere near sharp. In [19] we considered the following conjecture: **The 24-cell conjecture.**_Consider the Voronoi decomposition of any given packing \(P\) of unit spheres in \(\mathbb{R}^{4}\). The minimal volume of any cell in the resulting Voronoi decomposition of \(P\) is at least as large as the volume of a regular 24-cell circumscribed to a unit sphere._ Note that a proof of the 24-cell conjecture also proves that \(D_{4}\) is the densest sphere packing in 4 dimensions. In [18, Sect. 4] and [19, 3.3] we considered polynomials \(H_{k}\) that are positive-definite in \(\mathbb{R}^{n}\). Actually, \(H_{k}\) are polynomials that extend the Bachoc-Vallentin polynomials \(S_{k}\). It is an interesting problem to find generalizations of Theorems 2.2 and 3.1 for sphere packings in \(\mathbb{R}^{n}\). Perhaps these bounds for \(n=4\) can help to prove the 24-cell conjecture. ### SDP bounds for Thomson's and related problems The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of \(N\) electrons constrained to the surface of a unit sphere. More generally, a configuration is called \(h\)-optimal for a potential interaction \(h:[-1,1]\to\mathbb{R}\) if it minimizes the \(h\)-energy. The logarithmic potential, the Thomson and Newton potentials, and more generally the Riesz potential, as well as the Gaussian potential, have been well studied in the literature [6]. All of these potentials are absolutely monotone. Cohn and Kumar introduced universally optimal configurations, namely configurations that minimize the energy for all absolutely monotone potentials \(h\) [8]. The regular simplex, the cross polytope and the so-called isotropic spaces are the only known classes of universally optimal configurations (see [8, Table 1]). All other known optimal configurations in the literature, even when the interacting potential \(h\) is fixed, correspond to particular values of the dimension \(d\) and the cardinality \(N\); see the fundamental monograph on this topic [6]. For instance, the Thomson problem is solved only for \(N\leq 6\) and \(N=12\). Note that one of the most powerful tools for lower bounding \(h\)-energy is the Delsarte-Yudin linear programming method. This method was applied to most of the configurations in [8, Table 1] to prove that they are universally optimal. Let a potential \(h\) be given. Let \(g(t)\leq h(t)\) on some interval \(T\).
Then Theorem 2.2 can be considered as a Yudin-type bound on the minimum energy. It is a very interesting task to consider various cases of optimal configurations for known potentials and see what kind of bounds can be obtained using this method. ### SDP bound for contact graphs and the Tammes problem The following problem was first asked by the Dutch botanist Tammes in 1930: _Find the largest angular separation \(\theta\) of a spherical code \(C\) in \(\mathbb{S}^{2}\) of cardinality \(N\)._ In other words, _How are \(N\) congruent, non-overlapping circles distributed on the sphere when the common radius of the circles is as large as possible?_ The Tammes problem is presently solved for only a few values of \(N\): for \(N=3,4,6,12\) by L. Fejes Tóth; for \(N=5,7,8,9\) by Schütte and van der Waerden; for \(N=10,11\) by Danzer; for \(N=24\) by Robinson; and for \(N=13,14\) by Musin & Tarasov [21, 22]. The computer-assisted solution of Tammes' problem for \(N=13\) and \(N=14\) consists of three parts: (i) creating the list \(L_{N}\) of all planar graphs with \(N\) vertices that satisfy the conditions of [22, Proposition 3.1]; (ii) using linear approximations and linear programming to remove from the list \(L_{N}\) all graphs that do not satisfy the known geometric properties of the maximal contact graphs [22, Proposition 3.2]; (iii) proving that among the remaining graphs in \(L_{N}\) only one is maximal. In fact, the list \(L_{N}\) consists of a huge number of graphs. (For \(N=13\) it is about \(10^{8}\) graphs.) We think that this paper can help to reduce the number of graphs in \(L_{N}\). ### Generalization of the \(k\)-point SDP bound for spherical codes In [18] we introduced the \(k\)-point SDP bound for spherical codes. Note that for \(k=2\) this is the classical Delsarte bound. The 3-point SDP bound was first considered by Bachoc and Vallentin [1]. Recently, this method with \(k=4,5,6\) was applied to obtain upper bounds on the maximum number of equiangular lines in \(n\) dimensions [12]. It is an interesting problem to find generalizations of the results in this paper using the \(k\)-point SDP bounds and to apply these bounds to \(s\)-distance sets and equiangular lines. **Acknowledgments.** I am very grateful to Maria Dostert, Alexander Kolpakov and Philippe Moustrou for helpful discussions and useful comments. We spent a lot of time together discussing ideas, algorithms and current results on the uniqueness problem of kissing arrangements. We have encouraging results and I hope that we can complete our paper [13].
2309.09824
Effective sample size: a measure of individual uncertainty in predictions
Clinical prediction models are estimated using a sample of limited size from the target population, leading to uncertainty in predictions, even when the model is correctly specified. Generally, not all patient profiles are observed uniformly in model development. As a result, sampling uncertainty varies between individual patients' predictions. We aimed to develop an intuitive measure of individual prediction uncertainty. The variance of a patient's prediction can be equated to the variance of the sample mean outcome in n* hypothetical patients with the same predictor values. This hypothetical sample size n* can be interpreted as the number of similar patients n_eff that the prediction is effectively based on, given that the model is correct. For generalised linear models, we derived analytical expressions for the effective sample size. In addition, we illustrated the concept in patients with acute myocardial infarction. In model development, n_eff can be used to balance accuracy versus uncertainty of predictions. In a validation sample, the distribution of n_eff indicates which patients were more and less represented in the development data, and whether predictions might be too uncertain for some to be practically meaningful. In a clinical setting, the effective sample size may facilitate communication of uncertainty about predictions. We propose the effective sample size as a clinically interpretable measure of uncertainty in individual predictions. Its implications should be explored further for the development, validation and clinical implementation of prediction models.
Doranne Thomassen, Saskia le Cessie, Hans van Houwelingen, Ewout Steyerberg
2023-09-18T14:47:12Z
http://arxiv.org/abs/2309.09824v1
# Effective sample size: a measure of individual uncertainty in predictions ###### Abstract Clinical prediction models are estimated using a sample of limited size from the target population, leading to uncertainty in predictions, even when the model is correctly specified. Generally, not all patient profiles are observed uniformly in model development. As a result, sampling uncertainty varies between individual patients' predictions. We aimed to develop an intuitive measure of individual prediction uncertainty. The variance of a patient's prediction can be equated to the variance of the sample mean outcome in \(n_{*}\) hypothetical patients with the same predictor values. This hypothetical sample size \(n_{*}\) can be interpreted as the number of similar patients \(n_{\text{eff}}\) that the prediction is effectively based on, given that the model is correct. For generalised linear models, we derived analytical expressions for the effective sample size. In addition, we illustrated the concept in patients with acute myocardial infarction. In model development, \(n_{\text{eff}}\) can be used to balance accuracy versus uncertainty of predictions. In a validation sample, the distribution of \(n_{\text{eff}}\) indicates which patients were more and less represented in the development data, and whether predictions might be too uncertain for some to be practically meaningful. In a clinical setting, the effective sample size may facilitate communication of uncertainty about predictions. We propose the effective sample size as a clinically interpretable measure of uncertainty in individual predictions. Its implications should be explored further for the development, validation and clinical implementation of prediction models. ## 1 Introduction When using a statistical model to predict a patient's outcome, there is uncertainty in predictions, even when the model is correctly specified. This uncertainty arises from the fact that a prediction model is estimated using a random sample of limited size from the target population. Typically, sampling uncertainty about predictions varies between individual patients, as not all patient profiles are represented uniformly in model development. Paradoxically, although a rarer type of patient stands to benefit most from personalised prediction, their predictions are usually more uncertain compared to the average patient. Yet between-patient variation in prediction uncertainty is typically not viewed as a target of optimisation in model development. Neither is it explored as a performance metric in model validation. The TRIPOD guideline for prediction model reporting does not mention any explicit metrics for individual prediction uncertainty [1]. In the risk communication literature, it has been recommended to present absolute risks in terms of frequencies with a common denominator, preferably with pictograms [2], [3]. For example, a patient's predicted risk can be expressed as a number out of "100 people like you" [4]. This presentation fails to capture the uncertainty of the prediction. There is no guideline on the communication of uncertainty around predictions, or whether to communicate uncertainty at all. Statisticians tend to present sampling uncertainty about estimates with a 95% confidence interval or a standard error. However, communicating sampling uncertainty to patients in the form of confidence intervals has been suggested to have little effect on risk perceptions [5], [6] and to increase patient worry [6], [7].
Another suggestion has been to use verbal qualifiers, such as "it is most likely that your risk is 30%" or "our best estimate is that you have a 40% risk, though we are not certain about this exact number". It has also been suggested to communicate numbers only when they are estimated with sufficient certainty [8]. As an alternative to standard errors and confidence intervals, we aimed to translate the standard error into an "effective sample size", which is close in spirit to a "number of patients like you". This could serve as a more intuitive measure of individual sampling uncertainty in predictions. The effective sample size can then be used as a communication tool in clinical practice and as a performance measure in model validation. In Section 2, we define effective sample size in this context and we derive explicit formulas for effective sample sizes in linear and logistic regression. In addition, we illustrate the concept in an example dataset of patients with acute myocardial infarction (Section 3). Finally, we discuss the potential applications of effective sample size in prediction modelling and we suggest directions for further investigation (Section 4). ## 2 Effective sample size and relative variance ### Data setting Suppose that for patients \(i\in\{1,...,n\}\), we have observed the \(n\times 1\) outcome vector \(Y\). We have also observed baseline covariates and coded these as desired in the \(n\times p\) design matrix \(X\). \(Y\) and \(X\) constitute our _development data_. To the development data, a generalised linear model (GLM) \(\mathbb{E}[Y]=g^{-1}(X\beta)\) was fitted. The estimated linear predictor for patient \(i\) is denoted as \(x_{i}^{\top}\hat{\beta}\), where \(\hat{\beta}\) is an estimate of \(\beta\). After fitting the model to the development sample, it can be used to predict the expected value of \(y_{\text{new}}\) for a new patient, given their covariates \(x_{\text{new}}\). ### Effective sample size The overall predictive performance of the model can be assessed by comparing predicted and observed values in the development sample (internal validation) or in an external sample of observed patients. A standard validation procedure assesses discrimination and calibration [9]. It does not give any personal measure of uncertainty for a new individual for whom we make a prediction. Say there is such a new patient, named Sam. We use the prediction model to predict their expected outcome \(y_{\text{new}}\) based on their covariates \(x_{\text{new}}\). We denote the predicted outcome as \(\hat{y}_{\text{new}}\). To express the individual uncertainty in Sam's prediction, we draw a parallel between the variance of this prediction and the variance of the sample mean \(\bar{y}_{*}\) in a hypothetical independent sample of \(n_{*}\) patients like Sam. Here, being 'like Sam' is defined with respect to the model, that is, having the same design matrix entries as Sam. Suppose that we observe \(\bar{y}_{*}=\hat{y}_{\text{new}}\) in this hypothetical sample. We define Sam's effective sample size as \(n_{*}\), such that we have as much certainty about Sam's prediction as we would have had, had we directly observed the predicted value in the hypothetical sample of \(n_{*}\) patients like Sam. In other words, given that the model is correct, Sam's prediction is effectively based on \(n_{*}\) similar patients in the development of the model. The concept of effective sample size can be used as a clinically interpretable measure of individual uncertainty.
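As a toy numerical illustration of this definition (with made-up numbers, not taken from the data analysed below): suppose a linear model's prediction for Sam has sampling variance \(\text{Var}(\hat{y}_{\text{new}})=0.02\,\sigma^{2}\), where \(\sigma^{2}\) is the residual variance. A sample mean of \(n_{*}\) independent outcomes has variance \(\sigma^{2}/n_{*}\), so equating the two gives \(\sigma^{2}/n_{*}=0.02\,\sigma^{2}\), hence \(n_{*}=50\): the prediction carries as much information as a direct average over 50 patients like Sam.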
If the effective sample size for a new patient is small, their covariate profile is exceptional relative to the development data. The development data did not contain much information about patients like them, so the prediction should be interpreted with caution. By contrast, if the effective sample size for a new patient is large, we can have more confidence in their prediction. We show in detail how \(n_{*}\) can be derived for linear and logistic regression models and give a general formulation for other generalised linear models. ### Linear regression model In the linear model, \(y_{i}\) are assumed to be independent samples from a normal distribution with mean \(\mu_{i}=x_{i}^{\top}\beta\) and variance \(\sigma^{2}\). The least squares and maximum likelihood estimator for \(\beta\) is \(\hat{\beta}=(X^{\top}X)^{-1}X^{\top}Y,\) whose covariance matrix is \(\text{Cov}(\hat{\beta})=\sigma^{2}(X^{\top}X)^{-1}\). Therefore, the variance of a prediction for a new patient, say, Sam, is \(\text{Var}(\hat{\mu}_{\text{new}})=\text{Var}(x_{\text{new}}^{\top}\hat{\beta} )=\sigma^{2}x_{\text{new}}^{\top}(X^{\top}X)^{-1}x_{\text{new}}\). Now suppose there was an independent sample of \(n_{*}\) patients who all have the same design matrix entries \(x_{\text{new}}\) and the same prediction \(\hat{\mu}_{\text{new}}\) as Sam. In addition, suppose that we had observed the sample mean \(\bar{y}_{*}\) to be equal to Sam's prediction \(\hat{\mu}_{\text{new}}\). We could use the sample mean \(\bar{y}_{*}\) as an estimator for \(\mu_{\text{new}}\), which would have variance \(\sigma^{2}/n_{*}\). Solving \(\frac{\sigma^{2}}{n_{*}}=\text{Var}(\hat{\mu}_{\text{new}})=\sigma^{2}x_{\text {new}}^{\top}(X^{\top}X)^{-1}x_{\text{new}}\) for \(n_{*}\) yields \[n_{*}=\frac{1}{x_{\text{new}}^{\top}(X^{\top}X)^{-1}x_{\text{new}}}. \tag{1}\] In the linear model case, the effective sample size is fully determined by the design matrix and does not depend on the outcomes \(Y\) through the fitted model. The inverse effective sample sizes for the patients in the development data are equal to the diagonal elements \(h_{ii}\) of the hat (projection) matrix, sometimes called leverages. It follows that the harmonic mean of the effective sample sizes in the development data is equal to \(n/p\) (where \(p\) is the number of parameters in the model). Usually, leverages are examined as part of model diagnostics. High leverage points have a large potential influence on the fitted model, due to their relatively 'extreme' position in the \(X\)-space. Analogously, the effective sample size of a new prediction expresses how exceptional this new patient's profile is relative to the development data and the fitted model. ### Logistic regression model We now turn to binary outcome variables \(y_{i}\) and assume \(y_{i}\sim\text{Bin}(1,p_{i})\), \(p_{i}=\frac{e^{x_{i}^{\top}\beta}}{1+e^{x_{i}^{\top}\beta}}=g^{-1}(x_{i}^{\top} \beta)\), such that \(g\) is the logit function. To derive the effective sample size for a new prediction, again for Sam, we require the variance of the prediction \(g^{-1}(x_{\text{new}}^{\top}\hat{\beta})\). Using the delta method (local linearisation), we have \[\text{Var}(g^{-1}(x_{\text{new}}^{\top}\hat{\beta}))\approx\text{Var}(x_{\text{ new}}^{\top}\hat{\beta})\cdot\left(\frac{\delta g^{-1}}{\delta(x_{\text{ new}}^{\top}\hat{\beta})}\right)^{2}.
\tag{2}\] By general properties of the variance, the first factor on the right hand side of (2) can be expressed as \[\text{Var}(x_{\text{new}}^{\top}\hat{\beta})=x_{\text{new}}^{\top}\text{Cov}( \hat{\beta})x_{\text{new}}. \tag{3}\] When \(\hat{\beta}\) is the maximum likelihood estimator, \(\text{Cov}(\hat{\beta})\) is usually estimated as \((X^{\top}VX)^{-1}\), where \(V\) is a diagonal matrix with elements \(v_{ii}=\hat{p}_{i}(1-\hat{p}_{i})\). For the second factor on the right hand side of (2), we have \[\frac{\delta g^{-1}}{\delta(x_{\text{new}}^{\top}\hat{\beta})} =\frac{e^{x_{\text{new}}^{\top}\hat{\beta}}}{\left(1+e^{x_{\text{ new}}^{\top}\hat{\beta}}\right)^{2}} \tag{4}\] \[=g^{-1}(x_{\text{new}}^{\top}\hat{\beta})(1-g^{-1}(x_{\text{new}} ^{\top}\hat{\beta}))=\hat{p}_{\text{new}}(1-\hat{p}_{\text{new}}). \tag{5}\] Substituting (3) and (4) into (2) yields \[\text{Var}(g^{-1}(x_{\text{new}}^{\top}\hat{\beta}))\approx x_{\text{new}}^{ \top}\text{Cov}(\hat{\beta})x_{\text{new}}\cdot(\hat{p}_{\text{new}}(1-\hat{p} _{\text{new}}))^{2} \tag{6}\] for the variance of Sam's prediction. For a binary variable, the variance is fully determined by the mean. Again, we imagine we have an independent sample of \(n_{*}\) patients with the same design matrix entries \(x_{\text{new}}\) and prediction \(\hat{p}_{\text{new}}\) as Sam. In this sample, we assume to have observed a sample mean of \(\bar{y}_{*}=\hat{p}_{\text{new}}\). The sample mean \(\bar{y}_{*}\) as an estimator for \(\mathbb{E}[y_{\text{new}}]=p_{\text{new}}\) would have estimated variance \(\frac{\hat{p}_{\text{new}}(1-\hat{p}_{\text{new}})}{n_{*}}\). Equating this expression to (6) and solving for \(n_{*}\) results in an estimate for Sam's effective sample size: \[n_{*} \approx\frac{\hat{p}_{\text{new}}(1-\hat{p}_{\text{new}})}{x_{ \text{new}}^{\top}\text{Cov}(\hat{\beta})x_{\text{new}}\cdot(\hat{p}_{\text{ new}}(1-\hat{p}_{\text{new}}))^{2}} \tag{7}\] \[=\left(x_{\text{new}}^{\top}\text{Cov}(\hat{\beta})x_{\text{new}} \cdot(\hat{p}_{\text{new}}(1-\hat{p}_{\text{new}}))\right)^{-1}. \tag{8}\] Here, the inverse of the estimated effective sample size for patients in the development data is equal to their corresponding diagonal element of the approximated 'hat' matrix (approximated leverage) in logistic regression: \(V^{\frac{1}{2}}X\left(X^{\top}VX\right)^{-1}X^{\top}V^{\frac{1}{2}}\)[10]. In contrast with the linear model case, the effective sample size depends explicitly on \(\hat{\beta}\), and thereby on \(Y\), for logistic regression models. It also depends on the predicted risk \(\hat{p}\), which will draw the effective sample size towards infinity for patients whose predicted risk is close to 0% or 100%. On one hand, it is counterintuitive that extreme extrapolation can lead to exceedingly large effective sample sizes. On the other hand, if a monotonic relation between \(x\) and \(p\) is assumed, and \(p\) is bounded, then it makes sense that certainty about \(\hat{p}\) being equal to 0 or 1 increases when \(x\) becomes more extreme and the model is correct. ### Relative variance The inverse of the effective sample size, \(n_{*}^{-1}\), can be interpreted as a _relative variance_, i.e., the prediction variance normalised by the residual or conditional variance. 
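Before making that precise, the two expressions derived above are easy to compute directly. The following is a minimal Python sketch (our own illustration, independent of the paper's R supplement; variable names are ours): `n_eff_linear` implements Eq. (1) and `n_eff_logistic` the delta-method approximation in Eq. (8).

```python
import numpy as np

def n_eff_linear(X, x_new):
    """Eq. (1): n* = 1 / (x_new' (X'X)^{-1} x_new) for a linear model."""
    XtX_inv = np.linalg.inv(X.T @ X)
    return 1.0 / float(x_new @ XtX_inv @ x_new)

def n_eff_logistic(X, beta_hat, x_new):
    """Eq. (8): n* ~ 1 / (x_new' Cov(beta) x_new * p(1-p)), with the usual
    ML covariance estimate Cov(beta) = (X'VX)^{-1}, V = diag(p_i(1-p_i))."""
    p = 1.0 / (1.0 + np.exp(-X @ beta_hat))          # fitted risks, development data
    cov_beta = np.linalg.inv(X.T @ (X * (p * (1.0 - p))[:, None]))
    p_new = 1.0 / (1.0 + np.exp(-x_new @ beta_hat))  # predicted risk for the new patient
    return 1.0 / float(x_new @ cov_beta @ x_new * p_new * (1.0 - p_new))

# Sanity check: in the linear case the inverse n* values are the leverages,
# so their harmonic mean over the development data equals n / p.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
harm = 1.0 / np.mean([1.0 / n_eff_linear(X, x) for x in X])
print(harm, len(X) / X.shape[1])  # both ~ 66.7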
For the linear model, \(n_{*}^{-1}\) is equal to the variance of the prediction \(\hat{\mu}_{\text{new}}\) divided by the residual variance of \(y_{\text{new}}\): \[\text{RelVar}=n_{*}^{-1}=x_{\text{new}}^{\top}(X^{\top}X)^{-1}x_{\text{new}}= \text{Var}(\hat{\mu}_{\text{new}})/\sigma^{2}. \tag{9}\] For the logistic regression model, we have \[\text{RelVar}=n_{*}^{-1}\approx x_{\text{new}}^{\top}\text{Cov}(\hat{\beta})x_ {\text{new}}\cdot(\hat{p}_{\text{new}}(1-\hat{p}_{\text{new}})). \tag{10}\] Though less suitable as a communication tool, the relative variance can be used in lieu of effective sample size in model validation. ### Other generalised linear models For further generalised linear models, an approximation of the effective sample size can be obtained as in Section 2.4. Suppose again that we have a model prediction \(\hat{y}_{\text{new}}=g^{-1}(x_{\text{new}}^{\top}\hat{\beta})\) for Sam, based on a fitted GLM, where \(g\) is the link function of the GLM. As before, we then assume that we have observed Sam's prediction as the sample mean \(\bar{y}_{*}\) in a hypothetical sample of \(n_{*}\) patients with the same \(x\)-values as Sam. Given that the model is correct, the sample mean \(\bar{y}_{*}\) has variance \(\frac{\text{Var}(Y|x_{\text{new}}^{\top}\hat{\beta})}{n_{*}}\). To approximate the prediction variance, we can apply the delta method in (2): \(\text{Var}(\hat{y}_{\text{new}})\approx x_{\text{new}}^{\top}(X^{\top}VX)^{-1}x _{\text{new}}\cdot\left(\frac{\delta g^{-1}}{\delta(x_{\text{new}}^{\top}\hat{ \beta})}\right)^{2}.\) Here, \(V\) is a diagonal matrix with diagonal elements \(v_{ii}=\left(\frac{\delta g^{-1}}{\delta(x_{i}^{\top}\hat{\beta})}\right)^{2} \left(\text{Var}(Y_{i}|x_{i}^{\top}\hat{\beta})\right)^{-1}\). An expression for Sam's effective sample size follows: \[n_{*}\approx\frac{\text{Var}(Y|x_{\text{new}}^{\top}\hat{\beta})}{x_{\text{ new}}^{\top}(X^{\top}VX)^{-1}x_{\text{new}}\cdot\left(\frac{\delta g^{-1}}{ \delta(x_{\text{new}}^{\top}\hat{\beta})}\right)^{2}} \tag{11}\] For patients in the development data, this expression reduces to the inverse of their approximated leverages \(V^{\frac{1}{2}}X\left(X^{\top}VX\right)^{-1}X^{\top}V^{\frac{1}{2}}\). ## 3 Application to GUSTO data ### Data description We will illustrate the effective sample size in a large data set of patients with acute myocardial infarction (GUSTO-I) [11], publicly available from [https://hbiostat.org/data/gusto.rda](https://hbiostat.org/data/gusto.rda). This data set has been used to illustrate prediction modelling methodology before [12]-[16]. The dataset contains data on 40,830 patients, of whom 2851 (7.0%) died within the first 30 days after myocardial infarction. ### Illustration with two predictors To illustrate the concept of effective sample size, we apply the theory of the previous section to a smaller subset of the GUSTO-I dataset (\(n=1214\), comprising study sites from one US region and one non-US region). The variables that we used are DAY30 (a binary indicator of death within 30 days), age, height and shock (a binary indicator of whether the patient was in shock at hospital admission). The variables age and height were centered at their marginal means for the analyses. On average, patients were 61 (SD 11) years old and stood 170 (SD 10) centimeters tall. A minority of 28 (2.3%) patients presented with shock at admission. 76 patients (6.3%) died within 30 days after admission. In previous analyses, age and shock were strong predictors of death within 30 days. 
Height was uncorrelated with age. For illustrative purposes, we calculated effective sample sizes for two linear regression models. In the first model, the predictors were age and height; in the second model, age and shock were predictors. As effective sample sizes are independent of the outcome in linear regression, the outcome is not relevant here. We also determined effective sample sizes for the following logistic regression models: (1) DAY30 \(\sim\) age; (2) DAY30 \(\sim\) age + height; (3) DAY30 \(\sim\) age + shock. For the linear regression models, lower effective sample sizes were observed in patients whose covariate values are more 'rare' relative to the joint distribution of the covariates (Figures 1a, 1b). Patients whose covariate values are closer to the average have higher effective sample sizes. Note that in the linear regression case, none of the effective sample sizes exceed the actual total sample size of 1214. In the logistic models, similar properties appear, though now combined with the impact of the predicted risk being close to zero for some patients (Figures 1c-1d, 2). When the predicted risk is close to 0 or 100%, the effective sample sizes increase, even when the prediction was based on rarer covariate values. This is clearly visible in the upper left corner of Figure 1c. Younger patients have a lower 30-day mortality risk, which is so close to 0 that their effective sample sizes are actually larger than those of patients whose age was closer to average. Some effective sample sizes even exceed 1214. This illustrates that the effective sample size is not a 'real' sample size, but a re-expression of the uncertainty of the prediction. In contrast to linear regression, effective sample sizes for logistic regression may first decrease with extrapolation and then increase again as covariate values become so extreme that predictions approach 0 or 100%. The models with shock illustrate effective sample sizes when the model contains a rare binary covariate. Of the 1214 GUSTO patients, only 28 presented with shock at hospital admission. The effect of shock on death within 30 days is large but uncertain, due to the small number of patients. This is reflected in effective sample sizes for patients with shock, ranging from 25.2 to 28.0 in the linear regression case (Figure 1b). In the logistic regression case, effective sample sizes for these patients ranged from 22.1 to 55.7, as shock is a strong predictor for the outcome and the outcome influences \(n_{\text{eff}}\) here (Figures 1d, 2b). Telling a patient with shock that their prediction is based on a study of 1214 patients does not express that there were effectively about 28 patients like them in that study. Finally, we visualised a comparison of effective sample sizes in the logistic model with age and shock to a model with age only (Figure 3). For the majority of patients without shock, the effective sample size is increased slightly when shock is added to the model, whereas for patients with shock \(n_{\text{eff}}\) decreases dramatically. There is a bias-variance tradeoff here: adding shock to the model may bring predictions closer to the truth for some patients, though at the cost of increased uncertainty. Figure 1: Heatmap of effective sample sizes in the GUSTO subsample (\(n=1214\)) when two-predictor regression models are fitted to predict the outcome of 30-day mortality. Effective sample sizes in linear regression are independent of the outcome, hence the outcome is denoted more generally as \(Y\) for these models.
In all models, the variables Age and Height were centered at their respective marginal means (61 years and 170 cm). Only a minority of patients in the data presented with shock at admission to hospital. Figure 2: Effective sample sizes and the corresponding predicted probabilities of 30-day mortality for patients in the GUSTO subsample (\(n=1214\)), based on two logistic regression models. In all models, the variables Age and Height were centered at their respective marginal means (61 years and 170 cm). Only a minority of patients in the data presented with shock at admission to hospital. Figure 3: Effective sample sizes and the corresponding predicted probabilities for patients in the GUSTO dataset. Two logistic regression models were fitted to the data to obtain predictions: DAY30 \(\sim\) Age and DAY30 \(\sim\) Age + Shock. Predictions for the same patient are connected by a solid line. ### Development and validation of published prediction models To illustrate the use of effective sample sizes in model development and external validation, we used the full GUSTO dataset. We split the data into a development sample of data that was collected within the United States (US, n=23,034) and an external validation sample of data that was collected elsewhere (non-US, n=17,796). Two previously published prediction models were fitted to the US data: one model with 7 predictor variables and 13 model parameters [17]; and a more complex model with 15 predictor variables and 29 model parameters [18]. We then calculated effective sample sizes for each patient in the development (US) data and summarised their distribution (Figure 4). Lower effective sample sizes are more frequent for the 15-predictor model compared to the 7-predictor model. The total sample size of 23,034 is large, however, hence rarer patient profiles are still quite well-represented in absolute numbers. There were five and sixteen patients with \(n_{\text{eff}}\) below 30 for the 7- and 15-predictor models, respectively. We then calculated effective sample sizes for the previously fitted models in the external validation sample (Figure 5). For the 7-predictor model, the distribution of effective sample sizes in the validation sample was very similar to that in the development sample. So, with respect to the 7-predictor model, the non-US patients overall seem to have a similar prediction uncertainty to the US patients and the occurrence of rare or common covariate patterns seems similar as well. For the 15-predictor model, lower effective sample sizes occur more frequently in the validation sample than they did in the development sample. This indicates that the covariate patterns of patients in the validation sample were somewhat further away from the 'average patient' in the development sample. Figure 4: Histograms of effective sample sizes in the GUSTO US data (development sample) for a 7- and 15-predictor model for 30-day mortality. ## 4 Discussion We have defined the effective sample size, which can be used as a measure of individual sampling uncertainty in predictions. We derived explicit formulas for the effective sample sizes for linear and generalised linear models. With linear regression models, the effective sample sizes are the inverted leverages and they do not depend on the outcome. In the logistic regression case, effective sample sizes depend on the prediction (and therefore on the outcome) as well as the covariate values, and they increase when the predictions approach 0 or 1.
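To make the validation use of \(n_{\text{eff}}\) concrete, the following sketch (entirely simulated data, reusing `n_eff_logistic` from the snippet in Section 2) mimics the development-versus-validation comparison of Figures 4 and 5; the shifted validation population is a hypothetical stand-in for the non-US sample, not the GUSTO data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Development sample; validation patients drawn from a shifted population,
# so profiles that are rare in development become more common in validation.
X_dev = np.column_stack([np.ones(2000), rng.normal(0.0, 1.0, size=(2000, 2))])
y_dev = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X_dev @ [-2.0, 1.0, 0.5]))))
X_val = np.column_stack([np.ones(1000), rng.normal(0.8, 1.2, size=(1000, 2))])

beta_hat = sm.Logit(y_dev, X_dev).fit(disp=0).params
neff_dev = np.array([n_eff_logistic(X_dev, beta_hat, x) for x in X_dev])
neff_val = np.array([n_eff_logistic(X_dev, beta_hat, x) for x in X_val])
# Lower percentiles in the validation column flag underrepresented profiles.
print(np.percentile(neff_dev, [5, 50]), np.percentile(neff_val, [5, 50]))
```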
Even with an exceptionally large total development sample size (\(>23,000\)), we have shown examples in which effective sample sizes dropped below 30 for some patients, indicating large differences in effective representation of patient types in the development of the 7- and 15-predictor models. We see three directions for application of the concept of effective sample size: in model development and validation; to communicate predicted risks to individual patients; and as a tool to detect patient groups who were underrepresented in model development. In model development, there can be at least two aims regarding the effective sample sizes. The first is that for all patients in the (development/validation) data, their \(n_{\text{eff}}\) is not too low, adding to an overall sample size requirement for prediction model development [19]. This would require specification of a minimal \(n_{\text{eff}}\) for the prediction to be of added value in clinical practice. The minimal effective sample size in model development can be increased by refraining from adding rare binary covariates to a model, like the shock variable in the GUSTO example. There is a bias-variance tradeoff here, however, since shock is a strong predictor for the outcome. Furthermore, \(n_{\text{eff}}\) is defined relative to a specific model and provides a measure of uncertainty conditional on the model. Therefore, it is debatable whether \(n_{\text{eff}}\) can be compared between different models. Another objective in model building could be that the effective sample size is more or less the same in all patients, or that their spread remains within certain bounds. Note that shrinkage methods such as Ridge and Lasso regression shrink predictions via the regression coefficients [20], which are not individual to each patient. Figure 5: Distribution of effective sample sizes in the GUSTO US data (development sample) vs the non-US data (external validation sample) for a 7- and 15-predictor model for 30-day mortality. To even out effective sample sizes in a sample of patients, individual shrinkage of the covariates similar to Winsorizing [21] can be applied. This creates a nonlinear relationship between the original covariate and the linear predictor. Further investigation is needed in this direction. In a model validation setting, the distribution of effective sample sizes in a validation sample summarises individual sampling uncertainty around predictions in a sample of interest. This provides an idea of the variability of prediction uncertainty across patients and patient profiles, as well as a measure of how different (with respect to the model) the validation sample is compared to the development sample. Therefore, the distribution of effective sample sizes could be an addition to standard performance metrics based on point estimates, such as the c-statistic and calibration summaries [1], [9]. In addition to the statistical applications of effective sample sizes in prediction, we propose \(n_{\text{eff}}\) as a communication tool in clinical practice. The effective sample size could be used to inform both patients and clinicians about sampling uncertainty in predictions, with statements such as "this number is effectively based on 55 patients like you". Further empirical research is needed to determine whether \(n_{\text{eff}}\) indeed has beneficial effects on the communication between clinician and patient, and between clinician and model developer.
Furthermore, effective sample sizes can detect groups of patients to whom we would like to apply a prediction model in clinical practice, but who were effectively underrepresented in the development of the model. We have shown that effective sample sizes can be very small for certain patients, even when the total development sample size was as large as 23,000. Most prediction models are developed with far smaller sample sizes, however. In such a case, effective sample sizes function as a warning sign that the model may not be sufficiently trustworthy to apply in practice for specific groups of patients. When such groups of patients are defined by characteristics historically subject to social biases, detection of their underrepresentation is important in light of recent discussions on algorithmic fairness [22]-[25]. In this paper, we focused our derivations on linear and logistic regression models, and provided a generalisation for GLMs. To make the concept of effective sample size more widely applicable, expressions for other types of regression models need to be obtained. In the context of clinical prediction models, the Cox model is a very relevant example. As the variance around a predicted risk from a Cox model is based not only on observed covariates but also on a censoring process, a sample of hypothetical patients "like" our new patient is not immediately well defined. Another direction is to develop a generic numerical algorithm to obtain effective sample sizes when an analytical expression is not available and/or in cases of more complex and flexible models. We suspect that some modelling assumptions are necessary to move from the intuitive definition of \(n_{\text{eff}}\) as 'hypothetical sample with the same variance' to a formulation as 'prediction variance divided by outcome variance'. If these assumptions can be made, then bootstrap or other sampling-based estimates of the prediction and outcome variances may be used to approximate effective sample sizes. We intend to explore this further, with links to a recently proposed 'instability index' [26]. All in all, the concept of effective sample size may prove useful as an intuitive measure of individual prediction uncertainty. Its implications should be explored further for the development, validation and clinical implementation of prediction models. ### Financial disclosure None reported. ### Conflict of interest The authors declare no potential conflicts of interest. ### Data availability statement The data that support the results of this paper were loaded from a public repository and are publicly available from [https://hbiostat.org/data/gusto.rda](https://hbiostat.org/data/gusto.rda). ## Supporting information All R [27] code used for the illustrations on the GUSTO data and corresponding output is provided as supporting information.
2309.14875
Enhanced Channel Estimation in mm-Wave MIMO Systems Leveraging Integrated Communication and Sensing
This paper tackles the challenge of wideband MIMO channel estimation within indoor millimeter-wave scenarios. Our proposed approach exploits the integrated sensing and communication paradigm, where sensing information aids in channel estimation. The key innovation consists of employing both spatial and temporal sensing modes to significantly reduce the number of required training pilots. Moreover, our algorithm addresses and corrects potential mismatches between sensing and communication modes, which can arise from differing sensing and communication propagation paths. Extensive simulations demonstrate that the proposed method requires 4x fewer pilots than the current state-of-the-art, marking a substantial advancement in channel estimation efficiency.
Silvia Mura, Marouan Mizmizi, Umberto Spagnolini, Athina Petropulu
2023-09-26T12:15:55Z
http://arxiv.org/abs/2309.14875v1
# Enhanced Channel Estimation in Mm-Wave MIMO Systems Leveraging Integrated Communication and Sensing ###### Abstract This paper tackles the challenge of wideband MIMO channel estimation within indoor millimeter-wave scenarios. Our proposed approach exploits the integrated sensing and communication paradigm, where sensing information aids in channel estimation. The key innovation consists of employing both spatial and temporal sensing modes to significantly reduce the number of required training pilots. Moreover, our algorithm addresses and corrects potential mismatches between sensing and communication modes, which can arise from differing sensing and communication propagation paths. Extensive simulations demonstrate that the proposed method requires \(4\times\) fewer pilots than the current state-of-the-art, marking a substantial advancement in channel estimation efficiency. Silvia Mura\({}^{\dagger}\), Marouan Mizmizi\({}^{\dagger}\), Umberto Spagnolini\({}^{\dagger}\) and Athina Petropulu\({}^{*}\)\({}^{\dagger}\)Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano \({}^{*}\)Dept. of Electrical and Computer Engineering, Rutgers University _Index terms:_ Wi-Fi, mmWave, Channel Estimation, Integrated Sensing and Communication ## 1 Introduction Future wireless networks are exploiting higher frequencies, notably millimeter waves (mmWaves), to meet the ever-growing demand for user throughput. However, mmWave propagation poses significant challenges due to high path and blockage attenuation, particularly for indoor scenarios [1]. In this context, the channel state information (CSI) acquisition assumes pivotal significance for accurate signal decoding. A prevalent approach to CSI estimation consists of multiplexing known pilots with data, thereby enabling CSI acquisition at the receiver's side [2]. The effectiveness of CSI acquisition depends on the estimation method and the number of pilots. Numerous channel estimation techniques have been proposed, spanning conventional Bayesian methodologies, e.g., least squares (LS), maximum likelihood, and minimum mean square error (MMSE), along with novel approaches such as compressed sensing (CS) [3], low-rank [4], and machine learning techniques [5]. CS reduces pilot overhead by exploiting channel sparsity, making it a viable choice for mmWave communications, which exhibit a limited number of dominant scatterers. However, in indoor settings, CS-based estimation can be computationally intensive due to the increased number of scatterers, which leads to rapid CSI variations and requires more training samples than outdoor scenarios [6]. Integrated Sensing and Communication (ISAC) is a novel approach that combines sensing and communication at the transmitter [7, 8]. ISAC can leverage the correlation between sensing and communication channels to reduce the number of training samples required for channel estimation. Prior research has predominantly focused on examining this correlation in outdoor MIMO ISAC systems. For instance, [9] exploits the sensing covariance to estimate communication channel covariance, reducing beam training overhead. By leveraging the correlation between the uplink channel and sensing, [10] suggests a sensing-aided Kalman filter-based method to enhance CSI estimation accuracy. [11] formulates orthogonal time frequency space channel estimation as a sparse recovery problem by utilizing the sensing information to determine the delay and Doppler support.
[12] proposes a turbo sparse Bayesian inference for target detection and channel estimation in narrowband ISAC systems. In ISAC systems, the location of scattering objects is estimated and then used to determine the space-time modes of the communication channel, i.e., the delay and angular characteristics of the channel. However, the sensing operation is typically done in a monostatic configuration, while communication channels use a bistatic setup. This difference in setup, as well as the presence of extended scatterers, may lead to communication and sensing (C&S) mode mismatch. As depicted in Fig. 1, C&S mismatch may involve discrepancies between sensing and communication space-time modes [13], as well as missing modes in the sensing channel, and vice versa. Figure 1: Possible mismatches between communication and sensing channel modes in the reference indoor scenario. Most previous works in [9]-[12] have not explored this aspect, which may lead to inaccurate channel estimation and system performance degradation. This paper presents a sensing-aided CS channel estimation algorithm that, differently from prior works, addresses C&S mode mismatch by defining the initial communication space-time modes based on the sensing signals and compensating for the possible C&S mismatches. Subsequently, the communication channel modes are augmented by identifying communication missing modes, related to scatterers not visible in the sensing signals, within an angular and temporal mode codebook, tailored to the indoor scenario. This approach is suitable for wideband channel estimation and represents a major improvement over existing methods. Numerical simulations demonstrate that the sensing-assisted channel estimation method requires \(4\times\) fewer training pilots than current state-of-the-art solutions. The paper is organized as follows: Section II defines the system and channel model, Section III details the proposed sensing-assisted CS algorithm, numerical results are discussed in Section IV, and Section V draws the conclusions. ## 2 System and Channel Model Consider the indoor setting depicted in Figure 2. Here, the access point (AP) includes two \(N\)-element ULA antenna arrays for simultaneous transmission and reception to facilitate ISAC functionality. Within each channel coherence interval, the AP communicates with a single-antenna user equipment (UE) in downlink mode, while concurrently sensing \(Q\) extended targets, including the UE, namely \(T_{1},...,T_{Q}\) in Fig. 2. Subsequently, the UE transmits its data to the AP in uplink mode. The transmitted signal is an orthogonal frequency-division waveform with a bandwidth \(B=K\Delta f\), where \(K\) and \(\Delta f\) denote the number of subcarriers and the subcarrier spacing, respectively. Within the \(k\)th sub-carrier, the downlink signal transmitted by the AP is expressed as \[\mathbf{x}[k]=\mathbf{f}[k]s_{d}[k], \tag{1}\] where \(s_{d}[k]\) denotes the \(k\)th transmitted symbol such that \(\mathbb{E}[s_{d}[k]s_{d}[m]^{*}]=\sigma_{s}^{2}\delta[k-m]\), with power \(\sigma_{s}^{2}\), and \(\mathbf{f}[k]\in\mathbb{C}^{N\times 1}\) represents the precoding vector at the \(k\)th subcarrier. The sensing signal received by the AP is \[\mathbf{r}[k]=\mathbf{H}_{s}[k]\mathbf{x}[k]+\mathbf{n}_{s}[k], \tag{2}\] where \(\mathbf{H}_{s}[k]\in\mathbb{C}^{N\times N}\) denotes the sensing channel matrix and \(\mathbf{n}_{s}[k]\sim\mathcal{CN}(0,\sigma_{n}^{2}\mathbf{I}_{N})\) is the noise.
The downlink signal received by the UE is expressed as \[y_{d}[k]=\sqrt{\rho_{d}}\mathbf{h}_{d}[k]\mathbf{x}[k]+n_{d}[k], \tag{3}\] where \(\rho_{d}\) represents the average downlink received power, \(\mathbf{h}_{d}[k]\in\mathbb{C}^{1\times N}\) denotes the communication channel vector such that \(\mathbb{E}[\mathbf{h}_{d}\mathbf{h}_{d}^{\mathrm{H}}]=N\) and \(n_{d}[k]\sim\mathcal{CN}(0,\sigma_{n}^{2})\) is the additive noise. In the uplink operation, the received signal is expressed as \[\mathbf{y}_{u}[k]=\sqrt{\rho_{u}}\mathbf{h}_{u}[k]s_{u}[k]+\mathbf{n}_{u}[k], \tag{4}\] where \(\rho_{u}\) is the average uplink received power, \(\mathbf{h}_{u}[k]\in\mathbb{C}^{N\times 1}\) denotes the uplink communication channel vector such that \(\mathbb{E}[\mathbf{h}_{u}^{\mathrm{H}}\mathbf{h}_{u}]=N\), and \(\mathbf{n}_{u}[k]\sim\mathcal{CN}(0,\sigma_{n}^{2}\mathbf{I}_{N})\) is the additive noise. The symbol transmitted by the UE on the \(k\)th sub-carrier, denoted as \(s_{u}[k]\), is designed such that \(\mathbb{E}[s_{u}[k]s_{u}^{*}[m]]=\sigma_{s}^{2}\delta[k-m]\). Here, \(\sigma_{s}^{2}\) refers to the transmitted power. Among the transmitted symbols \(\mathbf{s}_{u}=[s_{u}[0],s_{u}[1],\ldots,s_{u}[K-1]]\), there are \(K_{p}<K\) pilots regularly placed across the subcarriers, explicitly for channel estimation purposes. ### Communication Channel Model The high free-space pathloss that is a characteristic of mmWave propagation leads to limited space-time selectivity. For this reason, we adopt a block-fading clustered channel representation, based on the extended Saleh-Valenzuela model, which allows us to accurately capture the mathematical structure present in mmWave channels [14]. Hence, the uplink channel in the frequency domain can be expressed as \[\mathbf{h}_{u}[k]= \sqrt{\frac{N}{P}}\sum_{p=1}^{P}\alpha_{p}\mathbf{a}(\theta_{p})e ^{-j\frac{2\pi k\tau_{p}}{K}}, \tag{5}\] where \(P\) denotes the number of propagation paths, \(\alpha_{p}\) is the complex gain, \(\mathbf{a}(\theta)\) denotes the AP array response vector, and \(\theta_{p}\) and \(\tau_{p}\) denote the angle of arrival and the delay of the \(p\)th path, respectively. Herein, we assume channel reciprocity [15], hence, \(\mathbf{h}_{d}[k]=\mathbf{h}_{u}^{\mathrm{T}}[k]\). ### Sensing Channel Model Similarly, the sensing channel is expressed as \[\mathbf{H}_{s}[k]=\sum_{q=1}^{Q}\beta_{q}\mathbf{a}(\theta_{q})\mathbf{a}( \theta_{q})^{\mathrm{H}}G[k]e^{-j\frac{2\pi k\tau_{q}}{K}}, \tag{6}\] where \(\theta_{q}\) is the azimuth angle related to the \(q\)th target, \(\tau_{q}=2d_{q}/c\) denotes the two-way propagation delay between the AP and the \(q\)th target, with \(d_{q}\) being the distance between the AP and the \(q\)th target. Figure 2: Reference indoor scenario. The model of the scattering coefficient in (6) follows the radar equation [16] \[\beta_{q}=\sqrt{\frac{\lambda^{2}N^{2}}{(4\pi d_{q})^{4}}\Gamma_{q}}\ e^{j\xi_{q}}, \tag{7}\] where \(\Gamma_{q}\) denotes the radar cross-section of the target and \(\xi_{q}\) is an additional random phase term accounting for the Tx/Rx circuitry and Doppler shift due to the target's mobility. ## 3 Sensing-Aided Channel Estimation This section details the proposed sensing-aided channel estimation method.
Let us reformulate the communication channel vector in (5) as \[\mathbf{h}_{u}[k]=\mathbf{A}(\boldsymbol{\theta})\mathbf{T}[\boldsymbol{\tau},k]\boldsymbol{\alpha}=\boldsymbol{\Phi}[k]\boldsymbol{\alpha} \tag{8}\] where \(\mathbf{A}(\boldsymbol{\theta})=\sqrt{\frac{N}{P}}[\mathbf{a}(\theta_{1}), \ldots,\mathbf{a}(\theta_{P})]\in\mathbb{C}^{N\times P}\), \(\mathbf{T}[\boldsymbol{\tau},k]=\mathrm{diag}\left(e^{-j\frac{2\pi k\tau_{1}}{K}},\ldots,e^{-j\frac{2\pi k\tau_{P}}{K}}\right)\in\mathbb{C}^{P\times P}\) and \(\boldsymbol{\alpha}\in\mathbb{C}^{P\times 1}\) denotes the communication channel coefficient vector. The matrix \(\boldsymbol{\Phi}[k]\) represents the combined space-time modes of the communication channel. The goal of the proposed method is to estimate the matrix \(\boldsymbol{\Phi}[k]\) and the channel coefficient vector \(\boldsymbol{\alpha}\) by exploiting sensing information with limited pilot resources. ### From Sensing to Communication Modes The estimated sensing space-time modes of the targets, specifically \(\boldsymbol{\hat{\tau}}\) and \(\boldsymbol{\hat{\theta}}\), are derived from the signal in (2) using a range-angle compression as in [17]. To distinguish between line-of-sight (LoS) and non-line-of-sight (NLoS) modes, we employ the initial access procedure detailed in [18]. This results in the decomposition of \(\boldsymbol{\hat{\tau}}\) into \(\hat{\tau}_{LoS}\) and a set of NLoS delays denoted as \(\hat{\tau}_{1},\hat{\tau}_{2},\ldots,\hat{\tau}_{Q-1}\), as well as the partitioning of \(\boldsymbol{\hat{\theta}}\) into \(\hat{\theta}_{LoS}\) and the NLoS angles denoted as \(\hat{\theta}_{1},\hat{\theta}_{2},\ldots,\hat{\theta}_{Q-1}\). Assuming only single reflections, the temporal modes are defined by employing the geometric cosine law, such as \[\bar{\tau}_{q}=\frac{\hat{\tau}_{q}}{2}+\sqrt{\frac{\hat{\tau}_{LoS}^{2}}{4} +\frac{\hat{\tau}_{q}^{2}}{4}-\frac{\hat{\tau}_{LoS}\hat{\tau}_{q}}{2}\cos(\hat{\theta}_{q}-\hat{\theta}_{LoS})}, \tag{9}\] and \(\bar{\tau}_{LoS}=\hat{\tau}_{LoS}/2\). To account for C&S mismatch, we assume that the space-time features are affected by an additional error, such that \(\bar{\theta}_{q}=\hat{\theta}_{q}+\delta_{\theta}\) and the delay in (9) is perturbed to \(\bar{\tau}_{q}+\delta_{\tau}\), where \(\delta_{\theta}\) and \(\delta_{\tau}\) represent the additional error arising from C&S mismatch, as illustrated in Fig. 3. These additional errors \(\boldsymbol{\delta}=[\boldsymbol{\delta}_{\theta}\,^{\mathrm{T}},\,\boldsymbol {\delta}_{\tau}\,^{\mathrm{T}}]^{\mathrm{T}}\in\mathbb{R}^{2Q\times 1}\) will be estimated in the subsequent analysis. Using \(\boldsymbol{\bar{\theta}}\) and \(\boldsymbol{\bar{\tau}}\), we can compute the communication space-time modes \(\boldsymbol{\hat{\Phi}}_{\boldsymbol{\delta}}[k]\) as in (8), which will be further refined in the following. ### Modal Mismatch Compensation and Channel Estimation Estimation of the channel parameters and compensation of mismatch can be formally framed as the following optimization problem: \[\underset{\boldsymbol{\alpha},\boldsymbol{\delta}}{\mathrm{minimize}}\left\| \bar{\mathbf{y}}-\bar{\boldsymbol{\Phi}}_{\boldsymbol{\delta}}\hat{\boldsymbol {\alpha}}\odot\bar{\mathbf{s}}\right\|_{2}+\lambda\left\|\hat{\boldsymbol{ \alpha}}\right\|_{1}, \tag{10}\] where \(\bar{\mathbf{y}}=[\mathbf{y}[0]^{\mathrm{T}},\ldots,\mathbf{y}[K_{p}-1]^{ \mathrm{T}}]^{\mathrm{T}}\) represents the received pilot symbols, \(\boldsymbol{\bar{\Phi}}_{\boldsymbol{\delta}}\) = \([\boldsymbol{\hat{\Phi}}_{\boldsymbol{\delta}}[1]^{\mathrm{T}},\ldots, \boldsymbol{\hat{\Phi}}_{\boldsymbol{\delta}}[K_{p}]^{\mathrm{T}}]^{\mathrm{T} }\in\mathbb{C}^{NK_{p}\times Q}\) is the estimated modes matrix at the pilot subcarriers, \(\hat{\boldsymbol{\alpha}}\in\mathbb{C}^{Q\times 1}\) denotes the vector of channel coefficients to be estimated, \(\bar{\mathbf{s}}=\bar{\mathbf{s}}_{u}\otimes\mathbf{1}_{N}\in\mathbb{C}^{NK_{p} \times 1}\) represents the transmitted pilot symbols where \(\bar{\mathbf{s}}_{u}=[s_{u}[0],...,s_{u}[K_{p}-1]]^{\mathrm{T}}\) and \(\lambda>0\) serves as a hyperparameter. The optimization problem in (10) is not convex. Hence, we propose Algorithm 1, which iterates through three pivotal steps: channel coefficient estimation, mode mismatch compensation, and mode augmentation. In the first step, the cost function is minimized with respect to the channel coefficients \(\hat{\boldsymbol{\alpha}}\) through orthogonal matching pursuit (OMP) [19], while the second step involves estimating C&S mismatches \(\boldsymbol{\delta}\) based on the reconstructed channel. Figure 3: Initial communication mode estimate and communication and sensing mismatch.
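A minimal Python sketch of the mode construction just described (our own illustration; the half-wavelength ULA geometry, the unit-norm column scaling, and delays expressed in samples are assumptions the text leaves implicit):

```python
import numpy as np

def ula_response(theta, N, d=0.5):
    """ULA array response a(theta); d is the element spacing in wavelengths."""
    return np.exp(-2j * np.pi * d * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def bistatic_delay(tau_q, tau_los, theta_q, theta_los):
    """Eq. (9): map a two-way (monostatic) sensing delay to the bistatic
    AP -> scatterer -> UE delay, assuming single reflections."""
    cross = (tau_los * tau_q / 2.0) * np.cos(theta_q - theta_los)
    return tau_q / 2.0 + np.sqrt(tau_los**2 / 4.0 + tau_q**2 / 4.0 - cross)

def mode_matrix(thetas, taus, k, K, N):
    """Eq. (8): Phi[k] = A(theta) T(tau, k), with taus in samples."""
    A = np.column_stack([ula_response(th, N) for th in thetas])
    T = np.diag(np.exp(-2j * np.pi * k * np.asarray(taus) / K))
    return A @ T
```

Stacking `mode_matrix(thetas, taus, k, K, N)` over the \(K_{p}\) pilot subcarriers yields the matrix \(\bar{\boldsymbol{\Phi}}_{\boldsymbol{\delta}}\) used in (10).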
### Modal Mismatch Compensation and Channel Estimation Estimation of the channel parameters and compensation of mismatch can be formally framed as the following optimization problem: \[\underset{\boldsymbol{\alpha},\boldsymbol{\delta}}{\mathrm{minimize}}\left\| \bar{\mathbf{y}}-\bar{\boldsymbol{\Phi}}_{\boldsymbol{\delta}}\hat{\boldsymbol {\alpha}}\odot\bar{\mathbf{s}}\right\|_{2}+\lambda\left\|\hat{\boldsymbol{ \alpha}}\right\|_{1}, \tag{10}\] where \(\bar{\mathbf{y}}=[\mathbf{y}[0]^{\mathrm{T}},\ldots,\mathbf{y}[K_{p}-1]^{ \mathrm{T}}]^{\mathrm{T}}\) represents the received pilot symbols, \(\boldsymbol{\bar{\Phi}}_{\boldsymbol{\delta}}\) = \([\boldsymbol{\hat{\Phi}}_{\boldsymbol{\delta}}[1]^{\mathrm{T}},\ldots, \boldsymbol{\hat{\Phi}}_{\boldsymbol{\delta}}[K_{p}]^{\mathrm{T}}]^{\mathrm{T} }\in\mathbb{C}^{NK_{p}\times Q}\) is the estimated modes matrix at the pilot subcarriers, \(\hat{\boldsymbol{\alpha}}\in\mathbb{C}^{Q\times 1}\) denotes the vector of channel coefficients to be estimated, \(\bar{\mathbf{s}}=\bar{\mathbf{s}}_{u}\otimes\mathbf{1}_{N}\in\mathbb{C}^{NK_{p} \times 1}\) represents the transmitted pilot symbols where \(\bar{\mathbf{s}}_{u}=[s_{u}[0],...,s_{u}[K_{p}-1]]^{\mathrm{T}}\) and \(\lambda>0\) serves as a hyperparameter. The optimization problem in (10) is not convex. Hence, we propose the algorithm 1 that iterates through three pivotal steps: channel coefficient estimation, mode mismatch compensation, and mode augmentation. In the first step, the cost function is minimized with respect to the channel coefficients \(\hat{\boldsymbol{\alpha}}\) through orthogonal matching pursuit (OMP) [19], while the second step involves estimating C&S mismatches \(\boldsymbol{\delta}\) based on Figure 3: Initial Communication mode estimate and Communication and Sensing mismatch the reconstructed channel. Due to the non-convex nature of the objective function, the optimal mismatch vector \(\mathbf{\delta}^{(i+1)}\) is determined using genetic algorithms [20]. Before proceeding to the next iteration, the space-time mode matrix \(\mathbf{\bar{\Phi}}_{\mathbf{\delta}}^{(i)}\) is updated using the optimal mismatch. Additionally, we enhance the space-time mode matrix by adding a new mode from the codebook space-time matrix \(\mathbf{\bar{\Phi}}_{c}=[\mathbf{\Phi}_{c}[1]^{\mathrm{T}},\dots,\mathbf{\Phi}_{c}[K_{p}]^{ \mathrm{T}}]^{\mathrm{T}}\) that best aligns with the estimation error \(\mathbf{\epsilon}^{(i+1)}\). The codebook is determined according to the room geometry, represented by \(\mathbf{\theta}_{c}\) and \(\mathbf{\tau}_{c}\), as \(\mathbf{\Phi}_{c}[k]=\mathbf{A}(\mathbf{\theta}_{c})\mathbf{T}[\mathbf{\tau}_{c},k]\). A novel space-time mode \(\hat{\mathbf{b}}\), with the highest correlation to the estimation error \(\mathbf{\epsilon}^{(i+1)}\), is obtained and, if more correlated to the residual than the initial mode, it is incorporated into the current space-time mode matrix \(\mathbf{\bar{\Phi}}_{\mathbf{\delta}}^{(i+1)}\). Finally, the uplink channel over all the subcarriers is obtained by interpolating the estimated channel \(\hat{\mathbf{h}}_{u}\). ## 4 Numerical Results Herein, we consider an indoor office scenario. The carrier frequency is \(60\) GHz and both sensing and communication channels are simulated using Matlab Ray-Tracing package. We assume Ricean fading with a 4 dB Ricean factor and \(N=8\) antenna elements arrays [15]. 
The performance is evaluated in terms of mean square error (MSE), i.e., \[\mathrm{MSE}=\mathbb{E}_{k}\left[\left\|\hat{\mathbf{h}}_{u}[k]-\mathbf{h}_{u }[k]\right\|^{2}\right] \tag{11}\] and symbol error rate (SER), defined as the ratio between the number of erroneously estimated symbols and the total number of data symbols. The estimated data symbols at the AP are \[\hat{s}_{u}[k]=\mathbf{w}_{u}^{\mathrm{H}}[k]\mathbf{y}_{u}[k], \tag{12}\] where \(\mathbf{w}_{u}[k]\) denotes the MMSE combiner, derived as in [4], using the estimated channel \(\hat{\mathbf{h}}_{u}[k]\). The performance is evaluated by varying the percentage of pilots, namely \(\eta=K_{p}/K=5\%\) and \(20\%\), and the communication SNR at the antenna, defined as \(\gamma_{0}=\sigma_{s}^{2}\rho_{u}/\sigma_{n}^{2}\). The proposed solution is compared to the conventional LS and the sensing-aided algorithm of [12]. The results in Figs. 4 and 5 reveal the limitations of the classical LS method when pilot resources are insufficient. In contrast, both the proposed method and [12] exhibit notable performance improvements, even with low pilot density. This highlights the efficacy of integrating sensing information, facilitating reliable channel estimation with minimal overhead. Remarkably, the proposed algorithm's ability to compensate for C&S mismatch leads to a substantial enhancement in MSE in Fig. 4, resulting in a reduction of approximately \(3\) dB and 5 dB for \(\eta=5\%\) and \(20\%\), respectively, at \(\gamma_{0}=10\) dB. A comparable improvement is noticeable in Fig. 5, where the proposed algorithm reduces the SER by half for \(\eta=5\%\) and \(\gamma_{0}=10\) dB. The proposed method attains similar SER performance to the state-of-the-art solutions while requiring \(4\times\) less overhead. ## 5 Conclusion This paper introduces a new approach to channel estimation in indoor mmWave scenarios exploiting the novel ISAC paradigm. The method considers and corrects for differences between the communication and sensing channel modes. Extensive numerical simulations reveal that the proposed method provides better MSE and SER performance than both the conventional LS approach and the sensing-assisted state-of-the-art method, which do not account for these differences. In particular, in the considered setup, the proposed method is able to achieve similar levels of MSE and SER performance with \(4\times\) less overhead. ## 6 Acknowledgment This work is supported by ARO grants W911NF2110071 and W911NF2320103, and NSF grants ECCS-2033433 and ECCS-2320568. Figure 4: MSE vs communication SNR at the antenna. Figure 5: SER vs communication SNR at the antenna.
2305.00519
Planar graphs as distinguished graph of Morse flows on the 2-disk
To investigate the topological structure of Morse flows on the 2-disk we use planar graphs as distinguished graphs of the flows. We assume that the flow is transversal to the boundary of the 2-disk. We give a list of all planar graphs with at most 3 edges and describe all planar graphs with 4 edges. We use a list of spherical graphs with at most 4 edges.
Oleksandr Pryshliak
2023-04-30T16:33:08Z
http://arxiv.org/abs/2305.00519v1
# Planar graphs as distinguished graph of Morse flows on the 2-disk ###### Abstract To investigate the topological structure of Morse flows on the 2-disk we use planar graphs as distinguished graphs of the flows. We assume that the flow is transversal to the boundary of the 2-disk. We give a list of all planar graphs with at most 3 edges and describe all planar graphs with 4 edges. We use a list of spherical graphs with at most 4 edges. _Key words and phrases._ Morse flow, planar graphs, spherical graph, topological invariant. ## Introduction The topic of this paper refers to applications of topological graph theory to the classification of Morse flows on surfaces. It uses both topological and algebraic methods when working with embeddings of graphs on the surface. The main construction, which dates back to the works of Peixoto, consists in constructing a distinguished graph whose vertices are sources and whose edges are one-dimensional invariant manifolds of the flow. Classic works on the topological theory of graphs are the books [56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 73, 74, 76]. Embedded graphs as topological invariants of flows on closed surfaces were constructed in [7, 19, 23, 24, 25, 43, 1, 32, 35, 48, 33, 34, 41, 31], and on surfaces with boundary in [5, 8, 21, 20, 35, 32, 28, 40, 25, 41, 44]. For a 3-manifold, a Heegaard diagram is an embedded 4-valent graph in a surface [49, 51, 39, 27, 29, 13, 4, 3]. Morse flows are gradient flows of Morse functions. If we fix the values of the function at the critical points, then the structure of the flow determines the structure of the function [22, 47]. Topological invariants of functions, such as Reeb graphs, were constructed in [18] and [45] for oriented surfaces, in [22] for non-orientable surfaces, in [10, 16, 17, 30] for surfaces with boundary, and in [38] for non-compact surfaces. Embedded graphs as topological invariants of smooth functions were also studied in the papers [6, 9, 17, 16, 38, 37, 42, 22, 27, 55, 50, 2, 46], for manifolds with boundary in the papers [14, 17, 15], and on 3- and 4-dimensional manifolds in [36, 26]. To get acquainted with the topological theory of functions and dynamical systems, we recommend [30, 52, 53, 54, 55]. The purpose of this paper is to describe all possible structures of connected graphs in the plane with no more than 4 edges. ## 1 Spherical graphs All graphs in this and the next section are connected. We use spherical graphs to describe all planar graphs. To specify a planar graph, one of the faces of the spherical graph must be selected, which will be the outer face of the planar graph. By projecting the graph from a point of the selected face onto the plane by stereographic projection, we get a planar graph. We will use the list and numbering of spherical graphs given in [7]. ### Spherical graphs with one edge Only two such graphs are possible: a loop (one vertex) and a segment (two vertices) (Fig. 1). ### Spherical graphs with two edges There are 4 spherical graphs with two edges. They are shown in Fig. 2. ### Spherical graphs with three edges All spherical graphs with three edges are shown in Fig. 3. ### Spherical graphs with 4 edges Spherical graphs with 4 edges are shown in Fig. 4.
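The smallest counts above can be spot-checked by brute force. For one and two edges, each abstract multigraph admits a unique spherical embedding, so counting connected multigraphs up to isomorphism suffices; a sketch of such a check (our own, using networkx):

```python
import networkx as nx
from itertools import combinations_with_replacement

def connected_multigraphs(n_edges, max_vertices):
    """Connected multigraphs (loops and parallel edges allowed) with the given
    number of edges, up to isomorphism, on at most max_vertices vertices."""
    found = []
    pairs = list(combinations_with_replacement(range(max_vertices), 2))  # (v, v) = loop
    for edges in combinations_with_replacement(pairs, n_edges):
        G = nx.MultiGraph(list(edges))
        if nx.is_connected(G) and not any(nx.is_isomorphic(G, H) for H in found):
            found.append(G)
    return found

print(len(connected_multigraphs(1, 2)))  # 2: the loop and the segment of Fig. 1
print(len(connected_multigraphs(2, 3)))  # 4, matching Fig. 2
```

For three or more edges this shortcut fails, since one abstract graph can have several inequivalent spherical embeddings, which is exactly why the lists in this section are lists of embedded graphs.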
So, there are a total of 26 non-isomorphic spherical graphs with 4 edges. We took this list and numbering from [7]. ## 2 Planar graphs The graphs depicted in the previous section are planar (they have a fixed outer face). The process of changing the outer face can be described as follows: an edge between the old outer face and the new one is cut and reattached in a new way; that is, it is moved through the point at infinity (the pole of the stereographic projection). This process can be repeated several times, with different edges lying on the boundary of the outer face. The planar graphs obtained in this way must then be checked for isomorphism among themselves. Figure 4: spherical graphs with 4 edges. ### Planar graphs with one edge As for the sphere, only two such graphs are possible: a loop (one vertex) and a segment (two vertices) (Fig. 1). If we apply the flip through infinity to the loop, we get the same graph. This process cannot be applied to a segment. ### Planar graphs with two edges There are 6 planar graphs with two edges. They are shown in Fig. 5. Here, graph 2a is obtained from graph 2 by flipping an edge through infinity. Similarly, graph 4a is derived from graph 4. ### Planar graphs with three edges All planar graphs with three edges are shown in Figs. 6 and 7. Graphs obtained from each other by flipping edges through infinity have the same numbers but different letters. For example, graphs 10a and 10b can be obtained from graph 10. Figure 5: planar graphs with two edges. Figure 6: planar graphs with 3 edges (part 1). So, there are a total of 25 non-isomorphic planar graphs with 3 edges. ### Planar graphs with 4 edges To list all planar graphs with 4 edges, we will use the list of spherical graphs with 4 edges shown in Fig. 4. In this figure, the following graphs have a single face (or two faces for which there is an isomorphism that maps one onto the other): 1), 2), 3), 4). Graphs with two asymmetric faces each: 5), 6), 7), 8), 9), 10), 11), 12), 13), 15), 16), 18), 19), 20), 21). Graphs with three asymmetric faces each: 14), 17), 22), 23), 24), 25), 26). Therefore, there are \(4+2\times 15+3\times 7=55\) non-isomorphic planar graphs with 4 edges. ## Conclusion In this paper, we have constructed a complete list of planar graphs with no more than 4 edges. During the construction, we used spherical graphs, fixed the projection face in them, and checked the obtained projections for isomorphism as planar graphs. The results can be applied in many fields; for example, such graphs are distinguished graphs of gradient vector fields on the 2-disk. So, there are two such flow structures with one saddle, 6 with two saddles, 25 with three saddles, and 55 with four saddles.
Figure 7: planar graphs with 3 edges (part 2).